DEFF Research Database (Denmark)
Vansteelandt, S.; Martinussen, Torben; Tchetgen, E. J Tchetgen
2014-01-01
We consider additive hazard models (Aalen, 1989) for the effect of a randomized treatment on a survival outcome, adjusting for auxiliary baseline covariates. We demonstrate that the Aalen least-squares estimator of the treatment effect parameter is asymptotically unbiased, even when the hazard's dependence on time or on the auxiliary covariates is misspecified, and even away from the null hypothesis of no treatment effect. We furthermore show that adjustment for auxiliary baseline covariates does not change the asymptotic variance of the estimator of the effect of a randomized treatment. We conclude that, in view of its robustness against model misspecification, Aalen least-squares estimation is attractive for evaluating treatment effects on a survival outcome in randomized experiments, and the primary reasons to consider baseline covariate adjustment in such settings could be interest in subgroup...
Yan, Ying; Yi, Grace Y
2016-07-01
Covariate measurement error occurs commonly in survival analysis. Under the proportional hazards model, measurement error effects have been well studied, and various inference methods have been developed to correct for error effects under such a model. In contrast, error-contaminated survival data under the additive hazards model have received relatively little attention. In this paper, we investigate this problem by exploring measurement error effects on parameter estimation and on the change of the hazard function. New insights into measurement error effects are revealed, as opposed to the well-documented results for the Cox proportional hazards model. We propose a class of bias-correction estimators that embraces certain existing estimators as special cases. In addition, we exploit the regression calibration method to reduce measurement error effects. Theoretical results for the developed methods are established, and numerical assessments are conducted to illustrate the finite-sample performance of our methods.
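Regression calibration, one of the correction strategies mentioned above, replaces the error-prone covariate by its conditional expectation given the observed value before fitting the outcome model. A minimal sketch under classical normal measurement error (all parameter values hypothetical, not the paper's estimator):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 20000
sigma_x, sigma_u = 1.0, 0.5          # true covariate SD and measurement-error SD
x = rng.normal(0.0, sigma_x, n)      # true covariate (unobserved in practice)
w = x + rng.normal(0.0, sigma_u, n)  # error-contaminated observation

# With normal X and U, E[X | W] = mu + lam * (W - mu),
# where lam = var_x / (var_x + var_u) is the attenuation factor.
lam = sigma_x**2 / (sigma_x**2 + sigma_u**2)
x_hat = lam * w                      # mu = 0 in this simulation

# x_hat, not w, would then enter the hazard regression
r = np.corrcoef(x, x_hat)[0, 1]
print(round(lam, 3), round(r, 3))
```

The calibrated values carry the right scale, which removes the attenuation bias that plugging in the raw `w` would induce in a linear predictor.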
Convexity Adjustments for ATS Models
DEFF Research Database (Denmark)
Murgoci, Agatha; Gaspar, Raquel M.
Practitioners are used to valuing a broad class of exotic interest rate derivatives simply by performing what is known as convexity adjustments (or convexity corrections). We start by exploiting the relations between various interest rate models and their connections to measure changes. As a result we classify convexity adjustments into forward adjustments and swap adjustments. We then focus on affine term structure (ATS) models and, in this context, conjecture that convexity adjustments should be related to affine functionals. In the case of forward adjustments, we show how to obtain exact formulas. Concretely, for LIBOR in arrears (LIA) contracts, we derive the system of Riccati ODEs one needs to compute to obtain the exact adjustment. Based upon the ideas of Schrager and Pelsser (2006) we are also able to derive general swap adjustments useful, in particular, when dealing with constant...
Adjustment Criterion and Algorithm in Adjustment Model with Uncertainty
Directory of Open Access Journals (Sweden)
SONG Yingchun
2015-02-01
Full Text Available Uncertainty often exists in the process of obtaining measurement data, which affects the reliability of parameter estimation. This paper establishes a new adjustment model in which uncertainty is incorporated into the function model as a parameter. A new adjustment criterion and its iterative algorithm are given based on the uncertainty propagation law for the residual error, in which the maximum possible uncertainty is minimized. The paper also compares, with examples, the adjustment criteria and the features of the optimal solutions of least-squares adjustment, uncertainty adjustment and total least-squares adjustment. Existing error theory is extended with a new method of processing observational data subject to uncertainty.
Adjustment or updating of models
Indian Academy of Sciences (India)
D J Ewins
2000-06-01
In this paper, a review of the terminology used in model adjustment or updating is first presented. This is followed by an outline of the major updating algorithms currently available, together with a discussion of the advantages and disadvantages of each, and the current state of the art of this important application area of optimum design technology.
Carvalho, Francisco; Covas, Ricardo
2016-06-01
We consider mixed models y = Σ_{i=0}^{w} X_i β_i with V(y) = Σ_{i=1}^{w} θ_i M_i, where M_i = X_i X_i^T, i = 1, ..., w, and μ = X_0 β_0. For these we estimate the variance components θ_1, ..., θ_w, as well as estimable vectors, through the decomposition of the initial model into sub-models y(h), h ∈ Γ, with V(y(h)) = γ(h) I_{g(h)}, h ∈ Γ. Moreover, we consider L extensions of these models, i.e., y° = Ly + ε, where L = D(1_{n_1}, ..., 1_{n_w}) and ε, independent of y, has null mean vector and variance-covariance matrix θ_{w+1} I_n, where n = Σ_{i=1}^{w} n_i.
The Additive Hazard Mixing Models
Institute of Scientific and Technical Information of China (English)
Ping LI; Xiao-liang LING
2012-01-01
This paper is concerned with the ageing and dependence properties of additive hazard mixing models, including some stochastic comparisons. Further, some useful bounds on the reliability functions of additive hazard mixing models are obtained.
Lengua, L J; Wolchik, S A; Sandler, I N; West, S G
2000-06-01
Investigated the interaction between parenting and temperament in predicting adjustment problems in children of divorce. The study utilized a sample of 231 mothers and children, 9 to 12 years old, who had experienced divorce within the previous 2 years. Both mothers' and children's reports on parenting, temperament, and adjustment variables were obtained and combined to create cross-reporter measures of the variables. Parenting and temperament were directly and independently related to outcomes consistent with an additive model of their effects. Significant interactions indicated that parental rejection was more strongly related to adjustment problems for children low in positive emotionality, and inconsistent discipline was more strongly related to adjustment problems for children high in impulsivity. These findings suggest that children who are high in impulsivity may be at greater risk for developing problems, whereas positive emotionality may operate as a protective factor, decreasing the risk of adjustment problems in response to negative parenting.
Behavioral modeling of Digitally Adjustable Current Amplifier
Directory of Open Access Journals (Sweden)
Josef Polak
2015-03-01
Full Text Available This article presents the digitally adjustable current amplifier (DACA) and its analog behavioral model (ABM), which is suitable for both ideal and advanced analyses of function blocks using DACA as the active element. There are four levels of this model, each suitable for simulating a certain stage of electronic circuit design (e.g. filters, oscillators, generators). Each model level is presented through a schematic wiring in the simulation program OrCAD, including a description of the equations representing the specific functions at the given level of the simulation model. The design of the individual levels is always verified using PSpice simulations. The ABM model has been developed based on practically measured values of a number of DACA amplifier samples. The simulation results for the proposed levels of the ABM model are shown and compared with the results of real measurements of the DACA active element.
Business models for additive manufacturing
DEFF Research Database (Denmark)
Hadar, Ronen; Bilberg, Arne; Bogers, Marcel
2015-01-01
Digital fabrication — including additive manufacturing (AM), rapid prototyping and 3D printing — has the potential to revolutionize the way in which products are produced and delivered to the customer. Therefore, it challenges companies to reinvent their business model — describing the logic of creating and capturing value. In this paper, we explore the implications that AM technologies have for manufacturing systems in the new business models that they enable. In particular, we consider how a consumer goods manufacturer can organize the operations of a more open business model when moving from a manufacturer-centric to a consumer-centric value logic. A major shift includes a move from centralized to decentralized supply chains, where consumer goods manufacturers can implement a "hybrid" approach with a focus on localization and accessibility or develop a fully personalized model where the consumer...
Adjustment of endogenous concentrations in pharmacokinetic modeling.
Bauer, Alexander; Wolfsegger, Martin J
2014-12-01
Estimating pharmacokinetic parameters in the presence of an endogenous concentration is not straightforward as cross-reactivity in the analytical methodology prevents differentiation between endogenous and dose-related exogenous concentrations. This article proposes a novel intuitive modeling approach which adequately adjusts for the endogenous concentration. Monte Carlo simulations were carried out based on a two-compartment population pharmacokinetic (PK) model fitted to real data following intravenous administration. A constant and a proportional error model were assumed. The performance of the novel model and the method of straightforward subtraction of the observed baseline concentration from post-dose concentrations were compared in terms of terminal half-life, area under the curve from 0 to infinity, and mean residence time. Mean bias in PK parameters was up to 4.5 times better with the novel model assuming a constant error model and up to 6.5 times better assuming a proportional error model. The simulation study indicates that this novel modeling approach results in less biased and more accurate PK estimates than straightforward subtraction of the observed baseline concentration and overcomes the limitations of previously published approaches.
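The modeling idea, treating the endogenous concentration as an estimated parameter rather than subtracting the observed pre-dose value from every post-dose sample, can be sketched with a simple mono-exponential curve. The model form and all values below are hypothetical illustrations, not the paper's two-compartment population model:

```python
import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(1)
t = np.linspace(0.0, 24.0, 13)                      # hours post-dose
base_true, c0_true, k_true = 2.0, 10.0, 0.2         # hypothetical endogenous level, dose term, rate
conc = base_true + c0_true * np.exp(-k_true * t)
obs = conc * (1 + 0.05 * rng.normal(size=t.size))   # proportional error

# The endogenous baseline enters the structural model as a parameter,
# so it is estimated jointly with the dose-related disposition parameters.
def model(t, base, c0, k):
    return base + c0 * np.exp(-k * t)

popt, _ = curve_fit(model, t, obs, p0=(1.0, 5.0, 0.1))
print(popt)  # estimates of (base, c0, k)
```

Subtracting a single noisy pre-dose observation instead would propagate that one measurement's error into every derived parameter; joint estimation pools information across the whole profile.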
Directory of Open Access Journals (Sweden)
Yu-Min Lu
2015-11-01
Full Text Available AIM: To observe the clinical effect of mid-periphery additional designed lenses and adjustment training on myopia in childhood. METHODS: Eighty children (160 eyes in all) with myopia were included in this study. All patients were divided into two groups according to the method of correcting refractive error: the mid-periphery additional designed lenses and adjustment training group (treatment group, 80 eyes of 40 cases) and the frame glasses group (comparison group, 80 eyes of 40 cases). Myopia progression indicators and adjustment function indicators were measured in both groups every 3 months. The results were compared and analyzed after 1 year of follow-up. RESULTS: Visual acuity, refraction and axial length changed little after 1 year of lens wear in the treatment group; there was no statistically significant difference compared with before wearing (P>0.05). Visual acuity decreased and refraction and axial length increased in the comparison group; the differences were statistically significant (P<0.05). ... The difference between the two groups was statistically significant (P<0.05). CONCLUSION: Mid-periphery additional designed lenses and adjustment training treatment of juvenile myopia is effective; it can slow the progression of myopic diopters in children, improve the adjustment function and control the development of myopia.
Application of addition-cured silicone denture relining materials to adjust mouthguards.
Fukasawa, Shintaro; Churei, Hiroshi; Chowdhury, Ruman Uddin; Shirako, Takahiro; Shahrin, Sharika; Shrestha, Abhishekhi; Wada, Takahiro; Uo, Motohiro; Takahashi, Hidekazu; Ueno, Toshiaki
2016-01-01
The purposes of this study were to examine the shock absorption capability of addition-cured silicone denture relining materials and the bonding strength between addition-cured silicone denture relining materials and a commercial mouthguard material, in order to determine their applicability to mouthguard adjustment. Two addition-cured silicone denture relining materials and eleven commercial mouthguard materials were selected as test materials. The impact test was performed using a free-falling steel ball. Bonding strength, on the other hand, was determined by a delamination test. After surface preparation with acrylic resin on the mouthguard (MG) sheet surface, the two types of addition-cured silicone denture relining materials were bonded to the MG surface. The peak intensity, the time to peak intensity from the onset of the transmitted force, and the bonding strength were statistically analyzed using ANOVA and Tukey's honest significant difference post hoc test (p<0.05). ... The materials could be clinically applicable as a mouthguard adjustment material.
A NEW SOLUTION MODEL OF NONLINEAR DYNAMIC LEAST SQUARE ADJUSTMENT
Institute of Scientific and Technical Information of China (English)
陶华学; 郭金运
2000-01-01
Nonlinear least squares adjustment is a key problem studied in many technical fields. This paper studies a derivative-free solution to the nonlinear dynamic least squares adjustment and puts forward a new algorithm model and its solution model. The method has a small computational load and is simple. This opens up a theoretical approach to solving the nonlinear dynamic least squares adjustment.
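As a generic illustration of a derivative-free approach to nonlinear least squares (a sketch of the idea, not the paper's algorithm), one can minimize the sum of squared residuals with the Nelder-Mead simplex method, which needs no derivatives of the residual function:

```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical nonlinear adjustment problem: fit y = a * exp(b * x)
x = np.linspace(0.0, 1.0, 20)
y = 2.0 * np.exp(1.5 * x)            # noiseless synthetic observations

def ssr(p):                          # sum of squared residuals
    a, b = p
    return np.sum((y - a * np.exp(b * x)) ** 2)

# Nelder-Mead evaluates only ssr itself, never its gradient
res = minimize(ssr, x0=[1.0, 1.0], method="Nelder-Mead")
print(res.x)  # ≈ [2.0, 1.5]
```

Derivative-free methods like this trade convergence speed for robustness when Jacobians are unavailable or expensive, which is the setting the abstract targets.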
Institute of Scientific and Technical Information of China (English)
周海强; 鞠平; 宋忠鹏; 金宇清; 孙国强
2011-01-01
A novel method for online adjustment of a dynamic equivalent model is proposed in this paper. Firstly, it is pointed out that unreasonable aggregation algorithms and the constant-parameter assumption for a time-varying system are the main sources of equivalent error. Online adjustment is then put forward to overcome the error caused by the time-varying characteristics of the system. The parameters in the equivalent model are too many to adjust directly, so additional fictitious impedances are introduced to overcome this difficulty. These impedances are connected to the equivalent generator and equivalent motor buses. Injected-power matching at the boundary nodes is achieved by adjusting the fictitious impedances with an ant colony optimization (ACO) algorithm. Online adjustment is further realized by modifying the dynamic equivalent model in a timely manner according to real-time information provided by the wide area measurement system (WAMS). Finally, simulation results for the IEEE 10-generator, 39-bus test system show that both the static and transient precision can be enhanced considerably with this method, and the robustness of the equivalent model is also improved.
Methodological aspects of journaling a dynamic adjusting entry model
Directory of Open Access Journals (Sweden)
Vlasta Kašparovská
2011-01-01
Full Text Available This paper expands the discussion of the importance and function of adjusting entries for loan receivables. Discussion of the cyclical development of adjusting entries, their negative impact on the business cycle and potential solutions has intensified during the financial crisis. These discussions are still ongoing and continue to be relevant to members of the professional public, banking regulators and representatives of international accounting institutions. The objective of this paper is to evaluate a method of journaling dynamic adjusting entries under current accounting law. It also expresses the authors’ opinions on the potential for consistently implementing basic accounting principles in journaling adjusting entries for loan receivables under a dynamic model.
Effect of Flux Adjustments on Temperature Variability in Climate Models
Energy Technology Data Exchange (ETDEWEB)
Duffy, P.; Bell, J.; Covey, C.; Sloan, L.
1999-12-27
It has been suggested that "flux adjustments" in climate models suppress simulated temperature variability. If true, this might invalidate the conclusion that at least some of the observed temperature increase since 1860 is anthropogenic, since this conclusion is based in part on estimates of natural temperature variability derived from flux-adjusted models. We assess the variability of surface air temperatures in 17 simulations of internal temperature variability submitted to the Coupled Model Intercomparison Project. By comparing variability in flux-adjusted and non-flux-adjusted simulations, we find no evidence that flux adjustments suppress temperature variability in climate models; other, largely unknown, factors are much more important in determining simulated temperature variability. Therefore the conclusion that at least some of the observed temperature increase is anthropogenic cannot be questioned on the grounds that it is based in part on results from flux-adjusted models. Also, reducing or eliminating flux adjustments would probably do little to improve simulations of temperature variability.
Brischetto, Salvatore; Ciano, Alessandro; Ferro, Carlo Giovanni
2016-07-01
The present paper shows an innovative multirotor Unmanned Aerial Vehicle (UAV) which is able to easily and quickly change its configuration. To achieve this, the principal structure is made of a universal plate, combined with a circular ring, to create a rail guide able to host the arms, in a variable number from 3 to 8, and the legs. The arms are adjustable and contain all the avionics and motor drivers needed to connect the main structure with each electric motor. The unique arm design, defined as all-in-one, allows classical single-rotor configurations, double-rotor configurations and amphibious configurations including inflatable elements positioned at the bottom of the arms. The proposed multi-rotor system is inexpensive because of the few universal pieces needed to compose the platform, which allows the creation of a kit. This modular kit yields a modular drone with different configurations. These configurations differ in the number of arms, number of legs, number of rotors and motors, and landing capability. Another innovative feature is the introduction of 3D printing technology to produce all the structural elements. In this manner, all the pieces are designed to be produced via Fused Deposition Modelling (FDM) technology using desktop 3D printers. Therefore, a universal, dynamic and economical multi-rotor UAV has been developed.
Bayes linear covariance matrix adjustment for multivariate dynamic linear models
Wilkinson, Darren J
2008-01-01
A methodology is developed for the adjustment of the covariance matrices underlying a multivariate constant time series dynamic linear model. The covariance matrices are embedded in a distribution-free inner-product space of matrix objects which facilitates such adjustment. This approach helps to make the analysis simple, tractable and robust. To illustrate the methods, a simple model is developed for a time series representing sales of certain brands of a product from a cash-and-carry depot. The covariance structure underlying the model is revised, and the benefits of this revision on first order inferences are then examined.
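The underlying Bayes linear adjustment of expectations and variances follows the standard formulas E_D(B) = E(B) + Cov(B,D) Var(D)^{-1} (D - E(D)) and Var_D(B) = Var(B) - Cov(B,D) Var(D)^{-1} Cov(D,B). A minimal numerical sketch with illustrative (made-up) belief specifications, not the paper's sales data:

```python
import numpy as np

EB = np.array([1.0, 2.0])                # prior expectation of B
ED = np.array([0.0])                     # prior expectation of D
VB = np.array([[1.0, 0.3], [0.3, 1.0]])  # Var(B)
VD = np.array([[2.0]])                   # Var(D)
CBD = np.array([[0.5], [0.2]])           # Cov(B, D), illustrative numbers

d = np.array([1.0])                      # observed value of D
K = CBD @ np.linalg.inv(VD)              # "regression" of B on D
EB_adj = EB + K @ (d - ED)               # adjusted expectation E_D(B)
VB_adj = VB - K @ CBD.T                  # adjusted variance Var_D(B)
print(EB_adj)   # [1.25 2.1 ]
print(VB_adj)
```

The same algebra, applied to matrix-valued quantities in an inner-product space of matrices, is what allows the covariance structure itself to be revised in the paper's approach.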
Storm Water Management Model Climate Adjustment Tool (SWMM-CAT)
The US EPA's newest tool, the Stormwater Management Model (SWMM) Climate Adjustment Tool (CAT), is meant to help municipal stormwater utilities better address potential climate change impacts affecting their operations. SWMM, first released in 1971, models hydrology and hydrauli...
R.M. Solow Adjusted Model of Economic Growth
Directory of Open Access Journals (Sweden)
Ion Gh. Rosca
2007-05-01
Full Text Available Besides the models of M. Keynes, R.F. Harrod, E. Domar, D. Romer, Ramsey-Cass-Koopmans etc., the R.M. Solow model belongs to the category of models that characterize economic growth. The paper proposes a study of the R.M. Solow adjusted model of economic growth, the adjustment consisting in adapting the model to the characteristics of the Romanian economy. The article is the first in a three-paper series dedicated to the macroeconomic modelling theme using the R.M. Solow model, the others being "Measurement of the economic growth and extensions of the R.M. Solow adjusted model" and "Evolution scenarios at the Romanian economy level using the R.M. Solow adjusted model". The analysis part of the model is based on the study of the equilibrium in the continuous case, with some interpretations of the discrete one, using the state diagram. The optimization problem at the economy level is also considered; it is built up of a specified number of representative consumers and firms in order to reveal the interaction between these elements.
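The Solow dynamics underlying such an equilibrium study can be sketched numerically. The following is a generic illustration with Cobb-Douglas technology and hypothetical parameter values, not a calibration to Romanian data:

```python
# Per-capita Solow dynamics with Cobb-Douglas output y = k**alpha:
#   k_{t+1} = s * k_t**alpha + (1 - delta) * k_t   (population growth n = 0 here)
s, alpha, delta = 0.2, 0.3, 0.1      # illustrative savings rate, capital share, depreciation
k = 1.0                              # arbitrary initial capital per worker
for _ in range(500):
    k = s * k**alpha + (1 - delta) * k

# Closed-form steady state: s * k**alpha = delta * k  =>  k* = (s/delta)**(1/(1-alpha))
k_star = (s / delta) ** (1 / (1 - alpha))
print(k, k_star)                     # both ≈ 2.69
```

The iteration converging onto the closed-form steady state is exactly the equilibrium behaviour the state diagram in such an analysis depicts.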
Fitting Additive Binomial Regression Models with the R Package blm
Directory of Open Access Journals (Sweden)
Stephanie Kovalchik
2013-09-01
Full Text Available The R package blm provides functions for fitting a family of additive regression models to binary data. The included models are the binomial linear model, in which all covariates have additive effects, and the linear-expit (lexpit) model, which allows some covariates to have additive effects and other covariates to have logistic effects. Additive binomial regression is a model of event probability, and the coefficients of linear terms estimate covariate-adjusted risk differences. Thus, in contrast to logistic regression, additive binomial regression puts the focus on absolute risk and risk differences. In this paper, we give an overview of the methodology we have developed to fit the binomial linear and lexpit models to binary outcomes from cohort and population-based case-control studies. We illustrate the blm package's methods for additive model estimation, diagnostics, and inference with risk association analyses of a bladder cancer nested case-control study in the NIH-AARP Diet and Health Study.
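blm itself is an R package; as a hedged illustration of the same idea in Python, an additive (identity-link) binomial model can be approximated by least squares on the binary outcome, so the exposure coefficient reads directly as a covariate-adjusted risk difference. All numbers below are synthetic:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 50000
exposure = rng.integers(0, 2, n)
covar = rng.integers(0, 2, n)
# Additive risk model: P(Y=1) = 0.10 + 0.05*exposure + 0.08*covar
p = 0.10 + 0.05 * exposure + 0.08 * covar
y = rng.binomial(1, p)

# Least-squares fit of the linear (additive) probability model; the exposure
# coefficient estimates the covariate-adjusted risk difference directly,
# unlike a logistic coefficient, which is a log odds ratio.
X = np.column_stack([np.ones(n), exposure, covar])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
print(beta)  # ≈ [0.10, 0.05, 0.08]
```

This sketch omits the constraint machinery that keeps fitted probabilities inside [0, 1], which is one of the things the blm package handles properly.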
Meyer, Andrew J; Patten, Carolynn; Fregly, Benjamin J
2017-01-01
Neuromusculoskeletal disorders affecting walking ability are often difficult to manage, in part due to limited understanding of how a patient's lower extremity muscle excitations contribute to the patient's lower extremity joint moments. To assist in the study of these disorders, researchers have developed electromyography (EMG) driven neuromusculoskeletal models utilizing scaled generic musculoskeletal geometry. While these models can predict individual muscle contributions to lower extremity joint moments during walking, the accuracy of the predictions can be hindered by errors in the scaled geometry. This study presents a novel EMG-driven modeling method that automatically adjusts surrogate representations of the patient's musculoskeletal geometry to improve prediction of lower extremity joint moments during walking. In addition to commonly adjusted neuromusculoskeletal model parameters, the proposed method adjusts model parameters defining muscle-tendon lengths, velocities, and moment arms. We evaluated our EMG-driven modeling method using data collected from a high-functioning hemiparetic subject walking on an instrumented treadmill at speeds ranging from 0.4 to 0.8 m/s. EMG-driven model parameter values were calibrated to match inverse dynamic moments for five degrees of freedom in each leg while keeping musculoskeletal geometry close to that of an initial scaled musculoskeletal model. We found that our EMG-driven modeling method incorporating automated adjustment of musculoskeletal geometry predicted net joint moments during walking more accurately than did the same method without geometric adjustments. Geometric adjustments improved moment prediction errors by 25% on average and up to 52%, with the largest improvements occurring at the hip. Predicted adjustments to musculoskeletal geometry were comparable to errors reported in the literature between scaled generic geometric models and measurements made from imaging data. Our results demonstrate that with...
2010-04-01
Appendix B to Part 4—Adjustments for Additions and Withdrawals in the Computation of Rate of Return (Commodity and Securities Exchanges). ... trading advisors may calculate the rate of return information required by Rules 4.25(a)(7)(i)(F) and ...
A price adjustment process in a model of monopolistic competition
J. Tuinstra
2004-01-01
We consider a price adjustment process in a model of monopolistic competition. Firms have incomplete information about the demand structure. When they set a price they observe the amount they can sell at that price and they observe the slope of the true demand curve at that price. With this information...
DESIGN OF 3D MODEL OF CUSTOMIZED ANATOMICALLY ADJUSTED IMPLANTS
Miodrag Manić; Zoran Stamenković; Milorad Mitković; Miloš Stojković; Duncan E.T. Shephard
2015-01-01
Design and manufacturing of customized implants is a field that has been rapidly developing in recent years. This paper presents an originally developed method for designing a 3D model of customized anatomically adjusted implants. The method is based upon a CT scan of a bone fracture. A CT scan is used to generate a 3D bone model and a fracture model. Using these scans, an indicated location for placing the implant is recognized and the design of a 3D model of the customized implant is made. With...
Comparison of multiplicative heterogeneous variance adjustment models for genetic evaluations.
Márkus, Sz; Mäntysaari, E A; Strandén, I; Eriksson, J-Å; Lidauer, M H
2014-06-01
Two heterogeneous variance adjustment methods and two variance models were compared in a simulation study. The method used for heterogeneous variance adjustment in the Nordic test-day model, which is a multiplicative method based on Meuwissen (J. Dairy Sci., 79, 1996, 310), was compared with a restricted multiplicative method where the fixed effects were not scaled. Both methods were tested with two different variance models, one with a herd-year and the other with a herd-year-month random effect. The simulation study was built on two field data sets from Swedish Red dairy cattle herds. For both data sets, 200 herds with test-day observations over a 12-year period were sampled. For one data set, herds were sampled randomly, while for the other, each herd was required to have at least 10 first-calving cows per year. The simulations supported the applicability of both methods and models, but the multiplicative mixed model was more sensitive in the case of small strata sizes. Estimation of variance components for the variance models resulted in different parameter estimates, depending on the applied heterogeneous variance adjustment method and variance model combination. Our analyses showed that the assumption of a first-order autoregressive correlation structure between random-effect levels is reasonable when within-herd heterogeneity is modelled by year classes, but less appropriate for within-herd heterogeneity by month classes. Of the studied alternatives, the multiplicative method and a variance model with a random herd-year effect were found most suitable for the Nordic test-day model for dairy cattle evaluation.
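The core idea of a multiplicative heterogeneous variance adjustment, rescaling records so that each stratum is brought to a common reference variance, can be illustrated as follows. This is a generic sketch with made-up strata, not the Nordic test-day implementation:

```python
import numpy as np

rng = np.random.default_rng(3)
# Hypothetical test-day residuals from three herd-year strata with
# heterogeneous residual variance
true_sd = {"A": 1.0, "B": 2.0, "C": 0.5}
records = {h: rng.normal(0.0, sd, 1000) for h, sd in true_sd.items()}

# Multiplicative adjustment: divide each stratum by its estimated SD,
# bringing every stratum to the common reference variance of 1.0
adjusted = {h: r / r.std(ddof=1) for h, r in records.items()}
for h, r in adjusted.items():
    print(h, round(r.std(ddof=1), 3))  # all 1.0 after adjustment
```

In a full evaluation the stratum-specific factors are themselves modelled (e.g. with herd-year or herd-year-month random effects), which is the modelling choice the simulation study compares.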
Identifying confounders using additive noise models
Janzing, Dominik; Mooij, Joris; Schoelkopf, Bernhard
2012-01-01
We propose a method for inferring the existence of a latent common cause ('confounder') of two observed random variables. The method assumes that the two effects of the confounder are (possibly nonlinear) functions of the confounder plus independent, additive noise. We discuss under which conditions the model is identifiable (up to an arbitrary reparameterization of the confounder) from the joint distribution of the effects. We state and prove a theoretical result that provides evidence for the conjecture that the model is generically identifiable under suitable technical conditions. In addition, we propose a practical method to estimate the confounder from a finite i.i.d. sample of the effects and illustrate that the method works well on both simulated and real-world data.
A simple approach to adjust tidal forcing in fjord models
Hjelmervik, Karina; Kristensen, Nils Melsom; Staalstrøm, André; Røed, Lars Petter
2017-07-01
To model currents in a fjord accurate tidal forcing is of extreme importance. Due to complex topography with narrow and shallow straits, the tides in the innermost parts of a fjord are both shifted in phase and altered in amplitude compared to the tides in the open water outside the fjord. Commonly, coastal tide information extracted from global or regional models is used on the boundary of the fjord model. Since tides vary over short distances in shallower waters close to the coast, the global and regional tidal forcings are usually too coarse to achieve sufficiently accurate tides in fjords. We present a straightforward method to remedy this problem by simply adjusting the tides to fit the observed tides at the entrance of the fjord. To evaluate the method, we present results from the Oslofjord, Norway. A model for the fjord is first run using raw tidal forcing on its open boundary. By comparing modelled and observed time series of water level at a tidal gauge station close to the open boundary of the model, a factor for the amplitude and a shift in phase are computed. The amplitude factor and the phase shift are then applied to produce adjusted tidal forcing at the open boundary. Next, we rerun the fjord model using the adjusted tidal forcing. The results from the two runs are then compared to independent observations inside the fjord in terms of amplitude and phases of the various tidal components, the total tidal water level, and the depth integrated tidal currents. The results show improvements in the modelled tides in both the outer, and more importantly, the inner parts of the fjord.
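The adjustment described above, estimating an amplitude factor and a phase shift from modelled versus observed water levels at the entrance and applying them to the boundary forcing, can be sketched for a single tidal constituent. The series and constants below are synthetic illustrations, not Oslofjord data:

```python
import numpy as np

# One tidal constituent (e.g. M2, period ~12.42 h) at the fjord entrance
omega = 2 * np.pi / 12.42
t = np.arange(0.0, 30 * 24.0, 0.5)              # 30 days, half-hourly [h]
raw = 0.40 * np.cos(omega * t)                   # raw boundary tide [m]
obs = 0.50 * np.cos(omega * t - 0.30)            # observed gauge near the boundary

# Least-squares harmonic fit of cos/sin components for each series
def harmonic(x):
    A = np.column_stack([np.cos(omega * t), np.sin(omega * t)])
    c, s = np.linalg.lstsq(A, x, rcond=None)[0]
    return np.hypot(c, s), np.arctan2(s, c)      # amplitude, phase lag

amp_raw, ph_raw = harmonic(raw)
amp_obs, ph_obs = harmonic(obs)
factor, shift = amp_obs / amp_raw, ph_obs - ph_raw   # adjustment parameters
adjusted = factor * 0.40 * np.cos(omega * t - shift) # corrected boundary forcing
print(round(factor, 3), round(shift, 3))             # 1.25, 0.3
```

In practice the factor and shift would be estimated per constituent from tide-gauge records, and the adjusted forcing then drives the rerun of the fjord model.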
Modeling wind adjustment factor and midflame wind speed for Rothermel's surface fire spread model
Patricia L. Andrews
2012-01-01
Rothermel's surface fire spread model was developed to use a value for the wind speed that affects surface fire, called midflame wind speed. Models have been developed to adjust 20-ft wind speed to midflame wind speed for sheltered and unsheltered surface fuel. In this report, Wind Adjustment Factor (WAF) model equations are given, and the BehavePlus fire modeling...
A Four-Part Model of Autonomy during Emerging Adulthood: Associations with Adjustment
Lamborn, Susie D.; Groh, Kelly
2009-01-01
We found support for a four-part model of autonomy that links connectedness, separation, detachment, and agency to adjustment during emerging adulthood. Based on self-report surveys of 285 American college students, expected associations among the autonomy variables were found. In addition, agency, as measured by self-reliance, predicted lower…
Pakenham, Kenneth I; Samios, Christina; Sofronoff, Kate
2005-05-01
The present study examined the applicability of the double ABCX model of family adjustment in explaining maternal adjustment to caring for a child diagnosed with Asperger syndrome. Forty-seven mothers completed questionnaires at a university clinic while their children were participating in an anxiety intervention. The children were aged between 10 and 12 years. Results of correlations showed that each of the model components was related to one or more domains of maternal adjustment in the direction predicted, with the exception of problem-focused coping. Hierarchical regression analyses demonstrated that, after controlling for the effects of relevant demographics, stressor severity, pile-up of demands and coping were related to adjustment. Findings indicate the utility of the double ABCX model in guiding research into parental adjustment when caring for a child with Asperger syndrome. Limitations of the study and clinical implications are discussed.
A generalized additive regression model for survival times
DEFF Research Database (Denmark)
Scheike, Thomas H.
2001-01-01
Additive Aalen model; counting process; disability model; illness-death model; generalized additive models; multiple time-scales; non-parametric estimation; survival data; varying-coefficient models...
Directory of Open Access Journals (Sweden)
Jing Guo
2016-10-01
Wire arc additive manufacturing (WAAM) offers a potential approach to fabricate large-scale magnesium alloy components with low cost and high efficiency, although this topic is yet to be reported in the literature. In this study, WAAM is preliminarily applied to fabricate AZ31 magnesium alloy. Fully dense AZ31 magnesium alloy components are successfully obtained. Meanwhile, to refine grains and obtain good mechanical properties, the effects of pulse frequency (1, 2, 5, 10, 100, and 500 Hz) on the macrostructure, microstructure, and tensile properties are investigated. The results indicate that pulse frequency changes the weld pool oscillation and cooling rate, which in turn changes the grain size, grain shape, and tensile properties. Due to resonance of the weld pool at 5 Hz and 10 Hz, those samples have poor geometric accuracy but contain finer equiaxed grains (21 μm) and exhibit higher ultimate tensile strength (260 MPa) and yield strength (102 MPa), similar to those of the forged AZ31 alloy. Moreover, the elongation of all samples is above 23%.
Computational Process Modeling for Additive Manufacturing (OSU)
Bagg, Stacey; Zhang, Wei
2015-01-01
Powder-Bed Additive Manufacturing (AM) through Direct Metal Laser Sintering (DMLS) or Selective Laser Melting (SLM) is being used by NASA and the Aerospace industry to "print" parts that traditionally are very complex, high cost, or long schedule lead items. The process spreads a thin layer of metal powder over a build platform, then melts the powder in a series of welds in a desired shape. The next layer of powder is applied, and the process is repeated until layer-by-layer, a very complex part can be built. This reduces cost and schedule by eliminating very complex tooling and processes traditionally used in aerospace component manufacturing. To use the process to print end-use items, NASA seeks to understand SLM material well enough to develop a method of qualifying parts for space flight operation. Traditionally, a new material process takes many years and high investment to generate statistical databases and experiential knowledge, but computational modeling can truncate the schedule and cost -many experiments can be run quickly in a model, which would take years and a high material cost to run empirically. This project seeks to optimize material build parameters with reduced time and cost through modeling.
Richardson, David B; Laurier, Dominique; Schubauer-Berigan, Mary K; Tchetgen Tchetgen, Eric; Cole, Stephen R
2014-11-01
Workers' smoking histories are not measured in many occupational cohort studies. Here we discuss the use of negative control outcomes to detect and adjust for confounding in analyses that lack information on smoking. We clarify the assumptions necessary to detect confounding by smoking and the additional assumptions necessary to indirectly adjust for such bias. We illustrate these methods using data from 2 studies of radiation and lung cancer: the Colorado Plateau cohort study (1950-2005) of underground uranium miners (in which smoking was measured) and a French cohort study (1950-2004) of nuclear industry workers (in which smoking was unmeasured). A cause-specific relative hazards model is proposed for estimation of indirectly adjusted associations. Among the miners, the proposed method suggests no confounding by smoking of the association between radon and lung cancer--a conclusion supported by adjustment for measured smoking. Among the nuclear workers, the proposed method suggests substantial confounding by smoking of the association between radiation and lung cancer. Indirect adjustment for confounding by smoking resulted in an 18% decrease in the adjusted estimated hazard ratio, yet this cannot be verified because smoking was unmeasured. Assumptions underlying this method are described, and a cause-specific proportional hazards model that allows easy implementation using standard software is presented.
Attar-Schwartz, Shalhevet
2015-09-01
Warm and emotionally close relationships with parents and grandparents have been found in previous studies to be linked with better adolescent adjustment. The present study, informed by Family Systems Theory and Intergenerational Solidarity Theory, uses a moderated mediation model to analyze the contribution of the dynamics of these intergenerational relationships to adolescent adjustment. Specifically, it examines the mediating role of emotional closeness to the closest grandparent in the relationship between emotional closeness to a parent (the offspring of the closest grandparent) and adolescent adjustment difficulties. The model also examines the moderating role of emotional closeness to parents in the relationship between emotional closeness to grandparents and adjustment difficulties. The study was based on a sample of 1,405 Jewish Israeli secondary school students (ages 12-18) who completed a structured questionnaire. It was found that emotional closeness to the closest grandparent was more strongly associated with reduced adjustment difficulties among adolescents with higher levels of emotional closeness to their parents. In addition, the association between emotional closeness to parents and adolescent adjustment was partially mediated by emotional closeness to grandparents. Examining the family conditions under which adolescents' relationships with grandparents are stronger and more beneficial for them can help elucidate variations in grandparent-grandchild ties and expand our understanding of the mechanisms that shape child outcomes.
CREATION OF THE MODEL ADDITIONAL PROTOCOL
Energy Technology Data Exchange (ETDEWEB)
Houck, F.; Rosenthal, M.; Wulf, N.
2010-05-25
In 1991, the international nuclear nonproliferation community was dismayed to discover that the implementation of safeguards by the International Atomic Energy Agency (IAEA) under its NPT INFCIRC/153 safeguards agreement with Iraq had failed to detect Iraq's nuclear weapon program. It was now clear that ensuring that states were fulfilling their obligations under the NPT would require not just detecting diversion but also the ability to detect undeclared materials and activities. To achieve this, the IAEA initiated what would turn out to be a five-year effort to reappraise the NPT safeguards system. The effort engaged the IAEA and its Member States and led to agreement in 1997 on a new safeguards agreement, the Model Protocol Additional to the Agreement(s) between States and the International Atomic Energy Agency for the Application of Safeguards. The Model Protocol makes explicit that one IAEA goal is to provide assurance of the absence of undeclared nuclear material and activities. The Model Protocol requires an expanded declaration that identifies a State's nuclear potential, empowers the IAEA to raise questions about the correctness and completeness of the State's declaration, and, if needed, allows IAEA access to locations. The information required and the locations available for access are much broader than those provided for under INFCIRC/153. The negotiation was completed in quite a short time because it started with a relatively complete draft of an agreement prepared by the IAEA Secretariat. This paper describes how the Model Protocol was constructed and reviews key decisions that were made both during the five-year period and in the actual negotiation.
Constructing stochastic models from deterministic process equations by propensity adjustment
Directory of Open Access Journals (Sweden)
Wu Jialiang
2011-11-01
Background: Gillespie's stochastic simulation algorithm (SSA) for chemical reactions admits three kinds of elementary processes, namely, mass action reactions of 0th, 1st, or 2nd order. All other types of reaction processes, for instance those containing non-integer kinetic orders or following other types of kinetic laws, are assumed to be convertible to one of the three elementary kinds, so that SSA can validly be applied. However, the conversion to elementary reactions is often difficult, if not impossible. Within deterministic contexts, a strategy of model reduction is often used. Such a reduction simplifies the actual system of reactions by merging or approximating intermediate steps and omitting reactants such as transient complexes. It would be valuable to adopt a similar reduction strategy in stochastic modelling. Indeed, efforts have been devoted to manipulating the chemical master equation (CME) in order to achieve a proper propensity function for a reduced stochastic system. However, manipulations of the CME are almost always complicated, and successes have been limited to relatively simple cases. Results: We propose a rather general strategy for converting a deterministic process model into a corresponding stochastic model and characterize the mathematical connections between the two. The deterministic framework is assumed to be a generalized mass action system and the stochastic analogue is in the format of the chemical master equation. The analysis identifies situations where a direct conversion is valid; where internal noise affecting the system needs to be taken into account; and where the propensity function must be mathematically adjusted. The conversion from deterministic to stochastic models is illustrated with several representative examples, including reversible reactions with feedback controls, Michaelis-Menten enzyme kinetics, a genetic regulatory motif, and stochastic focusing. Conclusions: The construction of a stochastic...
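The "direct conversion" case discussed in this abstract can be illustrated with the simplest elementary reaction, first-order decay, where the deterministic mass-action rate k·x maps directly to the stochastic propensity a(x) = k·x. The sketch below is a minimal Gillespie SSA for that one-channel case; the rate constant, initial count, and horizon are illustrative values, not ones from the paper.

```python
import random

def gillespie_decay(x0, k, t_max, rng):
    """Minimal Gillespie SSA for the elementary reaction X -> 0.
    For mass-action kinetics the deterministic rate k*x converts
    directly to the stochastic propensity a(x) = k*x."""
    t, x = 0.0, x0
    times, counts = [t], [x]
    while t < t_max and x > 0:
        a = k * x                  # propensity of the single channel
        t += rng.expovariate(a)    # waiting time to next firing ~ Exp(a)
        x -= 1                     # the reaction consumes one molecule of X
        times.append(t)
        counts.append(x)
    return times, counts

rng = random.Random(42)
times, counts = gillespie_decay(x0=100, k=0.5, t_max=50.0, rng=rng)
```

For non-elementary kinetics (power-law rates with non-integer orders, lumped Michaelis-Menten steps), the propensity cannot in general be read off the deterministic rate this way, which is exactly the adjustment problem the paper addresses.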
Model for Adjustment of Aggregate Forecasts using Fuzzy Logic
Directory of Open Access Journals (Sweden)
Taracena–Sanz L. F.
2010-07-01
This research suggests a contribution to the implementation of forecasting models. The proposed model is developed with the aim of fitting the projection of demand to the environment of firms, based on three considerations that often make demand forecasts differ from reality: (1) one of the most difficult problems to model in forecasting is the uncertainty related to the available information; (2) the methods traditionally used by firms for projecting demand are based mainly on past market behavior (historical demand); and (3) these methods do not consider in their analysis the factors that drive the observed behavior. Therefore, the proposed model is based on the implementation of Fuzzy Logic, integrating the main variables that affect the behavior of market demand and which are not considered in classical statistical methods. The model was applied to a carbonated-beverage bottling company, and with the adjustment of the demand projection a more reliable forecast was obtained.
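The paper's actual fuzzy system and input variables are not reproduced here; the sketch below only illustrates the general mechanism of adjusting a statistical forecast with fuzzy membership degrees over a market signal. The membership breakpoints and the per-rule multipliers are invented for illustration.

```python
def tri(x, a, b, c):
    """Triangular membership function with support (a, c) and peak at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def adjust_forecast(base_forecast, market_signal):
    """Scale a statistical forecast by fuzzy degrees of 'weak', 'normal'
    and 'strong' market conditions (signal assumed scaled to [0, 1]).
    Breakpoints and multipliers are illustrative assumptions."""
    weak = tri(market_signal, -0.5, 0.0, 0.5)
    normal = tri(market_signal, 0.0, 0.5, 1.0)
    strong = tri(market_signal, 0.5, 1.0, 1.5)
    total = weak + normal + strong
    if total == 0.0:                 # signal outside all memberships
        return base_forecast
    # Defuzzify: membership-weighted average of per-rule multipliers.
    multiplier = (0.9 * weak + 1.0 * normal + 1.15 * strong) / total
    return base_forecast * multiplier
```

A 'normal' signal (0.5) leaves the forecast unchanged, while a fully 'strong' signal (1.0) scales it up by the strong-rule multiplier.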
DESIGN OF 3D MODEL OF CUSTOMIZED ANATOMICALLY ADJUSTED IMPLANTS
Directory of Open Access Journals (Sweden)
Miodrag Manić
2015-12-01
Design and manufacturing of customized implants is a field that has been developing rapidly in recent years. This paper presents an originally developed method for designing a 3D model of customized, anatomically adjusted implants. The method is based upon a CT scan of a bone fracture. The CT scan is used to generate a 3D bone model and a fracture model. Using these models, the indicated location for placing the implant is recognized and a 3D model of the customized implant is designed. With this method it is possible to design volumetric implants used for replacing a part of the bone, or plate-type implants for fixation of a bone part. The side of the implant lying on the bone is fully aligned with the anatomical shape of the bone surface neighboring the fracture. The resulting model is suitable for implant production by any method, and it is ideal for 3D printing of implants.
PERMINTAAN BERAS DI PROVINSI JAMBI (Penerapan Partial Adjustment Model
Directory of Open Access Journals (Sweden)
Wasi Riyanto
2013-07-01
The purpose of this study is to determine the effect of the price of rice, the price of flour, population, population income, and the previous year's rice demand on current rice demand, to estimate rice demand elasticities, and to predict rice demand in Jambi Province. This study uses secondary data, comprising time series data for 22 years, from 1988 until 2009. The variables are rice demand (Qdt), the price of rice (Hb), the price of wheat flour (Hg), population (Jp), population income (PDRB), and demand for rice in the previous year (Qdt-1). The methods of this study are multiple regression and a dynamic Partial Adjustment Model, where demand for rice is the dependent variable and the price of rice, the price of flour, population, population income, and the previous year's rice demand are the independent variables. The Partial Adjustment Model results show that the effects of changes in the prices of rice and flour on rice demand are not significant. Population and the previous year's rice demand have a positive and significant impact on rice demand, while population income has a negative and significant effect. Rice demand is inelastic with respect to the price of rice, population income, and the price of flour, because rice is a necessity rather than a normal good, so there is no substitution of rice with other commodities in Jambi Province. Based on the analysis, it is recommended that the government control the rate of population increase, given that population is one of the factors affecting rice demand. It is also expected that the government promote non-rice food consumption to control the increasing demand for rice, and develop a diversification of staple foods other than rice.
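The partial adjustment mechanism behind this kind of specification can be sketched numerically. In the standard partial adjustment model, observed demand moves a fraction λ of the way toward desired demand each period, so the estimating equation includes lagged demand, and the long-run effect of a regressor equals its short-run coefficient divided by λ. The data and coefficients below are synthetic, not from the study.

```python
def solve3(A, b):
    """Solve a 3x3 linear system by Gaussian elimination with pivoting."""
    m = [row[:] + [bi] for row, bi in zip(A, b)]
    for col in range(3):
        piv = max(range(col, 3), key=lambda r: abs(m[r][col]))
        m[col], m[piv] = m[piv], m[col]
        for r in range(col + 1, 3):
            f = m[r][col] / m[col][col]
            for c in range(col, 4):
                m[r][c] -= f * m[col][c]
    x = [0.0] * 3
    for r in (2, 1, 0):
        x[r] = (m[r][3] - sum(m[r][c] * x[c] for c in range(r + 1, 3))) / m[r][r]
    return x

# Synthetic, noise-free partial adjustment data:
#   q_t = a + b * p_t + (1 - lam) * q_{t-1}   with a = 2, b = 0.3, 1-lam = 0.5
p = [1.0, 2.0, 1.5, 3.0, 2.5, 4.0, 3.5, 5.0, 4.5, 6.0, 2.0, 3.0]
q = [10.0]
for t in range(1, len(p)):
    q.append(2.0 + 0.3 * p[t] + 0.5 * q[t - 1])

# OLS via normal equations X'X beta = X'y with X rows [1, p_t, q_{t-1}].
rows = [(1.0, p[t], q[t - 1]) for t in range(1, len(p))]
y = q[1:]
XtX = [[sum(r[i] * r[j] for r in rows) for j in range(3)] for i in range(3)]
Xty = [sum(r[i] * yi for r, yi in zip(rows, y)) for i in range(3)]
a_hat, b_hat, c_hat = solve3(XtX, Xty)
lam = 1.0 - c_hat          # speed of adjustment
long_run = b_hat / lam     # long-run effect of the price variable
```

With noise-free data OLS recovers a = 2, b = 0.3 and 1-λ = 0.5 exactly, giving a long-run effect of 0.3/0.5 = 0.6, twice the short-run coefficient.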
Designing a model to improve first year student adjustment to university
Directory of Open Access Journals (Sweden)
Nasrin Nikfal Azar
2014-05-01
The increase in the number of universities over the last decade in Iran increases the need for higher education institutions to manage their enrollment more effectively. The purpose of this study is to design a model to improve first-year university student adjustment by examining the effects of academic self-efficacy, academic motivation, satisfaction, high school GPA, and demographic variables on students' adjustment to university. The study selected a sample of 357 students out of 4585 first-year bachelor students who were enrolled in different programs. Three questionnaires were used for data collection, namely academic self-efficacy, academic motivation, and student satisfaction with university. Structural equation modeling was employed, using AMOS version 7.16, to test the adequacy of the hypothesized model. Inclusion of additional relationships in the initial model improved the goodness-of-fit indices considerably. The results suggest that academic self-efficacy was related positively to adjustment, both directly (B = 0.35) and indirectly through student satisfaction (B = 0.14) and academic motivation (B = 0.9). The results indicate a need to develop programs that effectively promote the self-efficacy of first-year students to increase college adjustment and, consequently, retention rates.
Systematic review of risk adjustment models of hospital length of stay (LOS).
Lu, Mingshan; Sajobi, Tolulope; Lucyk, Kelsey; Lorenzetti, Diane; Quan, Hude
2015-04-01
Policy decisions in health care, such as hospital performance evaluation and performance-based budgeting, require an accurate prediction of hospital length of stay (LOS). This paper provides a systematic review of risk adjustment models for hospital LOS, and focuses primarily on studies that use administrative data. MEDLINE, EMBASE, Cochrane, PubMed, and EconLit were searched for studies that tested the performance of risk adjustment models in predicting hospital LOS. We included studies that tested models developed for the general inpatient population, and excluded those that analyzed risk factors only correlated with LOS, impact analyses, or those that used disease-specific scales and indexes to predict LOS. Our search yielded 3973 abstracts, of which 37 were included. These studies used various disease groupers and severity/morbidity indexes to predict LOS. Few models were developed specifically for explaining hospital LOS; most focused primarily on explaining resource spending and the costs associated with hospital LOS, and applied these models to hospital LOS. We found a large variation in predictive power across different LOS predictive models. The best model performance for most studies fell in the range of 0.30-0.60, approximately. The current risk adjustment methodologies for predicting LOS are still limited in terms of models, predictors, and predictive power. One possible approach to improving the performance of LOS risk adjustment models is to include more disease-specific variables, such as disease-specific or condition-specific measures, and functional measures. For this approach, however, more comprehensive and standardized data are urgently needed. In addition, statistical methods and evaluation tools more appropriate to LOS should be tested and adopted.
Energy Technology Data Exchange (ETDEWEB)
Stridsberg, Sven [BIOSYD (Sweden)
1999-10-01
The basis of the project is development work carried out by BIOSYD concerning combustion of straw in heating plants. First, we conducted combustion experiments with addition of straw in some plants burning wood fuels, mainly with good results. In the next step we worked with new techniques for handling and delivery of straw to the plants, including experiments with chopping the straw on the field, storing it in uncovered outdoor piles, and delivering it in the form of 'chips' to the heating plant. The whole cycle from cutting to combustion has been checked. The results indicate a possible price of the straw at the heating plant of approx 85 SEK/MWh, which can easily compete with wood fuels. The present project describes which adjustments of the machine equipment are needed to allow a 25% addition of straw in the fuel mix, how much these adjustments will cost, and whether they would be profitable in competition with wood fuels at 110 SEK/MWh. In total, 37 heating plants from Skaane up to Uppland have been visited and the process from fuel reception to combustion analyzed. The costs of the needed adjustments have been calculated from similar examples. The main impression from the studies is that fuel reception has too small a capacity to allow more kinds of fuel and, especially, to ensure a good mix. This is often not critical for wood fuels, but for straw a good mix must be guaranteed to obtain good combustion. Other critical points are crossings between conveyors, for example dips and feeding-out devices, which often have to be adjusted. In combustion there is a risk of sintering as well as coatings on tubes and walls. These problems must be avoided through air distribution, feedback of flue gas, and better carbon removal. In our analyses we would have preferred to judge from results of practical tests, but as this would have been too extensive, we must rely on earlier experience, transferred to the respective plants...
Model averaging for semiparametric additive partial linear models
Institute of Scientific and Technical Information of China (English)
(No author listed)
2010-01-01
To improve the prediction accuracy of semiparametric additive partial linear models (APLM) and the coverage probability of confidence intervals for the parameters of interest, we explore a focused information criterion for model selection among APLMs after estimating the nonparametric functions by polynomial spline smoothing, and we introduce a general model average estimator. The major advantage of the proposed procedures is that iterative backfitting implementation is avoided, which results in gains in computational simplicity. The resulting estimators are shown to be asymptotically normal. A simulation study and a real data analysis are presented for illustration.
Disaster Hits Home: A Model of Displaced Family Adjustment after Hurricane Katrina
Peek, Lori; Morrissey, Bridget; Marlatt, Holly
2011-01-01
The authors explored individual and family adjustment processes among parents (n = 30) and children (n = 55) who were displaced to Colorado after Hurricane Katrina. Drawing on in-depth interviews with 23 families, this article offers an inductive model of displaced family adjustment. Four stages of family adjustment are presented in the model: (a)…
A DNA based model for addition computation
Institute of Scientific and Technical Information of China (English)
GAO Lin; YANG Xiao; LIU Wenbin; XU Jin
2004-01-01
Much effort has been made to solve computing problems by using DNA, an organic simulation method which in some cases is preferable to the current electronic computer. However, no one at present has proposed an effective and applicable method to solve the addition problem with a molecular algorithm, due to the difficulty of the carry problem, which is easily solved by the hardware of an electronic computer. In this article, we solve this problem by employing two kinds of DNA strings: one is called the result-and-operation string, while the other is named the carrier. The result-and-operation string contains some carry information of its own and denotes the ultimate result, while the carrier is used only for carrying. The significance of this algorithm lies in its original coding, the fairly easy steps to follow, and its feasibility under current molecular biological technology.
Setting of Agricultural Insurance Premium Rate and the Adjustment Model
Institute of Scientific and Technical Information of China (English)
HUANG Ya-lin
2012-01-01
First, using the law of large numbers, I analyze the principles for setting agricultural insurance premium rates, taking the setting of the adult sow premium rate as a case study, and conclude that with the continuous promotion of agricultural insurance, the increase in the types of agricultural insurance, and the increase in the number of the insured, the premium rate should also be adjusted in a timely manner. Then, on the basis of Bayes' theorem, I adjust and calibrate the claim frequency and the average claim in order to correctly adjust the agricultural insurance premium rate, taking forest insurance as a case study for premium rate adjustment. In setting and adjusting agricultural insurance premium rates, in order to bring the expected results close to the real results, it is necessary to apply probability estimates over a large number of risk units and to focus on establishing an agricultural risk database, so as to adjust agricultural insurance premium rates in a timely fashion.
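The Bayesian recalibration of claim frequency described above can be sketched with a standard gamma-Poisson update, in which the posterior mean rate is a credibility-weighted blend of the prior mean and the observed claim frequency. The prior parameters, claim history, severity, and insured value below are illustrative assumptions, not figures from the paper.

```python
def posterior_claim_frequency(prior_alpha, prior_beta, yearly_claims):
    """Gamma(alpha, beta) prior on a Poisson claim rate: after observing
    n years of claim counts, the posterior mean rate is
    (alpha + total claims) / (beta + n)."""
    n = len(yearly_claims)
    return (prior_alpha + sum(yearly_claims)) / (prior_beta + n)

def premium_rate(claim_frequency, average_claim, insured_value):
    """Pure premium rate = expected loss per unit of insured value."""
    return claim_frequency * average_claim / insured_value

# Prior mean rate 2/100 = 0.02 claims per year; 10 years with 5 claims observed
# pull the rate estimate up toward the observed frequency of 0.5.
freq = posterior_claim_frequency(2.0, 100.0, [1, 0, 1, 1, 0, 0, 1, 0, 1, 0])
rate = premium_rate(freq, average_claim=50_000.0, insured_value=1_000_000.0)
```

As more policy-years accumulate, the data term dominates the prior, which is the law-of-large-numbers behavior the abstract appeals to.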
Adjustable box-wing model for solar radiation pressure impacting GPS satellites
Rodriguez-Solano, C. J.; Hugentobler, U.; Steigenberger, P.
2012-04-01
One of the major uncertainty sources affecting Global Positioning System (GPS) satellite orbits is the direct solar radiation pressure. In this paper, a new model for the solar radiation pressure on GPS satellites is presented that is based on a box-wing satellite model and assumes nominal attitude. The box-wing model is based on the physical interaction between solar radiation and satellite surfaces, and can be adjusted to fit the GPS tracking data. To compensate for the effects of solar radiation pressure, the International GNSS Service (IGS) analysis centers employ a variety of approaches, ranging from purely empirical models based on in-orbit behavior to physical models based on pre-launch spacecraft structural analysis. It has been demonstrated, however, that the physical models fail to predict the real orbit behavior with sufficient accuracy, mainly due to deviations from nominal attitude, inaccurately known optical properties, or aging of the satellite surfaces. The adjustable box-wing model presented in this paper is an intermediate approach between the physical/analytical models and the empirical models. The box-wing model fits the tracking data by adjusting mainly the optical properties of the satellite's surfaces. In addition, the so-called Y-bias and a parameter related to a rotation lag angle of the solar panels around their rotation axis (about 1.5° for Block II/IIA and 0.5° for Block IIR) are estimated. This last parameter, not previously identified for GPS satellites, is a key factor for precise orbit determination. For this study, GPS orbits are generated based on one year (2007) of tracking data, with the processing scheme derived from the Center for Orbit Determination in Europe (CODE). Two solutions are computed, one using the adjustable box-wing model and one using the CODE empirical model. Using this year of data, the estimated parameters and orbits are analyzed. The performance of the models is comparable when looking at orbit overlap and orbit...
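The surface interaction underlying a box-wing model can be sketched for a single flat plate using the standard flat-plate radiation pressure decomposition into absorbed, diffusely reflected, and specularly reflected fractions. The area, mass, and optical coefficients below are illustrative assumptions; the full box-wing model sums such contributions over the bus faces and solar panels and estimates the optical properties from tracking data.

```python
SOLAR_FLUX = 1361.0      # W/m^2 at 1 AU (nominal value; varies with solar cycle)
C_LIGHT = 299_792_458.0  # speed of light, m/s

def flat_plate_srp(area, mass, cos_theta, spec, diff):
    """Acceleration components (m/s^2) on a flat plate from solar radiation
    pressure, decomposed along the Sun-to-plate direction and the plate
    normal. The absorbed fraction is 1 - spec - diff; cos_theta is the
    cosine of the Sun incidence angle on the plate."""
    absorb = 1.0 - spec - diff
    p = SOLAR_FLUX / C_LIGHT * (area / mass) * cos_theta
    along_sun = p * (absorb + diff)                         # absorbed + diffuse push
    along_normal = p * (2.0 * spec * cos_theta + (2.0 / 3.0) * diff)
    return along_sun, along_normal

# Illustrative plate: 2 m^2 per 100 kg, Sun normal to the surface,
# 20% specular and 10% diffuse reflectivity.
along_sun, along_normal = flat_plate_srp(2.0, 100.0, 1.0, spec=0.2, diff=0.1)
```

Note the magnitude: for an area-to-mass ratio of 0.02 m^2/kg the acceleration is of order 1e-7 m/s^2, small but orbit-critical over days, which is why the optical coefficients are worth estimating from tracking data.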
Processing Approach of Non-linear Adjustment Models in the Space of Non-linear Models
Institute of Scientific and Technical Information of China (English)
LI Chaokui; ZHU Qing; SONG Chengfang
2003-01-01
This paper investigates the mathematical features of non-linear models and discusses the treatment of the non-linear factors that contribute to the non-linearity of a non-linear model. On the basis of the error definition, the paper puts forward a new adjustment criterion, SGPE. Last, the paper investigates the solution of a non-linear regression model in the non-linear model space and compares the estimated values in non-linear model space with those in linear model space.
Hyperbolic value addition and general models of animal choice.
Mazur, J E
2001-01-01
Three mathematical models of choice--the contextual-choice model (R. Grace, 1994), delay-reduction theory (N. Squires & E. Fantino, 1971), and a new model called the hyperbolic value-added model--were compared in their ability to predict the results from a wide variety of experiments with animal subjects. When supplied with 2 or 3 free parameters, all 3 models made fairly accurate predictions for a large set of experiments that used concurrent-chain procedures. One advantage of the hyperbolic value-added model is that it is derived from a simpler model that makes accurate predictions for many experiments using discrete-trial adjusting-delay procedures. Some results favor the hyperbolic value-added model and delay-reduction theory over the contextual-choice model, but more data are needed from choice situations for which the models make distinctly different predictions.
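The discounting function at the root of models in this family is Mazur's hyperbolic form V = A / (1 + K·D), where A is the reward amount, D its delay, and K a sensitivity parameter. A minimal sketch (K chosen arbitrarily here) shows the preference reversal such hyperbolic models predict when a common front-end delay is added to both options:

```python
def hyperbolic_value(amount, delay, k=1.0):
    """Mazur's hyperbolic discounting: value falls with delay as 1/(1 + k*D)."""
    return amount / (1.0 + k * delay)

def prefers_first(a1, d1, a2, d2, k=1.0):
    """True if option 1 has the higher discounted value."""
    return hyperbolic_value(a1, d1, k) > hyperbolic_value(a2, d2, k)

# Smaller-sooner (5 units at delay 1) vs. larger-later (10 units at delay 5):
now_choice = prefers_first(5, 1, 10, 5)      # smaller-sooner preferred
# Add a common front-end delay of 10 to both options:
later_choice = prefers_first(5, 11, 10, 15)  # preference reverses
```

Exponential discounting cannot produce this reversal, which is one reason hyperbolic forms fit adjusting-delay data from animal subjects better.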
Directory of Open Access Journals (Sweden)
Wararit PANICHKITKOSOLKUL
2012-09-01
Guttman and Tiao [1] and Chang [2] showed that the effect of outliers may cause serious bias in estimating autocorrelations, partial correlations, and autoregressive moving average parameters (cited in Chang et al. [3]). This paper presents a modified weighted symmetric estimator for a Gaussian first-order autoregressive AR(1) model with additive outliers. We apply the recursive median adjustment based on an exponentially weighted moving average (EWMA) to the weighted symmetric estimator of Park and Fuller [4]. We consider the following estimators: the weighted symmetric estimator, the recursive mean adjusted weighted symmetric estimator proposed by Niwitpong [5], the recursive median adjusted weighted symmetric estimator proposed by Panichkitkosolkul [6], and the weighted symmetric estimator using an adjusted recursive median based on EWMA. Using Monte Carlo simulations, we compare the mean square error (MSE) of the estimators. Simulation results show that the proposed estimator provides a lower MSE than the other three estimators in almost all situations.
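The specific estimators compared in the paper are not reproduced here; the Monte Carlo sketch below only illustrates the phenomenon motivating them, namely that additive outliers inflate the MSE of the ordinary least-squares AR(1) coefficient estimate. The sample size, contamination rate, and outlier magnitude are illustrative choices.

```python
import random

def simulate_ar1(n, phi, rng, outlier_prob=0.0, outlier_size=10.0):
    """AR(1) series y_t = phi*y_{t-1} + e_t; with probability outlier_prob
    an additive outlier of fixed size contaminates the observed value."""
    y, obs = 0.0, []
    for _ in range(n):
        y = phi * y + rng.gauss(0.0, 1.0)
        z = y + (outlier_size if rng.random() < outlier_prob else 0.0)
        obs.append(z)
    return obs

def ols_ar1(z):
    """Least-squares estimate of the AR(1) coefficient."""
    num = sum(z[t] * z[t - 1] for t in range(1, len(z)))
    den = sum(z[t - 1] ** 2 for t in range(1, len(z)))
    return num / den

def mse(phi, reps, n, rng, **kw):
    """Monte Carlo mean square error of the OLS AR(1) estimate."""
    errs = [(ols_ar1(simulate_ar1(n, phi, rng, **kw)) - phi) ** 2
            for _ in range(reps)]
    return sum(errs) / reps

rng = random.Random(7)
mse_clean = mse(0.8, reps=200, n=100, rng=rng)
mse_outliers = mse(0.8, reps=200, n=100, rng=rng,
                   outlier_prob=0.05, outlier_size=10.0)
```

Additive outliers attenuate the estimated coefficient toward zero, so mse_outliers is substantially larger than mse_clean; the weighted symmetric estimators in the paper are designed to mitigate exactly this kind of bias.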
Brix, H.; Menemenlis, D.; Hill, C.; Dutkiewicz, S.; Jahn, O.; Wang, D.; Bowman, K.; Zhang, H.
2015-11-01
exchange parameter differs by only 3% from the baseline value and has little impact (- 0.1 %) on the cost function. The particulate inorganic to organic carbon ratio was increased more than threefold and reduced the cost function by 22% relative to the baseline integration, indicating a significant influence of biology on air-sea gas exchange. The largest contribution to cost reduction (35%) comes from the adjustment of initial conditions. In addition to reducing biases relative to observations, the adjusted simulation exhibits smaller model drift than the baseline. We estimate drift by integrating the model with repeated 2009 atmospheric forcing for seven years and find a volume-weighted drift reduction of, for example, 12.5% for nitrate and 30% for oxygen in the top 300 m. Although there remain several regions with large model-data discrepancies, for example, overly strong carbon uptake in the Southern Ocean, the adjusted simulation is a first step towards a more accurate representation of the ocean carbon cycle at high spatial and temporal resolution.
Coordinate descent methods for the penalized semiparametric additive hazards model
DEFF Research Database (Denmark)
Gorst-Rasmussen, Anders; Scheike, Thomas
. The semiparametric additive hazards model is a flexible alternative which is a natural survival analogue of the standard linear regression model. Building on this analogy, we develop a cyclic coordinate descent algorithm for fitting the lasso and elastic net penalized additive hazards model. The algorithm requires...
Further Results on Dynamic Additive Hazard Rate Model
Directory of Open Access Journals (Sweden)
Zhengcheng Zhang
2014-01-01
Full Text Available In the past, the proportional and additive hazard rate models have been investigated in the literature. Nanda and Das (2011) introduced and studied the dynamic proportional (reversed) hazard rate model. In this paper we study the dynamic additive hazard rate model and investigate its aging properties for different aging classes. The closure of the model under some stochastic orders is also investigated. Examples are given to illustrate the different aging properties and stochastic comparisons of the model.
The Optimal Solution of the Model with Physical and Human Capital Adjustment Costs
Institute of Scientific and Technical Information of China (English)
RAO Lan-lan; CAI Dong-han
2004-01-01
We prove that the model with physical and human capital adjustment costs has an optimal solution when the production function exhibits increasing returns, and that the structure of the vector fields of the model changes substantially when the production function turns from decreasing to increasing returns. It is also shown that the economy improves when the coefficients of the adjustment costs become small.
A New Method for Identifying the Model Error of Adjustment System
Institute of Scientific and Technical Information of China (English)
TAO Benzao; ZHANG Chaoyu
2005-01-01
Some theoretical problems affecting parameter estimation are discussed in this paper, and the influence of, and transformation between, errors of the stochastic and functional models is pointed out. For choosing the best adjustment model, a formula for estimating and identifying the model error, different from existing methods in the literature, is proposed. On the basis of the proposed formula, an effective approach for selecting the best model of the adjustment system is given.
Rank-Defect Adjustment Model for Survey-Line Systematic Errors in Marine Survey Net
Institute of Scientific and Technical Information of China (English)
Anonymous
2002-01-01
In this paper, the structure of systematic and random errors in a marine survey net is discussed in detail and an adjustment method for the observations of the marine survey net is developed, in which the rank-defect characteristic is identified for the first time. On the basis of the survey-line systematic error model, the formulae of the rank-defect adjustment model are derived according to modern adjustment theory. An example calculation with real observed data demonstrates the efficiency of this adjustment model. Moreover, it is proved that the semi-systematic error correction method currently used in marine gravimetry in China is a special case of the adjustment model presented in this paper.
Multi-Period Model of Portfolio Investment and Adjustment Based on Hybrid Genetic Algorithm
Institute of Scientific and Technical Information of China (English)
RONG Ximin; LU Meiping; DENG Lin
2009-01-01
This paper proposes a multi-period portfolio investment model with class constraints, transaction costs, and indivisible securities. When an investor first joins the securities market, he should decide on a portfolio based on the prevailing conditions of the securities market; thereafter, he should adjust the portfolio according to market changes, whether or not the categories of risky securities change. The Markowitz mean-variance approach is applied to the multi-period portfolio selection problem. Because the sub-models are mixed integer programs whose objective functions are not unimodal and whose feasible sets have a particular structure, traditional optimization methods usually fail to find a globally optimal solution, so this paper employs a hybrid genetic algorithm to solve the problem. Investment policies that suit the financial market and are easy for investors to implement are put forward with an illustrative application.
A Comparative Study of CAPM and Seven Factors Risk Adjusted Return Model
Directory of Open Access Journals (Sweden)
Madiha Riaz Bhatti
2014-12-01
Full Text Available This study compares and contrasts the predictive powers of two asset pricing models, the CAPM and a seven-factor risk-adjusted return model, in explaining the cross section of stock returns in the financial sector listed at the Karachi Stock Exchange (KSE). To test the models, daily returns from January 2013 to February 2014 were taken and the excess returns of portfolios were regressed on the explanatory variables. The results indicate that the models are valid and applicable in the financial market of Pakistan during the period under study, as the intercepts are not significantly different from zero. It is consequently established from the findings that all the explanatory variables explain the stock returns in the financial sector of the KSE. In addition, the results show that adding more explanatory variables to the single-factor CAPM yields reasonably high values of R2. These results provide substantial support to fund managers, investors, and financial analysts in making investment decisions.
ASPECTS OF DESIGN PROCESS AND CAD MODELLING OF AN ADJUSTABLE CENTRIFUGAL COUPLING
Directory of Open Access Journals (Sweden)
Adrian BUDALĂ
2015-05-01
Full Text Available The paper deals with the constructive and functional elements of an adjustable coupling with friction shoes and adjustable driving. It also presents several stages of the design process, some advantages of using CAD software, and comparative results for the prototype versus the CAD model.
Holahan, Charles J.; And Others
1995-01-01
An integrative predictive model was applied to responses of 241 college freshmen to examine interrelationships among parental support, adaptive coping strategies, and psychological adjustment. Social support from both parents and a nonconflictual parental relationship were positively associated with adolescents' psychological adjustment. (SLD)
Modeling of an Adjustable Beam Solid State Light Project
Clark, Toni
2015-01-01
This proposal is for the development of a computational model of a prototype variable beam light source using optical modeling software, Zemax Optics Studio. The variable beam light source would be designed to generate flood, spot, and directional beam patterns, while maintaining the same average power usage. The optical model would demonstrate the possibility of such a light source and its ability to address several issues: commonality of design, human task variability, and light source design process improvements. An adaptive lighting solution that utilizes the same electronics footprint and power constraints while addressing variability of lighting needed for the range of exploration tasks can save costs and allow for the development of common avionics for lighting controls.
The relationship of values to adjustment in illness: a model for nursing practice.
Harvey, R M
1992-04-01
This paper proposes a model of the relationship between values, in particular health value, and adjustment to illness. The importance of values as well as the need for value change are described in the literature related to adjustment to physical disability and chronic illness. An empirical model, however, that explains the relationship of values to adjustment or adaptation has not been found by this researcher. Balance theory and its application to the abstract and perceived cognitions of health value and health perception are described here to explain the relationship of values like health value to outcomes associated with adjustment or adaptation to illness. The proposed model is based on the balance theories of Heider, Festinger and Feather. Hypotheses based on the model were tested and supported in a study of 100 adults with visible and invisible chronic illness. Nursing interventions based on the model are described and suggestions for further research discussed.
Mixed continuous/discrete time modelling with exact time adjustments
Rovers, K.C.; Kuper, Jan; van de Burgwal, M.D.; Kokkeler, Andre B.J.; Smit, Gerardus Johannes Maria
2011-01-01
Many systems interact with their physical environment. The design of such systems needs a modelling and simulation tool that can deal with both the continuous and discrete aspects. However, most current tools are not adequately able to do so, as they implement both continuous and discrete time signals
R.M. Solow Adjusted Model of Economic Growth
Directory of Open Access Journals (Sweden)
Ion Gh. Rosca
2007-05-01
The analysis part of the model is based on the study of the equilibrium in the continuous case, with some interpretations of the discrete one using the state diagram. The optimization problem at the economic level is also used; it is built up of a specified number of representative consumers and firms in order to reveal the interaction between these elements.
Salwen, Jessica K; O'Leary, K Daniel
2013-07-01
Four hundred fifty-three married or cohabiting couples participated in the current study. A mediational model of men's perpetration of sexual coercion within an intimate relationship was examined based on past theories and known correlates of rape and sexual coercion. The latent constructs of adjustment problems and maladaptive relational style were examined. Adjustment problem variables included perceived stress, perceived low social support, and marital discord. Maladaptive relational style variables included psychological aggression, dominance, and jealousy. Sexual coercion was a combined measure of men's reported perpetration and women's reported victimization. As hypothesized, adjustment problems significantly predicted sexual coercion. Within the mediational model, adjustment problems were significantly correlated with maladaptive relational style, and maladaptive relational style significantly predicted sexual coercion. Once maladaptive relational style was introduced as a mediator, adjustment problems no longer significantly predicted sexual coercion. Implications for treatment, limitations, and future research are discussed.
Coordinate descent methods for the penalized semiparametric additive hazards model
DEFF Research Database (Denmark)
Gorst-Rasmussen, Anders; Scheike, Thomas
2012-01-01
For survival data with a large number of explanatory variables, lasso penalized Cox regression is a popular regularization strategy. However, a penalized Cox model may not always provide the best fit to data and can be difficult to estimate in high dimension because of its intrinsic nonlinearity....... The semiparametric additive hazards model is a flexible alternative which is a natural survival analogue of the standard linear regression model. Building on this analogy, we develop a cyclic coordinate descent algorithm for fitting the lasso and elastic net penalized additive hazards model. The algorithm requires...
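The cyclic coordinate descent idea described above can be illustrated on the analogous penalized least-squares problem, where each coordinate update is a closed-form soft-thresholding step. This is a generic sketch of lasso coordinate descent, not the authors' additive-hazards implementation (function names are hypothetical; the hazards model replaces the least-squares terms with their survival-analysis counterparts):

```python
import numpy as np

def soft_threshold(z, g):
    """Soft-thresholding operator, the closed-form solution of the 1-D lasso step."""
    return np.sign(z) * max(abs(z) - g, 0.0)

def lasso_cd(X, y, lam, n_iter=200):
    """Cyclic coordinate descent for (1/2n)||y - Xb||^2 + lam*||b||_1."""
    n, p = X.shape
    b = np.zeros(p)
    col_sq = (X ** 2).sum(axis=0) / n     # per-coordinate curvature
    r = y - X @ b                          # current residual
    for _ in range(n_iter):
        for j in range(p):
            r = r + X[:, j] * b[j]         # remove coordinate j's contribution
            zj = X[:, j] @ r / n           # partial least-squares solution
            b[j] = soft_threshold(zj, lam) / col_sq[j]
            r = r - X[:, j] * b[j]         # restore residual with updated b[j]
    return b
```

Each sweep touches one coefficient at a time, which is what makes the approach attractive in high dimension: no matrix factorization is needed, and zero coefficients stay cheap to update.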
Energy Technology Data Exchange (ETDEWEB)
Poirier, M. R.; Stallings, M. E.; Burket, P.R.; Fink, S. D.
2005-11-30
The Site Deactivation and Decommissioning (SDD) Organization is evaluating options to disposition the 800 underground tanks (including removal of the sludge heels from these tanks). To support this effort, SDD requested assistance from Savannah River National Laboratory (SRNL) personnel to examine the composition and flow characteristics of the Tank 804 sludge slurry after diluting it 10:1 with water, adding manganese nitrate to produce a slurry containing 5.5 wt % manganese (40:1 ratio of Mn:Pu), and adding sufficient 8 M caustic to raise the pH to 7, 10, and 14. Researchers prepared slurries containing one part Tank 804 sludge and 10 parts water. The water contained 5.5 wt % manganese (which SDD will add to poison the plutonium in Tank 804) and was pH adjusted to 3, 7, 10, or 14. They hand mixed (i.e., shook) these slurries and allowed them to sit overnight. With the pH 3, 7, and 10 slurries, much of the sludge remained stuck to the container wall. With the pH 14 slurry, most of the sludge appeared to be suspended in the slurry. They collected samples from the top and bottom of each container, which were analyzed for plutonium, manganese, and organic constituents. Following sampling, they placed the remaining material into a viscometer and measured the relationship between applied shear stress and shear rate. The pH 14 slurry was placed in a spiral "race track" apparatus and allowed to gravity drain.
An Additive-Multiplicative Restricted Mean Residual Life Model
DEFF Research Database (Denmark)
Mansourvar, Zahra; Martinussen, Torben; Scheike, Thomas H.
2016-01-01
mean residual life model to study the association between the restricted mean residual life function and potential regression covariates in the presence of right censoring. This model extends the proportional mean residual life model using an additive model as its covariate dependent baseline....... For the suggested model, some covariate effects are allowed to be time-varying. To estimate the model parameters, martingale estimating equations are developed, and the large sample properties of the resulting estimators are established. In addition, to assess the adequacy of the model, we investigate a goodness...... of fit test that is asymptotically justified. The proposed methodology is evaluated via simulation studies and further applied to a kidney cancer data set collected from a clinical trial....
Comprehensive European dietary exposure model (CEDEM) for food additives.
Tennant, David R
2016-05-01
European methods for assessing dietary exposures to nutrients, additives and other substances in food are limited by the availability of detailed food consumption data for all member states. A proposed comprehensive European dietary exposure model (CEDEM) applies summary data published by the European Food Safety Authority (EFSA) in a deterministic model based on an algorithm from the EFSA intake method for food additives. The proposed approach can predict estimates of food additive exposure provided in previous EFSA scientific opinions that were based on the full European food consumption database.
Parametric Adjustments to the Rankine Vortex Wind Model for Gulf of Mexico Hurricanes
2012-11-01
Rankine Vortex (RV) model [25], the SLOSH model [28], the Holland model [29], the vortex simulation model [30], and the Willoughby and Rahn model [31] ... where Pn = Pc - 0.69 + 1.33Vm + 0.11u (3). Willoughby et al. [34] provide an alternative formula to estimate Rm as a function of ... MacAfee and Pearson [26] and Willoughby et al. [34] also made adjustments tailored for mid-latitude applications. 3 Adjustments to the RV
Procedures for adjusting regional regression models of urban-runoff quality using local data
Hoos, A.B.; Sisolak, J.K.
1993-01-01
Statistical operations termed model-adjustment procedures (MAPs) can be used to incorporate local data into existing regression models to improve the prediction of urban-runoff quality. Each MAP is a form of regression analysis in which the local data base is used as a calibration data set. Regression coefficients are determined from the local data base, and the resulting 'adjusted' regression models can then be used to predict storm-runoff quality at unmonitored sites. The response variable in the regression analyses is the observed load or mean concentration of a constituent in storm runoff for a single storm. The set of explanatory variables used in the regression analyses is different for each MAP, but always includes the predicted value of load or mean concentration from a regional regression model. The four MAPs examined in this study were: single-factor regression against the regional model prediction P (termed MAP-1F-P), regression against P (termed MAP-R-P), regression against P and additional local variables (termed MAP-R-P+nV), and a weighted combination of P and a local-regression prediction (termed MAP-W). The procedures were tested by means of split-sample analysis, using data from three cities included in the Nationwide Urban Runoff Program: Denver, Colorado; Bellevue, Washington; and Knoxville, Tennessee. The MAP that provided the greatest predictive accuracy for the verification data set differed among the three test data bases and among model types (MAP-W for Denver and Knoxville, MAP-1F-P and MAP-R-P for Bellevue load models, and MAP-R-P+nV for Bellevue concentration models) and, in many cases, was not clearly indicated by the values of standard error of estimate for the calibration data set. A scheme to guide MAP selection, based on exploratory data analysis of the calibration data set, is presented and tested. The MAPs were tested for sensitivity to the size of a calibration data set. As expected, predictive accuracy of all MAPs for
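The simplest procedure described above, single-factor regression against the regional model prediction, amounts to calibrating a straight line from local observations to the regional estimate. A minimal sketch under that reading follows (function name hypothetical; the study's actual MAPs involve further explanatory variables and weighting):

```python
import numpy as np

def map_single_factor(P_regional, observed):
    """Fit observed = b0 + b1 * P_regional by least squares on the local
    calibration data, and return an adjusted predictor for new sites."""
    P_regional = np.asarray(P_regional, float)
    observed = np.asarray(observed, float)
    A = np.column_stack([np.ones_like(P_regional), P_regional])
    b0, b1 = np.linalg.lstsq(A, observed, rcond=None)[0]
    return lambda P_new: b0 + b1 * np.asarray(P_new, float)
```

The returned function plays the role of the 'adjusted' regression model: it maps a regional prediction at an unmonitored site to a locally calibrated estimate.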
Additive Intensity Regression Models in Corporate Default Analysis
DEFF Research Database (Denmark)
Lando, David; Medhat, Mamdouh; Nielsen, Mads Stenbo
2013-01-01
We consider additive intensity (Aalen) models as an alternative to the multiplicative intensity (Cox) models for analyzing the default risk of a sample of rated, nonfinancial U.S. firms. The setting allows for estimating and testing the significance of time-varying effects. We use a variety of mo...
On the compensation between cloud feedback and cloud adjustment in climate models
Chung, Eui-Seok; Soden, Brian J.
2017-04-01
Intermodel compensation between cloud feedback and rapid cloud adjustment has important implications for the range of model-inferred climate sensitivity. Although this negative intermodel correlation exists in both realistic (e.g., coupled ocean-atmosphere models) and idealized (e.g., aqua-planet) model configurations, the compensation appears to be stronger in the latter. The cause of the compensation between feedback and adjustment, and its dependence on model configuration remain poorly understood. In this study, we examine the characteristics of the cloud feedback and adjustment in model simulations with differing complexity, and analyze the causes responsible for their compensation. We show that in all model configurations, the intermodel compensation between cloud feedback and cloud adjustment largely results from offsetting changes in marine boundary-layer clouds. The greater prevalence of these cloud types in aqua-planet models is a likely contributor to the larger correlation between feedback and adjustment in those configurations. It is also shown that differing circulation changes in the aqua-planet configuration of some models act to amplify the intermodel range and sensitivity of the cloud radiative response by about a factor of 2.
Study on the adjustable production function model (论生产函数调整模型)
Institute of Scientific and Technical Information of China (English)
葛新权
2003-01-01
The Cobb-Douglas production function is a frequently used nonlinear model that can be transformed into a linear one. The reasonableness of this logarithmic linearization has rarely been doubted. On the basis of deeper analysis, this paper puts forward a new proposition that the linearization has a defect; hence an adjustable production function model is proposed to eliminate it.
DEFF Research Database (Denmark)
Cichon, Bernardette; Ritz, Christian; Fabiansen, Christian
2017-01-01
measured in serum. Generalized additive, quadratic, and linear models were used to model the relation between SF and sTfR as outcomes and CRP and AGP as categorical variables (model 1; equivalent to the CF approach), CRP and AGP as continuous variables (model 2), or CRP and AGP as continuous variables......: Crossvalidation revealed no advantage to using generalized additive or quadratic models over linear models in terms of the RMSE. Linear model 3 performed better than models 2 and 1. Furthermore, we found no difference in CFs for adjusting SF and those from a previous meta-analysis. Adjustment of SF and s...... of inflammation into account. In clinical settings, the CF approach may be more practical. There is no benefit from adjusting sTfR. This trial was registered at www.controlled-trials.com as ISRCTN42569496....
Directory of Open Access Journals (Sweden)
Michele S Youngleson
Full Text Available BACKGROUND: Health systems that deliver prevention of mother to child transmission (PMTCT services in low and middle income countries continue to underperform, resulting in thousands of unnecessary HIV infections of newborns each year. We used a combination of approaches to health systems strengthening to reduce transmission of HIV from mother to infant in a multi-facility public health system in South Africa. METHODOLOGY/PRINCIPAL FINDINGS: All primary care sites and specialized birthing centers in a resource constrained sub-district of Cape Metro District, South Africa, were enrolled in a quality improvement (QI programme. All pregnant women receiving antenatal, intrapartum and postnatal infant care in the sub-district between January 2006 and March 2009 were included in the intervention that had a prototype-innovation phase and a rapid spread phase. System changes were introduced to help frontline healthcare workers to identify and improve performance gaps at each step of the PMTCT pathway. Improvement was facilitated and spread through the use of a Breakthrough Series Collaborative that accelerated learning and the spread of successful changes. Protocol changes and additional resources were introduced by provincial and municipal government. The proportion of HIV-exposed infants testing positive declined from 7.6% to 5%. Key intermediate PMTCT processes improved (antenatal AZT increased from 74% to 86%, PMTCT clients on HAART at the time of labour increased from 10% to 25%, intrapartum AZT increased from 43% to 84%, and postnatal HIV testing from 79% to 95% compared to baseline. CONCLUSIONS/SIGNIFICANCE: System improvement methods, protocol changes and addition/reallocation of resources contributed to improved PMTCT processes and outcomes in a resource constrained setting. The intervention requires a clear design, leadership buy-in, building local capacity to use systems improvement methods, and a reliable data system. A systems improvement
Modeling and Control of the Redundant Parallel Adjustment Mechanism on a Deployable Antenna Panel.
Tian, Lili; Bao, Hong; Wang, Meng; Duan, Xuechao
2016-10-01
With the aim of developing multiple input and multiple output (MIMO) coupling systems with a redundant parallel adjustment mechanism on the deployable antenna panel, a structural control integrated design methodology is proposed in this paper. Firstly, the modal information from the finite element model of the structure of the antenna panel is extracted, and then the mathematical model is established with the Hamilton principle; Secondly, the discrete Linear Quadratic Regulator (LQR) controller is added to the model in order to control the actuators and adjust the shape of the panel. Finally, the engineering practicality of the modeling and control method based on finite element analysis simulation is verified.
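The discrete LQR controller mentioned above can be sketched by iterating the discrete-time Riccati equation to a fixed point and reading off the state-feedback gain. This is a generic illustration, not the authors' antenna-panel controller (the function name and iteration count are hypothetical):

```python
import numpy as np

def dlqr(A, B, Q, R, iters=500):
    """Discrete LQR gain K for x' = A x + B u minimizing sum(x'Qx + u'Ru),
    computed by fixed-point iteration of the Riccati difference equation."""
    P = np.asarray(Q, float).copy()
    for _ in range(iters):
        BT_P = B.T @ P
        # K = (R + B'PB)^{-1} B'PA
        K = np.linalg.solve(R + BT_P @ B, BT_P @ A)
        # Riccati update in the compact form P <- Q + A'P(A - BK)
        P = Q + A.T @ P @ (A - B @ K)
    return K
```

Applying the control law u = -K x to a stabilizable pair (A, B) places the closed-loop eigenvalues of A - BK inside the unit circle, which is the property the shape-adjustment actuators rely on.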
Steps in the construction and verification of an explanatory model of psychosocial adjustment
Directory of Open Access Journals (Sweden)
Arantzazu Rodríguez-Fernández
2016-06-01
Full Text Available The aim of the present study was to empirically test an explanatory model of psychosocial adjustment during adolescence, with psychosocial adjustment during this stage being understood as a combination of school adjustment (or school engagement) and subjective well-being. According to the hypothesized model, psychosocial adjustment depends on self-concept and resilience, which in turn act as mediators of the influence of perceived social support (from family, peers, and teachers) on this adjustment. Participants were 1250 secondary school students (638 girls and 612 boys) aged between 12 and 15 years (Mean = 13.72; SD = 1.09). The results provided evidence of: (a) the influence of all three types of perceived support on subjects' resilience and self-concept, with perceived family support being particularly important in this respect; (b) the influence of support received from teachers on school adjustment and of support received from the family on psychological well-being; and (c) the absence of any direct influence of peer support on psychosocial adjustment, although indirect influence was observed through the psychological variables studied. These results are discussed from an educational perspective and in terms of future research.
Contact Angle Adjustment in Equation of States Based Pseudo-Potential Model
Hu, Anjie; Uddin, Rizwan
2015-01-01
The single-component pseudo-potential lattice Boltzmann model has been widely applied in multiphase simulation due to its simplicity and stability. Many studies have claimed that this model can be stable for density ratios larger than 1000; however, its application is still limited to small density ratios when the contact angle is considered. The reason is that the original contact angle adjustment method affects the stability of the model. Moreover, simulation results in the present work show that, with the original adjustment method, the density distribution near the wall is artificially changed and the contact angle depends on the surface tension. Hence, it is very inconvenient to apply this method with a fixed contact angle, and the accuracy of the model cannot be guaranteed. To solve these problems, a contact angle adjustment method based on geometric analysis is proposed and numerically compared with the original method. Simulation results show that, with the new...
Energy Technology Data Exchange (ETDEWEB)
Moreno y Moreno, A. [Departamento de Apoyo en Ciencias Aplicadas, Benemerita Universidad Autonoma de Puebla, 4 Sur 104, Centro Historico, 72000 Puebla (Mexico); Moreno B, A. [Facultad de Ciencias Quimicas, UNAM, 04510 Mexico D.F. (Mexico)
2002-07-01
This model adjusts the experimental thermoluminescence results according to the equation I(T) = sum_i a_i * exp(-(T - c_i)^2 / b_i), where a_i, b_i, c_i are the parameters of the i-th peak, each adjusted to a Gaussian curve. The curve adjustment can be performed manually or analytically using the macro function and the solver.xla add-in previously installed in the computational system. This work shows: 1. Experimental data from a LiF curve obtained from the Physics Institute of UNAM, with the data adjustment model operated as a macro. 2. A four-peak LiF curve from Harshaw data simulated in Microsoft Excel, discussed in previous works, as a non-macro reference. (Author)
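A glow-curve model of this kind can be sketched as a sum of Gaussian peaks plus the misfit function a solver would minimize. This assumes the Gaussian peak form named in the abstract (the record's garbled equation leaves the exact parameterization uncertain, and the function names here are hypothetical):

```python
import numpy as np

def glow_curve(T, peaks):
    """Sum of Gaussian peaks: I(T) = sum_i a_i * exp(-(T - c_i)**2 / b_i).
    `peaks` is a list of (a_i, b_i, c_i) tuples, one per peak."""
    T = np.asarray(T, float)
    return sum(a * np.exp(-((T - c) ** 2) / b) for a, b, c in peaks)

def sse(T, I_obs, peaks):
    """Sum-of-squares misfit a solver (e.g. Excel's Solver) would minimize
    when adjusting the peak parameters to experimental data."""
    return float(((glow_curve(T, peaks) - np.asarray(I_obs, float)) ** 2).sum())
```

Each peak's amplitude a_i is attained at its center c_i, and b_i controls the peak width; fitting means searching (a_i, b_i, c_i) to drive the misfit toward zero.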
Milly, P.C.D.; Dunne, K.A.
2011-01-01
Hydrologic models often are applied to adjust projections of hydroclimatic change that come from climate models. Such adjustment includes climate-bias correction, spatial refinement ("downscaling"), and consideration of the roles of hydrologic processes that were neglected in the climate model. Described herein is a quantitative analysis of the effects of hydrologic adjustment on the projections of runoff change associated with projected twenty-first-century climate change. In a case study including three climate models and 10 river basins in the contiguous United States, the authors find that relative (i.e., fractional or percentage) runoff change computed with hydrologic adjustment more often than not was less positive (or, equivalently, more negative) than what was projected by the climate models. The dominant contributor to this decrease in runoff was a ubiquitous change in runoff (median -11%) caused by the hydrologic model's apparent amplification of the climate-model-implied growth in potential evapotranspiration. Analysis suggests that the hydrologic model, on the basis of the empirical, temperature-based modified Jensen-Haise formula, calculates a change in potential evapotranspiration that is typically 3 times the change implied by the climate models, which explicitly track surface energy budgets. In comparison with the amplification of potential evapotranspiration, central tendencies of other contributions from hydrologic adjustment (spatial refinement, climate-bias adjustment, and process refinement) were relatively small. The authors' findings highlight the need for caution when projecting changes in potential evapotranspiration for use in hydrologic models or drought indices to evaluate climate-change impacts on water. Copyright 2011, Paper 15-001; 35,952 words, 3 Figures, 0 Animations, 1 Table.
Using Set Model for Learning Addition of Integers
Directory of Open Access Journals (Sweden)
Umi Puji Lestari
2015-07-01
Full Text Available This study investigates how a set model can help fourth-grade students' understanding of addition of integers. The study was carried out with 23 students and a teacher of class IVC at SD Iba Palembang in January 2015. It is a design research study that uses PMRI as the underlying design context and activity. Results showed that the use of set models, packaged in an activity of recording financial transactions with two-color chips and a card game, can help students understand the concept of the zero pair, addition with same-colored chips, and the cancellation strategy.
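The chip model described above has a direct computational reading: represent an integer as a pile of positive and negative chips, cancel zero pairs (one positive chip with one negative chip), and read off what remains. A minimal sketch (function name hypothetical):

```python
def add_with_chips(a, b):
    """Add two integers with the two-color chip model: pool the chips,
    cancel zero pairs, and count what is left."""
    pos = max(a, 0) + max(b, 0)      # positive (e.g. yellow) chips
    neg = max(-a, 0) + max(-b, 0)    # negative (e.g. red) chips
    pairs = min(pos, neg)            # each zero pair cancels out
    pos -= pairs
    neg -= pairs
    return pos - neg                 # at most one color remains
```

For example, 3 + (-5) pools 3 positive and 5 negative chips; cancelling 3 zero pairs leaves 2 negative chips, i.e. -2, which mirrors the classroom cancellation strategy.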
Process chain modeling and selection in an additive manufacturing context
DEFF Research Database (Denmark)
Thompson, Mary Kathryn; Stolfi, Alessandro; Mischkot, Michael
2016-01-01
This paper introduces a new two-dimensional approach to modeling manufacturing process chains. This approach is used to consider the role of additive manufacturing technologies in process chains for a part with micro scale features and no internal geometry. It is shown that additive manufacturing ... can compete with traditional process chains for small production runs. Combining both types of technology added cost but no benefit in this case. The new process chain model can be used to explain the results and support process selection, but process chain prototyping is still important for rapidly......
Electroacoustics modeling of piezoelectric welders for ultrasonic additive manufacturing processes
Hehr, Adam; Dapino, Marcelo J.
2016-04-01
Ultrasonic additive manufacturing (UAM) is a recent 3D metal printing technology which utilizes ultrasonic vibrations from high power piezoelectric transducers to additively weld similar and dissimilar metal foils. CNC machining is used intermittently with welding to create internal channels, embed temperature sensitive components, sensors, and materials, and for net shaping parts. Structural dynamics of the welder and work piece influence the performance of the welder and part quality. To understand the impact of structural dynamics on UAM, a linear time-invariant (LTI) model is used to relate system shear force and electric current inputs to the system outputs of welder velocity and voltage. Frequency response measurements are combined with in-situ operating measurements of the welder to identify model parameters and to verify model assumptions. The proposed LTI model can enhance process consistency, performance, and guide the development of improved quality monitoring and control strategies.
Praskievicz, Sarah; Bartlein, Patrick
2014-09-01
An emerging approach to downscaling the projections from General Circulation Models (GCMs) to scales relevant for basin hydrology is to use output of GCMs to force higher-resolution Regional Climate Models (RCMs). With spatial resolution often in the tens of kilometers, however, even RCM output will likely fail to resolve local topography that may be climatically significant in high-relief basins. Here we develop and apply an approach for downscaling RCM output using local topographic lapse rates (empirically-estimated spatially and seasonally variable changes in climate variables with elevation). We calculate monthly local topographic lapse rates from the 800-m Parameter-elevation Regressions on Independent Slopes Model (PRISM) dataset, which is based on regressions of observed climate against topographic variables. We then use these lapse rates to elevationally correct two sources of regional climate-model output: (1) the North American Regional Reanalysis (NARR), a retrospective dataset produced from a regional forecasting model constrained by observations, and (2) a range of baseline climate scenarios from the North American Regional Climate Change Assessment Program (NARCCAP), which is produced by a series of RCMs driven by GCMs. By running a calibrated and validated hydrologic model, the Soil and Water Assessment Tool (SWAT), using observed station data and elevationally-adjusted NARR and NARCCAP output, we are able to estimate the sensitivity of hydrologic modeling to the source of the input climate data. Topographic correction of regional climate-model data is a promising method for modeling the hydrology of mountainous basins for which no weather station datasets are available or for simulating hydrology under past or future climates.
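The core elevational-correction step in this abstract can be sketched as follows (an illustrative toy with made-up station values; the actual study uses monthly, spatially variable lapse rates derived from the 800-m PRISM dataset):

```python
# Hedged sketch of topographic lapse-rate downscaling: estimate a local
# lapse rate by regressing temperature on elevation, then shift the coarse
# climate-model value to the target elevation. Values are illustrative.

def lapse_rate(temps, elevs):
    """Least-squares slope of temperature vs. elevation (degC per m)."""
    n = len(temps)
    me, mt = sum(elevs) / n, sum(temps) / n
    num = sum((e - me) * (t - mt) for e, t in zip(elevs, temps))
    den = sum((e - me) ** 2 for e in elevs)
    return num / den

def downscale(t_model, elev_model, elev_target, rate):
    """Shift the model temperature from grid-cell to target elevation."""
    return t_model + rate * (elev_target - elev_model)

# Toy station data: temperature falls 6.5 degC per km of elevation.
elevs = [200.0, 800.0, 1400.0, 2000.0]
temps = [15.0, 11.1, 7.2, 3.3]
r = lapse_rate(temps, elevs)                      # -0.0065 degC per m
print(round(downscale(10.0, 1000.0, 1600.0, r), 2))  # -> 6.1
```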
Evaluation of the Stress Adjustment and Adaptation Model among Families Reporting Economic Pressure
Vandsburger, Etty; Biggerstaff, Marilyn A.
2004-01-01
This research evaluates the Stress Adjustment and Adaptation Model (double ABCX model), examining the effects of resiliency resources on family functioning when families experience economic pressure. Families (N = 128) with incomes at or below the poverty line from a rural area of a southern state completed measures of perceived economic pressure,…
A Model of Divorce Adjustment for Use in Family Service Agencies.
Faust, Ruth Griffith
1987-01-01
Presents a combined educationally and therapeutically oriented model of treatment to (1) control and lessen disruptive experiences associated with divorce; (2) enable individuals to improve their skill in coping with adjustment reactions to divorce; and (3) modify the pressures and response of single parenthood. Describes the model's four-session…
Modeling Quality-Adjusted Life Expectancy Loss Resulting from Tobacco Use in the United States
Kaplan, Robert M.; Anderson, John P.; Kaplan, Cameron M.
2007-01-01
Purpose: To describe the development of a model for estimating the effects of tobacco use upon Quality Adjusted Life Years (QALYs) and to estimate the impact of tobacco use on health outcomes for the United States (US) population using the model. Method: We obtained estimates of tobacco consumption from 6 years of the National Health Interview…
Grbac, Zorana; Scherer, Matthias; Zagst, Rudi
2016-01-01
This book presents 20 peer-reviewed chapters on current aspects of derivatives markets and derivative pricing. The contributions, written by leading researchers in the field as well as experienced authors from the financial industry, present the state of the art in: • Modeling counterparty credit risk: credit valuation adjustment, debit valuation adjustment, funding valuation adjustment, and wrong way risk. • Pricing and hedging in fixed-income markets and multi-curve interest-rate modeling. • Recent developments concerning contingent convertible bonds, the measuring of basis spreads, and the modeling of implied correlations. The recent financial crisis has cast tremendous doubts on the classical view on derivative pricing. Now, counterparty credit risk and liquidity issues are integral aspects of a prudent valuation procedure and the reference interest rates are represented by a multitude of curves according to their different periods and maturities. A panel discussion included in the book (featuring D...
Adjusting a cancer mortality-prediction model for disease status-related eligibility criteria
Directory of Open Access Journals (Sweden)
Kimmel Marek
2011-05-01
Full Text Available Abstract Background Volunteering participants in disease studies tend to be healthier than the general population partially due to specific enrollment criteria. Using modeling to accurately predict outcomes of cohort studies enrolling volunteers requires adjusting for the bias introduced in this way. Here we propose a new method to account for the effect of a specific form of healthy volunteer bias resulting from imposing disease status-related eligibility criteria, on disease-specific mortality, by explicitly modeling the length of the time interval between the moment when the subject becomes ineligible for the study, and the outcome. Methods Using survival time data from 1190 newly diagnosed lung cancer patients at MD Anderson Cancer Center, we model the time from clinical lung cancer diagnosis to death using an exponential distribution to approximate the length of this interval for a study where lung cancer death serves as the outcome. Incorporating this interval into our previously developed lung cancer risk model, we adjust for the effect of disease status-related eligibility criteria in predicting the number of lung cancer deaths in the control arm of CARET. The effect of the adjustment using the MD Anderson-derived approximation is compared to that based on SEER data. Results Using the adjustment developed in conjunction with our existing lung cancer model, we are able to accurately predict the number of lung cancer deaths observed in the control arm of CARET. Conclusions The resulting adjustment was accurate in predicting the lower rates of disease observed in the early years while still maintaining reasonable prediction ability in the later years of the trial. This method could be used to adjust for, or predict the duration and relative effect of any possible biases related to disease-specific eligibility criteria in modeling studies of volunteer-based cohorts.
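The exponential approximation at the heart of the adjustment can be sketched like this (the mean interval below is a placeholder, not the MD Anderson estimate):

```python
import math

# Hedged sketch of the paper's key ingredient: approximate the interval from
# clinical diagnosis to death with an exponential distribution. mean_interval
# is a hypothetical value for illustration only.

def death_within(t, mean_interval):
    """P(death occurs within t years of diagnosis), Exp(rate = 1/mean)."""
    return 1.0 - math.exp(-t / mean_interval)

# With a short mean diagnosis-to-death interval, most deaths from cancers
# present (but screened out) at enrollment fall in the early study years,
# which is why disease-status eligibility criteria depress early mortality.
print(round(death_within(1.0, 1.5), 3))  # -> 0.487
```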
Single-Index Additive Vector Autoregressive Time Series Models
LI, YEHUA
2009-09-01
We study a new class of nonlinear autoregressive models for vector time series, where the current vector depends on single-indexes defined on the past lags and the effects of different lags have an additive form. A sufficient condition is provided for stationarity of such models. We also study estimation of the proposed model using P-splines, hypothesis testing, asymptotics, selection of the order of the autoregression and of the smoothing parameters and nonlinear forecasting. We perform simulation experiments to evaluate our model in various settings. We illustrate our methodology on a climate data set and show that our model provides more accurate yearly forecasts of the El Niño phenomenon, the unusual warming of water in the Pacific Ocean. © 2009 Board of the Foundation of the Scandinavian Journal of Statistics.
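A minimal simulation of this model class might look as follows (the index vectors and link functions are invented for illustration; the paper estimates them with P-splines):

```python
import math
import random

# Illustrative generator for a single-index additive vector AR(2) process:
# each lag enters through a single index theta_l' x_{t-l}, and the lag
# effects combine additively. All parameters here are made up for the demo.

def simulate(n, seed=0):
    random.seed(seed)
    x = [[0.0, 0.0], [0.0, 0.0]]              # two starting vectors
    theta1, theta2 = [0.6, -0.4], [0.3, 0.2]  # hypothetical index vectors
    for t in range(2, n):
        idx1 = sum(a * b for a, b in zip(theta1, x[t - 1]))
        idx2 = sum(a * b for a, b in zip(theta2, x[t - 2]))
        mean = math.tanh(idx1) + 0.5 * math.sin(idx2)  # additive lag effects
        x.append([mean + random.gauss(0, 0.1) for _ in range(2)])
    return x

series = simulate(200)
print(len(series), len(series[0]))  # prints: 200 2
```

The bounded link functions keep the simulated series stable, loosely mirroring the stationarity condition discussed in the abstract.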
Validation of transport models using additive flux minimization technique
Energy Technology Data Exchange (ETDEWEB)
Pankin, A. Y.; Kruger, S. E. [Tech-X Corporation, 5621 Arapahoe Ave., Boulder, Colorado 80303 (United States); Groebner, R. J. [General Atomics, San Diego, California 92121 (United States); Hakim, A. [Princeton Plasma Physics Laboratory, Princeton, New Jersey 08543-0451 (United States); Kritz, A. H.; Rafiq, T. [Department of Physics, Lehigh University, Bethlehem, Pennsylvania 18015 (United States)
2013-10-15
A new additive flux minimization technique is proposed for carrying out the verification and validation (V and V) of anomalous transport models. In this approach, the plasma profiles are computed in time dependent predictive simulations in which an additional effective diffusivity is varied. The goal is to obtain an optimal match between the computed and experimental profile. This new technique has several advantages over traditional V and V methods for transport models in tokamaks and takes advantage of uncertainty quantification methods developed by the applied math community. As a demonstration of its efficiency, the technique is applied to the hypothesis that the paleoclassical density transport dominates in the plasma edge region in DIII-D tokamak discharges. A simplified version of the paleoclassical model that utilizes the Spitzer resistivity for the parallel neoclassical resistivity and neglects the trapped particle effects is tested in this paper. It is shown that a contribution to density transport, in addition to the paleoclassical density transport, is needed in order to describe the experimental profiles. It is found that more additional diffusivity is needed at the top of the H-mode pedestal, and almost no additional diffusivity is needed at the pedestal bottom. The implementation of this V and V technique uses the FACETS::Core transport solver and the DAKOTA toolkit for design optimization and uncertainty quantification. The FACETS::Core solver is used for advancing the plasma density profiles. The DAKOTA toolkit is used for the optimization of plasma profiles and the computation of the additional diffusivity that is required for the predicted density profile to match the experimental profile.
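Stripped of the FACETS/DAKOTA machinery, the optimization idea reduces to choosing the additional diffusivity that best reproduces an observed profile; here is a toy sketch under a steady-state Fick's-law assumption (all numbers illustrative):

```python
# Toy sketch of the additive-flux-minimization idea (not the FACETS/DAKOTA
# implementation): scan candidate additional diffusivities and keep the one
# whose predicted profile gradient best matches the observed one.

def predicted_gradient(flux, d_model, d_add):
    """Steady-state Fick's law: flux = -(D_model + D_add) * grad(n)."""
    return -flux / (d_model + d_add)

def best_d_add(flux, d_model, observed_grad, candidates):
    return min(candidates,
               key=lambda d: (predicted_gradient(flux, d_model, d)
                              - observed_grad) ** 2)

# The model diffusivity alone over-steepens the profile, so a nonzero
# additional diffusivity is selected (cf. the pedestal-top finding).
print(best_d_add(2.0, 1.0, -1.0, [0.0, 0.5, 1.0, 1.5, 2.0]))  # -> 1.0
```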
Contact angle adjustment in equation-of-state-based pseudopotential model.
Hu, Anjie; Li, Longjian; Uddin, Rizwan; Liu, Dong
2016-05-01
The single component pseudopotential lattice Boltzmann model has been widely applied in multiphase simulation due to its simplicity and stability. In many studies, it has been claimed that this model can be stable for density ratios larger than 1000. However, the application of the model is still limited to small density ratios when the contact angle is considered. The reason is that the original contact angle adjustment method influences the stability of the model. Moreover, simulation results in the present work show that, by applying the original contact angle adjustment method, the density distribution near the wall is artificially changed, and the contact angle is dependent on the surface tension. Hence, it is very inconvenient to apply this method with a fixed contact angle, and the accuracy of the model cannot be guaranteed. To solve these problems, a contact angle adjustment method based on the geometry analysis is proposed and numerically compared with the original method. Simulation results show that, with our contact angle adjustment method, the stability of the model is highly improved when the density ratio is relatively large, and it is independent of the surface tension.
Just Another Gibbs Additive Modeler: Interfacing JAGS and mgcv
Directory of Open Access Journals (Sweden)
Simon N. Wood
2016-12-01
Full Text Available The BUGS language offers a very flexible way of specifying complex statistical models for the purposes of Gibbs sampling, while its JAGS variant offers very convenient R integration via the rjags package. However, including smoothers in JAGS models can involve some quite tedious coding, especially for multivariate or adaptive smoothers. Further, if an additive smooth structure is required then some care is needed, in order to centre smooths appropriately, and to find appropriate starting values. R package mgcv implements a wide range of smoothers, all in a manner appropriate for inclusion in JAGS code, and automates centring and other smooth setup tasks. The purpose of this note is to describe an interface between mgcv and JAGS, based around an R function, jagam, which takes a generalized additive model (GAM as specified in mgcv and automatically generates the JAGS model code and data required for inference about the model via Gibbs sampling. Although the auto-generated JAGS code can be run as is, the expectation is that the user would wish to modify it in order to add complex stochastic model components readily specified in JAGS. A simple interface is also provided for visualisation and further inference about the estimated smooth components using standard mgcv functionality. The methods described here will be unnecessarily inefficient if all that is required is fully Bayesian inference about a standard GAM, rather than the full flexibility of JAGS. In that case the BayesX package would be more efficient.
Modeling the influence of limestone addition on cement hydration
Directory of Open Access Journals (Sweden)
Ashraf Ragab Mohamed
2015-03-01
Full Text Available This paper addresses the influence of using Portland limestone cement "PLC" on cement hydration by characterization of its microstructure development. The European Standard EN 197-1:2011 and Egyptian specification ESS 4756-1/2009 permit the cement to contain up to 20% ground limestone. Computational tools assist in better understanding the influence of limestone additions on cement hydration and microstructure development, facilitating the acceptance of these more economical and ecological materials. The μic model has been developed to enable the modeling of microstructural evolution of cementitious materials. In this research, the μic model is used to simulate the influence of limestone as a fine filler, providing additional surfaces for the nucleation and growth of hydration products. Limestone powder also reacts relatively slowly with hydrating cement to form the monocarboaluminate (AFmc) phase, similar to the monosulfoaluminate (AFm) phase formed in ordinary Portland cement. The model results reveal that limestone cement has an accelerated cement hydration rate; previous experimental results and the computer model "cemhyd3d" are used to validate this model.
Qiao, Jun; Zhang, Jiahua; Zhang, Xia; Hao, Zhendong; Liu, Yongfu; Pan, Guohui
2014-05-01
In this Letter, we report the addition of Pr3+ and Mg2+ to CSS:Ce3+, Mn2+ phosphor for improving the performance of white light-emitting diodes (LEDs). The additional trivalent Pr3+ occupies the Ca2+ site in this host, as Ce3+ does; its concentration can be enhanced by the addition of Mg2+ at the Sc3+ site, because the substitution of Mg2+ for Sc3+ compensates the charge mismatch between Pr3+ and Ca2+. Based on the efficient Ce3+→Pr3+ and Mn2+→Pr3+ energy transfers (ETs) and the compensation effect of Mg2+, the additional Pr3+ in our present phosphors exhibits an intense red emission around 610 nm, which is significant for enhancing the color rendering property. In addition, we also find that the additional Mg2+ at the Sc3+ site can markedly adjust the photoluminescence (PL) spectrum shape of our phosphor by controlling the distribution of Mn2+ at the Ca2+ and Sc3+ sites. A new tunable full-color emission is obtained via the ETs (Ce3+→Mn2+, Ce3+→Pr3+ and Mn2+→Pr3+) and the adjusting effect of Mg2+ in our present phosphors. Finally, a white LED with a higher color rendering index of 90, lower correlated color temperature of 4980 K, and chromaticity coordinates of (0.34, 0.31) was obtained by combining the single CSS:0.08Ce3+, 0.01Pr3+, 0.3Mn2+, 0.2Mg2+ phosphor with a blue-emitting InGaN LED chip.
Cassidy, Adam R
2016-01-01
The objective of this study was to establish latent executive function (EF) and psychosocial adjustment factor structure, to examine associations between EF and psychosocial adjustment, and to explore potential developmental differences in EF-psychosocial adjustment associations in healthy children and adolescents. Using data from the multisite National Institutes of Health (NIH) magnetic resonance imaging (MRI) Study of Normal Brain Development, the current investigation examined latent associations between theoretically and empirically derived EF factors and emotional and behavioral adjustment measures in a large, nationally representative sample of children and adolescents (7-18 years old; N = 352). Confirmatory factor analysis (CFA) was the primary method of data analysis. CFA results revealed that, in the whole sample, the proposed five-factor model (Working Memory, Shifting, Verbal Fluency, Externalizing, and Internalizing) provided a close fit to the data, χ2(66) = 114.48, p … EF-psychosocial adjustment associations. Findings indicate that childhood EF skills are best conceptualized as a constellation of interconnected yet distinguishable cognitive self-regulatory skills. Individual differences in certain domains of EF track meaningfully and in expected directions with emotional and behavioral adjustment indices. Externalizing behaviors, in particular, are associated with latent Working Memory and Verbal Fluency factors.
Predicting the Probability of Lightning Occurrence with Generalized Additive Models
Fabsic, Peter; Mayr, Georg; Simon, Thorsten; Zeileis, Achim
2017-04-01
This study investigates the predictability of lightning in complex terrain. The main objective is to estimate the probability of lightning occurrence in the Alpine region during summertime afternoons (12-18 UTC) at a spatial resolution of 64 × 64 km2. Lightning observations are obtained from the ALDIS lightning detection network. The probability of lightning occurrence is estimated using generalized additive models (GAM). GAMs provide a flexible modelling framework to estimate the relationship between covariates and the observations. The covariates, besides spatial and temporal effects, include numerous meteorological fields from the ECMWF ensemble system. The optimal model is chosen based on a forward selection procedure with out-of-sample mean squared error as a performance criterion. Our investigation shows that convective precipitation and mid-layer stability are the most influential meteorological predictors. Both exhibit intuitive, non-linear trends: higher values of convective precipitation indicate higher probability of lightning, and large values of the mid-layer stability measure imply low lightning potential. The performance of the model was evaluated against a climatology model containing both spatial and temporal effects. Taking the climatology model as a reference forecast, our model attains a Brier Skill Score of approximately 46%. The model's performance can be further enhanced by incorporating the information about lightning activity from the previous time step, which yields a Brier Skill Score of 48%. These scores show that the method is able to extract valuable information from the ensemble to produce reliable spatial forecasts of the lightning potential in the Alps.
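The Brier Skill Score used for evaluation above is simple to compute; here is a self-contained sketch with made-up forecasts (not the ALDIS/ECMWF data):

```python
# Brier score of a probability forecast and its skill score against a
# climatology reference, as used to evaluate the lightning GAM. All numbers
# below are illustrative.

def brier(probs, outcomes):
    return sum((p - o) ** 2 for p, o in zip(probs, outcomes)) / len(probs)

def brier_skill(probs, ref_probs, outcomes):
    return 1.0 - brier(probs, outcomes) / brier(ref_probs, outcomes)

outcomes = [1, 0, 1, 0]           # lightning yes/no
model    = [0.8, 0.2, 0.7, 0.1]   # sharper, well-calibrated forecast
clim     = [0.5, 0.5, 0.5, 0.5]   # climatology reference
print(round(brier_skill(model, clim, outcomes), 2))  # -> 0.82
```

A positive score means the forecast beats climatology; the abstract reports scores around 46-48% on real data.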
Directory of Open Access Journals (Sweden)
H. Lee
2012-01-01
Full Text Available State updating of distributed rainfall-runoff models via streamflow assimilation is subject to overfitting because the large dimensionality of the state space of the model may render the assimilation problem seriously under-determined. To examine the issue in the context of operational hydrology, we carry out a set of real-world experiments in which streamflow data is assimilated into gridded Sacramento Soil Moisture Accounting (SAC-SMA) and kinematic-wave routing models of the US National Weather Service (NWS) Research Distributed Hydrologic Model (RDHM) with the variational data assimilation technique. Study basins include four basins in Oklahoma and five basins in Texas. To assess the sensitivity of data assimilation performance to dimensionality reduction in the control vector, we used nine different spatiotemporal adjustment scales, where state variables are adjusted in a lumped, semi-distributed, or distributed fashion and biases in precipitation and potential evaporation (PE) are adjusted hourly, 6-hourly, or kept time-invariant. For each adjustment scale, three different streamflow assimilation scenarios are explored, where streamflow observations at basin interior points, at the basin outlet, or at both interior points and the outlet are assimilated. The streamflow assimilation experiments with nine different basins show that the optimum spatiotemporal adjustment scale varies from one basin to another and may be different for streamflow analysis and prediction in all of the three streamflow assimilation scenarios. The most preferred adjustment scale for seven out of nine basins is found to be the distributed, hourly scale, despite the fact that several independent validation results at this adjustment scale indicated the occurrence of overfitting. Basins with highly correlated interior and outlet flows tend to be less sensitive to the adjustment scale and could benefit more from streamflow assimilation. In comparison to outlet flow assimilation…
Genomic breeding value estimation using nonparametric additive regression models
Directory of Open Access Journals (Sweden)
Solberg Trygve
2009-01-01
Full Text Available Abstract Genomic selection refers to the use of genomewide dense markers for breeding value estimation and subsequently for selection. The main challenge of genomic breeding value estimation is the estimation of many effects from a limited number of observations. Bayesian methods have been proposed to successfully cope with these challenges. As an alternative class of models, non- and semiparametric models were recently introduced. The present study investigated the ability of nonparametric additive regression models to predict genomic breeding values. The genotypes were modelled for each marker or pair of flanking markers (i.e., the predictors) separately. The nonparametric functions for the predictors were estimated simultaneously using additive model theory, applying a binomial kernel. The optimal degree of smoothing was determined by bootstrapping. A mutation-drift-balance simulation was carried out. The breeding values of the last generation (genotyped) were predicted using data from the next-to-last generation (genotyped and phenotyped). The results show moderate to high accuracies of the predicted breeding values. A determination of predictor-specific degree of smoothing increased the accuracy.
High-dimensional additive hazard models and the Lasso
Gaïffas, Stéphane
2011-01-01
We consider a general high-dimensional additive hazard model in a non-asymptotic setting, including regression for censored data. In this context, we consider a Lasso estimator with a fully data-driven $\ell_1$ penalization, which is tuned for the estimation problem at hand. We prove sharp oracle inequalities for this estimator. Our analysis involves a new "data-driven" Bernstein's inequality, that is of independent interest, where the predictable variation is replaced by the optional variation.
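The $\ell_1$ machinery behind such a Lasso estimator can be illustrated with plain least squares (the paper's version replaces this loss with the additive-hazard analogue and a data-driven penalty; the data below are synthetic):

```python
# Sketch of l1-penalized estimation via coordinate descent with
# soft-thresholding, the generic engine behind Lasso estimators. Plain
# regression is shown; it is not the paper's additive-hazard objective.

def soft_threshold(z, lam):
    """Proximal operator of lam * |.|: shrink toward zero, clip small values."""
    if z > lam:
        return z - lam
    if z < -lam:
        return z + lam
    return 0.0

def lasso_cd(x, y, lam, n_iter=200):
    n, p = len(x), len(x[0])
    beta = [0.0] * p
    for _ in range(n_iter):
        for j in range(p):
            # partial residuals with coordinate j left out
            r = [y[i] - sum(x[i][k] * beta[k] for k in range(p) if k != j)
                 for i in range(n)]
            zj = sum(x[i][j] * r[i] for i in range(n)) / n
            norm_j = sum(x[i][j] ** 2 for i in range(n)) / n
            beta[j] = soft_threshold(zj, lam) / norm_j
    return beta

# Toy data: y depends on the first column only; the penalty zeroes the rest.
x = [[1.0, 0.1], [2.0, -0.2], [3.0, 0.3], [4.0, -0.1]]
y = [2.0, 4.0, 6.0, 8.0]
b = lasso_cd(x, y, lam=0.05)
print(b[1] == 0.0, abs(b[0] - 2.0) < 0.2)  # -> True True
```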
Kiang, Lisa; Witkow, Melissa R; Thompson, Taylor L
2016-07-01
The model minority image is a common and pervasive stereotype that Asian American adolescents must navigate. Using multiwave data from 159 adolescents from Asian American backgrounds (mean age at initial recruitment = 15.03, SD = .92; 60 % female; 74 % US-born), the current study targeted unexplored aspects of the model minority experience in conjunction with more traditionally measured experiences of negative discrimination. When examining normative changes, perceptions of model minority stereotyping increased over the high school years while perceptions of discrimination decreased. Both experiences were not associated with each other, suggesting independent forms of social interactions. Model minority stereotyping generally promoted academic and socioemotional adjustment, whereas discrimination hindered outcomes. Moreover, in terms of academic adjustment, the model minority stereotype appears to protect against the detrimental effect of discrimination. Implications of the complex duality of adolescents' social interactions are discussed.
Glacial isostatic adjustment model with composite 3-D Earth rheology for Fennoscandia
Van der Wal, W.; Barnhoorn, A.; Stocchi, P.; Gradmann, S.; Wu, P.; Drury, M.; Vermeersen, L.L.A.
2013-01-01
Models for glacial isostatic adjustment (GIA) can provide constraints on rheology of the mantle if past ice thickness variations are assumed to be known. The Pleistocene ice loading histories that are used to obtain such constraints are based on an a priori 1-D mantle viscosity profile that assumes…
A Threshold Model of Social Support, Adjustment, and Distress after Breast Cancer Treatment
Mallinckrodt, Brent; Armer, Jane M.; Heppner, P. Paul
2012-01-01
This study examined a threshold model that proposes that social support exhibits a curvilinear association with adjustment and distress, such that support in excess of a critical threshold level has decreasing incremental benefits. Women diagnosed with a first occurrence of breast cancer (N = 154) completed survey measures of perceived support…
A Study of Perfectionism, Attachment, and College Student Adjustment: Testing Mediational Models.
Hood, Camille A.; Kubal, Anne E.; Pfaller, Joan; Rice, Kenneth G.
Mediational models predicting college students' adjustment were tested using regression analyses. Contemporary adult attachment theory was employed to explore the cognitive/affective mechanisms by which adult attachment and perfectionism affect various aspects of psychological functioning. Consistent with theoretical expectations, results…
Semiparametric Additive Transformation Model under Current Status Data
Cheng, Guang
2011-01-01
We consider the efficient estimation of the semiparametric additive transformation model with current status data. A wide range of survival models and econometric models can be incorporated into this general transformation framework. We apply the B-spline approach to simultaneously estimate the linear regression vector, the nondecreasing transformation function, and a set of nonparametric regression functions. We show that the parametric estimate is semiparametric efficient in the presence of multiple nonparametric nuisance functions. An explicit consistent B-spline estimate of the asymptotic variance is also provided. All nonparametric estimates are smooth, and shown to be uniformly consistent and have faster than cubic rate of convergence. Interestingly, we observe the convergence rate interference phenomenon, i.e., the convergence rates of B-spline estimators are all slowed down to equal the slowest one. The constrained optimization is not required in our implementation. Numerical results are used to illustra…
Thermal modelling of extrusion based additive manufacturing of composite materials
DEFF Research Database (Denmark)
Jensen, Mathias Laustsen; Sonne, Mads Rostgaard; Hattel, Jesper Henri
2017-01-01
One of the hottest topics in manufacturing these years is additive manufacturing (AM). AM is a young branch of manufacturing techniques, which by nature is disruptive due to its completely different manufacturing approach, wherein material is added instead of removed. By adding material … process knowledge, and validating the generated toolpaths before the real manufacturing process takes place: hence removing time-consuming and expensive trial-and-error processes for new products. This study applies a 2D restricted finite volume model aimed at describing thermoplastic acrylonitrile-butadiene-styrene (ABS) and thermosetting polyurethane (PU) material extrusion processes. During the experimental evaluation of the produced models it is found that some critical material properties need to be further investigated to increase the precision of the model. It is, however, also found that even with only…
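The kind of thermal model the abstract describes can be caricatured by explicit 1D finite differences for a freshly deposited road cooling between cold boundaries (all material parameters below are illustrative placeholders, not the measured ABS/PU values):

```python
# Caricature of the extrusion thermal problem: explicit 1D finite-difference
# cooling of a hot deposited road. alpha, dx, dt, and the temperatures are
# illustrative, not from the study.

def cool_road(n=20, steps=200, alpha=1e-7, dx=1e-4, dt=0.01,
              t_melt=230.0, t_amb=25.0):
    r = alpha * dt / dx ** 2          # explicit scheme stable for r <= 0.5
    assert r <= 0.5, "time step too large for explicit scheme"
    T = [t_melt] * n
    T[0] = T[-1] = t_amb              # substrate / ambient boundaries
    for _ in range(steps):
        new = T[:]
        for i in range(1, n - 1):
            new[i] = T[i] + r * (T[i + 1] - 2 * T[i] + T[i - 1])
        T = new
    return T

profile = cool_road()
print(profile[0], profile[-1])        # boundaries held at ambient: 25.0 25.0
```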
Multiscale and Multiphysics Modeling of Additive Manufacturing of Advanced Materials
Liou, Frank; Newkirk, Joseph; Fan, Zhiqiang; Sparks, Todd; Chen, Xueyang; Fletcher, Kenneth; Zhang, Jingwei; Zhang, Yunlu; Kumar, Kannan Suresh; Karnati, Sreekar
2015-01-01
The objective of this proposed project is to research and develop a prediction tool for advanced additive manufacturing (AAM) processes for advanced materials and develop experimental methods to provide fundamental properties and establish validation data. Aircraft structures and engines demand materials that are stronger, useable at much higher temperatures, provide less acoustic transmission, and enable more aeroelastic tailoring than those currently used. Significant improvements in properties can only be achieved by processing the materials under nonequilibrium conditions, such as AAM processes. AAM processes encompass a class of processes that use a focused heat source to create a melt pool on a substrate. Examples include Electron Beam Freeform Fabrication and Direct Metal Deposition. These types of additive processes enable fabrication of parts directly from CAD drawings. To achieve the desired material properties and geometries of the final structure, assessing the impact of process parameters and predicting optimized conditions with numerical modeling as an effective prediction tool is necessary. The targets for the processing are multiple and at different spatial scales, and the physical phenomena associated occur in multiphysics and multiscale. In this project, the research work has been developed to model AAM processes in a multiscale and multiphysics approach. A macroscale model was developed to investigate the residual stresses and distortion in AAM processes. A sequentially coupled, thermomechanical, finite element model was developed and validated experimentally. The results showed the temperature distribution, residual stress, and deformation within the formed deposits and substrates. A mesoscale model was developed to include heat transfer, phase change with mushy zone, incompressible free surface flow, solute redistribution, and surface tension. Because of excessive computing time needed, a parallel computing approach was also tested. In addition
Testing exclusion restrictions and additive separability in sample selection models
DEFF Research Database (Denmark)
Huber, Martin; Mellace, Giovanni
2014-01-01
Standard sample selection models with non-randomly censored outcomes assume (i) an exclusion restriction (i.e., a variable affecting selection, but not the outcome) and (ii) additive separability of the errors in the selection process. This paper proposes tests for the joint satisfaction of these assumptions by applying the approach of Huber and Mellace (Testing instrument validity for LATE identification based on inequality moment constraints, 2011) (for testing instrument validity under treatment endogeneity) to the sample selection framework. We show that the exclusion restriction and additive separability imply two testable inequality constraints that come from both point identifying and bounding the outcome distribution of the subpopulation that is always selected/observed. We apply the tests to two variables for which the exclusion restriction is frequently invoked in female wage regressions: non…
Two-stage local M-estimation of additive models
Institute of Scientific and Technical Information of China (English)
JIANG JianCheng; LI JianTao
2008-01-01
This paper studies local M-estimation of the nonparametric components of additive models. A two-stage local M-estimation procedure is proposed for estimating the additive components and their derivatives. Under very mild conditions, the proposed estimators of each additive component and its derivative are jointly asymptotically normal and share the same asymptotic distributions as if the other components were known. The established asymptotic results also hold for two particular local M-estimations: the local least squares and least absolute deviation estimations. However, for general two-stage local M-estimation with continuous and nonlinear ψ-functions, the implementation is time-consuming. To reduce the computational burden, one-step approximations to the two-stage local M-estimators are developed. The one-step estimators are shown to achieve the same efficiency as the fully iterative two-stage local M-estimators, which makes two-stage local M-estimation more feasible in practice. The proposed estimators inherit the advantages, and at the same time overcome the disadvantages, of local least-squares based smoothers. In addition, the practical implementation of the proposed estimation is considered in detail. Simulations demonstrate the merits of the two-stage local M-estimation, and a real example illustrates the performance of the methodology.
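The local least-squares special case named in the abstract can be sketched as a plain local linear smoother (a minimal illustration of the building block, not the authors' two-stage procedure; function and parameter names are ours):

```python
import numpy as np

def local_linear(x, y, grid, h):
    """Local linear least-squares smoother: at each grid point x0, fit a
    weighted straight line to (x, y) with Gaussian kernel weights and
    report the fitted intercept as the estimate of m(x0)."""
    est = np.empty(len(grid))
    for j, x0 in enumerate(grid):
        w = np.exp(-0.5 * ((x - x0) / h) ** 2)      # kernel weights
        X = np.column_stack([np.ones_like(x), x - x0])
        XtW = X.T * w
        beta = np.linalg.solve(XtW @ X, XtW @ y)    # weighted normal equations
        est[j] = beta[0]                            # intercept = m(x0)
    return est
```

A local linear fit reproduces straight lines exactly, one of the advantages the two-stage estimators inherit; replacing the squared-error criterion at each x0 with a ψ-function (e.g. Huber's) gives a general local M-estimator.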
Crosilla, Fabio; Beinat, Alberto
The paper first reviews some aspects of generalised Procrustes analysis (GP) and outlines the analogies with block adjustment by independent models. On this basis, an innovative solution of the block adjustment problem by Procrustes algorithms and the related computer program implementation are presented and discussed. The main advantage of the new proposed method is that it avoids the conventional least squares solution. For this reason, linearisation algorithms and the knowledge of a priori approximate values for the unknown parameters are not required. Once the model coordinates of the tie points are available and at least three control points are known, the Procrustes algorithms can directly provide, without further information, the tie point ground coordinates and the exterior orientation parameters. Furthermore, some numerical block adjustment solutions obtained by the new method in different areas of North Italy are compared to the conventional solution. The very simple data input process, the lower memory requirements, the low computing time and the same level of accuracy that characterise the new algorithm with respect to a conventional one are verified with these tests. A block adjustment of 11 models, with 44 tie points and 14 control points, takes just a few seconds on an Intel PIII 400 MHz computer, and the total data memory required is less than twice the allocated space for the input data. This is because most of the computations are carried out on data matrices of limited size, typically 3×3.
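The closed-form similarity (scale-rotation-translation) Procrustes fit that such an adjustment builds on can be sketched as follows (our own minimal version, not the authors' block-adjustment program):

```python
import numpy as np

def similarity_procrustes(A, B):
    """Closed-form least-squares similarity transform mapping point set A
    onto B (rows are points): B ~ s * A @ R + t. No linearisation and no
    approximate initial values are needed."""
    muA, muB = A.mean(axis=0), B.mean(axis=0)
    A0, B0 = A - muA, B - muB
    U, S, Vt = np.linalg.svd(A0.T @ B0)             # SVD of the cross-covariance
    d = np.sign(np.linalg.det(U @ Vt))              # guard against reflection
    D = np.diag([1.0] * (A.shape[1] - 1) + [d])
    R = U @ D @ Vt                                  # optimal rotation
    s = (S * np.diag(D)).sum() / (A0 ** 2).sum()    # optimal scale
    t = muB - s * muA @ R                           # optimal translation
    return s, R, t
```

With at least three non-collinear control points, one such SVD per model recovers the exterior orientation directly, which is why no iteration or starting values are required.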
Cotton, Fabrice; Scherbaum, Frank; Bommer, Julian J.; Bungum, Hilmar
2006-04-01
A vital component of any seismic hazard analysis is a model for predicting the expected distribution of ground motions at a site due to possible earthquake scenarios. The limited nature of the datasets from which such models are derived gives rise to epistemic uncertainty in both the median estimates and the associated aleatory variability of these predictive equations. In order to capture this epistemic uncertainty in a seismic hazard analysis, more than one ground-motion prediction equation must be used, and the tool that is currently employed to combine multiple models is the logic tree. Candidate ground-motion models for a logic tree should be selected in order to obtain the smallest possible suite of equations that can capture the expected range of possible ground motions in the target region. This is achieved by starting from a comprehensive list of available equations and then applying criteria for rejecting those considered inappropriate in terms of quality, derivation or applicability. Once the final list of candidate models is established, adjustments must be applied to achieve parameter compatibility. Additional adjustments can also be applied to remove the effect of systematic differences between host and target regions. These procedures are applied to select and adjust ground-motion models for the analysis of seismic hazard at rock sites in West Central Europe. This region is chosen for illustrative purposes particularly because it highlights the issue of using ground-motion models derived from small magnitude earthquakes in the analysis of hazard due to much larger events. Some of the pitfalls of extrapolating ground-motion models from small to large magnitude earthquakes in low seismicity regions are discussed for the selected target region.
Sensitivity analysis of geometric errors in additive manufacturing medical models.
Pinto, Jose Miguel; Arrieta, Cristobal; Andia, Marcelo E; Uribe, Sergio; Ramos-Grez, Jorge; Vargas, Alex; Irarrazaval, Pablo; Tejos, Cristian
2015-03-01
Additive manufacturing (AM) models are used in medical applications for surgical planning, prosthesis design and teaching. For these applications, the accuracy of the AM models is essential. Unfortunately, this accuracy is compromised due to errors introduced by each of the building steps: image acquisition, segmentation, triangulation, printing and infiltration. However, the contribution of each step to the final error remains unclear. We performed a sensitivity analysis comparing errors obtained from a reference with those obtained modifying parameters of each building step. Our analysis considered global indexes to evaluate the overall error, and local indexes to show how this error is distributed along the surface of the AM models. Our results show that the standard building process tends to overestimate the AM models, i.e. models are larger than the original structures. They also show that the triangulation resolution and the segmentation threshold are critical factors, and that the errors are concentrated at regions with high curvatures. Errors could be reduced choosing better triangulation and printing resolutions, but there is an important need for modifying some of the standard building processes, particularly the segmentation algorithms. Copyright © 2015 IPEM. Published by Elsevier Ltd. All rights reserved.
Estimation and variable selection for generalized additive partial linear models
Wang, Li
2011-08-01
We study generalized additive partial linear models, proposing the use of polynomial spline smoothing for estimation of nonparametric functions, and deriving quasi-likelihood based estimators for the linear parameters. We establish asymptotic normality for the estimators of the parametric components. The procedure avoids solving large systems of equations as in kernel-based procedures and thus results in gains in computational simplicity. We further develop a class of variable selection procedures for the linear parameters by employing a nonconcave penalized quasi-likelihood, which is shown to have an asymptotic oracle property. Monte Carlo simulations and an empirical example are presented for illustration. © Institute of Mathematical Statistics, 2011.
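For the Gaussian special case, the spline-based estimator of the linear part reduces to a single least-squares solve, which is the computational point made above. A minimal sketch (our own simulated setup; a truncated-power basis stands in for the paper's polynomial splines):

```python
import numpy as np

def gaplm_fit(X, z, y, n_knots=10, degree=3):
    """Partially linear additive fit y = X @ beta + f(z) + error: expand
    f(z) in a truncated-power polynomial spline basis and solve one
    least-squares problem -- no large kernel-type equation systems."""
    knots = np.quantile(z, np.linspace(0, 1, n_knots + 2)[1:-1])
    B = np.column_stack([z ** d for d in range(1, degree + 1)] +
                        [np.clip(z - k, 0.0, None) ** degree for k in knots])
    D = np.column_stack([np.ones_like(z), X, B])   # intercept | linear | spline
    coef, *_ = np.linalg.lstsq(D, y, rcond=None)
    beta = coef[1:1 + X.shape[1]]                  # parametric components
    return beta, coef
```

In the full quasi-likelihood setting the least-squares step is replaced by iteratively reweighted fits, and a nonconcave penalty on beta yields the variable selection described above.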
Additive Manufacturing of Medical Models--Applications in Rhinology.
Raos, Pero; Klapan, Ivica; Galeta, Tomislav
2015-09-01
In this paper we introduce guidelines and suggestions for the use of 3D image processing software in head pathology diagnostics, and procedures for obtaining physical medical models by additive manufacturing/rapid prototyping techniques, bearing in mind improved surgical performance, maximum security and faster postoperative recovery of patients. This approach has been verified in two case reports. In the treatment we used intelligent classifier schemes for abnormal patterns using a computer-based system for 3D-virtual and endoscopic assistance in rhinology, with appropriate visualization of anatomy and pathology within the nose, paranasal sinuses, and skull base area.
Malliavin's calculus in insider models: Additional utility and free lunches
2002-01-01
We consider simple models of financial markets with regular traders and insiders possessing some extra information hidden in a random variable which is accessible to the regular trader only at the end of the trading interval. The problems we focus on are the calculation of the additional utility of the insider and a study of his free lunch possibilities. The information drift, i.e. the drift to eliminate in order to preserve the martingale property in the insider's filtration, turns out to be...
Cost models of additive manufacturing: A literature review
Directory of Open Access Journals (Sweden)
G. Costabile
2016-11-01
Over the past decades, increasing attention has been paid to the quality of the technological and mechanical properties achieved by Additive Manufacturing (AM); these have reached a good level of performance and can now be compared with the results achieved by traditional technology. The maturity of AM is therefore high enough for industries to adopt this technology in a more general production framework, such as the mechanical manufacturing industry. Since the technological and mechanical properties of materials produced with AM are also beneficial, the primary objective of this paper is to focus on managerial facets, such as the cost control of a production environment where these new technologies are present. This paper analyses the existing literature on cost models developed specifically for AM from an operations management point of view and discusses the strengths and weaknesses of each model.
Multiscale Modeling of Powder Bed-Based Additive Manufacturing
Markl, Matthias; Körner, Carolin
2016-07-01
Powder bed fusion processes are additive manufacturing technologies that are expected to induce the third industrial revolution. Components are built up layer by layer in a powder bed by selectively melting confined areas, according to sliced 3D model data. This technique allows for manufacturing of highly complex geometries hardly machinable with conventional technologies. However, the underlying physical phenomena are sparsely understood and difficult to observe during processing. Therefore, an intensive and expensive trial-and-error principle is applied to produce components with the desired dimensional accuracy, material characteristics, and mechanical properties. This review presents numerical modeling approaches on multiple length scales and timescales to describe different aspects of powder bed fusion processes. In combination with tailored experiments, the numerical results enlarge the process understanding of the underlying physical mechanisms and support the development of suitable process strategies and component topologies.
Kinetics approach to modeling of polymer additive degradation in lubricants
Institute of Scientific and Technical Information of China (English)
Ilya I. KUDISH; Ruben G. AIRAPETYAN; Michael J. COVITCH
2001-01-01
A kinetics problem for a degrading polymer additive dissolved in a base stock is studied. The polymer degradation may be caused by the combination of such lubricant flow parameters as pressure, elongational strain rate, and temperature, as well as lubricant viscosity and the polymer characteristics (dissociation energy, bead radius, bond length, etc.). A fundamental approach to the problem of modeling mechanically induced polymer degradation is proposed. The polymer degradation is modeled on the basis of a kinetic equation for the density of the statistical distribution of polymer molecules as a function of their molecular weight. The integrodifferential kinetic equation for polymer degradation is solved numerically. The effects of pressure, elongational strain rate, temperature, and lubricant viscosity on the process of lubricant degradation are considered. An increase of pressure promotes fast degradation, while an increase of temperature delays degradation. A comparison of a numerically calculated molecular weight distribution with an experimental one obtained in bench tests showed that they are in excellent agreement with each other.
Evolution Scenarios at the Romanian Economy Level, Using the R.M. Solow Adjusted Model
Directory of Open Access Journals (Sweden)
Stelian Stancu
2008-06-01
Besides the models of M. Keynes, R.F. Harrod, E. Domar, D. Romer, the Ramsey-Cass-Koopmans model, etc., the R.M. Solow model belongs to the category of models that characterize economic growth. The paper presents the R.M. Solow adjusted model with specific simulation characteristics and an economic growth scenario. Considering these aspects, the paper presents the values obtained at the economy level from the simulations: the ratio of capital to output volume, the output volume per employee (equal to the current labour efficiency), as well as the labour efficiency value.
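A minimal discrete-time sketch of the labour-efficiency-augmented Solow dynamics behind such simulations, with illustrative parameter values of our own choosing (not the paper's Romanian calibration):

```python
def solow_path(s=0.25, delta=0.05, n=0.01, g=0.02, alpha=0.33,
               k0=1.0, periods=2000):
    """Iterate capital per effective worker:
    k' = (s * k**alpha + (1 - delta) * k) / ((1 + n) * (1 + g)).
    Returns the path plus the capital/output ratio and output per
    effective worker at the end of the horizon."""
    k = k0
    path = [k]
    for _ in range(periods):
        k = (s * k ** alpha + (1 - delta) * k) / ((1 + n) * (1 + g))
        path.append(k)
    y = k ** alpha                       # output per effective worker
    return path, k / y, y

def solow_steady_state(s=0.25, delta=0.05, n=0.01, g=0.02, alpha=0.33):
    """Closed-form fixed point of the same recursion."""
    denom = (1 + n) * (1 + g) - (1 - delta)
    return (s / denom) ** (1 / (1 - alpha))
```

The simulated path converges to the closed-form steady state, from which the capital-output ratio and output per employee reported in such scenarios follow directly.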
Akacha, Mouna; Hutton, Jane L
2011-05-10
The Collaborative Ankle Support Trial (CAST) is a longitudinal trial of treatments for severe ankle sprains in which interest lies in the rate of improvement, the effectiveness of reminders and potentially informative missingness. A model is proposed for continuous longitudinal data with non-ignorable or informative missingness, taking into account the nature of attempts made to contact initial non-responders. The model combines a non-linear mixed model for the outcome model with logistic regression models for the reminder processes. A sensitivity analysis is used to contrast this model with the traditional selection model, where we adjust for missingness by modelling the missingness process. The conclusions that recovery is slower, and less satisfactory with age and more rapid with below knee cast than with a tubular bandage do not alter materially across all models investigated. The results also suggest that phone calls are most effective in retrieving questionnaires.
Model Checking Vector Addition Systems with one zero-test
Bonet, Rémi; Leroux, Jérôme; Zeitoun, Marc
2012-01-01
We design a variation of the Karp-Miller algorithm to compute, in a forward manner, a finite representation of the cover (i.e., the downward closure of the reachability set) of a vector addition system with one zero-test. This algorithm yields decision procedures for several problems for these systems, open until now, such as place-boundedness or LTL model-checking. The proof techniques to handle the zero-test are based on two new notions of cover: the refined and the filtered cover. The refined cover is a hybrid between the reachability set and the classical cover. It inherits properties of the reachability set: equality of two refined covers is undecidable, even for usual Vector Addition Systems (with no zero-test), but the refined cover of a Vector Addition System is a recursive set. The second notion of cover, called the filtered cover, is the central tool of our algorithms. It inherits properties of the classical cover, and in particular, one can effectively compute a finite representation of this set, e...
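The classical Karp-Miller construction that the refined and filtered covers extend can be sketched for plain vector addition systems (no zero-test; this is our own minimal version, not the algorithm of the paper):

```python
import math

OMEGA = math.inf   # the 'unbounded' symbol in Karp-Miller trees

def karp_miller(start, additions):
    """Forward computation of a finite representation of the cover of a
    vector addition system: explore reachable markings and replace a
    coordinate by OMEGA whenever it strictly grows along a branch."""
    nodes = [(tuple(start), None)]          # (marking, parent index)
    work = [0]
    while work:
        idx = work.pop()
        m, _ = nodes[idx]
        for a in additions:
            m2 = [x if x == OMEGA else x + d for x, d in zip(m, a)]
            if any(x < 0 for x in m2):      # addition not enabled
                continue
            # accelerate: a strictly covered ancestor pumps coordinates to OMEGA
            anc = idx
            while anc is not None:
                u, anc = nodes[anc]
                if all(ui <= xi for ui, xi in zip(u, m2)) and tuple(m2) != u:
                    m2 = [OMEGA if xi > ui else xi for ui, xi in zip(u, m2)]
            m2 = tuple(m2)
            # do not expand a marking already seen on this branch
            anc, seen = idx, False
            while anc is not None:
                u, anc = nodes[anc]
                if u == m2:
                    seen = True
                    break
            if not seen:
                nodes.append((m2, idx))
                work.append(len(nodes) - 1)
    return {m for m, _ in nodes}
```

Handling the single zero-test is exactly what this classical construction cannot do, which is where the refined and filtered covers of the paper come in.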
WATEQ3 geochemical model: thermodynamic data for several additional solids
Energy Technology Data Exchange (ETDEWEB)
Krupka, K.M.; Jenne, E.A.
1982-09-01
Geochemical models such as WATEQ3 can be used to model the concentrations of water-soluble pollutants that may result from the disposal of nuclear waste and retorted oil shale. However, for a model to competently deal with these water-soluble pollutants, an adequate thermodynamic data base must be provided that includes elements identified as important in modeling these pollutants. To this end, several minerals and related solid phases were identified that were absent from the thermodynamic data base of WATEQ3. In this study, the thermodynamic data for the identified solids were compiled and selected from several published tabulations of thermodynamic data. For these solids, an accepted Gibbs free energy of formation, ΔG°f,298, was selected for each solid phase based on the recentness of the tabulated data and on considerations of internal consistency with respect to both the published tabulations and the existing data in WATEQ3. For those solids not included in these published tabulations, Gibbs free energies of formation were calculated from published solubility data (e.g., lepidocrocite), or were estimated (e.g., nontronite) using a free-energy summation method described by Mattigod and Sposito (1978). The accepted or estimated free energies were then combined with internally consistent, ancillary thermodynamic data to calculate equilibrium constants for the hydrolysis reactions of these minerals and related solid phases. Including these values in the WATEQ3 data base increased the competency of this geochemical model in applications associated with the disposal of nuclear waste and retorted oil shale. Additional minerals and related solid phases that need to be added to the solubility submodel will be identified as modeling applications continue in these two programs.
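The free-energy-to-equilibrium-constant step described above is a one-line computation; the numeric value below is illustrative, not an entry from the WATEQ3 data base:

```python
import math

R = 8.31446  # molar gas constant, J/(mol*K)

def log10_K(dG_f_products, dG_f_reactants, T=298.15):
    """Equilibrium constant of a reaction from Gibbs free energies of
    formation (J/mol): Delta_r G = sum over products - sum over reactants,
    and log10 K = -Delta_r G / (ln(10) * R * T)."""
    dGr = sum(dG_f_products) - sum(dG_f_reactants)
    return -dGr / (math.log(10) * R * T)

# At 298.15 K, each ~5.708 kJ/mol of reaction free energy corresponds
# to one log10 unit of K.
```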
Metal Big Area Additive Manufacturing: Process Modeling and Validation
Energy Technology Data Exchange (ETDEWEB)
Simunovic, Srdjan [ORNL; Nycz, Andrzej [ORNL; Noakes, Mark W [ORNL; Chin, Charlie [Dassault Systemes; Oancea, Victor [Dassault Systemes
2017-01-01
Metal Big Area Additive Manufacturing (mBAAM) is a new additive manufacturing (AM) technology for printing large-scale 3D objects. mBAAM is based on the gas metal arc welding process and uses a continuous feed of welding wire to manufacture an object. An electric arc forms between the wire and the substrate, which melts the wire and deposits a bead of molten metal along the predetermined path. In general, the welding process parameters and local conditions determine the shape of the deposited bead. The sequence of the bead deposition and the corresponding thermal history of the manufactured object determine the long range effects, such as thermal-induced distortions and residual stresses. Therefore, the resulting performance or final properties of the manufactured object are dependent on its geometry and the deposition path, in addition to depending on the basic welding process parameters. Physical testing is critical for gaining the necessary knowledge for quality prints, but traversing the process parameter space in order to develop an optimized build strategy for each new design is impractical by pure experimental means. Computational modeling and optimization may accelerate development of a build process strategy and save time and resources. Because computational modeling provides these opportunities, we have developed a physics-based Finite Element Method (FEM) simulation framework and numerical models to support the mBAAM process's development and design. In this paper, we performed a sequentially coupled heat transfer and stress analysis for predicting the final deformation of a small rectangular structure printed using the mild steel welding wire. Using the new simulation technologies, material was progressively added into the FEM simulation as the arc weld traversed the build path. In the sequentially coupled heat transfer and stress analysis, the heat transfer was performed to calculate the temperature evolution, which was used in a stress analysis to
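The sequential coupling idea (compute the temperature history first, then derive stresses from it) can be illustrated with a deliberately simplified 1D sketch; all material values and the source magnitude are illustrative stand-ins, not mBAAM process parameters:

```python
import numpy as np

def heat_then_stress(n=50, steps=200, dx=1e-3, dt=1e-4,
                     alpha_th=12e-6, E=200e9, k=15.0, rho=7800.0, cp=500.0):
    """Sequentially coupled sketch: explicit 1D heat conduction with a
    moving volumetric heat source, followed by the thermal stress of a
    fully constrained bar from the resulting temperature rise
    (sigma = -E * alpha_th * dT)."""
    a = k / (rho * cp)                      # thermal diffusivity
    r = a * dt / dx ** 2                    # explicit scheme stability number
    assert r <= 0.5                         # stability requires r <= 1/2
    T = np.full(n, 300.0)                   # initial/boundary temperature (K)
    for step in range(steps):
        src = np.zeros(n)
        pos = int(step / steps * (n - 1))   # heat source traverses the bar
        src[pos] = 5e7                      # volumetric source (W/m^3), illustrative
        T[1:-1] = T[1:-1] + r * (T[2:] - 2 * T[1:-1] + T[:-2]) \
                  + dt * src[1:-1] / (rho * cp)
    sigma = -E * alpha_th * (T - 300.0)     # compressive where heated
    return T, sigma
```

The real analysis replaces this toy with a 3D FEM mesh, progressive element activation along the build path, and temperature-dependent properties, but the one-way handoff from the thermal solve to the stress solve is the same.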
Geo-additive modelling of malaria in Burundi
Directory of Open Access Journals (Sweden)
Gebhardt Albrecht
2011-08-01
Abstract Background Malaria is a major public health issue in Burundi in terms of both morbidity and mortality, with around 2.5 million clinical cases and more than 15,000 deaths each year. It is still the single main cause of mortality in pregnant women and children below five years of age. Because of the severe health and economic burden of malaria, there is still a growing need for methods that will help to understand the influencing factors. Several studies have been done on the subject, yielding different results as to which factors are most responsible for the increase in malaria transmission. This paper considers the modelling of the dependence of malaria cases on spatial determinants and climatic covariates including rainfall, temperature and humidity in Burundi. Methods The analysis carried out in this work exploits real monthly data collected in the area of Burundi over 12 years (1996-2007). Semi-parametric regression models are used. The spatial analysis is based on a geo-additive model using provinces as the geographic units of study. The spatial effect is split into structured (correlated) and unstructured (uncorrelated) components. Inference is fully Bayesian and uses Markov chain Monte Carlo techniques. The effects of the continuous covariates are modelled by cubic p-splines with 20 equidistant knots and a second-order random walk penalty. For the spatially correlated effect, a Markov random field prior is chosen. The spatially uncorrelated effects are assumed to be i.i.d. Gaussian. The effects of climatic covariates and the effects of other spatial determinants are estimated simultaneously in a unified regression framework. Results The results obtained from the proposed model suggest that although malaria incidence in a given month is strongly positively associated with the minimum temperature of the previous months, regional patterns of malaria that are related to factors other than climatic variables have been identified.
Detailed Theoretical Model for Adjustable Gain-Clamped Semiconductor Optical Amplifier
Directory of Open Access Journals (Sweden)
Lin Liu
2012-01-01
The adjustable gain-clamped semiconductor optical amplifier (AGC-SOA) uses two SOAs in a ring-cavity topology: one to amplify the signal and the other to control the gain. The device was designed to maximize the saturated output power while adjusting gain to regulate power differences between packets without loss of linearity. This type of subsystem can be used for power equalisation and linear amplification in packet-based dynamic systems such as passive optical networks (PONs). A detailed theoretical model is presented in this paper to simulate the operation of the AGC-SOA, which gives a better understanding of the underlying gain-clamping mechanics. Simulations and comparisons with steady-state and dynamic gain modulation experimental performance are given which validate the model.
[Critical of the additive model of the randomized controlled trial].
Boussageon, Rémy; Gueyffier, François; Bejan-Angoulvant, Theodora; Felden-Dominiak, Géraldine
2008-01-01
Randomized, double-blind, placebo-controlled clinical trials are currently the best way to demonstrate the clinical effectiveness of drugs. Their methodology relies on the method of difference (John Stuart Mill), through which the observed difference between two groups (drug vs placebo) can be attributed to the pharmacological effect of the drug being tested. However, this additive model can be questioned in the event of statistical interactions between the pharmacological and the placebo effects. Evidence in different domains has shown that the placebo effect can influence the effect of the active principle. This article evaluates the methodological, clinical and epistemological consequences of this phenomenon. Topics treated include extrapolating results, accounting for heterogeneous results, demonstrating the existence of several factors in the placebo effect, the necessity to take these factors into account for given symptoms or pathologies, as well as the problem of the "specific" effect.
Controlling chaos using Takagi-Sugeno fuzzy model and adaptive adjustment
Institute of Scientific and Technical Information of China (English)
Zheng Yong-Ai
2006-01-01
In this paper, an approach to the control of continuous-time chaotic systems is proposed using the Takagi-Sugeno (TS) fuzzy model and adaptive adjustment. Sufficient conditions are derived to guarantee chaos control from Lyapunov stability theory. The proposed approach offers a systematic design procedure for stabilizing a large class of chaotic systems in the literature about chaos research. The simulation results on Rössler's system verify the effectiveness of the proposed methods.
Institute of Scientific and Technical Information of China (English)
郭金运; 陶华学
2003-01-01
In order to process different kinds of observation data with different precisions, a new solution model of nonlinear dynamic integral least squares adjustment was put forward which does not depend on derivatives. The partial derivative of each component in the target function is not computed while iteratively solving the problem. Especially when the nonlinear target function is more complex and the problem is very difficult to solve, the method can greatly reduce the computing load.
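The derivative-free idea can be illustrated with a simple pattern search on a weighted sum of squared residuals (our own minimal sketch, not the paper's integral adjustment model; only function evaluations are used, never partial derivatives):

```python
import numpy as np

def derivative_free_wls(resid, x0, weights, step=0.5, tol=1e-8, max_iter=10000):
    """Minimize sum(w_i * r_i(x)^2) by coordinate pattern search: probe
    each coordinate by +/- step, keep improvements, and halve the step
    when no move helps. No derivatives of resid are ever evaluated."""
    def f(x):
        r = resid(x)
        return float(np.sum(weights * r * r))
    x = np.asarray(x0, dtype=float)
    fx = f(x)
    it = 0
    while step > tol and it < max_iter:
        improved = False
        for i in range(len(x)):
            for s in (+step, -step):
                xt = x.copy()
                xt[i] += s
                ft = f(xt)
                if ft < fx:
                    x, fx, improved = xt, ft, True
        if not improved:
            step *= 0.5
        it += 1
    return x, fx
```

The weights let observations of different precisions enter the same adjustment, which is the situation the abstract describes; the price of avoiding derivatives is more function evaluations.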
Adjusting Felder-Silverman learning styles model for application in adaptive e-learning
Mihailović Đorđe; Despotović-Zrakić Marijana; Bogdanović Zorica; Barać Dušan; Vujin Vladimir
2012-01-01
This paper presents an approach for adjusting the Felder-Silverman learning styles model for application in the development of adaptive e-learning systems. The main goal of the paper is to improve existing e-learning courses by developing a method for adaptation based on learning styles. The proposed method includes analysis of data related to students' characteristics and applies the concept of personalization in creating e-learning courses. The research has been conducted at the Faculty of organi...
Directory of Open Access Journals (Sweden)
Carvajal-Gamez
2012-09-01
When color images are processed in different color models to implement steganographic algorithms, it is important to study the quality of the host and retrieved images, since digital filters are typically used and can produce visibly deformed images. When a steganographic algorithm is used, numerical calculations performed by the computer cause errors and alterations in the test images, so we apply a proposed scaling factor, depending on the number of bits of the image, to adjust these errors.
Li, Li; Brumback, Babette A; Weppelmann, Thomas A; Morris, J Glenn; Ali, Afsar
2016-08-15
Motivated by an investigation of the effect of surface water temperature on the presence of Vibrio cholerae in water samples collected from different fixed surface water monitoring sites in Haiti in different months, we investigated methods to adjust for unmeasured confounding due to either of the two crossed factors site and month. In the process, we extended previous methods that adjust for unmeasured confounding due to one nesting factor (such as site, which nests the water samples from different months) to the case of two crossed factors. First, we developed a conditional pseudolikelihood estimator that eliminates fixed effects for the levels of each of the crossed factors from the estimating equation. Using the theory of U-Statistics for independent but non-identically distributed vectors, we show that our estimator is consistent and asymptotically normal, but that its variance depends on the nuisance parameters and thus cannot be easily estimated. Consequently, we apply our estimator in conjunction with a permutation test, and we investigate use of the pigeonhole bootstrap and the jackknife for constructing confidence intervals. We also incorporate our estimator into a diagnostic test for a logistic mixed model with crossed random effects and no unmeasured confounding. For comparison, we investigate between-within models extended to two crossed factors. These generalized linear mixed models include covariate means for each level of each factor in order to adjust for the unmeasured confounding. We conduct simulation studies, and we apply the methods to the Haitian data. Copyright © 2016 John Wiley & Sons, Ltd.
Zhang, Y J; Xue, F X; Bai, Z P
2017-03-06
The impact of maternal air pollution exposure on offspring health has received much attention. Precise and feasible exposure estimation is particularly important for clarifying exposure-response relationships and reducing heterogeneity among studies. Temporally-adjusted land use regression (LUR) models are exposure assessment methods developed in recent years that have the advantage of having high spatial-temporal resolution. Studies on the health effects of outdoor air pollution exposure during pregnancy have been increasingly carried out using this model. In China, research applying LUR models was done mostly at the model construction stage, and findings from related epidemiological studies were rarely reported. In this paper, the sources of heterogeneity and research progress of meta-analysis research on the associations between air pollution and adverse pregnancy outcomes were analyzed. The methods of the characteristics of temporally-adjusted LUR models were introduced. The current epidemiological studies on adverse pregnancy outcomes that applied this model were systematically summarized. Recommendations for the development and application of LUR models in China are presented. This will encourage the implementation of more valid exposure predictions during pregnancy in large-scale epidemiological studies on the health effects of air pollution in China.
Biologically Inspired Visual Model With Preliminary Cognition and Active Attention Adjustment.
Qiao, Hong; Xi, Xuanyang; Li, Yinlin; Wu, Wei; Li, Fengfu
2015-11-01
Recently, many computational models have been proposed to simulate visual cognition process. For example, the hierarchical Max-Pooling (HMAX) model was proposed according to the hierarchical and bottom-up structure of V1 to V4 in the ventral pathway of primate visual cortex, which could achieve position- and scale-tolerant recognition. In our previous work, we have introduced memory and association into the HMAX model to simulate visual cognition process. In this paper, we improve our theoretical framework by mimicking a more elaborate structure and function of the primate visual cortex. We will mainly focus on the new formation of memory and association in visual processing under different circumstances as well as preliminary cognition and active adjustment in the inferior temporal cortex, which are absent in the HMAX model. The main contributions of this paper are: 1) in the memory and association part, we apply deep convolutional neural networks to extract various episodic features of the objects since people use different features for object recognition. Moreover, to achieve a fast and robust recognition in the retrieval and association process, different types of features are stored in separated clusters and the feature binding of the same object is stimulated in a loop discharge manner and 2) in the preliminary cognition and active adjustment part, we introduce preliminary cognition to classify different types of objects since distinct neural circuits in a human brain are used for identification of various types of objects. Furthermore, active cognition adjustment of occlusion and orientation is implemented to the model to mimic the top-down effect in human cognition process. Finally, our model is evaluated on two face databases CAS-PEAL-R1 and AR. The results demonstrate that our model exhibits its efficiency on visual recognition process with much lower memory storage requirement and a better performance compared with the traditional purely computational
Directory of Open Access Journals (Sweden)
Till D Frank
Full Text Available We derive a statistical model of transcriptional activation using equilibrium thermodynamics of chemical reactions. We examine to what extent this statistical model predicts synergy effects of cooperative activation of gene expression. We determine parameter domains in which greater-than-additive and less-than-additive effects are predicted for cooperative regulation by two activators. We show that the statistical approach can be used to identify different causes of synergistic greater-than-additive effects: nonlinearities of the thermostatistical transcriptional machinery and three-body interactions between RNA polymerase and two activators. In particular, our model-based analysis suggests that at low transcription factor concentrations cooperative activation cannot yield synergistic greater-than-additive effects, i.e., DNA transcription can only exhibit less-than-additive effects. Accordingly, transcriptional activity turns from synergistic greater-than-additive responses at relatively high transcription factor concentrations into less-than-additive responses at relatively low concentrations. In addition, two types of re-entrant phenomena are predicted. First, our analysis predicts that under particular circumstances transcriptional activity will feature a sequence of less-than-additive, greater-than-additive, and eventually less-than-additive effects when for fixed activator concentrations the regulatory impact of activators on the binding of RNA polymerase to the promoter increases from weak, to moderate, to strong. Second, for appropriate promoter conditions when activator concentrations are increased then the aforementioned re-entrant sequence of less-than-additive, greater-than-additive, and less-than-additive effects is predicted as well. Finally, our model-based analysis suggests that even for weak activators that individually induce only negligible increases in promoter activity, promoter activity can exhibit greater-than-additive
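The two regimes described above can be illustrated with a minimal equilibrium-statistical sketch of promoter occupancy. The code below computes the probability that RNA polymerase occupies the promoter from Boltzmann weights and compares joint activation by two activators against the sum of their individual effects. All names and parameter values (scaled concentrations p, a1, a2 and cooperativity factors w1, w2, w12) are illustrative assumptions, not quantities fitted in the paper, and the parameter domains shown are simply ones where this toy model happens to produce each regime.

```python
def activity(p, a1, a2, w1=20.0, w2=20.0, w12=1.0):
    """Equilibrium probability that RNA polymerase occupies the promoter.
    p, a1, a2: dimensionless concentrations (scaled by binding constants);
    w1, w2: RNAP-activator cooperativities; w12: activator-activator term.
    All values are illustrative assumptions."""
    z_off = 1.0 + a1 + a2 + a1 * a2 * w12                      # RNAP absent
    z_on = p * (1.0 + a1 * w1 + a2 * w2 + a1 * a2 * w1 * w2 * w12)  # RNAP bound
    return z_on / (z_on + z_off)

def synergy(p, a):
    """Joint activation minus the sum of individual activation effects.
    Positive: greater-than-additive; negative: less-than-additive."""
    base = activity(p, 0.0, 0.0)
    d1 = activity(p, a, 0.0) - base
    d2 = activity(p, 0.0, a) - base
    d12 = activity(p, a, a) - base
    return d12 - (d1 + d2)

print(synergy(0.01, 2.0) > 0)  # weak basal occupancy: greater-than-additive
print(synergy(1.0, 2.0) < 0)   # near saturation: less-than-additive
```

The sign flip comes from the concavity of the occupancy function: near saturation the extra three-body weight cannot raise occupancy enough to beat the sum of the individual gains.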
Improved Water Network Macroscopic Model Utilising Auto-Control Adjusting Valve by PLS
Institute of Scientific and Technical Information of China (English)
LI Xia; ZHAO Xinhua; WANG Xiaodong
2005-01-01
In order to overcome the low precision and weak applicability problems of the current municipal water network state simulation model, the water network structure is studied. Since the telemetry system has been applied increasingly in the water network, and in order to reflect the network operational condition more accurately, a new water network macroscopic model is developed by taking the auto-control adjusting valve opening state into consideration. Then for highly correlated or collinear independent variables in the model, the partial least squares (PLS) regression method provides a model solution which can distinguish between the system information and the noisy data. Finally, a hypothetical water network is introduced for validating the model. The simulation results show that the relative error is less than 5.2%, indicating that the model is efficient and feasible, and has better generalization performance.
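The core of the PLS step above — extracting latent components that separate systematic variation from noise when predictors are nearly collinear — can be sketched with a minimal single-component NIPALS PLS1 on synthetic data. The collinear predictors and noise levels below are assumptions for illustration; this is not the authors' water network model.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic data: 3 highly collinear predictors (e.g. correlated pressure/flow
# readings from telemetry; purely hypothetical numbers).
n = 200
base = rng.normal(size=n)
X = np.column_stack([base + 0.01 * rng.normal(size=n) for _ in range(3)])
y = 2.0 * base + 0.1 * rng.normal(size=n)

def pls1(X, y, n_components=1):
    """Minimal NIPALS PLS1; returns coefficients and centering constants."""
    Xc = X - X.mean(axis=0)
    yc = y - y.mean()
    W, P, Q = [], [], []
    Xk, yk = Xc.copy(), yc.copy()
    for _ in range(n_components):
        w = Xk.T @ yk
        w /= np.linalg.norm(w)            # weight vector
        t = Xk @ w                        # latent scores
        p = Xk.T @ t / (t @ t)            # X loadings
        q = yk @ t / (t @ t)              # y loading
        Xk = Xk - np.outer(t, p)          # deflate X
        yk = yk - q * t                   # deflate y
        W.append(w); P.append(p); Q.append(q)
    W, P, Q = np.array(W).T, np.array(P).T, np.array(Q)
    B = W @ np.linalg.inv(P.T @ W) @ Q    # coefficients in original X space
    return B, X.mean(axis=0), y.mean()

B, x_mean, y_mean = pls1(X, y)
y_hat = (X - x_mean) @ B + y_mean
rel_err = np.abs(y_hat - y).mean() / np.abs(y).mean()
print(rel_err < 0.1)  # one latent component captures the shared signal
```

Ordinary least squares on these predictors would have an ill-conditioned normal matrix; the latent-component projection sidesteps that, which is exactly why PLS suits the collinear telemetry variables described above.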
Low dose radiation risks for women surviving the a-bombs in Japan: generalized additive model.
Dropkin, Greg
2016-11-24
Analyses of cancer mortality and incidence in Japanese A-bomb survivors have been used to estimate radiation risks, which are generally higher for women. Relative Risk (RR) is usually modelled as a linear function of dose. Extrapolation from data including high doses predicts small risks at low doses. Generalized Additive Models (GAMs) are flexible methods for modelling non-linear behaviour. GAMs are applied to cancer incidence in female low dose subcohorts, using anonymous public data for the 1958 - 1998 Life Span Study, to test for linearity, explore interactions, adjust for the skewed dose distribution, examine significance below 100 mGy, and estimate risks at 10 mGy. For all solid cancer incidence, RR estimated from 0 - 100 mGy and 0 - 20 mGy subcohorts is significantly raised. The response tapers above 150 mGy. At low doses, RR increases with age-at-exposure and decreases with time-since-exposure, the preferred covariate. Using the empirical cumulative distribution of dose improves model fit, and capacity to detect non-linear responses. RR is elevated over wide ranges of covariate values. Results are stable under simulation, or when removing exceptional data cells, or adjusting neutron RBE. Estimates of Excess RR at 10 mGy using the cumulative dose distribution are 10 - 45 times higher than extrapolations from a linear model fitted to the full cohort. Below 100 mGy, quasipoisson models find significant effects for all solid, squamous, uterus, corpus, and thyroid cancers, and for respiratory cancers when age-at-exposure > 35 yrs. Results for the thyroid are compatible with studies of children treated for tinea capitis, and Chernobyl survivors. Results for the uterus are compatible with studies of UK nuclear workers and the Techa River cohort. Non-linear models find large, significant cancer risks for Japanese women exposed to low dose radiation from the atomic bombings. The risks should be reflected in protection standards.
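The key modelling idea above — letting the data determine a non-linear dose-response instead of imposing linearity — can be sketched with a penalized regression spline, the building block of a GAM smooth. Everything below is synthetic: the saturating response shape, sample size, knots, and penalty are illustrative stand-ins, not the Life Span Study data or the paper's fitted model.

```python
import numpy as np

rng = np.random.default_rng(1)

# Simulated dose-response: relative risk rises at low dose, tapers at high dose.
dose = rng.uniform(0.0, 300.0, 400)                         # mGy
rr = 1.0 + 0.4 * (1.0 - np.exp(-dose / 80.0)) \
     + 0.05 * rng.normal(size=400)                          # noisy RR

x = dose / 300.0                                            # rescale to [0, 1]
knots = np.linspace(0.05, 0.95, 8)

def basis(x):
    """Truncated-power cubic spline basis with an unpenalized linear part."""
    cols = [np.ones_like(x), x]
    cols += [np.clip(x - k, 0.0, None) ** 3 for k in knots]
    return np.column_stack(cols)

B = basis(x)
lam = 1e-3                                   # smoothing penalty (illustrative)
pen = np.eye(B.shape[1])
pen[:2, :2] = 0.0                            # leave intercept and slope free
beta = np.linalg.solve(B.T @ B + lam * pen, B.T @ rr)

lo, hi = basis(np.array([10.0, 250.0]) / 300.0) @ beta
print(hi > lo)  # fitted smooth recovers the rising, tapering shape
```

A full GAM would choose the penalty by cross-validation or REML rather than fixing it; the point here is only that the smooth is free to taper above 150 mGy instead of extrapolating a straight line through the high-dose data.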
A nonparametric dynamic additive regression model for longitudinal data
DEFF Research Database (Denmark)
Martinussen, Torben; Scheike, Thomas H.
2000-01-01
dynamic linear models, estimating equations, least squares, longitudinal data, nonparametric methods, partly conditional mean models, time-varying-coefficient models
Remote Sensing-based Methodologies for Snow Model Adjustments in Operational Streamflow Prediction
Bender, S.; Miller, W. P.; Bernard, B.; Stokes, M.; Oaida, C. M.; Painter, T. H.
2015-12-01
Water management agencies rely on hydrologic forecasts issued by operational agencies such as NOAA's Colorado Basin River Forecast Center (CBRFC). The CBRFC has partnered with the Jet Propulsion Laboratory (JPL) under funding from NASA to incorporate research-oriented, remotely-sensed snow data into CBRFC operations and to improve the accuracy of CBRFC forecasts. The partnership has yielded valuable analysis of snow surface albedo as represented in JPL's MODIS Dust Radiative Forcing in Snow (MODDRFS) data, across the CBRFC's area of responsibility. When dust layers within a snowpack emerge, reducing the snow surface albedo, the snowmelt rate may accelerate. The CBRFC operational snow model (SNOW17) is a temperature-index model that lacks explicit representation of snowpack surface albedo. CBRFC forecasters monitor MODDRFS data for emerging dust layers and may manually adjust SNOW17 melt rates. A technique was needed for efficient and objective incorporation of the MODDRFS data into SNOW17. Initial development focused in Colorado, where dust-on-snow events frequently occur. CBRFC forecasters used retrospective JPL-CBRFC analysis and developed a quantitative relationship between MODDRFS data and mean areal temperature (MAT) data. The relationship was used to generate adjusted, MODDRFS-informed input for SNOW17. Impacts of the MODDRFS-SNOW17 MAT adjustment method on snowmelt-driven streamflow prediction varied spatially and with characteristics of the dust deposition events. The largest improvements occurred in southwestern Colorado, in years with intense dust deposition events. Application of the method in other regions of Colorado and in "low dust" years resulted in minimal impact. The MODDRFS-SNOW17 MAT technique will be implemented in CBRFC operations in late 2015, prior to spring 2016 runoff. Collaborative investigation of remote sensing-based adjustment methods for the CBRFC operational hydrologic forecasting environment will continue over the next several years.
Steinhauser, Marco; Eichele, Heike; Juvodden, Hilde T; Huster, Rene J; Ullsperger, Markus; Eichele, Tom
2012-01-01
Errors in choice tasks are preceded by gradual changes in brain activity presumably related to fluctuations in cognitive control that promote the occurrence of errors. In the present paper, we use connectionist modeling to explore the hypothesis that these fluctuations reflect (mal-)adaptive adjustments of cognitive control. We considered ERP data from a study in which the probability of conflict in an Eriksen-flanker task was manipulated in sub-blocks of trials. Errors in these data were preceded by a gradual decline of N2 amplitude. After fitting a connectionist model of conflict adaptation to the data, we analyzed simulated N2 amplitude, simulated response times (RTs), and stimulus history preceding errors in the model, and found that the model produced the same pattern as obtained in the empirical data. Moreover, this pattern is not found in alternative models in which cognitive control varies randomly or in an oscillating manner. Our simulations suggest that the decline of N2 amplitude preceding errors reflects an increasing adaptation of cognitive control to specific task demands, which leads to an error when these task demands change. Taken together, these results provide evidence that error-preceding brain activity can reflect adaptive adjustments rather than unsystematic fluctuations of cognitive control, and therefore, that these errors are actually a consequence of the adaptiveness of human cognition.
Energy Technology Data Exchange (ETDEWEB)
Zheng Yongai, E-mail: zhengyongai@163.co [Department of Computer, Yangzhou University, Yangzhou, 225009 (China); Nian Yibei [School of Energy and Power Engineering, Yangzhou University, Yangzhou, 225009 (China); Wang Dejin [Department of Computer, Yangzhou University, Yangzhou, 225009 (China)
2010-12-01
In this Letter, a novel model, called the generalized Takagi-Sugeno (T-S) fuzzy model, is first developed by extending the conventional T-S fuzzy model. Then, a simple but efficient method to control fractional order chaotic systems is proposed using the generalized T-S fuzzy model and an adaptive adjustment mechanism (AAM). Sufficient conditions are derived to guarantee chaos control from the stability criterion of linear fractional order systems. The proposed approach offers a systematic design procedure for stabilizing a large class of fractional order chaotic systems from the literature on chaos research. The effectiveness of the approach is tested on the fractional order Rössler system and the fractional order Lorenz system.
Elizur, Y; Ziv, M
2001-01-01
While heterosexist family undermining has been demonstrated to be a developmental risk factor in the life of persons with same-gender orientation, the issue of protective family factors is both controversial and relatively neglected. In this study of Israeli gay males (N = 114), we focused on the interrelations of family support, family acceptance and family knowledge of gay orientation, and gay male identity formation, and their effects on mental health and self-esteem. A path model was proposed based on the hypotheses that family support, family acceptance, family knowledge, and gay identity formation have an impact on psychological adjustment, and that family support has an effect on gay identity formation that is mediated by family acceptance. The assessment of gay identity formation was based on an established stage model that was streamlined for cross-cultural practice by defining three basic processes of same-gender identity formation: self-definition, self-acceptance, and disclosure (Elizur & Mintzer, 2001). The testing of our conceptual path model demonstrated an excellent fit with the data. An alternative model that hypothesized effects of gay male identity on family acceptance and family knowledge did not fit the data. Interpreting these results, we propose that the main effect of family support/acceptance on gay identity is related to the process of disclosure, and that both general family support and family acceptance of same-gender orientation play a significant role in the psychological adjustment of gay men.
Percolation model with an additional source of disorder
Kundu, Sumanta; Manna, S. S.
2016-06-01
The ranges of transmission of the mobiles in a mobile ad hoc network are not uniform in reality. They are affected by temperature fluctuations in the air, obstruction by solid objects, and even humidity differences in the environment. How the varying transmission ranges of the individual active elements affect the global connectivity of the network is an important practical question. Here a model of percolation phenomena, with an additional source of disorder, is introduced for a theoretical understanding of this problem. As in ordinary percolation, sites of a square lattice are occupied randomly with probability p. Each occupied site is then assigned a circular disk of random value R for its radius. A bond is defined to be occupied if and only if the radii R1 and R2 of the disks centered at its ends satisfy a certain predefined condition. In a very general formulation, one divides the R1-R2 plane into two regions by an arbitrary closed curve, defining a point within one region as representing an occupied bond and a point in the other as a vacant bond. The study of three different rules under this general formulation indicates that the percolation threshold always varies continuously. This threshold has two limiting values: one is pc(sq), the percolation threshold for ordinary site percolation on the square lattice, and the other is unity. The approach of the percolation threshold to its limiting values is characterized by two exponents. In a special case, all lattice sites are occupied by disks of random radii R ∈ {0, R0}, and a percolation transition is observed with R0 as the control variable, similar to the site occupation probability.
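The model lends itself to a short simulation. The sketch below occupies sites with probability p, draws a random disk radius per site, bonds neighbouring occupied sites under one illustrative rule (disks overlap, i.e. R1 + R2 >= 1 lattice spacing — one possible choice among the rules the abstract mentions), and checks for a top-to-bottom spanning cluster with union-find. Lattice size, radius range, and seed are arbitrary.

```python
import numpy as np

def spans(p, r_lo, r_hi, L=30, seed=2):
    """Top-to-bottom spanning check for the disk-radius percolation model."""
    rng = np.random.default_rng(seed)
    occupied = rng.random((L, L)) < p
    radius = rng.uniform(r_lo, r_hi, (L, L))

    parent = list(range(L * L))          # union-find over lattice sites
    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]   # path halving
            i = parent[i]
        return i
    def union(i, j):
        ri, rj = find(i), find(j)
        if ri != rj:
            parent[ri] = rj

    for i in range(L):
        for j in range(L):
            if not occupied[i, j]:
                continue
            for di, dj in ((1, 0), (0, 1)):       # right and down neighbours
                ni, nj = i + di, j + dj
                if ni < L and nj < L and occupied[ni, nj] \
                   and radius[i, j] + radius[ni, nj] >= 1.0:
                    union(i * L + j, ni * L + nj)

    top = {find(j) for j in range(L) if occupied[0, j]}
    bottom = {find((L - 1) * L + j) for j in range(L) if occupied[L - 1, j]}
    return bool(top & bottom)

print(spans(1.0, 0.6, 0.8))   # every bond rule is satisfied: the cluster spans
print(spans(0.2, 0.6, 0.8))   # far below the site threshold: no spanning
```

Sweeping p (or the radius distribution) in this harness and averaging the spanning indicator over seeds is the standard way to locate the threshold numerically.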
Droop Control with an Adjustable Complex Virtual Impedance Loop based on Cloud Model Theory
DEFF Research Database (Denmark)
Li, Yan; Shuai, Zhikang; Xu, Qinming
2016-01-01
not only can avoid the active/reactive power coupling, but may also reduce the output voltage drop at the PCC. The proposed adjustable complex virtual impedance loop is incorporated into the conventional P/Q droop control to overcome the difficulty of obtaining the line impedance, which may change...... sometimes. Cloud model theory is applied to estimate the changing line impedance value online, relying on the response of the reactive power to the changing line impedance. The proposed control strategy is verified by simulation of a low-voltage microgrid in Matlab....
Muratore, Sydne; Statz, Catherine; Glover, J J; Kwaan, Mary; Beilman, Greg
2016-04-01
Colon surgical site infections (SSIs) are increasingly being used as a quality measure for hospital reimbursement and public reporting. The Centers for Medicare and Medicaid Services (CMS) now require reporting of colon SSI, which is entered through the U.S. Centers for Disease Control and Prevention's National Healthcare Safety Network (NHSN). However, the CMS's model for determining expected SSIs uses different risk-adjustment variables than does NHSN. We hypothesize that CMS's colon SSI model will predict lower expected infection rates than will NHSN. Colon SSI data were reported prospectively to NHSN from 2012-2014 for the six Fairview Hospitals (1,789 colon procedures). We compared expected quarterly SSIs and standardized infection ratios (SIRs) generated by CMS's risk-adjustment model (age and American Society of Anesthesiologists [ASA] classification) vs. NHSN's (age, ASA classification, procedure duration, endoscope [including laparoscope] use, medical school affiliation, hospital bed number, and incision class). The patients with more complex colon SSIs were more likely to be male (60% vs. 44%; p = 0.011), to have contaminated/dirty incisions (21% vs. 10%; p = 0.005), and to have longer operations (235 min vs. 156 min; p < 0.001), and were more likely to be at a medical school-affiliated hospital (53% vs. 40%; p = 0.032). For the Fairview Hospitals combined, CMS calculated a lower number of expected quarterly SSIs than did the NHSN (4.58 vs. 5.09 SSIs/quarter; p = 0.002). This difference persisted in a university hospital (727 procedures; 2.08 vs. 2.33; p = 0.002) and a smaller, community-based hospital (565 procedures; 1.31 vs. 1.42; p = 0.002). There were two quarters in which CMS identified Fairview's SIR as an outlier for complex colon SSIs (p = 0.05 and 0.04), whereas NHSN did not (p = 0.06 and 0.06). The CMS's current risk-adjustment model using age and ASA classification predicts lower rates of expected colon SSIs than does the NHSN model.
Harinath, Eranda; Mann, George K I
2008-06-01
This paper describes a design and two-level tuning method for fuzzy proportional-integral-derivative (FPID) controllers for a multivariable process, where the fuzzy inference uses the standard additive model. The proposed method can be used for any n x n multi-input-multi-output process and guarantees closed-loop stability. In the two-level tuning scheme, the tuning follows two steps: low-level tuning followed by high-level tuning. The low-level tuning adjusts apparent linear gains, whereas the high-level tuning changes the nonlinearity in the normalized fuzzy output. In this paper, two types of FPID configurations are considered, and their performances are evaluated by using a real-time multizone temperature control problem having a 3 x 3 process system.
DEFF Research Database (Denmark)
Appelt, Ane L; Vogelius, Ivan R.; Farr, Katherina P.
2014-01-01
Background. Understanding the dose-response of the lung in order to minimize the risk of radiation pneumonitis (RP) is critical for optimization of lung cancer radiotherapy. We propose a method to combine the dose-response relationship for RP in the landmark QUANTEC paper with known clinical risk......-only QUANTEC model and the model including risk factors. Subdistribution cumulative incidence functions were compared for patients with high/low-risk predictions from the two models, and concordance indices (c-indices) for the prediction of RP were calculated. Results. The reference dose-response relationship...... factors, in order to enable individual risk prediction. The approach is validated in an independent dataset. Material and methods. The prevalence of risk factors in the patient populations underlying the QUANTEC analysis was estimated, and a previously published method to adjust dose...
Kumagai, K; Rouvelas, I; Tsai, J A; Mariosa, D; Lind, P A; Lindblad, M; Ye, W; Lundell, L; Schuhmacher, C; Mauer, M; Burmeister, B H; Thomas, J M; Stahl, M; Nilsson, M
2015-03-01
Several phase I/II studies of chemoradiotherapy for gastric cancer have reported promising results, but the significance of preoperative radiotherapy in addition to chemotherapy has not been proven. In this study, a systematic literature search was performed to capture survival and postoperative morbidity and mortality data in randomised clinical studies comparing preoperative (chemo)radiotherapy or chemotherapy versus surgery alone, or preoperative chemoradiotherapy versus chemotherapy for gastric and/or gastro-oesophageal junction (GOJ) cancer. Hazard ratios (HRs) for overall mortality were extracted from the original studies, individual patient data provided from the principal investigators of eligible studies or the earlier published meta-analysis. The incidences of postoperative morbidities and mortalities were also analysed. In total 18 studies were eligible and data were available from 14 of these. The meta-analysis on overall survival yielded HRs of 0.75 (95% CI 0.65-0.86, P < 0.001) for preoperative (chemo)radiotherapy and 0.83 (95% CI 0.67-1.01, P = 0.065) for preoperative chemotherapy when compared to surgery alone. Direct comparison between preoperative chemoradiotherapy and chemotherapy resulted in an HR of 0.71 (95% CI 0.45-1.12, P = 0.146). Combination of direct and adjusted indirect comparisons yielded an HR of 0.86 (95% CI 0.69-1.07, P = 0.171). No statistically significant differences were seen in the risk for postoperative morbidity or mortality between preoperative treatments and surgery alone, or preoperative (chemo)radiotherapy and chemotherapy. Preoperative (chemo)radiotherapy for gastric and GOJ cancer showed significant survival benefit over surgery alone. In comparisons between preoperative chemotherapy and (chemo)radiotherapy, there is a trend towards improved survival when adding radiotherapy, without increased postoperative morbidity or mortality.
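The pooling step described above is standard inverse-variance meta-analysis on the log hazard ratio scale. The sketch below shows a fixed-effect version with made-up study HRs and standard errors (illustrative numbers only, not the trial data from this review).

```python
import numpy as np

# Hypothetical per-study hazard ratios and SEs of log(HR)
hr = np.array([0.70, 0.82, 0.68, 0.90])
se_log = np.array([0.10, 0.15, 0.12, 0.20])

w = 1.0 / se_log**2                                   # inverse-variance weights
log_pooled = np.sum(w * np.log(hr)) / np.sum(w)       # pooled log(HR)
se_pooled = 1.0 / np.sqrt(np.sum(w))
ci = np.exp(log_pooled + np.array([-1.96, 1.96]) * se_pooled)
print(round(float(np.exp(log_pooled)), 3), np.round(ci, 3))
```

A random-effects variant would add a between-study variance estimate (e.g. DerSimonian-Laird) to each weight's denominator; with heterogeneous trial designs, as in the review above, that is usually the more defensible choice.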
Directory of Open Access Journals (Sweden)
C Elizabeth McCarron
Full Text Available BACKGROUND: Bayesian hierarchical models have been proposed to combine evidence from different types of study designs. However, when combining evidence from randomised and non-randomised controlled studies, imbalances in patient characteristics between study arms may bias the results. The objective of this study was to assess the performance of a proposed Bayesian approach to adjust for imbalances in patient level covariates when combining evidence from both types of study designs. METHODOLOGY/PRINCIPAL FINDINGS: Simulation techniques, in which the truth is known, were used to generate sets of data for randomised and non-randomised studies. Covariate imbalances between study arms were introduced in the non-randomised studies. The performance of the Bayesian hierarchical model adjusted for imbalances was assessed in terms of bias. The data were also modelled using three other Bayesian approaches for synthesising evidence from randomised and non-randomised studies. The simulations considered six scenarios aimed at assessing the sensitivity of the results to changes in the impact of the imbalances and the relative number and size of studies of each type. For all six scenarios considered, the Bayesian hierarchical model adjusted for differences within studies gave results that were unbiased and closest to the true value compared to the other models. CONCLUSIONS/SIGNIFICANCE: Where informed health care decision making requires the synthesis of evidence from randomised and non-randomised study designs, the proposed hierarchical Bayesian method adjusted for differences in patient characteristics between study arms may facilitate the optimal use of all available evidence leading to unbiased results compared to unadjusted analyses.
Dynamic Air-Route Adjustments - Model, Algorithm, and Sensitivity Analysis
Institute of Scientific and Technical Information of China (English)
GENG Rui; CHENG Peng; CUI Deguang
2009-01-01
Dynamic airspace management (DAM) is an important approach to extend limited airspace resources by using them more efficiently and flexibly. This paper analyzes the use of the dynamic air-route adjustment (DARA) method as a core procedure in DAM systems. The DARA method makes dynamic decisions on when and how to adjust the current air-route network with the minimum cost. This model differs from the air traffic flow management (ATFM) problem because it considers dynamic opening and closing of air-route segments instead of only arranging flights on a given air traffic network, and it takes into account several new constraints, such as the shortest opening time constraint. The DARA problem is solved using a two-step heuristic algorithm. The sensitivities of important coefficients in the model are analyzed to determine proper values for these coefficients. The computational results based on practical data from the Beijing ATC region show that the two-step heuristic algorithm gives results as good as CPLEX in less or equal time in most cases.
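The basic economics of the decision — opening a closed segment pays off only when the routing cost it saves across flights exceeds the opening cost — can be shown on a toy network with a shortest-path solver. The graph, costs, and flight count below are entirely hypothetical; the real DARA model adds timing constraints (such as the shortest opening time) that this sketch omits.

```python
import heapq

def shortest(graph, src, dst):
    """Dijkstra over an adjacency dict {node: [(neighbour, cost), ...]}."""
    dist = {src: 0.0}
    pq = [(0.0, src)]
    while pq:
        d, u = heapq.heappop(pq)
        if u == dst:
            return d
        if d > dist.get(u, float("inf")):
            continue
        for v, w in graph.get(u, []):
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(pq, (nd, v))
    return float("inf")

base = {"A": [("B", 5.0)], "B": [("C", 5.0)]}   # currently open segments
candidate = ("A", "C", 6.0)                      # closed direct segment
opening_cost = 2.0
n_flights = 3                                    # flights needing A -> C

cost_closed = n_flights * shortest(base, "A", "C")
with_seg = {k: list(v) for k, v in base.items()}
with_seg.setdefault(candidate[0], []).append((candidate[1], candidate[2]))
cost_open = n_flights * shortest(with_seg, "A", "C") + opening_cost
print(cost_open < cost_closed)   # opening the direct segment pays off here
```

A heuristic like the paper's two-step algorithm effectively repeats this comparison over many candidate segments and time windows, subject to the operational constraints.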
Directory of Open Access Journals (Sweden)
Ali P. Yunus
2016-04-01
Full Text Available Sea-level rise (SLR) from global warming may have severe consequences for coastal cities, particularly when combined with predicted increases in the strength of tidal surges. Predicting the regional impact of SLR flooding is strongly dependent on the modelling approach and the accuracy of topographic data. Here, the areas at risk of sea water flooding for London boroughs were quantified based on the projected SLR scenarios reported in the Intergovernmental Panel on Climate Change (IPCC) Fifth Assessment Report (AR5) and the UK Climate Projections 2009 (UKCP09), using a tidally-adjusted bathtub modelling approach. Medium- to very high-resolution digital elevation models (DEMs) are used to evaluate inundation extents as well as uncertainties. Depending on the SLR scenario and the DEMs used, it is estimated that 3%–8% of the area of Greater London could be inundated by 2100. The boroughs with the largest areas at risk of flooding are Newham, Southwark, and Greenwich. The differences in inundation areas estimated from a digital terrain model and a digital surface model are much greater than the root mean square error differences observed between the two data types, which may be attributed to processing levels. Flood models from SRTM data underestimate the inundation extent, so their results may not be reliable for constructing flood risk maps. This analysis provides a broad-scale estimate of the potential consequences of SLR and of the uncertainties in DEM-based bathtub-type flood inundation modelling for London boroughs.
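A bathtub model in its simplest connectivity-aware form floods every DEM cell that is below the water level and hydraulically connected to the sea. The sketch below runs a breadth-first flood fill on a tiny synthetic elevation grid (the DEM values, coastline placement, and water level are invented for illustration; real runs use LiDAR or SRTM elevations and tidally-adjusted levels).

```python
import numpy as np
from collections import deque

def bathtub(dem, water_level):
    """Flood cells <= water_level that connect to the left-edge 'coastline'."""
    rows, cols = dem.shape
    flooded = np.zeros_like(dem, dtype=bool)
    q = deque()
    for i in range(rows):                      # seed from the coastal boundary
        if dem[i, 0] <= water_level:
            flooded[i, 0] = True
            q.append((i, 0))
    while q:                                   # 4-connected flood fill
        i, j = q.popleft()
        for ni, nj in ((i + 1, j), (i - 1, j), (i, j + 1), (i, j - 1)):
            if 0 <= ni < rows and 0 <= nj < cols and not flooded[ni, nj] \
               and dem[ni, nj] <= water_level:
                flooded[ni, nj] = True
                q.append((ni, nj))
    return flooded

dem = np.array([
    [0.2, 0.4, 2.0, 0.1],
    [0.3, 1.8, 2.1, 0.2],
    [0.1, 0.2, 1.9, 0.3],
])                                             # elevations in metres
flooded = bathtub(dem, water_level=0.5)
print(int(flooded.sum()))                      # prints 5
```

Note that the low-lying cells in the right-hand column stay dry despite being below the water level: the 1.8-2.1 m ridge disconnects them from the sea. That connectivity condition is what distinguishes this from a naive "everything below the level" bathtub.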
Loeser, Meghan K.; Whiteman, Shawn D.; McHale, Susan M.
2016-01-01
Youth's perception of parents’ differential treatment (PDT) are associated with maladjustment during adolescence. Although the direct relations between PDT and youth's maladjustment have been well established, the mechanisms underlying these associations remain unclear. We addressed this gap by examining whether sibling jealousy accounted for the links between PDT and youth's depressive symptoms, self-worth, and risky behaviors. Additionally, we examined whether youth's perceptions of fairness regarding their treatment as well as the gender constellation of the dyad moderated these indirect relations (i.e., moderated-indirect effects). Participants were first- and second-born adolescent siblings (M = 15.96, SD = .72 years for older siblings, M = 13.48, SD = 1.02 years for younger siblings) and their parents from 197 working and middle class European American families. Data were collected via home interviews. A series of Conditional Process Analyses revealed significant indirect effects of PDT through sibling jealousy to all three adjustment outcomes. Furthermore, perceptions of fairness moderated the relations between PDT and jealousy, such that the indirect effects were only significant at low (−1 SD) and average levels of fairness. At high levels of fairness (+1 SD) there was no association between PDT, jealousy, and youth adjustment. Taken together, results indicate that youth and parents would benefit from engaging in clear communication regarding the reasoning for the occurrence of differential treatment, likely maximizing youth and parent perceptions of that treatment as being fair, and in turn mitigating sibling jealousy and maladjustment. PMID:27867295
Directory of Open Access Journals (Sweden)
Samuel J Clark
Full Text Available A recent study using Heckman-type selection models to adjust for non-response in the Zambia 2007 Demographic and Health Survey (DHS found a large correction in HIV prevalence for males. We aim to validate this finding, replicate the adjustment approach in other DHSs, apply the adjustment approach in an external empirical context, and assess the robustness of the technique to different adjustment approaches. We used 6 DHSs, and an HIV prevalence study from rural South Africa to validate and replicate the adjustment approach. We also developed an alternative, systematic model of selection processes and applied it to all surveys. We decomposed corrections from both approaches into rate change and age-structure change components. We are able to reproduce the adjustment approach for the 2007 Zambia DHS and derive results comparable with the original findings. We are able to replicate applying the approach in several other DHSs. The approach also yields reasonable adjustments for a survey in rural South Africa. The technique is relatively robust to how the adjustment approach is specified. The Heckman selection model is a useful tool for assessing the possibility and extent of selection bias in HIV prevalence estimates from sample surveys.
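The adjustment technique above is the classic two-step Heckman estimator: a probit model of who responds, followed by an outcome regression augmented with the inverse Mills ratio. The sketch below applies it to simulated data with non-random selection; the equations, coefficients, and sample size are illustrative assumptions, not the DHS or South African survey data.

```python
import numpy as np
from math import erf, sqrt, pi

rng = np.random.default_rng(3)
norm_cdf = np.vectorize(lambda t: 0.5 * (1.0 + erf(t / sqrt(2.0))))
norm_pdf = lambda t: np.exp(-0.5 * t * t) / sqrt(2.0 * pi)

# Simulated data: the selection error u is correlated with the outcome error,
# so the observed (selected) sample is biased.
n = 5000
x = rng.normal(size=n)
u = rng.normal(size=n)
eps = 0.7 * u + np.sqrt(1 - 0.7**2) * rng.normal(size=n)
selected = (0.5 + 1.0 * x + u) > 0            # selection equation
y = 1.0 + 2.0 * x + eps                       # outcome equation (true slope 2)

# Step 1: probit of the selection indicator on (1, x), Fisher-scoring Newton
W = np.column_stack([np.ones(n), x])
gamma = np.zeros(2)
for _ in range(25):
    z = W @ gamma
    P = np.clip(norm_cdf(z), 1e-10, 1 - 1e-10)
    g = W.T @ ((selected - P) * norm_pdf(z) / (P * (1 - P)))       # score
    H = (W * (norm_pdf(z) ** 2 / (P * (1 - P)))[:, None]).T @ W    # information
    gamma = gamma + np.linalg.solve(H, g)

# Step 2: OLS on the selected sample, adding the inverse Mills ratio
z_sel = (W @ gamma)[selected]
imr = norm_pdf(z_sel) / norm_cdf(z_sel)
Xc = np.column_stack([np.ones(selected.sum()), x[selected], imr])
beta_corr = np.linalg.lstsq(Xc, y[selected], rcond=None)[0]

Xn = np.column_stack([np.ones(selected.sum()), x[selected]])
beta_naive = np.linalg.lstsq(Xn, y[selected], rcond=None)[0]
print("naive slope:", round(beta_naive[1], 2),
      "corrected slope:", round(beta_corr[1], 2))
```

The naive regression on responders alone understates the slope because the selection rule censors high-error observations differentially across x; the Mills-ratio term absorbs that conditional mean of the error, which is the same mechanism the HIV prevalence correction exploits.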
Additive manufacturing for consumer-centric business models
DEFF Research Database (Denmark)
Bogers, Marcel; Hadar, Ronen; Bilberg, Arne
2016-01-01
and capturing value. In this paper, we explore the implications that AM technologies have for manufacturing systems in the new business models that they enable. In particular, we consider how a consumer goods manufacturer can organize the operations of a more open business model when moving from a manufacturer...
A model of the western Laurentide Ice Sheet, using observations of glacial isostatic adjustment
Gowan, Evan J.; Tregoning, Paul; Purcell, Anthony; Montillet, Jean-Philippe; McClusky, Simon
2016-05-01
We present the results of a new numerical model of the late glacial western Laurentide Ice Sheet, constrained by observations of glacial isostatic adjustment (GIA), including relative sea level indicators, uplift rates from permanent GPS stations, contemporary differential lake level change, and postglacial tilt of glacial lake level indicators. The latter two datasets have been underutilized in previous GIA-based ice sheet reconstructions. The ice sheet model, called NAICE, is constructed using simple ice physics on the basis of changing margin location and basal shear stress conditions in order to produce the ice volumes required to match GIA. The model matches the majority of the observations while maintaining a relatively realistic ice sheet geometry. Our model has a peak volume at 18,000 yr BP, with a dome located just east of Great Slave Lake with a peak thickness of 4000 m and a surface elevation of 3500 m. The modelled ice volume loss between 16,000 and 14,000 yr BP amounts to about 7.5 m of sea level equivalent, which is consistent with the hypothesis that a large portion of Meltwater Pulse 1A was sourced from this part of the ice sheet. The southern part of the ice sheet was thin and had a low elevation profile. This model provides an accurate representation of ice thickness and paleo-topography, and can be used to assess present day uplift and infer past climate.
Adjusting kinematics and kinetics in a feedback-controlled toe walking model
Directory of Open Access Journals (Sweden)
Olenšek Andrej
2012-08-01
Full Text Available Abstract Background In clinical gait assessment, the correct interpretation of gait kinematics and kinetics has a decisive impact on the success of the therapeutic programme. Due to the vast amount of information from which primary anomalies should be identified and separated from secondary compensatory changes, as well as the biomechanical complexity and redundancy of the human locomotion system, this task is considerably challenging and requires the attention of an experienced interdisciplinary team of experts. The ongoing research in the field of biomechanics suggests that mathematical modeling may facilitate this task. This paper explores the possibility of generating a family of toe walking gait patterns by systematically changing selected parameters of a feedback-controlled model. Methods From the selected clinical case of toe walking we identified typical toe walking characteristics and encoded them as a set of gait-oriented control objectives to be achieved in a feedback-controlled walking model. They were defined as fourth-order polynomials and imposed via feedback control at the within-step control level. At the between-step control level, stance leg lengthening velocity at the end of the single support phase was adaptively adjusted after each step so as to facilitate gait velocity control. Each time the gait velocity settled at the desired value, selected intra-step gait characteristics were modified by adjusting the polynomials so as to mimic the effect of a typical therapeutic intervention - inhibitory casting. Results By systematically adjusting the set of control parameters we were able to generate a family of gait kinematic and kinetic patterns that exhibit similar principal toe walking characteristics, as they were recorded by means of an instrumented gait analysis system in the selected clinical case of toe walking. We further acknowledge that they to some extent follow similar improvement tendencies as those which one can
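As an illustrative sketch of the encoding described above, the snippet below fits a fourth-order polynomial to a few joint-angle targets over a normalized step phase and exposes the tracking error a within-step feedback controller would act on. The target values, phase variable, and function names are hypothetical, not taken from the model in the paper.

```python
import numpy as np

# Hypothetical within-step control objective: an ankle-angle trajectory over
# the normalized stance phase t in [0, 1], encoded as a fourth-order
# polynomial fitted to a few target points by least squares.
t_targets = np.array([0.0, 0.25, 0.5, 0.75, 1.0])
angle_targets = np.array([10.0, 18.0, 22.0, 15.0, 8.0])  # degrees, illustrative

coeffs = np.polyfit(t_targets, angle_targets, deg=4)     # 5 coefficients
reference = np.poly1d(coeffs)                            # callable reference trajectory

def tracking_error(measured_angle, t):
    """Feedback error fed to the within-step controller at phase t."""
    return reference(t) - measured_angle
```

In this sketch, the therapeutic adjustment (mimicking inhibitory casting) would amount to refitting the polynomial to shifted targets while reusing the same feedback law.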
Directory of Open Access Journals (Sweden)
Jun'ichi Okuno
2013-11-01
Full Text Available We present relative sea level (RSL curves in Antarctica derived from glacial isostatic adjustment (GIA predictions based on the melting scenarios of the Antarctic ice sheet since the Last Glacial Maximum (LGM given in previous works. Simultaneously, Holocene-age RSL observations obtained at the raised beaches along the coast of Antarctica are shown to be in agreement with the GIA predictions. The differences from previously published ice-loading models regarding the spatial distribution and total mass change of the melted ice are significant. These models were also derived from GIA modelling; the variations can be attributed to the lack of geological and geographical evidence regarding the history of crustal movement due to ice sheet evolution. Next, we summarise the previously published ice load models and demonstrate the RSL curves based on combinations of different ice and earth models. The RSL curves calculated by GIA models indicate that the dependence on both the ice and earth models is significant at several sites where RSL observations were obtained. In particular, GIA predictions based on a thin lithospheric thickness show spatial distributions that depend on the melted ice thickness at each site. These characteristics result from the short-wavelength deformation of the Earth. However, our predictions strongly suggest that it is possible to find an average ice model despite the use of different models of lithospheric thickness. From sea level and crustal movement observations, we can deduce the geometry of the post-LGM ice sheets in detail and remove the GIA contribution from the crustal deformation and gravity change observed by space geodetic techniques, such as GPS and GRACE, for the estimation of the Antarctic ice mass change associated with recent global warming.
Evtushenko, V. F.; Myshlyaev, L. P.; Makarov, G. V.; Ivushkin, K. A.; Burkova, E. V.
2016-10-01
The structure of multi-variant physical and mathematical models of a control system is presented, together with its application to the adjustment of the automatic control system (ACS) of production facilities, using a coal processing plant as an example.
Measurement of the Economic Growth and Add-on of the R.M. Solow Adjusted Model
Directory of Open Access Journals (Sweden)
Ion Gh. Rosca
2007-08-01
Full Text Available Besides the models of M. Keynes, R.F. Harrod, E. Domar, D. Romer, the Ramsey-Cass-Koopmans model, etc., the R.M. Solow model belongs to the category of models that characterize economic growth. The aim of the paper is the measurement of economic growth and an extension of the adjusted R.M. Solow model.
Additive Manufacturing of Anatomical Models from Computed Tomography Scan Data.
Gür, Y
2014-12-01
The purpose of the study presented here was to investigate the manufacturability of human anatomical models from Computed Tomography (CT) scan data via a 3D desktop printer which uses fused deposition modelling (FDM) technology. First, Digital Imaging and Communications in Medicine (DICOM) CT scan data were converted to 3D Standard Triangle Language (STL) format by using the InVesalius digital imaging program. Once this STL file is obtained, a 3D physical version of the anatomical model can be fabricated by a desktop 3D FDM printer. As a case study, a patient's skull CT scan data was considered, and a tangible version of the skull was manufactured by a 3D FDM desktop printer. During the 3D printing process, the skull was built using acrylonitrile-butadiene-styrene (ABS) co-polymer plastic. The printed model showed that the 3D FDM printing technology is able to fabricate anatomical models with high accuracy. As a result, the skull model can be used for preoperative surgical planning, medical training activities, implant design and simulation to show the potential of the FDM technology in the medical field. It will also improve communication between medical staff and patients. Current results indicate that a 3D desktop printer which uses FDM technology can be used to obtain accurate anatomical models.
McKeown, Gary J; Sneddon, Ian
2014-03-01
Emotion research has long been dominated by the "standard method" of displaying posed or acted static images of facial expressions of emotion. While this method has been useful, it is unable to investigate the dynamic nature of emotion expression. Although continuous self-report traces have enabled the measurement of dynamic expressions of emotion, a consensus has not been reached on the correct statistical techniques that permit inferences to be made with such measures. We propose generalized additive models and generalized additive mixed models as techniques that can account for the dynamic nature of such continuous measures. These models allow us to hold constant shared components of responses that are due to perceived emotion across time, while enabling inference concerning linear differences between groups. The generalized additive mixed model approach is preferred, as it can account for autocorrelation in time series data and allows emotion decoding participants to be modeled as random effects. To increase confidence in linear differences, we assess the methods that address interactions between categorical variables and dynamic changes over time. In addition, we provide comments on the use of generalized additive models to assess the effect size of shared perceived emotion and discuss sample sizes. Finally, we address additional uses, the inference of feature detection, continuous variable interactions, and measurement of ambiguity.
Primary circuit iodine model addition to IMPAIR-3
Energy Technology Data Exchange (ETDEWEB)
Osetek, D.J.; Louie, D.L.Y. [Los Alamos Technical Associates, Inc., Albuquerque, NM (United States); Guntay, S.; Cripps, R. [Paul Scherrer Inst. (PSI), Villigen (Switzerland)
1996-12-01
As part of a continuing effort to provide the U.S. Department of Energy (DOE) Advanced Reactor Severe Accident Program (ARSAP) with complete iodine analysis capability, a task was undertaken to expand the modeling of IMPAIR-3, an iodine chemistry code. The expanded code will enable the DOE to include detailed iodine behavior in the assessment of severe accident source terms used in the licensing of U.S. Advanced Light Water Reactors (ALWRs). IMPAIR-3 was developed at the Paul Scherrer Institute (PSI), Switzerland, and has been used by ARSAP for the past two years to analyze containment iodine chemistry for ALWR source term analyses. IMPAIR-3 is primarily a containment code, but the iodine chemistry inside the primary circuit (the Reactor Coolant System or RCS) may influence the iodine species released into the containment; therefore, an RCS iodine chemistry model must be implemented in IMPAIR-3 to ensure thorough source term analysis. The ARSAP source term team and the PSI IMPAIR-3 developers are working together to accomplish this task. This cooperation is divided into two phases. Phase I, taking place in 1996, involves developing a stand-alone RCS iodine chemistry program called IMPRCS (IMPAIR-Reactor Coolant System). This program models a number of the chemical and physical processes of iodine that are thought to be important at conditions of high temperature and pressure in the RCS. In Phase II, which is tentatively scheduled for 1997, IMPRCS will be implemented as a subroutine in IMPAIR-3. To ensure an efficient calculation, an interface/tracking system will be developed to control the use of the RCS model from the containment model. These two models will be interfaced in such a way that once the iodine is released from the RCS, it will no longer be tracked by the RCS model but will be tracked by the containment model. All RCS thermal-hydraulic parameters will be provided by other codes. (author) figs., tabs., refs.
Analysis of time to event outcomes in randomized controlled trials by generalized additive models.
Directory of Open Access Journals (Sweden)
Christos Argyropoulos
Full Text Available Randomized Controlled Trials almost invariably utilize the hazard ratio (HR) calculated with a Cox proportional hazards model as a treatment efficacy measure. Despite the widespread adoption of HRs, these provide a limited understanding of the treatment effect and may even provide a biased estimate when the assumption of proportional hazards in the Cox model is not verified by the trial data. Additional treatment effect measures on the survival probability or the time scale may be used to supplement HRs but a framework for the simultaneous generation of these measures is lacking. By splitting follow-up time at the nodes of a Gauss-Lobatto numerical quadrature rule, techniques for Poisson Generalized Additive Models (PGAM) can be adopted for flexible hazard modeling. Straightforward simulation post-estimation transforms PGAM estimates for the log hazard into estimates of the survival function. These in turn were used to calculate relative and absolute risks or even differences in restricted mean survival time between treatment arms. We illustrate our approach with extensive simulations and in two trials: IPASS (in which the proportionality of hazards was violated) and HEMO (a long-duration study conducted under evolving standards of care on a heterogeneous patient population). PGAMs can generate estimates of the survival function and the hazard ratio that are essentially identical to those obtained by Kaplan-Meier curve analysis and the Cox model. PGAMs can simultaneously provide multiple measures of treatment efficacy after a single data pass. Furthermore, they supported not only unadjusted (overall) treatment effect analyses but also subgroup and adjusted analyses, while incorporating multiple time scales and accounting for non-proportional hazards in survival data. By augmenting the HR conventionally reported, PGAMs have the potential to support the inferential goals of multiple stakeholders involved in the evaluation and appraisal of clinical trial results under proportional and
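The key device is that splitting each subject's follow-up time into intervals turns the survival likelihood into a Poisson likelihood with a log person-time offset. A sketch with equally spaced nodes (the paper uses Gauss-Lobatto quadrature nodes; the data and node placement below are illustrative):

```python
import numpy as np

# Split each subject's follow-up at a grid of nodes: each (subject, interval)
# pair contributes an exposure (person-time) and a 0/1 event count, so the
# survival log-likelihood equals that of a Poisson model with log-time offset.
def split_follow_up(time, event, nodes):
    rows = []  # (interval_index, person_time, event_indicator)
    for t, d in zip(time, event):
        for i in range(len(nodes) - 1):
            lo, hi = nodes[i], nodes[i + 1]
            if t <= lo:
                break
            exposure = min(t, hi) - lo
            died_here = int(d and lo < t <= hi)
            rows.append((i, exposure, died_here))
    return np.array(rows)

time = np.array([0.5, 1.2, 2.0, 2.7, 3.0])   # follow-up times
event = np.array([1, 0, 1, 1, 0])            # 1 = event, 0 = censored
nodes = np.linspace(0.0, 3.0, 4)             # intervals [0,1), [1,2), [2,3]
rows = split_follow_up(time, event, nodes)

# Events and person-time per interval give the crude piecewise hazard,
# the quantity a PGAM then models smoothly on the log scale.
person_time = np.bincount(rows[:, 0].astype(int), weights=rows[:, 1])
events = np.bincount(rows[:, 0].astype(int), weights=rows[:, 2])
hazard = events / person_time
```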
Non-additive model for specific heat of electrons
Anselmo, D. H. A. L.; Vasconcelos, M. S.; Silva, R.; Mello, V. D.
2016-10-01
By using the non-additive Tsallis entropy we demonstrate numerically that one-dimensional quasicrystals, whose energy spectra are multifractal Cantor sets, are characterized by an entropic parameter, and we calculate the electronic specific heat, considering a non-additive entropy Sq. In our method we consider an energy spectrum calculated using the one-dimensional tight-binding Schrödinger equation, with the bands (or levels) scaled onto the [0, 1] interval. The Tsallis formalism is applied to the energy spectra of Fibonacci and double-period one-dimensional quasiperiodic lattices. We analytically obtain an expression for the specific heat that we consider to be more appropriate to calculate this quantity in those quasiperiodic structures.
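For reference, the non-additive Tsallis entropy underlying this analysis is S_q = (1 - Σ_i p_i^q)/(q - 1) (with k_B = 1), which reduces to the Boltzmann-Gibbs entropy as q → 1. A small sketch, with illustrative two-state distributions:

```python
import numpy as np

# Tsallis entropy S_q = (1 - sum_i p_i**q) / (q - 1), with k_B = 1.
def tsallis_entropy(p, q):
    p = np.asarray(p, dtype=float)
    p = p[p > 0]                        # zero-probability states contribute nothing
    if abs(q - 1.0) < 1e-12:            # q -> 1: Boltzmann-Gibbs limit
        return float(-np.sum(p * np.log(p)))
    return float((1.0 - np.sum(p ** q)) / (q - 1.0))

# Non-additivity: for independent subsystems A and B,
# S_q(A+B) = S_q(A) + S_q(B) + (1 - q) S_q(A) S_q(B).
pA = np.array([0.5, 0.5])
pB = np.array([0.25, 0.75])
pAB = np.outer(pA, pB).ravel()          # joint distribution of independent A, B
```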
Richly parameterized linear models additive, time series, and spatial models using random effects
Hodges, James S
2013-01-01
A First Step toward a Unified Theory of Richly Parameterized Linear Models. Using mixed linear models to analyze data often leads to results that are mysterious, inconvenient, or wrong. Further compounding the problem, statisticians lack a cohesive resource to acquire a systematic, theory-based understanding of models with random effects. Richly Parameterized Linear Models: Additive, Time Series, and Spatial Models Using Random Effects takes a first step in developing a full theory of richly parameterized models, which would allow statisticians to better understand their analysis results. The aut
Additional Research Needs to Support the GENII Biosphere Models
Energy Technology Data Exchange (ETDEWEB)
Napier, Bruce A.; Snyder, Sandra F.; Arimescu, Carmen
2013-11-30
In the course of evaluating the current parameter needs for the GENII Version 2 code (Snyder et al. 2013), areas of possible improvement for both the data and the underlying models have been identified. As the data review was implemented, PNNL staff identified areas where the models can be improved both to accommodate the locally significant pathways identified and also to incorporate newer models. The areas are general data needs for the existing models and improved formulations for the pathway models. It is recommended that priorities be set by NRC staff to guide selection of the most useful improvements in a cost-effective manner. Suggestions are made based on relatively easy and inexpensive changes, and longer-term more costly studies. In the short term, there are several improved model formulations that could be applied to the GENII suite of codes to make them more generally useful:
• Implementation of the separation of the translocation and weathering processes
• Implementation of an improved model for carbon-14 from non-atmospheric sources
• Implementation of radon exposure pathways models
• Development of a KML processor for the output report generator module, so that data calculated on a grid could be superimposed upon digital maps for easier presentation and display
• Implementation of marine mammal models (manatees, seals, walrus, whales, etc.)
Data needs in the longer term require extensive (and potentially expensive) research. Before picking any one radionuclide or food type, NRC staff should perform an in-house review of current and anticipated environmental analyses to select "dominant" radionuclides of interest to allow setting of cost-effective priorities for radionuclide- and pathway-specific research. These include:
• soil-to-plant uptake studies for oranges and other citrus fruits, and
• development of models for evaluation of radionuclide concentration in highly-processed foods such as oils and sugars.
Finally, renewed
Adjusting Felder-Silverman learning styles model for application in adaptive e-learning
Directory of Open Access Journals (Sweden)
Mihailović Đorđe
2012-01-01
Full Text Available This paper presents an approach for adjusting the Felder-Silverman learning styles model for application in the development of adaptive e-learning systems. The main goal of the paper is to improve existing e-learning courses by developing a method for adaptation based on learning styles. The proposed method includes analysis of data related to student characteristics and applying the concept of personalization in creating e-learning courses. The research was conducted at the Faculty of Organizational Sciences, University of Belgrade, during the winter semester of 2009/10, on a sample of 318 students. The students from the experimental group were divided into three clusters, based on data about their styles identified using an adjusted Felder-Silverman questionnaire. Data about learning styles collected during the research were used to determine typical groups of students and then to classify students into these groups. The classification was performed using data mining techniques. Adaptation of the e-learning courses was implemented according to the results of the data analysis. Evaluation showed that there was a statistically significant difference in the results of students who attended the course adapted by using the described method, in comparison with the results of students who attended the course that was not adapted.
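The clustering step can be illustrated with a plain k-means on synthetic learning-style scores; k = 3 mirrors the study's three clusters, but the data, the initialization, and all names below are invented for illustration.

```python
import numpy as np

# Plain k-means as a stand-in for the data-mining step in the study.
def kmeans(X, init, n_iter=25):
    centers = X[init].astype(float).copy()
    labels = np.zeros(len(X), dtype=int)
    for _ in range(n_iter):
        # Assign each student to the nearest cluster center.
        d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
        labels = d2.argmin(axis=1)
        # Move each center to the mean of its assigned students.
        for j in range(len(centers)):
            if np.any(labels == j):
                centers[j] = X[labels == j].mean(axis=0)
    return labels, centers

rng = np.random.default_rng(1)
# Three synthetic style profiles, e.g. (active-reflective, visual-verbal) scores.
X = np.vstack([rng.normal(mean, 0.5, size=(40, 2))
               for mean in ([-3.0, 0.0], [0.0, 3.0], [3.0, 0.0])])
labels, centers = kmeans(X, init=[0, 40, 80])  # seed one student per group
```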
Adjustment and Development of Health User’s Mental Model Completeness Scale in Search Engines
Directory of Open Access Journals (Sweden)
Maryam Nakhoda
2016-10-01
Full Text Available Introduction: Users' performance and their interaction with information retrieval systems can be observed in the development of their mental models. Users, especially health users, use mental models to facilitate their interactions with these systems, and incomplete or incorrect models can cause problems for them. The aim of this study was the adjustment and development of a health user's mental model completeness scale in search engines. Method: This quantitative study uses the Delphi method. Among various scales for users' mental model completeness, Li's scale was selected and some items were added to this scale based on previous valid literature. Delphi panel members were selected using a purposeful sampling method, consisting of 20 and 18 participants in the first and second rounds, respectively. Kendall's coefficient of concordance in SPSS version 16 was used as the basis for agreement (95% confidence). Results: Kendall's coefficient of concordance (W) was calculated to be 0.261 (P-value < 0.001) for the first and 0.336 (P-value < 0.001) for the second round. Therefore, the study was found to be statistically significant with 95% confidence. Since the increase in the coefficient over two consecutive rounds was very small (equal to 0.075), surveying of the panel members was stopped based on the second Schmidt criterion and the Delphi method was stopped after the second round. Finally, the dimensions of Li's scale (existence and nature, search characteristics and levels of interaction) were confirmed again, but "indexing of pages or websites" was eliminated and "difference between results of different search engines", "possibility of access to similar or related webpages", and "possibility of search for special formats and multimedia" were added to Li's scale. Conclusion: In this study, the scale for mental model completeness of health users was adjusted and developed; it can help the designers of information retrieval systems in systematic
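Kendall's W, the agreement statistic used to decide when to stop the Delphi rounds, can be computed directly from the raters' rankings; the rankings below are illustrative, not the study's data.

```python
import numpy as np

# Kendall's coefficient of concordance W for m raters ranking n items
# (no ties): W = 12 S / (m^2 (n^3 - n)), where S is the sum of squared
# deviations of the item rank sums from their mean.
def kendalls_w(rankings):
    R = np.asarray(rankings, dtype=float)   # shape (m raters, n items)
    m, n = R.shape
    col_sums = R.sum(axis=0)
    S = np.sum((col_sums - col_sums.mean()) ** 2)
    return 12.0 * S / (m ** 2 * (n ** 3 - n))

# Three raters in perfect agreement over four items -> W = 1.
perfect = [[1, 2, 3, 4], [1, 2, 3, 4], [1, 2, 3, 4]]
```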
A data-driven model of present-day glacial isostatic adjustment in North America
Simon, Karen; Riva, Riccardo
2016-04-01
Geodetic measurements of gravity change and vertical land motion are incorporated into an a priori model of present-day glacial isostatic adjustment (GIA) via least-squares inversion. The result is an updated model of present-day GIA wherein the final predicted signal is informed by both observational data with realistic errors, and prior knowledge of GIA inferred from forward models. This method and other similar techniques have been implemented within a limited but growing number of GIA studies (e.g., Hill et al. 2010). The combination method allows calculation of the uncertainties of predicted GIA fields, and thus offers a significant advantage over predictions from purely forward GIA models. Here, we show the results of using the combination approach to predict present-day rates of GIA in North America through the incorporation of both GPS-measured vertical land motion rates and GRACE-measured gravity observations into the prior model. In order to assess the influence of each dataset on the final GIA prediction, the vertical motion and gravimetry datasets are incorporated into the model first independently (i.e., one dataset only), then simultaneously. Because the a priori GIA model and its associated covariance are developed by averaging predictions from a suite of forward models that varies aspects of the Earth rheology and ice sheet history, the final GIA model is not independent of forward model predictions. However, we determine the sensitivity of the final model result to the prior GIA model information by using different representations of the input model covariance. We show that when both datasets are incorporated into the inversion, the final model adequately predicts available observational constraints, minimizes the uncertainty associated with the forward modelled GIA inputs, and includes a realistic estimation of the formal error associated with the GIA process. Along parts of the North American coastline, improved predictions of the long-term (kyr
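The least-squares combination of a prior (forward-model) field with observations can be sketched as a standard Bayesian update; the two-site numbers below are invented for illustration and are not from the study.

```python
import numpy as np

# Least-squares combination: x_post = x0 + K (y - A x0),
# with gain K = P A^T (A P A^T + R)^-1 and posterior covariance (I - K A) P.
def combine(x0, P, A, y, R):
    K = P @ A.T @ np.linalg.inv(A @ P @ A.T + R)
    x_post = x0 + K @ (y - A @ x0)
    P_post = (np.eye(len(x0)) - K @ A) @ P
    return x_post, P_post

x0 = np.array([2.0, -1.0])     # prior GIA uplift rates (mm/yr) at two sites
P = np.diag([1.0, 1.0])        # prior (forward-model spread) covariance
A = np.eye(2)                  # GPS observes uplift directly
y = np.array([3.0, -1.5])      # observed rates
R = np.diag([0.25, 0.25])      # observation covariance
x_post, P_post = combine(x0, P, A, y, R)
```

The posterior covariance quantifies exactly the advantage noted in the abstract: the combined estimate carries a formal uncertainty smaller than the prior's.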
Wang, Huai-Chun; Susko, Edward; Roger, Andrew J
2014-04-01
Standard protein phylogenetic models use fixed rate matrices of amino acid interchange derived from analyses of large databases. Differences between the stationary amino acid frequencies of these rate matrices from those of a data set of interest are typically adjusted for by matrix multiplication that converts the empirical rate matrix to an exchangeability matrix which is then postmultiplied by the amino acid frequencies in the alignment. The result is a time-reversible rate matrix with stationary amino acid frequencies equal to the data set frequencies. On the basis of population genetics principles, we develop an amino acid substitution-selection model that parameterizes the fitness of an amino acid as the logarithm of the ratio of the frequency of the amino acid to the frequency of the same amino acid under no selection. The model gives rise to a different sequence of matrix multiplications to convert an empirical rate matrix to one that has stationary amino acid frequencies equal to the data set frequencies. We incorporated the substitution-selection model with an improved amino acid class frequency mixture (cF) model to partially take into account site-specific amino acid frequencies in the phylogenetic models. We show that 1) the selection models fit data significantly better than corresponding models without selection for most of the 21 test data sets; 2) both cF and cF selection models favored the phylogenetic trees that were inferred under current sophisticated models and methods for three difficult phylogenetic problems (the positions of microsporidia and breviates in eukaryote phylogeny and the position of the root of the angiosperm tree); and 3) for data simulated under site-specific residue frequencies, the cF selection models estimated trees closer to the generating trees than a standard Γ model or cF without selection. We also explored several ways of estimating amino acid frequencies under neutral evolution that are required for these selection
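The matrix construction described, postmultiplying a symmetric exchangeability matrix by the stationary frequencies, can be sketched on a toy three-state alphabet (not a real amino acid matrix):

```python
import numpy as np

# Time-reversible rate matrix from a symmetric exchangeability matrix S and
# target stationary frequencies pi: Q_ij = S_ij * pi_j for i != j, with the
# diagonal set so that each row sums to zero.
def reversible_rate_matrix(S, pi):
    Q = S * pi[None, :]
    np.fill_diagonal(Q, 0.0)
    np.fill_diagonal(Q, -Q.sum(axis=1))
    return Q

S = np.array([[0.0, 1.0, 2.0],
              [1.0, 0.0, 0.5],
              [2.0, 0.5, 0.0]])      # symmetric exchangeabilities (toy values)
pi = np.array([0.5, 0.3, 0.2])       # data-set stationary frequencies
Q = reversible_rate_matrix(S, pi)
```

Symmetry of S guarantees detailed balance (pi_i Q_ij = pi_j Q_ji), which is what makes the resulting chain time-reversible with stationary distribution pi.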
Ding, Lili; Kurowski, Brad G; He, Hua; Alexander, Eileen S.; Mersha, Tesfaye B.; Fardo, David W.; Zhang, Xue; Pilipenko, Valentina V; Kottyan, Leah; Martin, Lisa J.
2014-01-01
Genetic studies often collect data on multiple traits. Most genetic association analyses, however, consider traits separately and ignore potential correlation among traits, partially because of difficulties in statistical modeling of multivariate outcomes. When multiple traits are measured in a pedigree longitudinally, additional challenges arise because in addition to correlation between traits, a trait is often correlated with its own measures over time and with measurements of other family...
The Trauma Outcome Process Assessment Model: A Structural Equation Model Examination of Adjustment
Borja, Susan E.; Callahan, Jennifer L.
2009-01-01
This investigation sought to operationalize a comprehensive theoretical model, the Trauma Outcome Process Assessment, and test it empirically with structural equation modeling. The Trauma Outcome Process Assessment reflects a robust body of research and incorporates known ecological factors (e.g., family dynamics, social support) to explain…
The additive hazards model with high-dimensional regressors
DEFF Research Database (Denmark)
Martinussen, Torben; Scheike, Thomas
2009-01-01
model. A standard PLS algorithm can also be constructed, but it turns out that the resulting predictor can only be related to the original covariates via time-dependent coefficients. The methods are applied to a breast cancer data set with gene expression recordings and to the well known primary biliary...
Generalized Additive Models, Cubic Splines and Penalized Likelihood.
1987-05-22
in case-control studies). All models in the table include dummy variables to account for the matching. The first 3 lines of the table indicate that OA...Assoc. Breslow, N. and Day, N. (1980). Statistical Methods in Cancer Research, Volume 1: The Analysis of Case-Control Studies. International Agency
Multiple Imputation of Predictor Variables Using Generalized Additive Models
de Jong, Roel; van Buuren, Stef; Spiess, Martin
2016-01-01
The sensitivity of multiple imputation methods to deviations from their distributional assumptions is investigated using simulations, where the parameters of scientific interest are the coefficients of a linear regression model, and values in predictor variables are missing at random. The performanc
A risk-adjusted CUSUM in continuous time based on the Cox model.
Biswas, Pinaki; Kalbfleisch, John D
2008-07-30
In clinical practice, it is often important to monitor the outcomes associated with participating facilities. In organ transplantation, for example, it is important to monitor and assess the outcomes of the transplants performed at the participating centers and send a signal if a significant upward trend in the failure rates is detected. In manufacturing and process control contexts, the cumulative summation (CUSUM) technique has been used as a sequential monitoring scheme for some time. More recently, the CUSUM has also been suggested for use in medical contexts. In this article, we outline a risk-adjusted CUSUM procedure based on the Cox model for a failure time outcome. Theoretical approximations to the average run length are obtained for this new proposal and for some discrete time procedures suggested in the literature. The proposed scheme and approximations are evaluated in simulations and illustrated on transplant facility data from the Scientific Registry of Transplant Recipients.
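The CUSUM recursion can be sketched in a simplified Bernoulli form: each outcome adds a log-likelihood-ratio weight comparing its risk-adjusted failure probability p_i against an elevated alternative rho * p_i, and the chart signals when the cumulative sum exceeds a threshold h. This is a discrete-time stand-in for the Cox-model-based scheme in the paper; rho and h below are illustrative (and rho * p_i < 1 is assumed).

```python
import numpy as np

# Risk-adjusted CUSUM, Bernoulli simplification: weight each outcome by the
# log-likelihood ratio of failure probability rho*p (out of control) vs p
# (in control), accumulate with a reset at zero, signal when C exceeds h.
def risk_adjusted_cusum(outcomes, probs, rho=2.0, h=3.0):
    C, signals = 0.0, []
    for y, p in zip(outcomes, probs):
        w = y * np.log(rho) + (1 - y) * np.log((1 - rho * p) / (1 - p))
        C = max(0.0, C + w)              # CUSUM recursion with reset at zero
        signals.append(C > h)
    return C, signals

# Five consecutive failures, each with risk-adjusted probability 0.1:
# each failure adds log(2), so the chart crosses h = 3 on the fifth.
C, signals = risk_adjusted_cusum([1, 1, 1, 1, 1], [0.1] * 5)
```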
The use of satellites in gravity field determination and model adjustment
Visser, Petrus Nicolaas Anna Maria
1992-06-01
Methods to improve gravity field models of the Earth with available data from satellite observations are proposed and discussed. In principle, all of the satellite observation types mentioned provide information on satellite orbit perturbations and, by extension, on the Earth's gravity field, because satellite orbits are affected most strongly by the Earth's gravity field. Therefore, two subjects are addressed: representation forms of the gravity field of the Earth and the theory of satellite orbit perturbations. An analytical orbit perturbation theory is presented and shown to be sufficiently accurate for describing satellite orbit perturbations if certain conditions are fulfilled. Gravity field adjustment experiments using the analytical orbit perturbation theory are discussed using real satellite observations. These observations consisted of Seasat laser range measurements and crossover differences, and of Geosat altimeter measurements and crossover differences. A look into the future, particularly relating to the ARISTOTELES (Applications and Research Involving Space Techniques for the Observation of the Earth's field from Low Earth Orbit Spacecraft) mission, is given.
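A classic example of such an analytical perturbation result is the secular regression of the ascending node caused by the Earth's oblateness (J2), the dominant gravity-field perturbation for low Earth orbiters like Seasat and Geosat; the orbit parameters below are only roughly Seasat-like.

```python
import numpy as np

# Secular J2 nodal regression: dOmega/dt = -3/2 n J2 (Re/a)^2 cos(i) / (1-e^2)^2
MU = 3.986004418e14      # m^3/s^2, Earth's gravitational parameter
RE = 6378137.0           # m, Earth's equatorial radius
J2 = 1.08263e-3          # degree-2 zonal harmonic coefficient

def nodal_regression(a, e, i):
    """Secular node rate dOmega/dt in rad/s for semi-major axis a [m],
    eccentricity e and inclination i [rad]."""
    n = np.sqrt(MU / a ** 3)                      # mean motion
    return -1.5 * n * J2 * (RE / a) ** 2 * np.cos(i) / (1 - e ** 2) ** 2

# Roughly Seasat-like orbit: ~800 km altitude, near-circular, retrograde i = 108 deg.
rate = nodal_regression(a=RE + 800e3, e=0.001, i=np.radians(108.0))
deg_per_day = np.degrees(rate) * 86400.0         # about +2 deg/day eastward
```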
Institute of Scientific and Technical Information of China (English)
Zeeshan Ahmad; Meng Jun; Muhammad Abdullah; Mazhar Nadeem Ishaq; Majid Lateef; Imran Khan
2015-01-01
This paper used the modern evaluation method of DEA (Data Envelopment Analysis) to assess comparative efficiency and, on that basis, to choose the optimal scheme of agricultural production structure adjustment from among multiple candidate schemes. Based on the results of the DEA model, we analysed the scale advantages of each candidate scheme and examined in depth the underlying reasons why some schemes were not DEA-efficient, which clarified the approach and methodology for improving them. Finally, another method was proposed to rank the schemes and select the optimal one. The research is valuable for guiding practice in the adjustment of the agricultural production industrial structure.
UPDATING THE FREIGHT TRUCK STOCK ADJUSTMENT MODEL: 1997 VEHICLE INVENTORY AND USE SURVEY DATA
Energy Technology Data Exchange (ETDEWEB)
Davis, S.C.
2000-11-16
The Energy Information Administration's (EIA's) National Energy Modeling System (NEMS) Freight Truck Stock Adjustment Model (FTSAM) was created in 1995 relying heavily on input data from the 1992 Economic Census, Truck Inventory and Use Survey (TIUS). The FTSAM is part of the NEMS Transportation Sector Model, which provides baseline energy projections and analyzes the impacts of various technology scenarios on consumption, efficiency, and carbon emissions. The base data for the FTSAM can be updated every five years as new Economic Census information is released. Because of expertise in using the TIUS database, Oak Ridge National Laboratory (ORNL) was asked to assist the EIA when the new Economic Census data were available. ORNL provided the necessary base data from the 1997 Vehicle Inventory and Use Survey (VIUS) and other sources to update the FTSAM. The next Economic Census will be in the year 2002. When those data become available, the EIA will again want to update the FTSAM using the VIUS. This report, which details the methodology of estimating and extracting data from the 1997 VIUS Microdata File, should be used as a guide for generating the data from the next VIUS so that the new data will be as compatible as possible with the data in the model.
Möller, Marco; Obleitner, Friedrich; Reijmer, Carleen H; Pohjola, Veijo A; Głowacki, Piotr; Kohler, Jack
2016-05-27
Large-scale modeling of glacier mass balance often relies on the output from regional climate models (RCMs). However, the limited accuracy and spatial resolution of RCM output pose limitations on mass balance simulations at subregional or local scales. Moreover, RCM output is still rarely available over larger regions or for longer time periods. This study evaluates the extent to which it is possible to derive reliable region-wide glacier mass balance estimates, using coarse resolution (10 km) RCM output for model forcing. Our data cover the entire Svalbard archipelago over one decade. To calculate mass balance, we use an index-based model. Model parameters are not calibrated, but the RCM air temperature and precipitation fields are adjusted using in situ mass balance measurements as reference. We compare two different calibration methods: root mean square error minimization and regression optimization. The obtained air temperature shifts (+1.43°C versus +2.22°C) and precipitation scaling factors (1.23 versus 1.86) differ considerably between the two methods, which we attribute to inhomogeneities in the spatiotemporal distribution of the reference data. Our modeling suggests a mean annual climatic mass balance of -0.05 ± 0.40 m w.e. a⁻¹ for Svalbard over 2000-2011 and a mean equilibrium line altitude of 452 ± 200 m above sea level. We find that the limited spatial resolution of the RCM forcing with respect to real surface topography and the usage of spatially homogeneous RCM output adjustments and mass balance model parameters are responsible for much of the modeling uncertainty. Sensitivity of the results to model parameter uncertainty is comparatively small and of minor importance.
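The calibration idea, shifting the RCM temperature by dT and scaling precipitation by a factor k until a simple index (degree-day) model matches a reference balance, can be sketched with synthetic forcing; the degree-day factor, the snow threshold, and all data below are invented, not the study's values.

```python
import numpy as np

# Toy degree-day mass balance: melt from positive degree-days, accumulation
# from precipitation on days colder than a snow/rain threshold (~1 C).
def mass_balance(T, P, dT, k, ddf=0.005):
    Tadj = T + dT
    melt = ddf * np.maximum(Tadj, 0.0).sum()   # ablation, m w.e.
    accum = k * P[Tadj < 1.0].sum()            # accumulation as snow, m w.e.
    return accum - melt

# Grid search: pick the (dT, k) adjustment minimizing the misfit to the
# reference (stake) balance -- the error-minimization calibration idea.
def calibrate(T, P, observed, shifts, factors):
    best = (None, None, np.inf)
    for dT in shifts:
        for k in factors:
            err = abs(mass_balance(T, P, dT, k) - observed)
            if err < best[2]:
                best = (dT, k, err)
    return best

rng = np.random.default_rng(2)
T = rng.normal(-3.0, 4.0, 365)                 # daily 2 m temperature, deg C
P = rng.gamma(1.0, 0.002, 365)                 # daily precipitation, m w.e.
target = mass_balance(T, P, dT=1.5, k=1.25)    # pretend stake measurement
dT_best, k_best, err = calibrate(T, P, target,
                                 shifts=np.arange(0.0, 3.01, 0.5),
                                 factors=np.arange(1.0, 2.01, 0.25))
```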
Colais, Paola; Fantini, Maria P; Fusco, Danilo; Carretta, Elisa; Stivanello, Elisa; Lenzi, Jacopo; Pieri, Giulia; Perucci, Carlo A
2012-06-21
Caesarean section (CS) rate is a health care quality indicator frequently used at the national and international level. The aim of this study was to assess whether adjustment for Robson's Ten Group Classification System (TGCS) and for clinical and socio-demographic variables of the mother and the fetus is necessary for inter-hospital comparisons of CS rates. The study population includes 64,423 deliveries in Emilia-Romagna between January 1, 2003 and December 31, 2004, classified according to the TGCS. Poisson regression was used to estimate crude and adjusted hospital relative risks of CS compared to a reference category. Analyses were carried out in the overall population and separately according to the Robson groups (groups I, II, III, IV and V-X combined). Adjusted relative risks (RR) of CS were estimated using two risk-adjustment models: the first (M1) including the TGCS group as the only adjustment factor; the second (M2) additionally including demographic and clinical confounders identified using a stepwise selection procedure. Percentage variations between crude and adjusted RRs by hospital were calculated to evaluate the confounding effect of the covariates. The percentage variations from crude to adjusted RR proved to be similar under the M1 and M2 models. However, stratified analyses by Robson's classification groups showed that residual confounding by clinical and demographic variables was present in groups I (nulliparous, single, cephalic, ≥37 weeks, spontaneous labour), III (multiparous, excluding previous CS, single, cephalic, ≥37 weeks, spontaneous labour) and IV (multiparous, excluding previous CS, single, cephalic, ≥37 weeks, induced or CS before labour), and to a minor extent in group II (nulliparous, single, cephalic, ≥37 weeks, induced or CS before labour). The TGCS is useful for inter-hospital comparison of CS rates, but
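A stripped-down illustration of why case-mix adjustment matters for inter-hospital CS comparisons: with invented counts, a hospital that simply sees more high-risk Robson groups shows an inflated crude relative risk, while a Mantel-Haenszel risk ratio stratified on the TGCS group (a rough stand-in for the Poisson model M1, not the study's actual model) removes the case-mix effect.

```python
# Hypothetical counts per Robson TGCS stratum for one hospital and a
# reference hospital: (CS count, deliveries).  All numbers are invented.
strata = {
    "I":   {"hosp": (30, 300), "ref": (60, 600)},    # low-risk group
    "V-X": {"hosp": (150, 200), "ref": (75, 100)},   # high-risk group
}

def crude_rr(strata):
    """Crude risk ratio, ignoring the Robson case mix."""
    cs_h = sum(s["hosp"][0] for s in strata.values())
    n_h = sum(s["hosp"][1] for s in strata.values())
    cs_r = sum(s["ref"][0] for s in strata.values())
    n_r = sum(s["ref"][1] for s in strata.values())
    return (cs_h / n_h) / (cs_r / n_r)

def mh_rr(strata):
    """Mantel-Haenszel risk ratio pooled across TGCS strata."""
    num = sum(s["hosp"][0] * s["ref"][1] / (s["hosp"][1] + s["ref"][1])
              for s in strata.values())
    den = sum(s["ref"][0] * s["hosp"][1] / (s["hosp"][1] + s["ref"][1])
              for s in strata.values())
    return num / den

print("crude RR:   ", round(crude_rr(strata), 3))  # inflated by case mix
print("adjusted RR:", round(mh_rr(strata), 3))     # -> 1.0 (rates equal within strata)
```

Within each stratum the two hospitals here have identical CS rates, so the adjusted RR is exactly 1.0, while the crude RR exceeds 1.8 purely because the first hospital handles more group V-X deliveries.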
Directory of Open Access Journals (Sweden)
Zhenwei Huang
2013-01-01
Arctic Ocean sea-level change is an important indicator of climate change. Contemporary geodetic observations, including data from tide gauges, satellite altimetry and the Gravity Recovery and Climate Experiment (GRACE), are sensitive to the effect of the ongoing glacial isostatic adjustment (GIA) process. To fully exploit these geodetic observations to study climate-related sea-level change, this GIA effect has to be removed. However, significant uncertainty exists with regard to the GIA model, and using different GIA models can lead to different results. In this study we use an ensemble of 14 contemporary GIA models to investigate their differences when they are applied to the above-mentioned geodetic observations to estimate sea-level change in the Arctic Ocean. We find that over the Arctic Ocean a large range of differences exists among GIA models when they are used to remove the GIA effect from tide gauge and GRACE observations, with a relatively smaller range for satellite altimetry observations. In addition, we compare the sea-level trends derived from the observations after applying the different GIA models. In the study regions, the sea-level trend estimated from long-term tide gauge data shows good agreement with the altimetry result over the same data span. However, the mass component of sea-level change obtained from GRACE data does not agree well with the result derived from steric-corrected altimeter observations, due primarily to the large uncertainty of the GIA models, errors in the Arctic Ocean altimetry or steric measurements, an inadequate data span, or all of the above. We conclude that the GIA correction is critical for studying sea-level change over the Arctic Ocean and that further improvement in GIA modelling is needed to reduce the current discrepancies among models.
Development of a GIA (Glacial Isostatic Adjustment) - Fault Model of Greenland
Steffen, R.; Lund, B.
2015-12-01
The increase in sea level due to climate change is an intensely discussed phenomenon, while less attention is being paid to the change in earthquake activity that may accompany disappearing ice masses. The melting of the Greenland Ice Sheet, for example, induces changes in the crustal stress field, which could result in the activation of existing faults and the generation of destructive earthquakes. Such glacially induced earthquakes are known to have occurred in Fennoscandia 10,000 years ago. Within a new project ("Glacially induced earthquakes in Greenland", start in October 2015), we will analyse the potential for glacially induced earthquakes in Greenland due to the ongoing melting. The objectives include the development of a three-dimensional (3D) subsurface model of Greenland, which is based on geologic, geophysical and geodetic datasets, and which also fulfils the boundary conditions of glacial isostatic adjustment (GIA) modelling. Here we will present an overview of the project, including the most recently available datasets and the methodologies needed for model construction and the simulation of GIA induced earthquakes.
DEFF Research Database (Denmark)
Appelt, Ane L; Vogelius, Ivan R.; Farr, Katherina P.;
2014-01-01
Background. Understanding the dose-response of the lung in order to minimize the risk of radiation pneumonitis (RP) is critical for the optimization of lung cancer radiotherapy. We propose a method to combine the dose-response relationship for RP in the landmark QUANTEC paper with known clinical risk factors, in order to enable individual risk prediction. The approach is validated in an independent dataset. Material and methods. The prevalence of risk factors in the patient populations underlying the QUANTEC analysis was estimated, and a previously published method to adjust dose-response relationships for clinical risk factors was employed. Effect size estimates (odds ratios) for risk factors were drawn from a recently published meta-analysis. Baseline values for D50 and γ50 were found. The method was tested in an independent dataset (103 patients), comparing the predictive power of the dose-only QUANTEC model and the model including risk factors. Subdistribution cumulative incidence functions were compared for patients with high/low-risk predictions from the two models, and concordance indices (c-indices) for the prediction of RP were calculated. Results. The reference dose-response relationship…
A covariate-adjustment regression model approach to noninferiority margin definition.
Nie, Lei; Soon, Guoxing
2010-05-10
To maintain the interpretability of the effect of an experimental treatment (EXP) obtained from a noninferiority trial, current statistical approaches often require the constancy assumption. This assumption typically requires that the control treatment effect in the population of the active control trial is the same as its effect in the population of the historical trial. To prevent violation of the constancy assumption, clinical trial sponsors have been advised to make the design of the active control trial as close to the design of the historical trial as possible. However, these rigorous requirements are rarely fulfilled in practice. The inevitable discrepancies between the historical trial and the active control trial have led to debates on many controversial issues. Without support from a well-developed quantitative method to determine the impact of the discrepancies on the violation of the constancy assumption, a correct judgment seems difficult. In this paper, we present a covariate-adjustment generalized linear regression model approach to achieve two goals: (1) to quantify the impact of population differences between the historical trial and the active control trial on the degree of constancy assumption violation and (2) to redefine the active control treatment effect in the active control trial population if the quantification suggests an unacceptable violation. Through goal (1), we examine whether or not a population difference leads to an unacceptable violation. Through goal (2), we redefine the noninferiority margin if the violation is unacceptable. This approach allows us to correctly determine the effect of EXP in the noninferiority trial population when the constancy assumption is violated due to a population difference. We illustrate the covariate-adjustment approach through a case study.
National Research Council Canada - National Science Library
Chen Yuexia; Chen Long; Wang Ruochen; Xu Xing; Shen Yujie; Liu Yanling
2016-01-01
To reduce the damages of pavement, vehicle components and agricultural product during transportation, an electric control air suspension height adjustment system of agricultural transport vehicle...
Punamäki, R L; Qouta, S; el Sarraj, E
1997-08-01
The relations between traumatic events, perceived parenting styles, children's resources, political activity, and psychological adjustment were examined among 108 Palestinian boys and girls of 11-12 years of age. The results showed that exposure to traumatic events increased psychological adjustment problems directly and via 2 mediating paths. First, the more traumatic events children had experienced, the more negative parenting they experienced. And, the poorer they perceived parenting, the more they suffered from high neuroticism and low self-esteem. Second, the more traumatic events children had experienced, the more political activity they showed, and the more active they were, the more they suffered from psychological adjustment problems. Good perceived parenting protected children's psychological adjustment by making them less vulnerable in two ways. First, traumatic events decreased their intellectual, creative, and cognitive resources, and a lack of resources predicted many psychological adjustment problems in a model excluding perceived parenting. Second, political activity increased psychological adjustment problems in the same model, but not in the model including good parenting.
Zhu, Wenlong; Ma, Shoufeng; Tian, Junfang; Li, Geng
2016-11-01
Travelers' route adjustment behaviors in a congested road traffic network constitute a dynamic game process among travelers. Proportional-Switch Adjustment Process (PSAP) models have been extensively investigated to characterize travelers' route choice behaviors; PSAP has a concise structure and an intuitive behavior rule. Unfortunately, most existing variants have limitations, e.g., the flow over-adjustment problem of the discrete PSAP model and the reliance on absolute cost differences in route adjustment. This paper proposes a relative-Proportion-based Route Adjustment Process (rePRAP) that maintains the advantages of PSAP and overcomes these limitations. The rePRAP describes the situation in which travelers on a higher-cost route switch to alternatives with lower cost at a rate that depends solely on the relative cost differences between the higher-cost route and its alternatives. It is verified to be consistent with the principle of the rational behavior adjustment process. The equivalence among user equilibrium (UE), the stationary path flow pattern and the stationary link flow pattern is established, which can be applied to judge whether a given network traffic flow has reached UE by detecting whether the link flow pattern is stationary. The stability theorem is proved by the Lyapunov function approach. A simple example is tested to demonstrate the effectiveness of the rePRAP model.
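A relative-proportion switch rule of this flavor can be illustrated on a toy two-route network. The linear cost functions, demand and step size below are invented, and this is only a schematic reading of the idea, not the paper's formulation.

```python
# Toy two-route illustration of a relative-proportion switch rule: flow
# leaves the costlier route at a rate driven by the relative cost gap
# (c_high - c_low) / c_high rather than the absolute difference.

DEMAND = 10.0  # total flow to split between the two routes (assumed)
STEP = 0.5     # adjustment rate per iteration (assumed)

def costs(f1):
    """Travel costs of routes 1 and 2 given the flow f1 on route 1."""
    f2 = DEMAND - f1
    return 1.0 + 0.2 * f1, 2.0 + 0.1 * f2

def adjust(f1):
    c1, c2 = costs(f1)
    if c1 > c2:   # route 1 costlier: part of its flow switches to route 2
        return f1 - STEP * f1 * (c1 - c2) / c1
    # route 2 costlier (or equal): flow switches toward route 1
    return f1 + STEP * (DEMAND - f1) * (c2 - c1) / c2

f1 = DEMAND   # start with every traveler on route 1
for _ in range(200):
    f1 = adjust(f1)

c1, c2 = costs(f1)
# At user equilibrium both used routes have equal cost (f1 = 20/3 here).
print(f"flows ({f1:.3f}, {DEMAND - f1:.3f}), costs ({c1:.3f}, {c2:.3f})")
```

The iteration converges to the stationary flow pattern, which coincides with user equilibrium: both route costs are equal, matching the equivalence the abstract describes.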
Wang, Zi-han; Wang, Chun-mei; Tang, Hua-xin; Zuo, Cheng-ji; Xu, Hong-ming
2009-06-01
Ignition timing control is of great importance in homogeneous charge compression ignition engines. The effect of hydrogen addition on methane combustion was investigated using a CHEMKIN multi-zone model. Results show that hydrogen addition advances ignition timing and enhances peak pressure and temperature. A brief analysis of chemical kinetics of methane blending hydrogen is also performed in order to investigate the scope of its application, and the analysis suggests that OH radical plays an important role in the oxidation. Hydrogen addition increases NOx while decreasing HC and CO emissions. Exhaust gas recirculation (EGR) also advances ignition timing; however, its effects on emissions are generally the opposite. By adjusting the hydrogen addition and EGR rate, the ignition timing can be regulated with a low emission level. Investigation into zones suggests that NOx is mostly formed in core zones while HC and CO mostly originate in the crevice and the quench layer.
Kendall, W.L.; Hines, J.E.; Nichols, J.D.
2003-01-01
Matrix population models are important tools for research and management of populations. Estimating the parameters of these models is an important step in applying them to real populations. Multistate capture-recapture methods have provided a useful means for estimating survival and parameters of transition between locations or life history states but have mostly relied on the assumption that the state occupied by each detected animal is known with certainty. Nevertheless, in some cases animals can be misclassified. Using multiple capture sessions within each period of interest, we developed a method that adjusts estimates of transition probabilities for bias due to misclassification. We applied this method to 10 years of sighting data for a population of Florida manatees (Trichechus manatus latirostris) in order to estimate the annual probability of transition from nonbreeding to breeding status. Some sighted females were unequivocally classified as breeders because they were clearly accompanied by a first-year calf. The remainder were classified, sometimes erroneously, as nonbreeders because an attendant first-year calf was not observed or was classified as more than one year old. We estimated a conditional breeding probability of 0.31 ± 0.04 (estimate ± 1 SE) when we ignored misclassification bias, and 0.61 ± 0.09 when we accounted for misclassification.
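A back-of-envelope sketch of the bias being corrected (not the authors' multistate capture-recapture likelihood): if a true breeder is correctly recognized only with some probability, the apparent breeding probability is biased low by that factor. The classification probability below is invented so that the two estimates quoted in the abstract roughly match.

```python
# One-way misclassification sketch: breeders are recorded as nonbreeders
# with probability 1 - delta (the first-year calf must be seen and aged
# correctly), while nonbreeders are never recorded as breeders.

def apparent_probability(true_psi, delta):
    """Expected naive estimate when breeders are missed w.p. 1 - delta."""
    return true_psi * delta

def corrected_probability(p_obs, delta):
    """Invert the misclassification to recover the true probability."""
    return p_obs / delta

delta = 0.51   # assumed probability a breeder is correctly classified
p_obs = 0.31   # naive (biased) estimate, as in the abstract
print(round(corrected_probability(p_obs, delta), 2))  # -> 0.61
```

The actual method estimates the classification probability jointly from the multiple capture sessions within each period rather than assuming it, but the direction and size of the correction follow this simple inversion.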
Dynamic Stall Prediction of a Pitching Airfoil using an Adjusted Two-Equation URANS Turbulence Model
Directory of Open Access Journals (Sweden)
Galih Bangga
2017-01-01
The analysis of dynamic stall becomes increasingly important due to its impact on many streamlined structures such as helicopter and wind turbine rotor blades. The present paper provides Computational Fluid Dynamics (CFD) predictions of a pitching NACA 0012 airfoil at a reduced frequency of 0.1 and a small Reynolds number of 1.35 × 10⁵. The simulations were carried out by adjusting the k − ε URANS turbulence model in order to damp the turbulence production in the near-wall region. The damping factor was introduced as a function of wall distance in the buffer zone region. Parametric studies on the variables involved were conducted and their effect on the prediction capability was shown. The results were compared with available experimental data and with CFD simulations using selected two-equation turbulence models. An improvement of the lift coefficient prediction was shown, even though the results still only roughly mimic the experimental data. The flow development at the dynamic stall onset was investigated with regard to the effect of the leading and trailing edge vortices. Furthermore, the characteristics of the flow several chord lengths downstream of the airfoil were evaluated.
Age-period-cohort models using smoothing splines: a generalized additive model approach.
Jiang, Bei; Carriere, Keumhee C
2014-02-20
Age-period-cohort (APC) models are used to analyze temporal trends in disease or mortality rates, dealing with linear dependency among associated effects of age, period, and cohort. However, the nature of sparseness in such data has severely limited the use of APC models. To deal with these practical limitations and issues, we advocate cubic smoothing splines. We show that the methods of estimable functions proposed in the framework of generalized linear models can still be considered to solve the non-identifiability problem when the model fitting is within the framework of generalized additive models with cubic smoothing splines. Through simulation studies, we evaluate the performance of the cubic smoothing splines in terms of the mean squared errors of estimable functions. Our results support the use of cubic smoothing splines for APC modeling with sparse but unaggregated data from a Lexis diagram.
Shi, Yun; Xu, Peiliang; Peng, Junhuan; Shi, Chuang; Liu, Jingnan
2014-01-10
Modern observation technology has verified that measurement errors can be proportional to the true values of the measurements, as in GPS and VLBI baselines and LiDAR. Observational models of this type are called multiplicative error models. This paper extends the work of Xu and Shimada, published in 2000, on multiplicative error models to the analytical error analysis of quantities of practical interest and to estimates of the variance of unit weight. We analytically derive the variance-covariance matrices of the three least squares (LS) adjustments, the adjusted measurements and the corrections of measurements in multiplicative error models. For quality evaluation, we construct five estimators for the variance of unit weight in association with the three LS adjustment methods. Although LiDAR measurements are contaminated with multiplicative random errors, LiDAR-based digital elevation models (DEMs) have been constructed as if the errors were additive. We simulate a model landslide, assumed to be surveyed with LiDAR, and investigate the effect of LiDAR-type multiplicative error measurements on DEM construction and on the estimate of landslide mass volume from the constructed DEM.
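A small Monte Carlo sketch of why the multiplicative structure matters (an invented setup, not the paper's derivations): when the noise grows with the true value, a least-squares fit that weights observations by the inverse squared magnitude, i.e. minimizes relative residuals, recovers a scale factor with smaller variance than ordinary least squares.

```python
import random

# Simulate y_i = a * x_i * (1 + e_i): the error is proportional to the
# true value, as in the multiplicative error models discussed above.

random.seed(42)
A_TRUE, SIGMA = 2.0, 0.05            # true scale factor, relative error std
X = [1.0, 2.0, 5.0, 10.0, 20.0]      # "true" measurement magnitudes (assumed)

def simulate():
    return [A_TRUE * x * (1.0 + random.gauss(0.0, SIGMA)) for x in X]

def ols(y):   # ordinary least squares slope through the origin
    return sum(x * yi for x, yi in zip(X, y)) / sum(x * x for x in X)

def wls(y):   # weights 1/x^2: equivalent to averaging the ratios y_i / x_i
    return sum(yi / x for x, yi in zip(X, y)) / len(X)

def var(v):
    m = sum(v) / len(v)
    return sum((u - m) ** 2 for u in v) / (len(v) - 1)

ols_var = var([ols(simulate()) for _ in range(2000)])
wls_var = var([wls(simulate()) for _ in range(2000)])
print(f"OLS variance {ols_var:.2e} vs WLS variance {wls_var:.2e}")
```

Both estimators are unbiased here; the difference shows up in the variance, which is the kind of quantity the paper's analytical variance-covariance derivations make explicit without simulation.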
ADJUSTMENT FACTORS AND ADJUSTMENT STRUCTURE
Institute of Scientific and Technical Information of China (English)
Tao Benzao
2003-01-01
In this paper, the adjustment factors J and R put forward by Professor Zhou Jiangwen are introduced, and the nature of the adjustment factors and their role in evaluating adjustment structure are discussed and proved.
Rudy, Ashley C. A.; Lamoureux, Scott F.; Treitz, Paul; van Ewijk, Karin Y.
2016-07-01
To effectively assess and mitigate the risk of permafrost disturbance, disturbance-prone areas can be predicted through the application of susceptibility models. In this study we developed regional susceptibility models for permafrost disturbances using a field disturbance inventory to test the transferability of the model to a broader region in the Canadian High Arctic. The resulting susceptibility maps were then used to explore the effect of terrain variables on the occurrence of disturbances within this region. To account for a large range of landscape characteristics, the model was calibrated using two locations: Sabine Peninsula, Melville Island, NU, and Fosheim Peninsula, Ellesmere Island, NU. Spatial patterns of disturbance were predicted with a generalized linear model (GLM) and a generalized additive model (GAM), each calibrated using disturbed and randomized undisturbed locations from both sites and GIS-derived terrain predictor variables including slope, potential incoming solar radiation, wetness index, topographic position index, elevation, and distance to water. Each model was validated for the Sabine and Fosheim Peninsulas using independent data sets, while the transferability of the model to an independent site was assessed at Cape Bounty, Melville Island, NU. The regional GLM and GAM validated well for both calibration sites (Sabine and Fosheim), with areas under the receiver operating characteristic curve (AUROC) > 0.79. Both models were applied directly to Cape Bounty without calibration and validated equally well, with AUROCs of 0.76; however, each model predicted disturbed and undisturbed samples differently. Additionally, the sensitivity of the transferred model was assessed using data sets with different sample sizes. Results indicated that models based on larger sample sizes transferred more consistently and captured the variability within the terrain attributes in the respective study areas. Terrain attributes associated with the initiation of disturbances were
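The AUROC used above to validate the susceptibility models can be computed directly from ranks (the Mann-Whitney formulation, with midranks for tied scores). The predicted susceptibilities and disturbance labels below are invented.

```python
# Minimal rank-based AUROC: the probability that a randomly chosen
# disturbed site receives a higher predicted susceptibility than a
# randomly chosen undisturbed one.

def auroc(scores, labels):
    pairs = sorted(zip(scores, labels))
    n = len(pairs)
    rank_sum_pos = 0.0
    idx = 0
    while idx < n:                       # walk over groups of tied scores
        j = idx
        while j < n and pairs[j][0] == pairs[idx][0]:
            j += 1
        midrank = (idx + 1 + j) / 2      # average of 1-based ranks idx+1..j
        rank_sum_pos += midrank * sum(1 for k in range(idx, j) if pairs[k][1] == 1)
        idx = j
    n_pos = sum(labels)
    n_neg = n - n_pos
    return (rank_sum_pos - n_pos * (n_pos + 1) / 2) / (n_pos * n_neg)

scores = [0.1, 0.4, 0.35, 0.8, 0.65, 0.2]   # model-predicted susceptibility
labels = [0,   0,   1,    1,   1,    0]     # 1 = observed disturbance
print(auroc(scores, labels))  # 8 of the 9 disturbed/undisturbed pairs ordered correctly
```

A value of 0.5 corresponds to a model no better than chance, which is why thresholds such as the 0.76-0.79 reported above indicate useful discrimination.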
Meer, van der P.J.; Jorritsma, I.T.M.; Kramer, K.
2002-01-01
The sensitivity of forest development to climate change is assessed using a gap model. Process descriptions in the gap model of growth, phenology, and seed production were adjusted for climate change effects using a detailed process-based growth model and a regression analysis. Simulation runs over 4
Combining an additive and tree-based regression model simultaneously: STIMA
Dusseldorp, E.; Conversano, C.; Os, B.J. van
2010-01-01
Additive models and tree-based regression models are two main classes of statistical models used to predict the scores on a continuous response variable. It is known that additive models become very complex in the presence of higher order interaction effects, whereas some tree-based models, such as
DEFF Research Database (Denmark)
M. Gaspar, Raquel; Murgoci, Agatha
2010-01-01
…of particular importance to practitioners: yield convexity adjustments, forward versus futures convexity adjustments, and timing and quanto convexity adjustments. We claim that the appropriate way to look at any of these adjustments is as a side effect of a measure change, as proposed by Pelsser (2003…
An assessment of the ICE6G_C(VM5a) glacial isostatic adjustment model
Purcell, A.; Tregoning, P.; Dehecq, A.
2016-05-01
The recent release of the next-generation global ice history model, ICE6G_C(VM5a), is likely to be of interest to a wide range of disciplines including oceanography (sea level studies), space gravity (mass balance studies), glaciology, and, of course, geodynamics (Earth rheology studies). In this paper we make an assessment of some aspects of the ICE6G_C(VM5a) model and show that the published present-day radial uplift rates are too high along the eastern side of the Antarctic Peninsula (by ˜8.6 mm/yr) and beneath the Ross Ice Shelf (by ˜5 mm/yr). Furthermore, the published spherical harmonic coefficients, which are meant to represent the dimensionless present-day changes due to glacial isostatic adjustment (GIA), contain excessive power for degree ≥90, do not agree with physical expectations and do not represent accurately the ICE6G_C(VM5a) model. We show that the excessive power in the high-degree terms produces erroneous uplift rates when the empirical relationship of Purcell et al. (2011) is applied, but when correct Stokes coefficients are used, the empirical relationship produces excellent agreement with the fully rigorous computation of the radial velocity field, subject to the caveats first noted by Purcell et al. (2011). Using the Australian National University (ANU) group's CALSEA software package, we recompute the present-day GIA signal for the ice thickness history and Earth rheology used by Peltier et al. (2015) and provide dimensionless Stokes coefficients that can be used to correct satellite altimetry observations for GIA over oceans and by the space gravity community to separate GIA and present-day mass balance change signals. We denote the new data sets as ICE6G_ANU.
Directory of Open Access Journals (Sweden)
Belehaki Anna
2012-12-01
Validation results for the latest version of the TaD model (TaDv2) show realistic reconstruction of the electron density profiles (EDPs), with an average error of 3 TECU, similar to the error obtained from GNSS-TEC-calculated parameters. The work presented here aims to further improve the accuracy of the TaD topside reconstruction by adjusting the TEC parameter calculated from the TaD model with the TEC parameter calculated from RINEX files provided by GNSS receivers co-located with the Digisondes. The performance of the new version is tested during a storm period, demonstrating further improvements with respect to the previous version. Statistical comparison of modeled and observed TEC confirms the validity of the proposed adjustment. A significant benefit of the proposed upgrade is that it facilitates the real-time implementation of TaD. The model needs a reliable measure of the scale height at the peak height, which is supposed to be provided by Digisondes. Often, the automatic scaling software fails to correctly calculate the scale height at the peak, Hm, due to interference in the received signal. Consequently, the model-estimated topside scale height is wrongly calculated, leading to unrealistic results for the modeled EDP. The proposed TEC adjustment forces the model to correctly reproduce the topside scale height, despite inaccurate values of Hm. This adjustment is very important for the application of TaD in an operational environment.
Bulk Density Adjustment of Resin-Based Equivalent Material for Geomechanical Model Test
Directory of Open Access Journals (Sweden)
Pengxian Fan
2015-01-01
An equivalent material is of significance to the simulation of prototype rock in geomechanical model tests. Researchers attempt to ensure that the bulk density of the equivalent material is equal to that of the prototype rock. In this work, barite sand was used to increase the bulk density of a resin-based equivalent material. The variation law of the bulk density was revealed for the simulation of prototype rocks of different bulk densities. Over 300 specimens were made for uniaxial compression tests. Test results indicated that the substitution of quartz sand by barite sand had no apparent influence on the uniaxial compressive strength and elastic modulus of the specimens but increased the bulk density in proportion to the coarse aggregate content. A nearly ideal linear relationship was found between the barite sand substitution ratio and the bulk density. The relationship between the bulk density and the usage of coarse aggregate and barite sand was also presented. The test results provide insight into the bulk density adjustment of resin-based equivalent materials.
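The linear relationship reported above lends itself to a simple calibration: fit density = a · ratio + b by least squares and invert the fit to choose the substitution ratio for a target density. Only the linear form comes from the abstract; the data points below are invented.

```python
# Invented measurements: bulk density of the resin-based material at
# several barite-for-quartz substitution ratios.
ratios = [0.0, 0.25, 0.5, 0.75, 1.0]       # substitution ratio
density = [1.60, 1.83, 2.06, 2.29, 2.52]   # bulk density, g/cm^3 (made up)

n = len(ratios)
mx, my = sum(ratios) / n, sum(density) / n
a = (sum((x - mx) * (y - my) for x, y in zip(ratios, density))
     / sum((x - mx) ** 2 for x in ratios))   # least-squares slope
b = my - a * mx                              # intercept

def required_ratio(target_density):
    """Invert the fit: substitution ratio needed for a target bulk density."""
    return (target_density - b) / a

print(f"density = {a:.2f} * ratio + {b:.2f}")  # -> density = 0.92 * ratio + 1.60
```

With such a fit in hand, matching the bulk density of a given prototype rock reduces to evaluating `required_ratio`, e.g. a target of 2.06 g/cm³ calls for a substitution ratio of 0.5 under these made-up numbers.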
Riedl, M.; Suhrbier, A.; Malberg, H.; Penzel, T.; Bretthauer, G.; Kurths, J.; Wessel, N.
2008-07-01
The parameters of heart rate variability and blood pressure variability have proved to be useful analytical tools in cardiovascular physics and medicine. Model-based analysis of these variabilities additionally leads to new prognostic information about mechanisms behind regulations in the cardiovascular system. In this paper, we analyze the complex interaction between heart rate, systolic blood pressure, and respiration by nonparametric fitted nonlinear additive autoregressive models with external inputs. Therefore, we consider measurements of healthy persons and patients suffering from obstructive sleep apnea syndrome (OSAS), with and without hypertension. It is shown that the proposed nonlinear models are capable of describing short-term fluctuations in heart rate as well as systolic blood pressure significantly better than similar linear ones, which confirms the assumption of nonlinear controlled heart rate and blood pressure. Furthermore, the comparison of the nonlinear and linear approaches reveals that the heart rate and blood pressure variability in healthy subjects is caused by a higher level of noise as well as nonlinearity than in patients suffering from OSAS. The residue analysis points at a further source of heart rate and blood pressure variability in healthy subjects, in addition to heart rate, systolic blood pressure, and respiration. Comparison of the nonlinear models within and among the different groups of subjects suggests the ability to discriminate the cohorts that could lead to a stratification of hypertension risk in OSAS patients.
Institute of Scientific and Technical Information of China (English)
Zeeshan Ahmad; Meng Jun
2015-01-01
DEA is a nonparametric method used in operations research and economics for the evaluation of production frontiers. It has distinct advantages in coping with assessment problems that involve multiple inputs and, in particular, multiple outputs. This paper used the DεC2R model of DEA to assess the comparative efficiency of multiple schemes of agricultural industrial structure, and in the end the most favorable ("optimal") scheme was chosen. In addition, using functional insights from the DEA model, non-optimal or less optimal schemes were also improved to some extent. Assessment and selection of optimal schemes of agricultural industrial structure using the DEA model gave a greater and better insight into agricultural industrial structure and was the first such research in Pakistan.
Energy Technology Data Exchange (ETDEWEB)
Ratajkiewicz, H.; Kierzek, R.; Raczkowski, M.; Hołodyńska-Kulas, A.; Łacka, A.; Wójtowicz, A.; Wachowiak, M.
2016-11-01
This study compared the effects of a proportionate spray volume (PSV) adjustment model and a fixed model (300 L/ha) on the infestation of processing tomato with potato late blight (Phytophthora infestans (Mont.) de Bary) (PLB) and on azoxystrobin and chlorothalonil residues in fruits in three consecutive seasons. The fungicides were applied in an alternating system with or without two spreader adjuvants. The proportionate spray volume adjustment model was based on the number of leaves on the plants and a spray volume index. The modified Quick, Easy, Cheap, Effective, Rugged, and Safe (QuEChERS) method was optimized and validated for extraction of azoxystrobin and chlorothalonil residues. Gas chromatography with a nitrogen-phosphorus detector and an electron capture detector was used for the analysis of the fungicides. The results showed that higher fungicidal residues were associated with lower infestation of tomato with PLB. The PSV adjustment model resulted in lower infestation of tomato than the fixed model (300 L/ha) when the fungicides were applied at half the dose without adjuvants. Higher expected spray interception into the tomato canopy with the PSV system was recognized as the reason for the better control of PLB. The spreader adjuvants did not have a positive effect on the biological efficacy of the spray volume application systems. The results suggest that the PSV adjustment model can be used to determine the spray volume for fungicide application in processing tomato crops.
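A toy reading of the proportionate spray volume idea: the applied volume scales with canopy development (here, the mean number of leaves per plant) through a spray volume index, instead of using the fixed 300 L/ha. The index value and leaf counts below are invented for illustration; the abstract does not give the actual index.

```python
# Proportionate spray volume vs. fixed-volume comparison (made-up numbers).

FIXED_VOLUME = 300.0        # L/ha, the fixed comparison model
SPRAY_VOLUME_INDEX = 10.0   # L/ha per leaf (assumed, hypothetical)

def psv_volume(mean_leaves_per_plant):
    """Spray volume proportional to canopy development."""
    return SPRAY_VOLUME_INDEX * mean_leaves_per_plant

for leaves in (10, 20, 30):
    print(f"{leaves} leaves -> {psv_volume(leaves):.0f} L/ha (fixed: {FIXED_VOLUME:.0f})")
```

Early in the season a small canopy then receives far less spray than the fixed rate, which is consistent with the abstract's point that more of the applied volume is intercepted by the canopy rather than lost.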
Shein, E. V.; Kokoreva, A. A.; Gorbatov, V. S.; Umarova, A. B.; Kolupaeva, V. N.; Perevertin, K. A.
2009-07-01
The water blocks of physically based models of different levels (the chromatographic PEARL model and the dual-porosity MACRO model) were parameterized using laboratory experimental data and tested against the results of studying the water regime of loamy soddy-podzolic soil in large lysimeters of the Experimental Soil Station of Moscow State University. The models were adapted using a stepwise approach, which involved the sequential assessment and adjustment of each submodel. Without adjustment of the water block, the models underestimated the lysimeter flow and overestimated the soil water content. The theoretical necessity of the model adjustment was explained by the different scales of the experimental objects (soil samples) and the simulated phenomenon (soil profile). Adjusting the models by selecting the most sensitive hydrophysical parameters of the soils (the approximation parameters of the soil water retention curve (SWRC)) gave good agreement between the predicted moisture profiles and their actual values. In contrast to the PEARL model, the MACRO model reliably described the migration of a pesticide through the soil profile, which confirmed that physically based models must account for the separation of preferential flows in the pore space for the prediction, analysis, optimization, and management of modern agricultural technologies.
Delanaud, Stéphane; Decima, Pauline; Pelletier, Amandine; Libert, Jean-Pierre; Stephan-Blanchard, Erwan; Bach, Véronique; Tourneux, Pierre
2016-09-01
Radiant heat loss is high in low-birth-weight (LBW) neonates. Double-wall incubators, or single-wall incubators with an additional double-wall roof panel that can be removed during phototherapy, are used to reduce radiant heat loss. There are no data on how these incubators should be used when the second roof panel is removed. The aim of the study was to assess heat exchanges in LBW neonates in a single-wall incubator with and without the additional roof panel, and to determine the optimal thermoneutral incubator air temperature. The influence of the additional double-wall roof was assessed using a thermal mannequin simulating an LBW neonate. We then calculated the optimal incubator air temperature for a cohort of human LBW neonates in the absence of the additional roof panel. Twenty-three LBW neonates (birth weight: 750-1800 g; gestational age: 28-32 weeks) were included. With the additional roof panel, radiant heat loss was lower but convective and evaporative skin heat losses were greater. This difference can be overcome by increasing the incubator air temperature by 0.15-0.20°C. The benefit of the additional roof panel was cancelled out by greater body heat losses through other routes. Understanding the heat transfers between the neonate and the environment is essential for optimizing incubators.
A general additive-multiplicative rates model for recurrent event data
Institute of Scientific and Technical Information of China (English)
[Anonymous]
2009-01-01
In this article, we propose a general additive-multiplicative rates model for recurrent event data. The proposed model includes the additive rates and multiplicative rates models as special cases. For inference on the model parameters, estimating equation approaches are developed, and asymptotic properties of the proposed estimators are established through modern empirical process theory. In addition, an illustration with multiple-infection data from a clinical study on chronic granulomatous disease is provided.
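The rate structure described above can be made concrete with a small simulation. The sketch below assumes a hypothetical rate of the form λ(t) = βx + λ₀(t)·exp(γz), with an illustrative baseline λ₀(t) = 0.3(1 + 0.5 sin t); none of these values come from the article. Recurrent events are generated by Lewis-Shedler thinning against a constant dominating rate:

```python
import math
import random

def simulate_recurrent_events(beta, gamma, x, z, tau, seed=0):
    """Simulate one subject's recurrent events on (0, tau] under the
    additive-multiplicative rate lam(t) = beta*x + lam0(t)*exp(gamma*z),
    with an illustrative baseline lam0(t) = 0.3*(1 + 0.5*sin(t)).
    Events are generated by Lewis-Shedler thinning."""
    rng = random.Random(seed)

    def lam(t):
        return beta * x + 0.3 * (1.0 + 0.5 * math.sin(t)) * math.exp(gamma * z)

    lam_max = beta * x + 0.45 * math.exp(gamma * z)  # upper bound on lam(t)
    events, t = [], 0.0
    while True:
        t += rng.expovariate(lam_max)         # candidate gap from dominating process
        if t > tau:
            break
        if rng.random() <= lam(t) / lam_max:  # accept with probability lam(t)/lam_max
            events.append(t)
    return events

events = simulate_recurrent_events(beta=0.4, gamma=0.3, x=1.0, z=1.0, tau=10.0)
```

Because the constant dominating rate bounds λ(t) from above, the accepted points form a realization of the non-homogeneous process with rate λ.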
Validation analysis of probabilistic models of dietary exposure to food additives.
Gilsenan, M B; Thompson, R L; Lambe, J; Gibney, M J
2003-10-01
The validity of a range of simple conceptual models designed specifically for the estimation of food additive intakes using probabilistic analysis was assessed. Modelled intake estimates that fell below traditional conservative point estimates of intake and above 'true' additive intakes (calculated from a reference database at brand level) were considered to be in a valid region. Models were developed for 10 food additives by combining food intake data, the probability of an additive being present in a food group and additive concentration data. Food intake and additive concentration data were entered as raw data or as a lognormal distribution, and the probability of an additive being present was entered based on the per cent of brands or the per cent of eating occasions within a food group that contained the additive. Since each of the three model components allowed two possible modes of input, the validity of eight (2³) model combinations was assessed. All model inputs were derived from the reference database. An iterative approach was employed in which the validity of individual model components was assessed first, followed by validation of the full conceptual models. While the distributions of intake estimates from the models fell below conservative intakes, which assume that the additive is present at the maximum permitted level (MPL) in all foods in which it is permitted, intake estimates were not consistently above 'true' intakes. These analyses indicate the need for more complex models for the estimation of food additive intakes using probabilistic analysis. Such models should incorporate information on market share and/or brand loyalty.
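One such model combination can be sketched as a Monte Carlo loop: lognormal food intake, Bernoulli presence (per cent of brands containing the additive), and lognormal concentration. All distribution parameters and the MPL below are hypothetical, chosen only to illustrate the structure, not taken from the study:

```python
import math
import random

def simulated_intakes(n, mu_intake, sigma_intake, p_present, mu_conc, sigma_conc, seed=42):
    """Monte Carlo sketch of one probabilistic intake model: additive intake =
    food-group intake (lognormal, g/day) x presence (Bernoulli, per cent of
    brands containing the additive) x concentration (lognormal, mg/g).
    All parameter values are hypothetical illustrations."""
    rng = random.Random(seed)
    intakes = []
    for _ in range(n):
        food = rng.lognormvariate(mu_intake, sigma_intake)
        present = rng.random() < p_present
        conc = rng.lognormvariate(mu_conc, sigma_conc) if present else 0.0
        intakes.append(food * conc)  # mg/day
    return intakes

intakes = simulated_intakes(10_000, mu_intake=4.0, sigma_intake=0.5,
                            p_present=0.3, mu_conc=-1.0, sigma_conc=0.4)
mean_intake = sum(intakes) / len(intakes)
# Conservative point estimate: additive present in every food at an assumed
# maximum permitted level (MPL) of 1.5 mg/g, times the mean food intake.
conservative = math.exp(4.0 + 0.5 ** 2 / 2) * 1.5
```

As in the study's valid region, the modelled mean falls well below the MPL-based conservative estimate, since the additive is absent from most simulated eating occasions.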
Energy Technology Data Exchange (ETDEWEB)
Belousov, V. I.; Ezhela, V. V.; Kuyanov, Yu. V., E-mail: Yu.Kuyanov@gmail.com; Tkachenko, N. P. [Institute for High Energy Physics, National Research Center Kurchatov Institute, COMPAS Group (Russian Federation)
2015-12-15
The experience of using a dynamic atlas of experimental data and mathematical models of their description in problems of adjusting parametric models of observables as functions of kinematic variables is presented. The capabilities for visualizing large numbers of experimental data points together with the models describing them are shown by examples of data and models of observables determined by the amplitudes of elastic hadron scattering. The Internet implementation of the interactive tool DaMoScope, and its interface with the experimental data and with the codes of the adjusted parametric models carrying the best-fit parameters, is shown schematically. The DaMoScope codes are freely available.
Deutsch, Anne; Pardasaney, Poonam; Iriondo-Perez, Jeniffer; Ingber, Melvin J; Porter, Kristie A; McMullen, Tara
2017-07-01
Functional status measures are important patient-centered indicators of inpatient rehabilitation facility (IRF) quality of care. We developed a risk-adjusted self-care functional status measure for the IRF Quality Reporting Program. This paper describes the development and performance of the measure's risk-adjustment model. Our sample included IRF Medicare fee-for-service patients from the Centers for Medicare & Medicaid Services' 2008-2010 Post-Acute Care Payment Reform Demonstration. Data sources included the Continuity Assessment Record and Evaluation Item Set, the IRF-Patient Assessment Instrument, and Medicare claims. Self-care scores were based on 7 Continuity Assessment Record and Evaluation items. The model was developed using the discharge self-care score as the dependent variable, and generalized linear modeling with generalized estimating equations to account for patient characteristics and clustering within IRFs. Patient demographics, clinical characteristics at IRF admission, and clinical characteristics related to the recent hospitalization were tested as risk adjusters. A total of 4769 patient stays from 38 IRFs were included. Approximately 57% of the sample was female; 38.4% were 75-84 years old, and 31.0% were 65-74 years old. The final model, containing 77 risk adjusters, explained 53.7% of the variance in discharge self-care scores (P<0.0001). Admission self-care function was the strongest predictor, followed by admission cognitive function and IRF primary diagnosis group. The ranges of expected and observed scores overlapped very well, with little bias across the range of predicted self-care functioning. Our risk-adjustment model demonstrated strong validity for predicting discharge self-care scores. Although the model needs validation with national data, it represents an important first step in the evaluation of IRF functional outcomes.
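The measure's core logic, predicting discharge self-care from admission characteristics and reporting the variance explained, can be illustrated on simulated data. The sketch below uses a single predictor and ordinary least squares rather than the paper's 77-covariate GEE model; the score range, coefficients, and noise level are hypothetical:

```python
import random
import statistics

rng = random.Random(7)
# Hypothetical IRF stays: admission self-care score (range ~6-42) plus noise
# determines the observed discharge score, capped at the scale maximum.
admission = [rng.uniform(6, 42) for _ in range(200)]
discharge = [min(42.0, 0.8 * a + 8.0 + rng.gauss(0, 3)) for a in admission]

# Ordinary least squares by hand: slope = cov(x, y) / var(x)
mx, my = statistics.mean(admission), statistics.mean(discharge)
sxy = sum((x - mx) * (y - my) for x, y in zip(admission, discharge))
sxx = sum((x - mx) ** 2 for x in admission)
slope = sxy / sxx
intercept = my - slope * mx

expected = [slope * x + intercept for x in admission]
ss_res = sum((y - e) ** 2 for y, e in zip(discharge, expected))
ss_tot = sum((y - my) ** 2 for y in discharge)
r2 = 1.0 - ss_res / ss_tot  # share of variance in discharge scores explained
```

The reported 53.7% of variance explained corresponds to this r² statistic, computed from the full risk-adjuster set instead of one predictor.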
Rulison, Kelly L.; Gest, Scott D.; Loken, Eric; Welsh, Janet A.
2010-01-01
The association between affiliating with aggressive peers and behavioral, social and psychological adjustment was examined. Students initially in 3rd, 4th, and 5th grade (N = 427) were followed biannually through 7th grade. Students' peer-nominated groups were identified. Multilevel modeling was used to examine the independent contributions of…
King, M.A.; Altamimi, Z.; Boehm, J.; Bos, M.; Dach, R.; Elosegui, P.; Fund, F.; Hernández-Pajares, M.; Lavallee, D.; Riva, E.M.; et al.
2010-01-01
The provision of accurate models of Glacial Isostatic Adjustment (GIA) is presently a priority need in climate studies, largely due to the potential of the Gravity Recovery and Climate Experiment (GRACE) data to be used to determine accurate and continent-wide assessments of ice mass change and hydrology.
Koss, Kalsea J.; George, Melissa R. W.; Davies, Patrick T.; Cicchetti, Dante; Cummings, E. Mark; Sturge-Apple, Melissa L.
2013-01-01
Examining children's physiological functioning is an important direction for understanding the links between interparental conflict and child adjustment. Utilizing growth mixture modeling, the present study examined children's cortisol reactivity patterns in response to a marital dispute. Analyses revealed three different patterns of cortisol…
DEFF Research Database (Denmark)
He, Peng; Eriksson, Frank; Scheike, Thomas H.
2016-01-01
With competing risks data, one often needs to assess the treatment and covariate effects on the cumulative incidence function. Fine and Gray proposed a proportional hazards regression model for the subdistribution of a competing risk under the assumption that the censoring distribution and the covariates are independent. Covariate-dependent censoring sometimes occurs in medical studies. In this paper, we study the proportional hazards regression model for the subdistribution of a competing risk with proper adjustments for covariate-dependent censoring. We consider a covariate-adjusted weight function obtained by fitting the Cox model for the censoring distribution and using the predictive probability for each individual. Our simulation study shows that the covariate-adjusted weight estimator is basically unbiased when the censoring time depends on the covariates, and the covariate-adjusted weight…
Gamble, Wendy C.; Yu, Jeong Jin; Kuehn, Emily D.
2011-01-01
The main goal of this study was to examine the direct and moderating effects of trustworthiness and modeling on adolescent siblings' adjustment. Data were collected from 438 families including a mother, a younger sibling in fifth, sixth, or seventh grade (M = 11.6 years), and an older sibling (M = 14.3 years). Respondents completed Web-based…
Directory of Open Access Journals (Sweden)
Kyle B Enfield
BACKGROUND: Hospitals are increasingly compared based on clinical outcomes adjusted for severity of illness. Multiple methods exist to adjust for differences between patients. The challenge for consumers of this information, both the public and healthcare providers, is interpreting differences in risk adjustment models, particularly when models differ in their use of administrative and physiologic data. We set out to examine how administrative and physiologic models compare to each other when applied to critically ill patients. METHODS: We prospectively abstracted variables for a physiologic and an administrative model of mortality from two intensive care units in the United States. Predicted mortality was compared through the Pearson product-moment coefficient and Bland-Altman analysis. A subgroup of patients admitted directly from the emergency department was analyzed to remove potential confounding from changes in condition prior to ICU admission. RESULTS: We included 556 patients from two academic medical centers in this analysis. The administrative and physiologic models' predicted mortalities for the combined cohort were 15.3% (95% CI 13.7%, 16.8%) and 24.6% (95% CI 22.7%, 26.5%), respectively (t-test p-value < 0.001). The r² for these models was 0.297. The Bland-Altman plot suggests that at low predicted mortality there was good agreement; however, as mortality increased the models diverged. Similar results were found when analyzing the subgroup of patients admitted directly from the emergency department. When comparing the two hospitals, there was a statistical difference when using the administrative model but not the physiologic model. Unexplained mortality, defined as patients who died despite a predicted mortality of less than 10%, was a rare event by either model. CONCLUSIONS: In conclusion, while it has been shown that administrative models provide estimates of mortality that are similar to physiologic models in non-critically ill patients with pneumonia, our results…
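A Bland-Altman comparison of two models' predicted mortalities reduces to the per-patient differences, their mean (the bias), and 95% limits of agreement. A minimal sketch on hypothetical paired predictions, not the study's data:

```python
import statistics

def bland_altman(pred_a, pred_b):
    """Bland-Altman summary for paired predictions from two risk models:
    returns the mean difference (bias) and the 95% limits of agreement,
    bias +/- 1.96 * SD of the differences."""
    diffs = [a - b for a, b in zip(pred_a, pred_b)]
    bias = statistics.mean(diffs)
    sd = statistics.stdev(diffs)
    return bias, (bias - 1.96 * sd, bias + 1.96 * sd)

# Hypothetical predicted mortalities (fractions) from an administrative
# and a physiologic model for six patients; the gap widens at high risk,
# mimicking the divergence the study reports.
admin = [0.05, 0.10, 0.15, 0.30, 0.50, 0.70]
physio = [0.06, 0.11, 0.20, 0.42, 0.65, 0.90]
bias, (lo, hi) = bland_altman(admin, physio)
```

A negative bias here reflects the physiologic model predicting higher mortality on average, as in the combined cohort above.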
Data Assimilation and Adjusted Spherical Harmonic Model of VTEC Map over Thailand
Klinngam, Somjai; Maruyama, Takashi; Tsugawa, Takuya; Ishii, Mamoru; Supnithi, Pornchai; Chiablaem, Athiwat
2016-07-01
The global navigation satellite system (GNSS) and high frequency (HF) communication are vulnerable to ionospheric irregularities, especially when the signal travels through the low-latitude region and around the magnetic equator, known as the equatorial ionization anomaly (EIA) region. In order to study the ionospheric effects on communications performance in this region, a regional map of the observed total electron content (TEC) can show the characteristics and irregularities of the ionosphere. In this work, we develop a two-dimensional (2D) map of vertical TEC (VTEC) over Thailand using the adjusted spherical harmonic model (ASHM) and a data assimilation technique. We calculate the VTEC from the receiver independent exchange (RINEX) files recorded by dual-frequency global positioning system (GPS) receivers on July 8th, 2012 (a quiet day) at 12 stations around Thailand: 0°N to 25°N and 95°E to 110°E. These stations are managed by the Department of Public Works and Town & Country Planning (DPT), Thailand, the South East Asia Low-latitude Ionospheric Network (SEALION) project operated by the National Institute of Information and Communications Technology (NICT), Japan, and King Mongkut's Institute of Technology Ladkrabang (KMITL). We compute the median observed VTEC (OBS-VTEC) in grids with a spatial resolution of 2.5° × 5° in latitude and longitude and a time resolution of 2 hours. We assimilate the OBS-VTEC with the VTEC estimated from the International Reference Ionosphere model (IRI-VTEC) as well as the ionosphere map exchange (IONEX) files provided by the International GNSS Service (IGS-VTEC). The results show that the estimation of the 15-degree ASHM can be improved when both IRI-VTEC and IGS-VTEC are weighted by latitude-dependent factors before assimilation with the OBS-VTEC. However, IRI-VTEC assimilation improves the ASHM estimation more than IGS-VTEC assimilation. Acknowledgment: This work is partially funded by the…
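A minimal sketch of the weighting idea, assuming a Gaussian latitude-dependent factor on the background model and a simple linear blend with observations; this is a schematic stand-in for the paper's ASHM assimilation, not its actual algorithm, and every parameter is hypothetical:

```python
import math

def latitude_weight(lat_deg, lat0=15.0, scale=20.0):
    """Hypothetical latitude-dependent factor applied to the background
    model VTEC before assimilation (peaking near an assumed EIA-crest
    latitude lat0)."""
    return math.exp(-((lat_deg - lat0) / scale) ** 2)

def assimilate(obs, model, lat_deg, alpha=0.7):
    """Blend an observed grid value (TECU) with the latitude-weighted
    background; fall back to the background where no observation exists."""
    background = latitude_weight(lat_deg) * model
    return alpha * obs + (1 - alpha) * background if obs is not None else background

# (latitude, observed VTEC or None, background model VTEC), all illustrative
cells = [(10.0, 45.2, 50.0), (15.0, None, 52.0), (20.0, 48.9, 47.0)]
vtec = [assimilate(obs, model, lat) for lat, obs, model in cells]
```

Grid cells without observations simply carry the weighted background, which is why the choice of weighting factor matters for the final map.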
van der Wal, W.; Wu, P.; Sideris, M.; Wang, H.
2009-05-01
GRACE satellite data offer homogeneous coverage of the area covered by the former Laurentide ice sheet. The secular gravity rate estimated from the GRACE data can therefore be used to constrain the ice loading history in Laurentide and, to a lesser extent, the mantle rheology in a GIA model. The objective of this presentation is to find a best-fitting global ice model and use it to study how the ice model can be modified to fit a composite rheology, in which creep rates from a linear and a non-linear rheology are added. This is useful because all ice models constructed from GIA assume that mantle rheology is linear, but creep experiments on rocks show that non-linear rheology may be the dominant mechanism in some parts of the mantle. We use CSR release 4 solutions from August 2002 to October 2008, with continental water storage effects removed by the GLDAS model and filtering with a destriping and Gaussian filter. The GIA model is a radially symmetric incompressible Maxwell Earth with varying upper and lower mantle viscosity. Gravity rate misfit values are computed for a range of viscosity values with the ICE-3G, ICE-4G and ICE-5G models. The best fit is found for models with ICE-3G and ICE-4G, and the ICE-4G model is selected for computations with a so-called composite rheology. For the composite rheology, the Coupled Laplace Finite-Element Method is used to compute the GIA response of a spherical self-gravitating incompressible Maxwell Earth. The pre-stress exponent (A) derived from a uniaxial stress experiment is varied between 3.3 × 10⁻³⁴, 3.3 × 10⁻³⁵ and 3.3 × 10⁻³⁶ Pa⁻³ s⁻¹, the Newtonian viscosity η is varied between 1 × 10²¹ and 3 × 10²¹ Pa s, and the stress exponent is taken to be 3. Composite rheology in general results in geoid rates that are too small compared with GRACE observations. Therefore, simple modifications of the ICE-4G history are investigated by scaling ice heights or delaying glaciation. It is found that a delay in glaciation is a better way to adjust ice…
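The composite rheology described above adds the creep rates of a linear (Newtonian) and a power-law term. A one-line sketch using parameter values from the quoted ranges (η = 3 × 10²¹ Pa s, A = 3.3 × 10⁻³⁵ Pa⁻³ s⁻¹, n = 3) and one common convention for the linear term; the factor of 2 in that term is an assumption, since conventions differ by small constant factors:

```python
def composite_strain_rate(stress_pa, eta=3e21, a=3.3e-35, n=3):
    """Composite rheology sketch: total creep rate as the sum of a linear
    (Newtonian, viscosity eta) and a power-law (pre-stress factor a, stress
    exponent n) contribution. The linear term uses one common convention,
    stress / (2*eta); other conventions differ by a small constant factor."""
    return stress_pa / (2.0 * eta) + a * stress_pa ** n

# At low stress the linear term dominates; at high stress the power-law does.
low_rate = composite_strain_rate(1e5)   # mostly the Newtonian term
high_rate = composite_strain_rate(1e8)  # mostly the power-law term
```

With these values the crossover stress, where the two terms are equal, sits near a few MPa, so typical low-stress regions deform essentially linearly while high-stress regions are power-law dominated.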
A class of additive-accelerated means regression models for recurrent event data
Institute of Scientific and Technical Information of China (English)
[Anonymous]
2010-01-01
In this article, we propose a class of additive-accelerated means regression models for analyzing recurrent event data. The class includes the proportional means model, the additive rates model, the accelerated failure time model, the accelerated rates model and the additive-accelerated rate model as special cases. The new model offers great flexibility in formulating the effects of covariates on the mean functions of counting processes while leaving the stochastic structure completely unspecified. For the inference on the model parameters, estimating equation approaches are derived and asymptotic properties of the proposed estimators are established. In addition, a technique is provided for model checking. The finite-sample behavior of the proposed methods is examined through Monte Carlo simulation studies, and an application to a bladder cancer study is illustrated.
Asberg, Kia K.; Bowers, Clint; Renk, Kimberly; McKinney, Cliff
2008-01-01
Today's society puts constant demands on the time and resources of all individuals, with the resulting stress promoting a decline in psychological adjustment. Emerging adults are not exempt from this experience, with an alarming number reporting excessive levels of stress and stress-related problems. As a result, the present study addresses the…
A Key Challenge in Global HRM: Adding New Insights to Existing Expatriate Spouse Adjustment Models
Gupta, Ritu; Banerjee, Pratyush; Gaur, Jighyasu
2012-01-01
This study is an attempt to strengthen the existing knowledge about factors affecting the adjustment process of the trailing expatriate spouse and the subsequent impact of any maladjustment or expatriate failure. We conducted a qualitative enquiry using grounded theory methodology with 26 Indian spouses who had to deal with their partner's…
Multiple High-Fidelity Modeling Tools for Metal Additive Manufacturing Process Development Project
National Aeronautics and Space Administration — Despite the rapid commercialization of additive manufacturing technology such as selective laser melting, SLM, there are gaps in process modeling and material...
USING R TO TEACH SEASONAL ADJUSTMENT
Directory of Open Access Journals (Sweden)
Pedro Costa Ferreira
2017-03-01
This article shows, using the R software, how to seasonally adjust a time series with the X13-ARIMA-SEATS program and the seasonal package developed by Christoph Sax. In addition to presenting step-by-step seasonal adjustment, the article also explores how to analyze the program output and how to forecast the original and seasonally adjusted time series. A case study is presented using Brazilian industrial production. It was verified that treating the effects of Carnival, Easter and working days in the model improved the seasonal adjustment.
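The article's workflow relies on R and X-13ARIMA-SEATS. As a language-neutral illustration of what removing a stable seasonal pattern means, here is a minimal classical additive decomposition in Python, far cruder than X-13, on a hypothetical monthly index:

```python
import statistics

def seasonally_adjust(series, period=12):
    """Minimal classical additive decomposition, a sketch of the idea behind
    seasonal adjustment (not X-13ARIMA-SEATS): estimate each month's average
    deviation from the overall mean, then subtract it from that month."""
    overall = statistics.mean(series)
    seasonal = [statistics.mean(series[m::period]) - overall for m in range(period)]
    return [x - seasonal[i % period] for i, x in enumerate(series)]

# Hypothetical monthly index: mild trend plus a fixed dip in month 1 each year,
# loosely mimicking a recurring calendar effect such as Carnival.
raw = [100 + 0.1 * i - (20 if i % 12 == 1 else 0) for i in range(48)]
adjusted = seasonally_adjust(raw)
```

After adjustment the recurring dip is gone and only the trend remains, which is exactly the property one checks in the X-13 output diagnostics.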
Telzer, Eva H; Yuen, Cynthia; Gonzales, Nancy; Fuligni, Andrew J
2016-07-01
The acculturation gap-distress model purports that immigrant children acculturate faster than do their parents, resulting in an acculturation gap that leads to family and youth maladjustment. However, empirical support for the acculturation gap-distress model has been inconclusive. In the current study, 428 Mexican-American adolescents (50.2 % female) and their primary caregivers independently completed questionnaires assessing their levels of American and Mexican cultural orientation, family functioning, and youth adjustment. Contrary to the acculturation gap-distress model, acculturation gaps were not associated with poorer family or youth functioning. Rather, adolescents with higher levels of Mexican cultural orientations showed positive outcomes, regardless of their parents' orientations to either American or Mexican cultures. Findings suggest that youths' heritage cultural maintenance may be most important for their adjustment.
Seufzer, William J.
2014-01-01
Additive manufacturing is coming into industrial use and has several desirable attributes. Control of the deposition remains a complex challenge, and so this literature review was initiated to capture current modeling efforts in the field of additive manufacturing. This paper summarizes about 10 years of modeling and simulation related to both welding and additive manufacturing. The goals were to learn who is doing what in modeling and simulation, to summarize various approaches taken to create models, and to identify research gaps. Later sections in the report summarize implications for closed-loop-control of the process, implications for local research efforts, and implications for local modeling efforts.
Stiffness Model of a 3-DOF Parallel Manipulator with Two Additional Legs
Directory of Open Access Journals (Sweden)
Guang Yu
2014-10-01
This paper investigates the stiffness modelling of a 3-DOF parallel manipulator with two additional legs. The stiffness model in six directions of the 3-DOF parallel manipulator with two additional legs is derived by performing condensation of DOFs for the joint connections and treatment of the fixed-end connections. Moreover, this modelling method is used to derive the stiffness models of the manipulator with zero or one additional leg. Two performance indices are given to compare the stiffness of the parallel manipulator with two additional legs against that of the manipulators with zero or one additional leg. The method can be used not only to derive the stiffness model of a redundant parallel manipulator, but also to model the stiffness of non-redundant parallel manipulators.
Li, Huiyang; Fang, Manman; Hou, Yingqin; Tang, Runli; Yang, Yizhou; Zhong, Cheng; Li, Qianqian; Li, Zhen
2016-05-18
Four organic sensitizers (LI-68 to LI-71) bearing various conjugated bridges were designed and synthesized, in which the only difference between LI-68 and LI-69 (or LI-70 and LI-71) was the absence/presence of a CN group as the auxiliary electron acceptor. Interestingly, compared with the reference dye LI-68, LI-69 bearing the additional CN group exhibited poor performance, with decreased Jsc and Voc values. However, once the thiophene moiety near the anchor group was replaced by electron-rich pyrrole, the resulting LI-71 exhibited an approximately threefold increase in photoelectric conversion efficiency, from 2.75% (LI-69) to 7.95% (LI-71), displaying the synergistic effect of the two moieties (CN and pyrrole). Computational analysis disclosed that pyrrole as the auxiliary electron donor (D') in the conjugated bridge can compensate for the lower negative charge in the electron acceptor caused by the CN group acting as an electron trap, leading to more efficient electron injection and better photovoltaic performance.
How do attachment dimensions affect bereavement adjustment? A mediation model of continuing bonds.
Yu, Wei; He, Li; Xu, Wei; Wang, Jianping; Prigerson, Holly G
2016-04-30
The current study aims to examine mechanisms underlying the impact of attachment dimensions on bereavement adjustment. Bereaved mainland Chinese participants (N=247) completed anonymous, retrospective, self-report surveys assessing attachment dimensions, continuing bonds (CB), grief symptoms and posttraumatic growth (PTG). Results demonstrated that attachment anxiety predicted grief symptoms via externalized CB and predicted PTG via internalized CB at the same time, whereas attachment avoidance positively predicted grief symptoms via externalized CB but negatively predicted PTG directly. Findings suggested that individuals with a high level of attachment anxiety could both suffer from grief and obtain posttraumatic growth after loss, but it depended on which kind of CB they used. By contrast, attachment avoidance was associated with a heightened risk of maladaptive bereavement adjustment. Future grief therapy may encourage the bereaved to establish CB with the deceased and gradually shift from externalized CB to internalized CB.
Business cycle effects on portfolio credit risk: A simple FX Adjustment for a factor model
Sokolov, Yuri
2010-01-01
The recent economic crisis on the demand side of the economy affects the trends and volatilities of the exchange rates as well as the operating conditions of borrowers in emerging market economies. But exchange rate depreciation creates both winners and losers. With a weaker exchange rate, exporters and net holders of foreign assets will benefit, while those relying on imports and net debtors in foreign currency will be hurt. This paper presents a simple FX adjustment framework...
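The kind of adjustment described can be sketched with a one-factor (Vasicek-type) portfolio model in which a currency shock shifts each borrower's default threshold. The exposure coefficient and shock size below are illustrative assumptions, not the paper's calibration:

```python
import random
import statistics

def portfolio_default_rate(n_sims, pd, rho, fx_beta, fx_shock,
                           n_borrowers=1000, seed=1):
    """One-factor (Vasicek-type) portfolio sketch with a crude FX adjustment:
    borrower i defaults when sqrt(rho)*Z + sqrt(1-rho)*eps_i falls below a
    threshold shifted by fx_beta*fx_shock (a net FX debtor's threshold rises,
    so defaults increase, when the currency depreciates). Returns the mean
    simulated portfolio default rate."""
    nd = statistics.NormalDist()
    thresh = nd.inv_cdf(pd) + fx_beta * fx_shock
    rng = random.Random(seed)
    rates = []
    for _ in range(n_sims):
        z = rng.gauss(0.0, 1.0)  # systematic business-cycle factor
        defaults = sum(
            (rho ** 0.5) * z + ((1 - rho) ** 0.5) * rng.gauss(0.0, 1.0) < thresh
            for _ in range(n_borrowers)
        )
        rates.append(defaults / n_borrowers)
    return statistics.mean(rates)

base = portfolio_default_rate(200, pd=0.02, rho=0.15, fx_beta=0.0, fx_shock=0.0)
stressed = portfolio_default_rate(200, pd=0.02, rho=0.15, fx_beta=0.5, fx_shock=1.0)
```

Raising the threshold for FX-exposed debtors lifts the simulated default rate above its unstressed level, which is the loser side of the depreciation effect described above; winners would get a negative fx_beta.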
Chung, Ha-Yeun; Kollmey, Anna S; Schrepper, Andrea; Kohl, Matthias; Bläss, Markus F; Stehr, Sebastian N; Lupp, Amelie; Gräler, Markus H; Claus, Ralf A
2017-04-15
Cardiac dysfunction, in particular of the left ventricle, is a common and early event in sepsis, and is strongly associated with an increase in patients' mortality. Acid sphingomyelinase (SMPD1)-the principal regulator for rapid and transient generation of the lipid mediator ceramide-is involved in both the regulation of host response in sepsis as well as in the pathogenesis of chronic heart failure. This study determined the degree and the potential role to which SMPD1 and its modulation affect sepsis-induced cardiomyopathy using both genetically deficient and pharmacologically-treated animals in a polymicrobial sepsis model. As surrogate parameters of sepsis-induced cardiomyopathy, cardiac function, markers of oxidative stress as well as troponin I levels were found to be improved in desipramine-treated animals, desipramine being an inhibitor of ceramide formation. Additionally, ceramide formation in cardiac tissue was dysregulated in SMPD1(+/+) as well as SMPD1(-/-) animals, whereas desipramine pretreatment resulted in stable, but increased ceramide content during host response. This was a result of elevated de novo synthesis. Strikingly, desipramine treatment led to significantly improved levels of surrogate markers. Furthermore, similar results in desipramine-pretreated SMPD1(-/-) littermates suggest an SMPD1-independent pathway. Finally, a pattern of differentially expressed transcripts important for regulation of apoptosis as well as antioxidative and cytokine response supports the concept that desipramine modulates ceramide formation, resulting in beneficial myocardial effects. We describe a novel, protective role of desipramine during sepsis-induced cardiac dysfunction that controls ceramide content. In addition, it may be possible to modulate cardiac function during host response by pre-conditioning with the Food and Drug Administration (FDA)-approved drug desipramine.
Horton, B.P.; Peltier, W.R.; Culver, S.J.; Drummond, R.; Engelhart, S.E.; Kemp, A.C.; Mallinson, D.; Thieler, E.R.; Riggs, S.R.; Ames, D.V.; Thomson, K.H.
2009-01-01
We have synthesized new and existing relative sea-level (RSL) data to produce a quality-controlled, spatially comprehensive database from the North Carolina coastline. The RSL database consists of 54 sea-level index points that are quantitatively related to an appropriate tide level and assigned an error estimate, and a further 33 limiting dates that confine the maximum and minimum elevations of RSL. The temporal distribution of the index points is very uneven with only five index points older than 4000 cal a BP, but the form of the Holocene sea-level trend is constrained by both terrestrial and marine limiting dates. The data illustrate RSL rapidly rising during the early and mid Holocene from an observed elevation of -35.7 ± 1.1 m MSL at 11062-10576 cal a BP to -4.2 ± 0.4 m MSL at 4240-3592 cal a BP. We restricted comparisons between observations and predictions from the ICE-5G(VM2) with rotational feedback Glacial Isostatic Adjustment (GIA) model to the Late Holocene RSL (last 4000 cal a BP) because of the wealth of sea-level data during this time interval. The ICE-5G(VM2) model predicts significant spatial variations in RSL across North Carolina, thus we subdivided the observations into two regions. The model forecasts an increase in the rate of sea-level rise in Region 1 (Albemarle, Currituck, Roanoke, Croatan, and northern Pamlico sounds) compared to Region 2 (southern Pamlico, Core and Bogue sounds, and farther south to Wilmington). The observations show Late Holocene sea-level rising at 1.14 ± 0.03 mm year⁻¹ and 0.82 ± 0.02 mm year⁻¹ in Regions 1 and 2, respectively. The ICE-5G(VM2) predictions capture the general temporal trend of the observations, although there is an apparent misfit for index points older than 2000 cal a BP. It is presently unknown whether these misfits are caused by possible tectonic uplift associated with the mid-Carolina Platform High or a flaw in the GIA model. A comparison of local tide gauge data with the Late Holocene RSL
Energy Technology Data Exchange (ETDEWEB)
Faille, D.; Codrons, B.; Gevers, M.
1996-03-01
This document belongs to the methodological part of the MISTRAL project, which builds a library of power plant models. The model equations are generally obtained from first principles. The parameters, however, are not always easily calculable (at least accurately) from the dimension data. We are therefore investigating the possibility of automatically adjusting the values of those parameters from experimental data. To do that, we must master optimization algorithms and techniques for analyzing model structure, such as identifiability theory. (authors). 7 refs., 1 fig., 1 append.
Institute of Scientific and Technical Information of China (English)
ZHONG Yi-feng; WANG Rui; YING Xue-gang; CHEN Huai
2006-01-01
In this paper, we established a finite element (FEM) model to analyze the dynamic characteristics of arch bridges. In this model, the effects of adjustment to the length of a suspender on its geometric stiffness matrix are stressed. The FEM equations of mechanics characteristics, natural frequency and main mode are set up based on the first-order matrix perturbation theory. Application of the proposed model to a real arch bridge showed that considering the effects of suspender length variation improves the simulation precision of the bridge's dynamic characteristics.
Benchmarking Judgmentally Adjusted Forecasts
Ph.H.B.F. Franses (Philip Hans); L.P. de Bruijn (Bert)
2017-01-01
Many publicly available macroeconomic forecasts are judgmentally adjusted model-based forecasts. In practice, usually only a single final forecast is available, and not the underlying econometric model, nor are the size and reason for adjustment known. Hence, the relative weights given
Benchmarking judgmentally adjusted forecasts
Ph.H.B.F. Franses (Philip Hans); L.P. de Bruijn (Bert)
2015-01-01
Many publicly available macroeconomic forecasts are judgmentally-adjusted model-based forecasts. In practice usually only a single final forecast is available, and not the underlying econometric model, nor are the size and reason for adjustment known. Hence, the relative weights
Jiang, Xianan; Zhao, Ming; Maloney, Eric D.; Waliser, Duane E.
2016-10-01
Despite its pronounced impacts on weather extremes worldwide, the Madden-Julian Oscillation (MJO) remains poorly represented in climate models. Here we present findings that point to some necessary ingredients to produce a strong MJO amplitude in a large set of model simulations from a recent model intercomparison project. While surface flux and radiative heating anomalies are considered important for amplifying the MJO, their strength per unit MJO precipitation anomaly is found to be negatively correlated to MJO amplitude across these multimodel simulations. However, model MJO amplitude is found to be closely tied to a model's convective moisture adjustment time scale, a measure of how rapidly precipitation must increase to remove excess column water vapor, or alternately the efficiency of surface precipitation generation per unit column water vapor anomaly. These findings provide critical insights into key model processes for the MJO and pinpoint a direction for improved model representation of the MJO.
Lopez-Duran, Nestor L; Mayer, Stefanie E; Abelson, James L
2014-07-01
In this report, we present growth curve modeling (GCM) with landmark registration as an alternative statistical approach for the analysis of time series cortisol data. This approach addresses an often-ignored but critical source of variability in salivary cortisol analyses: individual and group differences in the time latency of post-stress peak concentrations. It allows for the simultaneous examination of cortisol changes before and after the peak while controlling for timing differences, and thus provides additional information that can help elucidate group differences in the underlying biological processes (e.g., intensity of response, regulatory capacity). We tested whether GCM with landmark registration is more sensitive than traditional statistical approaches (e.g., repeated measures ANOVA--rANOVA) in identifying sex differences in salivary cortisol responses to a psychosocial stressor (Trier Social Stress Test--TSST) in healthy adults (mean age 23). We used plasma ACTH measures as our "standard" and show that the new approach confirms in salivary cortisol the ACTH finding that males had longer peak latencies, higher post-stress peaks but a more intense post-peak decline. This finding would have been missed if only saliva cortisol was available and only more traditional analytic methods were used. This new approach may provide neuroendocrine researchers with a highly sensitive complementary tool to examine the dynamics of the cortisol response in a way that reduces risk of false negative findings when blood samples are not feasible.
Sullivan, Kristynn J; Shadish, William R; Steiner, Peter M
2015-03-01
Single-case designs (SCDs) are short time series that assess intervention effects by measuring units repeatedly over time in both the presence and absence of treatment. This article introduces a statistical technique for analyzing SCD data that has not been much used in psychological and educational research: generalized additive models (GAMs). In parametric regression, the researcher must choose a functional form to impose on the data, for example, that trend over time is linear. GAMs reverse this process by letting the data inform the choice of functional form. In this article we review the problem that trend poses in SCDs, discuss how current SCD analytic methods approach trend, describe GAMs as a possible solution, suggest a GAM model testing procedure for examining the presence of trend in SCDs, present a small simulation to show the statistical properties of GAMs, and illustrate the procedure on 3 examples of different lengths. Results suggest that GAMs may be very useful both as a form of sensitivity analysis for checking the plausibility of assumptions about trend and as a primary data analysis strategy for testing treatment effects. We conclude with a discussion of some problems with GAMs and some future directions for research on the application of GAMs to SCDs.
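Since the GAM idea is described only verbally above, a stdlib-only sketch may help illustrate the core point of letting the data inform the functional form. A simple running-mean smoother stands in for a GAM's penalized spline here, and the short series and function names are hypothetical, not from the article.

```python
# Contrast a forced linear trend with a data-driven smoother on a short,
# plateauing single-case series (hypothetical data). A running mean is a
# crude stand-in for the spline smoothers real GAM software would use.

def linear_fit(ts, ys):
    """Ordinary least-squares line; returns fitted values."""
    n = len(ts)
    tbar, ybar = sum(ts) / n, sum(ys) / n
    sxy = sum((t - tbar) * (y - ybar) for t, y in zip(ts, ys))
    sxx = sum((t - tbar) ** 2 for t in ts)
    slope = sxy / sxx
    return [ybar + slope * (t - tbar) for t in ts]

def running_mean(ys, k=1):
    """Window smoother (half-width k): the data, not a formula, set the shape."""
    out = []
    for i in range(len(ys)):
        window = ys[max(0, i - k):i + k + 1]
        out.append(sum(window) / len(window))
    return out

def sse(ys, fit):
    return sum((y - f) ** 2 for y, f in zip(ys, fit))

ts = list(range(10))
ys = [1, 1, 2, 4, 7, 9, 10, 10, 10, 10]   # rise then plateau: not linear
linear_error = sse(ys, linear_fit(ts, ys))
smooth_error = sse(ys, running_mean(ys))
```

A real analysis would use penalized splines with automatic smoothness selection (for example the mgcv package in R), but the gap between linear_error and smooth_error shows why a flexible fit can capture trend that a forced linear form misses.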
Talerngsak Angkuraseranee
2010-01-01
The additive and dominance genetic variances of 5,801 Duroc reproductive and growth records were estimated using BLUPF90 PC-PACK. Estimates were obtained for number born alive (NBA), birth weight (BW), number weaned (NW), and weaning weight (WW). Data were analyzed using two mixed model equations. The first model included fixed effects and random effects identifying inbreeding depression, additive gene effects and permanent environmental effects. The second model was similar to the first model, but...
NKG201xGIA - first results for a new model of glacial isostatic adjustment in Fennoscandia
Steffen, Holger; Barletta, Valentina; Kollo, Karin; Milne, Glenn A.; Nordman, Maaria; Olsson, Per-Anders; Simpson, Matthew J. R.; Tarasov, Lev; Ågren, Jonas
2016-04-01
Glacial isostatic adjustment (GIA) is a dominant process in northern Europe, which is observed with several geodetic and geophysical methods. The observed land uplift due to this process amounts to about 1 cm/year in the northern Gulf of Bothnia. GIA affects the establishment and maintenance of reliable geodetic and gravimetric reference networks in the Nordic countries. To support a high level of accuracy in the determination of position, adequate corrections have to be applied with dedicated models. Currently, there are efforts within a Nordic Geodetic Commission (NKG) activity towards a model of glacial isostatic adjustment for Fennoscandia. The new model, NKG201xGIA, to be developed in the near future will complement the forthcoming empirical NKG land uplift model, which will substitute the currently used empirical land uplift model NKG2005LU (Ågren & Svensson, 2007). Together, the models will be a reference for vertical and horizontal motion, gravity and geoid change and more. NKG201xGIA will also provide uncertainty estimates for each field. Following former investigations, the GIA model is based on a combination of an ice and an earth model. The selected reference ice model, GLAC, for Fennoscandia, the Barents/Kara seas and the British Isles is provided by Lev Tarasov and co-workers. Tests of different ice and earth models will be performed based on the expertise of each involved modeler. This includes studies on high resolution ice sheets, different rheologies, lateral variations in lithosphere and mantle viscosity and more. This will also be done in co-operation with scientists outside NKG who help in the development and testing of the model. References Ågren, J., Svensson, R. (2007): Postglacial Land Uplift Model and System Definition for the New Swedish Height System RH 2000. Reports in Geodesy and Geographical Information Systems Rapportserie, LMV-Rapport 4, Lantmäteriet, Gävle.
Return predictability and intertemporal asset allocation: Evidence from a bias-adjusted VAR model
DEFF Research Database (Denmark)
Engsted, Tom; Pedersen, Thomas Quistgaard
We extend the VAR based intertemporal asset allocation approach from Campbell et al. (2003) to the case where the VAR parameter estimates are adjusted for small-sample bias. We apply the analytical bias formula from Pope (1990) using both Campbell et al.'s dataset, and an extended dataset with quarterly data from 1952 to 2006. The results show that correcting the VAR parameters for small-sample bias has both quantitatively and qualitatively important effects on the strategic intertemporal part of optimal portfolio choice, especially for bonds: for intermediate values of risk...
Additive Hazard Regression Models: An Application to the Natural History of Human Papillomavirus
Directory of Open Access Journals (Sweden)
Xianhong Xie
2013-01-01
There are several statistical methods for time-to-event analysis, among which is the Cox proportional hazards model that is most commonly used. However, when the absolute change in risk, instead of the risk ratio, is of primary interest or when the proportional hazards assumption for the Cox model is violated, an additive hazard regression model may be more appropriate. In this paper, we give an overview of this approach and then apply a semiparametric as well as a nonparametric additive model to a data set from a study of the natural history of human papillomavirus (HPV) in HIV-positive and HIV-negative women. The results from the semiparametric model indicated on average an additional 14 oncogenic HPV infections per 100 woman-years related to CD4 count < 200 relative to HIV-negative women, and those from the nonparametric additive model showed an additional 40 oncogenic HPV infections per 100 women over 5 years of follow-up, while the estimated hazard ratio in the Cox model was 3.82. Although the Cox model can provide a better understanding of the exposure-disease association, the additive model is often more useful for public health planning and intervention.
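To make the estimator's form concrete, the following is a toy, stdlib-only sketch of Aalen's least-squares increments for an additive hazard model with an intercept and one binary covariate. The data are invented and this is an illustration of the estimator's structure, not the study's analysis code.

```python
# At each event time t, Aalen's estimator adds the least-squares increment
#   dB(t) = (X'X)^{-1} X' dN(t)
# to the cumulative regression functions, where X is built from subjects
# still at risk and dN marks who had the event at t. Hypothetical data below.

def inv2(m):
    """Inverse of a 2x2 matrix."""
    (a, b), (c, d) = m
    det = a * d - b * c
    return [[d / det, -b / det], [-c / det, a / det]]

def aalen_path(times, events, z):
    """times: follow-up times; events: 1=event, 0=censored; z: 0/1 covariate."""
    B0 = B1 = 0.0
    path = []
    for t in sorted({t for t, e in zip(times, events) if e}):
        risk = [i for i, ti in enumerate(times) if ti >= t]       # at-risk set
        X = [[1.0, float(z[i])] for i in risk]
        dN = [1.0 if times[i] == t and events[i] else 0.0 for i in risk]
        xtx = [[sum(r[p] * r[q] for r in X) for q in (0, 1)] for p in (0, 1)]
        xtn = [sum(X[k][p] * dN[k] for k in range(len(risk))) for p in (0, 1)]
        ix = inv2(xtx)
        dB = [sum(ix[p][q] * xtn[q] for q in (0, 1)) for p in (0, 1)]
        B0, B1 = B0 + dB[0], B1 + dB[1]
        path.append((t, B0, B1))
    return path

path = aalen_path(times=[2, 3, 5, 7, 8, 9], events=[1, 1, 0, 1, 1, 0],
                  z=[0, 1, 0, 1, 0, 1])
```

B1 at the last event time estimates the cumulative excess hazard attributable to z = 1; a real analysis would also track variance estimates and guard against singular at-risk designs.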
Product versus additive threshold models for analysis of reproduction outcomes in animal genetics.
David, I; Bodin, L; Gianola, D; Legarra, A; Manfredi, E; Robert-Granié, C
2009-08-01
The phenotypic observation of some reproduction traits (e.g., insemination success, interval from lambing to insemination) is the result of environmental and genetic factors acting on 2 individuals: the male and female involved in a mating couple. In animal genetics, the main approach (called additive model) proposed for studying such traits assumes that the phenotype is linked to a purely additive combination, either on the observed scale for continuous traits or on some underlying scale for discrete traits, of environmental and genetic effects affecting the 2 individuals. Statistical models proposed for studying human fecundability generally consider reproduction outcomes as the product of hypothetical unobservable variables. Taking inspiration from these works, we propose a model (product threshold model) for studying a binary reproduction trait that supposes that the observed phenotype is the product of 2 unobserved phenotypes, 1 for each individual. We developed a Gibbs sampling algorithm for fitting a Bayesian product threshold model including additive genetic effects and showed by simulation that it is feasible and that it provides good estimates of the parameters. We showed that fitting an additive threshold model to data that are simulated under a product threshold model provides biased estimates, especially for individuals with high breeding values. A main advantage of the product threshold model is that, in contrast to the additive model, it provides distinct estimates of fixed effects affecting each of the 2 unobserved phenotypes.
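A small generative sketch may clarify the product structure: each partner contributes an unobserved binary phenotype from its own liability threshold, and the recorded outcome is their product. The liabilities and rates below are hypothetical, and this is not the authors' Gibbs sampler.

```python
import random

# Observed outcome = product of two latent binary phenotypes, one per partner.
# Each latent phenotype is 1 when a normal 'liability' exceeds a threshold of 0.

def latent_success(mean_liability):
    return 1 if random.gauss(mean_liability, 1.0) > 0 else 0

def mating_outcome(male_liability, female_liability):
    return latent_success(male_liability) * latent_success(female_liability)

random.seed(1)
n = 20000
rate = sum(mating_outcome(0.5, 0.3) for _ in range(n)) / n
# Success requires BOTH latent phenotypes to be 1, so the observed rate is
# roughly the product of the two marginal probabilities.
```

This is exactly why an additive threshold model fitted to product-generated data is biased for extreme individuals: the observed scale mixes two separate liabilities multiplicatively.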
A limit-cycle model of leg movements in cross-country skiing and its adjustments with fatigue.
Cignetti, F; Schena, F; Mottet, D; Rouard, A
2010-08-01
Using dynamical modeling tools, the aim of the study was to establish a minimal model reproducing leg movements in cross-country skiing, and to evaluate the eventual adjustments of this model with fatigue. The participants (N=8) skied on a treadmill at 90% of their maximal oxygen consumption, up to exhaustion, using the diagonal stride technique. Qualitative analysis of leg kinematics portrayed in phase planes, Hooke planes, and velocity profiles suggested the inclusion in the model of a linear stiffness and an asymmetric van der Pol-type nonlinear damping. Quantitative analysis revealed that this model reproduced the observed kinematics patterns of the leg with adequacy, accounting for 87% of the variance. A rising influence of the stiffness term and a dropping influence of the damping terms were also evidenced with fatigue. The meaning of these changes was discussed in the framework of motor control.
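As a rough numerical illustration of this model family, the sketch below integrates an oscillator with linear stiffness and van der Pol-type nonlinear damping; the asymmetric damping term is omitted and all parameter values are hypothetical, so this shows the model family rather than the fitted model.

```python
# x'' = -k*x - c*(x**2 - 1)*x' : linear stiffness plus van der Pol damping.
# The damping pumps energy in for |x| < 1 and removes it for |x| > 1, which is
# what traps the trajectory on a limit cycle regardless of where it starts.

def simulate(k=1.0, c=0.5, x=2.5, v=0.0, dt=0.001, steps=60000):
    for _ in range(steps):
        a = -k * x - c * (x * x - 1.0) * v
        v += a * dt
        x += v * dt
    return x, v

x, v = simulate()
radius = (x * x + v * v) ** 0.5   # distance from rest in phase space
```

After 60 time units the state sits on the limit cycle (radius roughly 2 for these parameters) rather than decaying to rest or diverging; fatigue-related adjustment could then be modelled as drift in k and c, as the study reports.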
DEFF Research Database (Denmark)
Hadrup, Niels; Taxvig, Camilla; Pedersen, Mikael;
2013-01-01
...antagonism was observed for effects of Mixture 2 on this hormone. The mixtures contained chemicals exerting only limited maximal effects. This hampered prediction by the CA and IA models, whereas the GCA model could be used to predict a full dose-response curve. Regarding effects on progesterone...
Walker, Stefan P W; Thibaut, Loïc; McCormick, Mark I
2010-09-01
Positive density dependence (i.e., the Allee effect; AE) often has important implications for the dynamics and conservation of populations. Here, we show that density-dependent sex ratio adjustment in response to sexual selection may be a common AE mechanism. Specifically, using an analytical model we show that an AE is expected whenever one sex is more fecund than the other and sex ratio bias toward the less fecund sex increases with density. We illustrate the robustness of this pattern, using Monte Carlo simulations, against a range of body size-fecundity relationships and sex-allocation strategies. Finally, we test the model using the sex-changing polygynous reef fish Parapercis cylindrica; positive density dependence in the strength of sexual selection for male size is evidenced as the causal mechanism driving local sex ratio adjustment, hence the AE. Model application may extend to invertebrates, reptiles, birds, and mammals, in addition to over 70 reef fishes. We suggest that protected areas may often outperform harvest quotas as a conservation tool since the latter promotes population fragmentation, reduced polygyny, a balancing of the sex ratio, and hence up to a 50% decline in per capita fecundity, while the former maximizes polygyny and source-sink potential.
DEFF Research Database (Denmark)
Borup, Morten; Grum, Morten; Linde, Jens Jørgen
2016-01-01
Numerous studies have shown that radar rainfall estimates need to be adjusted against rain gauge measurements in order to be useful for hydrological modelling. In the current study we investigate if adjustment can improve radar rainfall estimates to the point where they can be used for modelling overflows from urban drainage systems, and we furthermore investigate the importance of the aggregation period of the adjustment scheme. This is done by continuously adjusting X-band radar data based on the previous 5-30 min of rain data recorded by multiple rain gauges and propagating the rainfall ... well defined, 64 ha urban catchment, for nine overflow generating rain events. The dynamically adjusted radar data perform best when the aggregation period is as small as 10-20 min, in which case it performs much better than static adjusted radar data and data from rain gauges situated 2-3 km away.
Effects of additional food in a delayed predator-prey model.
Sahoo, Banshidhar; Poria, Swarup
2015-03-01
We examine the effects of supplying additional food to the predator in a gestation delay induced predator-prey system with habitat complexity. Additional food works in favor of predator growth in our model. Presence of additional food reduces the predatory attack rate to prey in the model. By supplying additional food we can control the predator population. Taking time delay as bifurcation parameter, the stability of the coexisting equilibrium point is analyzed. Hopf bifurcation analysis is done with respect to time delay in presence of additional food. The direction of Hopf bifurcations and the stability of bifurcated periodic solutions are determined by applying the normal form theory and the center manifold theorem. The qualitative dynamical behavior of the model is simulated using experimental parameter values. It is observed that fluctuations of the population size can be controlled either by supplying additional food suitably or by increasing the degree of habitat complexity. It is pointed out that Hopf bifurcation occurs in the system when the delay crosses some critical value. This critical value of delay depends strongly on the quality and quantity of the supplied additional food. Therefore, the variation of the predator population significantly affects the dynamics of the model. Model results are compared with experimental results and biological implications of the analytical findings are discussed in the conclusion section.
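The following is a schematic, stdlib-only simulation in the spirit of the model: a gestation delay tau enters the predator's growth, and additional food A both feeds the predator and reduces the effective attack rate on prey. The functional forms and parameter values are invented for illustration and are not the paper's exact equations.

```python
# Logistic prey, predator growth driven by delayed consumption plus additional
# food A; larger A lowers the attack rate on prey, as described above.

def simulate(tau=0.5, A=0.5, dt=0.01, T=60.0):
    r, K, a, b, e, m = 1.0, 5.0, 1.2, 0.8, 0.6, 0.4   # hypothetical parameters
    attack = a / (1.0 + b * A)            # additional food eases predation
    lag = int(tau / dt)
    hx, hy = [1.0] * (lag + 1), [0.5] * (lag + 1)     # constant history
    for _ in range(int(T / dt)):
        x, y = hx[-1], hy[-1]
        xd = hx[-1 - lag]                 # prey level one gestation lag ago
        dx = r * x * (1 - x / K) - attack * x * y
        dy = (e * (attack * xd + A) - m) * y
        hx.append(max(x + dx * dt, 0.0))
        hy.append(max(y + dy * dt, 0.0))
    return hx[-1], hy[-1]

x_none, y_none = simulate(A=0.0)
x_food, y_food = simulate(A=0.5)
```

Varying A and tau in such a sandbox exposes the two control handles the abstract describes: food supply shifts the predator's equilibrium, while the delay crossing a critical value triggers oscillations (the Hopf bifurcation).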
Degree of multicollinearity and variables involved in linear dependence in additive-dominant models
Directory of Open Access Journals (Sweden)
Juliana Petrini
2012-12-01
The objective of this work was to assess the degree of multicollinearity and to identify the variables involved in linear dependence relations in additive-dominant models. Data of birth weight (n=141,567), yearling weight (n=58,124), and scrotal circumference (n=20,371) of Montana Tropical composite cattle were used. Diagnosis of multicollinearity was based on the variance inflation factor (VIF) and on the evaluation of the condition indexes and eigenvalues from the correlation matrix among explanatory variables. The first model studied (RM) included the fixed effect of dam age class at calving and the covariates associated to the direct and maternal additive and non-additive effects. The second model (R) included all the effects of the RM model except the maternal additive effects. Multicollinearity was detected in both models for all traits considered, with VIF values of 1.03 - 70.20 for RM and 1.03 - 60.70 for R. Collinearity increased with the increase of variables in the model and the decrease in the number of observations, and it was classified as weak, with condition index values between 10.00 and 26.77. In general, the variables associated with additive and non-additive effects were involved in multicollinearity, partially due to the natural connection between these covariables as fractions of the biological types in breed composition.
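For readers unfamiliar with the diagnostics, a minimal two-variable example shows how both are computed; the data are hypothetical. With two explanatory variables the VIF reduces to 1/(1 - r^2) and the correlation-matrix eigenvalues are 1 + r and 1 - r.

```python
# Two strongly correlated covariates give a large variance inflation factor
# and a large condition index (sqrt of largest/smallest eigenvalue).

def pearson_r(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sxx = sum((x - mx) ** 2 for x in xs)
    syy = sum((y - my) ** 2 for y in ys)
    return sxy / (sxx * syy) ** 0.5

x1 = [1.0, 2.0, 3.0, 4.0, 5.0, 6.0]
x2 = [1.1, 2.0, 3.2, 3.9, 5.1, 6.2]       # nearly a copy of x1
r = pearson_r(x1, x2)
vif = 1.0 / (1.0 - r * r)
cond_index = ((1.0 + abs(r)) / (1.0 - abs(r))) ** 0.5
```

A VIF above 10 is the usual warning sign of serious collinearity, consistent with the 70.20 reported for model RM; a condition index in the 10-30 band is conventionally read as weak collinearity, matching the abstract's classification.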
Directory of Open Access Journals (Sweden)
João Pedro Velho
2014-09-01
The present work evaluated the Exponential, France, Gompertz and Logistic mathematical models for describing the kinetics of in vitro gas production of whole-plant corn silage harvested at different stages of maturity, in 24- and 48-hour incubations. A semi-automated in vitro gas production technique was used during one, three, six, eight, ten, 12, 14, 16, 22, 24, 31, 36, 42 and 48 hours of incubation. Model adjustment was evaluated by means of mean square of error, mean bias, root mean square prediction error and residual error. The Gompertz mathematical model allowed the best adjustment to describe the gas production kinetics of maize silages, regardless of incubation period. The France model was not adequate to describe gas kinetics of incubation periods equal to or lower than 48 hours. The in vitro gas production technique was efficient to detect differences in nutritional value of maize silages from different growth stages. Twenty-four-hour in vitro incubation periods do not mask treatment effects, whilst 48-hour periods are inadequate to measure silage digestibility.
Wang, Xiaoli; Wu, Shuangsheng; MacIntyre, C Raina; Zhang, Hongbin; Shi, Weixian; Peng, Xiaomin; Duan, Wei; Yang, Peng; Zhang, Yi; Wang, Quanyi
2015-01-01
Serfling-type periodic regression models have been widely used to identify and analyse influenza epidemics. In these approaches, the baseline is traditionally determined using cleaned historical non-epidemic data. However, we found that the previous exclusion of epidemic seasons was empirical, since year-to-year variations in the seasonal pattern of activity had been ignored. Therefore, excluding fixed 'epidemic' months did not seem reasonable. We made some adjustments in the rule of epidemic-period removal to avoid a potentially subjective definition of the start and end of epidemic periods, and fitted the baseline iteratively. Firstly, we established a Serfling regression model based on the actual observations without any removals. After that, instead of manually excluding a predefined 'epidemic' period (the traditional method), we excluded observations which exceeded a calculated boundary. We then established the Serfling regression once more using the cleaned data and again excluded observations which exceeded a calculated boundary. We repeated this process until the R2 value stopped increasing. In addition, the definitions of the onset of an influenza epidemic were heterogeneous, which might make it impossible to accurately evaluate the performance of alternative approaches. We therefore used this modified model to detect the peak timing of influenza instead of the onset of the epidemic, and compared this model with traditional Serfling models using observed weekly case counts of influenza-like illness (ILIs), in terms of sensitivity, specificity and lead time. A better performance was observed. In summary, we provide an adjusted Serfling model which may have improved performance over traditional models in early warning of the arrival of the peak timing of influenza.
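The iterative baseline procedure can be sketched end-to-end in a few dozen lines. The snippet below is a schematic reimplementation of the idea (harmonic Serfling regression, repeated exclusion of observations above an upper boundary), not the authors' code, and the weekly ILI series is synthetic.

```python
import math

# Fit y = b0 + b1*sin(2*pi*t/52) + b2*cos(2*pi*t/52), exclude points above the
# fitted baseline plus z standard deviations, and refit until stable.

def fit_harmonic(ts, ys, period=52.0):
    rows = [[1.0, math.sin(2 * math.pi * t / period),
             math.cos(2 * math.pi * t / period)] for t in ts]
    A = [[sum(r[i] * r[j] for r in rows) for j in range(3)] for i in range(3)]
    b = [sum(r[i] * y for r, y in zip(rows, ys)) for i in range(3)]
    for i in range(3):                      # Gaussian elimination
        for k in range(i + 1, 3):
            f = A[k][i] / A[i][i]
            A[k] = [akj - f * aij for akj, aij in zip(A[k], A[i])]
            b[k] -= f * b[i]
    coef = [0.0, 0.0, 0.0]
    for i in (2, 1, 0):                     # back-substitution
        coef[i] = (b[i] - sum(A[i][j] * coef[j] for j in range(i + 1, 3))) / A[i][i]
    return coef

def predict(coef, t, period=52.0):
    return (coef[0] + coef[1] * math.sin(2 * math.pi * t / period)
            + coef[2] * math.cos(2 * math.pi * t / period))

def iterative_baseline(ts, ys, z=1.96, max_iter=10):
    keep = set(range(len(ts)))
    for _ in range(max_iter):
        idx = sorted(keep)
        coef = fit_harmonic([ts[i] for i in idx], [ys[i] for i in idx])
        sd = (sum((ys[i] - predict(coef, ts[i])) ** 2 for i in idx)
              / len(idx)) ** 0.5
        new_keep = {i for i in range(len(ts))
                    if ys[i] <= predict(coef, ts[i]) + z * sd}
        if new_keep == keep:
            break                           # exclusion set has stabilized
        keep = new_keep
    return coef, keep

# Two synthetic years: smooth seasonal baseline plus two epidemic spikes.
ts = list(range(104))
ys = [50 + 20 * math.cos(2 * math.pi * t / 52)
      + 4 * math.sin(2 * math.pi * t / 26) for t in ts]
ys[10] += 150
ys[62] += 180
coef, keep = iterative_baseline(ts, ys)
```

Here the two spike weeks are excluded from the baseline automatically, with no predefined epidemic months; stopping when the exclusion set stabilizes plays the same role as the abstract's rule of stopping when R2 no longer increases.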
ADDITIVE-MULTIPLICATIVE MODEL FOR RISK ESTIMATION IN THE PRODUCTION OF ROCKET AND SPACE TECHNICS
Directory of Open Access Journals (Sweden)
Orlov A. I.
2014-10-01
For the first time we have developed a general additive-multiplicative model of risk estimation (estimating the probabilities of risk events). In the two-level system, the risk estimates are combined additively at the lower level and multiplicatively at the top level. The additive-multiplicative model was used for risk estimation for (1) implementation of innovative projects at universities (with external partners), (2) the production of new innovative products, and (3) projects for the creation of rocket and space equipment.
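The two-level scheme can be written down in a few lines. In this sketch, lower-level partial risks are summed, and the top level combines group risks multiplicatively via complements of the no-event probabilities; the grouping and numbers are hypothetical.

```python
# Lower level: partial risk estimates within a group combine additively.
# Top level: group risks combine multiplicatively, here as the complement of
# the product of 'no event' probabilities.

def additive_level(partial_risks):
    """Lower level: sum of partial risk estimates, capped at 1."""
    return min(sum(partial_risks), 1.0)

def multiplicative_level(group_risks):
    """Top level: overall risk = 1 - product of (1 - group risk)."""
    p_no_event = 1.0
    for p in group_risks:
        p_no_event *= (1.0 - p)
    return 1.0 - p_no_event

technical = additive_level([0.02, 0.03])        # e.g. component-level risks
organisational = additive_level([0.01, 0.04])   # e.g. process-level risks
overall = multiplicative_level([technical, organisational])
```

With these illustrative inputs each group carries a 0.05 risk, and the overall probability of at least one risk event is 1 - 0.95 * 0.95 = 0.0975.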
Kadaj, Roman
2016-12-01
The adjustment problem of the so-called combined (hybrid, integrated) network created with GNSS vectors and terrestrial observations has been the subject of many theoretical and applied works. The network adjustment in various mathematical spaces was considered: in the Cartesian geocentric system, on a reference ellipsoid and on a mapping plane. For practical reasons, one often takes a geodetic coordinate system associated with the reference ellipsoid. In this case, the Cartesian GNSS vectors are converted, for example, into geodesic parameters (azimuth and length) on the ellipsoid, but the simplest form of converted pseudo-observations are the direct differences of the geodetic coordinates. Unfortunately, such an approach may be essentially distorted by a systematic error resulting from the position error of the GNSS vector before its projection on the ellipsoid surface. In this paper, an analysis of the impact of this error on the determined measures of geometric ellipsoid elements, including the differences of geodetic coordinates or geodesic parameters, is presented. The analysis of the adjustment of a combined network on the ellipsoid shows that the optimal functional approach for the satellite observations is to create the observational equations directly for the original GNSS Cartesian vector components, writing them directly as a function of the geodetic coordinates (in numerical applications, we use the linearized forms of observational equations with explicitly specified coefficients). While retaining the original character of the Cartesian vector, one avoids any systematic errors that may occur in the conversion of the original GNSS vectors to ellipsoid elements, for example the vector of the geodesic parameters. The problem is theoretically developed and numerically tested. An example of the adjustment of a subnet loaded from the database of reference stations of the ASG-EUPOS system was considered for the preferred functional model of the GNSS
Dynamic Modeling of Adjustable-Speed Pumped Storage Hydropower Plant: Preprint
Energy Technology Data Exchange (ETDEWEB)
Muljadi, E.; Singh, M.; Gevorgian, V.; Mohanpurkar, M.; Havsapian, R.; Koritarov, V.
2015-04-06
Hydropower is the largest producer of renewable energy in the U.S. More than 60% of the total renewable generation comes from hydropower. There is also approximately 22 GW of pumped storage hydropower (PSH). Conventional PSH uses a synchronous generator, and thus the rotational speed is constant at synchronous speed. This work details a hydrodynamic model and generator/power converter dynamic model. The optimization of the hydrodynamic model is executed by the hydro-turbine controller, and the electrical output real/reactive power is controlled by the power converter. All essential controllers to perform grid-interface functions and provide ancillary services are included in the model.
Eeuwijk, van F.A.
1996-01-01
In plant breeding it is a common observation to see genotypes react differently to environmental changes. This phenomenon is called genotype by environment interaction. Many statistical approaches for analysing genotype by environment interaction rely heavily on the analysis of variance model.
Rational Multi-curve Models with Counterparty-risk Valuation Adjustments
DEFF Research Database (Denmark)
Crépey, Stéphane; Macrina, Andrea; Nguyen, Tuyet Mai
2016-01-01
We develop a multi-curve term structure set-up in which the modelling ingredients are expressed by rational functionals of Markov processes. We calibrate to London Interbank Offered Rate (LIBOR) swaptions data and show that a rational two-factor log-normal multi-curve model is sufficient to match market data...
Study of Offset Collisions and Beam Adjustment in the LHC Using a Strong-Strong Simulation Model
Muratori, B
2002-01-01
The bunches of the two opposing beams in the LHC do not always collide head-on. The beam-beam effects cause a small, unavoidable separation under nominal operational conditions. During the beam adjustment and when the beams are brought into collision the beams are separated by a significant fraction of the beam size. A result of small beam separation can be the excitation of coherent dipole oscillations or an emittance increase. These two effects are studied using a strong-strong multi particle simulation model. The aim is to identify possible limitations and to find procedures which minimise possible detrimental effects.
MacNab, Ying C
2007-11-20
This paper presents a Bayesian disability-adjusted life year (DALY) methodology for spatial and spatiotemporal analyses of disease and/or injury burden. A Bayesian disease mapping model framework, which blends together spatial modelling, shared-component modelling (SCM), temporal modelling, ecological modelling, and non-linear modelling, is developed for small-area DALY estimation and inference. In particular, we develop a model framework that enables SCM as well as multivariate CAR modelling of non-fatal and fatal disease or injury rates and facilitates spline smoothing for non-linear modelling of temporal rate and risk trends. Using British Columbia (Canada) hospital admission-separation data and vital statistics mortality data on non-fatal and fatal road traffic injuries for the male population aged 20-39, for the years 1991-2000, and for 84 local health areas and 16 health service delivery areas, spatial and spatiotemporal estimation and inference on years of life lost due to premature death, years lived with disability, and DALYs are presented. Fully Bayesian estimation and inference, with Markov chain Monte Carlo implementation, are illustrated. We present a methodological framework within which the DALY and the Bayesian disease mapping methodologies interface and intersect. Its development brings the relative importance of premature mortality and disability into the assessment of community health and health needs in order to provide reliable information and evidence for community-based public health surveillance and evaluation, disease and injury prevention, and resource provision.
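The DALY arithmetic underlying such small-area estimates is simple enough to state directly. A minimal sketch (simplified GBD-style formulas without age weighting or discounting; the counts, disability weight and duration below are hypothetical, not the British Columbia data):

```python
def dalys(deaths, life_expectancy, incident_cases, disability_weight, duration):
    """Disability-adjusted life years as YLL + YLD.

    YLL (years of life lost) = deaths x residual life expectancy at death.
    YLD (years lived with disability) = cases x disability weight x mean duration.
    """
    yll = deaths * life_expectancy
    yld = incident_cases * disability_weight * duration
    return yll, yld, yll + yld

# hypothetical small-area figures for road traffic injuries
yll, yld, daly = dalys(deaths=12, life_expectancy=45.0,
                       incident_cases=300, disability_weight=0.2, duration=2.5)
```

With these inputs, premature mortality (YLL = 540) dominates disability (YLD = 150), the kind of decomposition the paper's spatial models estimate per local health area.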
An estimating equation for parametric shared frailty models with marginal additive hazards
DEFF Research Database (Denmark)
Pipper, Christian Bressen; Martinussen, Torben
2004-01-01
Multivariate failure time data arise when data consist of clusters in which the failure times may be dependent. A popular approach to such data is the marginal proportional hazards model with estimation under the working independence assumption. In some contexts, however, it may be more reasonable to use the marginal additive hazards model. We derive asymptotic properties of the Lin and Ying estimators for the marginal additive hazards model for multivariate failure time data. Furthermore, we suggest estimating equations for the regression parameters and association parameters in parametric shared frailty models.
An original traffic additional emission model and numerical simulation on a signalized road
Zhu, Wen-Xing; Zhang, Jing-Yu
2017-02-01
Based on the VSP (Vehicle Specific Power) model, real traffic emissions were theoretically classified into two parts: basic emission and additional emission. An original additional-emission model was presented to calculate a vehicle's emissions due to signal control effects. A car-following model was developed and used to describe traffic behavior, including cruising, accelerating, decelerating and idling at a signalized intersection. Simulations were conducted for two situations: a single intersection and two adjacent intersections, each with its own control policy. The results are in good agreement with the theoretical analysis. It is also shown that the additional emission model may be used to design signal control policies in modern traffic systems to help solve serious environmental problems.
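The VSP-based classification step can be sketched as follows; the polynomial coefficients are commonly quoted light-duty values from the VSP literature, assumed here rather than taken from this abstract:

```python
def vsp_light_duty(v, a, grade=0.0):
    """Vehicle Specific Power in kW/tonne for a light-duty vehicle.
    v: speed (m/s), a: acceleration (m/s^2), grade: road slope (fraction).
    Coefficients are typical published light-duty values, assumed here."""
    return v * (1.1 * a + 9.81 * grade + 0.132) + 0.000302 * v ** 3

def driving_mode(v, a, grade=0.0):
    """Crude mode split of the kind used when mapping VSP to emission rates."""
    if v < 0.5:
        return "idling"
    return "decelerating" if vsp_light_duty(v, a, grade) < 0 else "cruising/accelerating"
```

Idling at a red light and deceleration map to low or negative VSP, while the acceleration phase after the signal turns green produces the high-VSP bins that drive the "additional" emission.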
Linear identification and model adjustment of a PEM fuel cell stack
Energy Technology Data Exchange (ETDEWEB)
Kunusch, C.; Puleston, P.F.; More, J.J. [LEICI, Departamento de Electrotecnia, Universidad Nacional de La Plata, calle 1 esq. 47 s/n, 1900 La Plata (Argentina); Consejo de Investigaciones Cientificas y Tecnicas (CONICET) (Argentina); Husar, A. [Institut de Robotica i Informatica Industrial (CSIC-UPC), c/ Llorens i Artigas 4-6, 08028 Barcelona (Spain); Mayosky, M.A. [LEICI, Departamento de Electrotecnia, Universidad Nacional de La Plata, calle 1 esq. 47 s/n, 1900 La Plata (Argentina); Comision de Investigaciones Cientificas (CIC), Provincia de Buenos Aires (Argentina)
2008-07-15
In the context of fuel cell stack control, a major challenge is modeling the interdependence of various complex subsystem dynamics. In many cases the interaction of states is modeled through several look-up tables, decision blocks and piecewise continuous functions. Many internal variables are inaccessible for measurement and cannot be used in control algorithms. To make significant contributions in this area, it is necessary to develop reliable models for control and design purposes. In this paper, a linear model based on experimental identification of a 7-cell stack was developed. The procedure followed to obtain a linear model of the system consisted in performing spectroscopy tests on four different single-input single-output subsystems. The inputs considered for the tests were the stack current and the cathode oxygen flow rate, while the measured outputs were the stack voltage and the cathode total pressure. The resulting model can be used either for model-based control design or for on-line analysis and error detection. (author)
Javaheri, Amir; Babbar-Sebens, Meghna; Miller, Robert N.
2016-06-01
Data Assimilation (DA) has been proposed for multiple water resources studies that require rapid employment of incoming observations to update and improve accuracy of operational prediction models. The usefulness of DA approaches in assimilating water temperature observations from different types of monitoring technologies (e.g., remote sensing and in-situ sensors) into numerical models of in-land water bodies (e.g., lakes and reservoirs) has, however, received limited attention. In contrast to in-situ temperature sensors, remote sensing technologies (e.g., satellites) provide the benefit of collecting measurements with better X-Y spatial coverage. However, assimilating water temperature measurements from satellites can introduce biases in the updated numerical model of water bodies because the physical region represented by these measurements does not directly correspond with the numerical model's representation of the water column. This study proposes a novel approach to address this representation challenge by coupling a skin temperature adjustment technique based on available air and in-situ water temperature observations, with an ensemble Kalman filter based data assimilation technique. Additionally, the proposed approach used in this study for four-dimensional analysis of a reservoir provides reasonably accurate surface layer and water column temperature forecasts, in spite of the use of a fairly small ensemble. Application of the methodology on a test site - Eagle Creek Reservoir - in Central Indiana demonstrated that assimilation of remotely sensed skin temperature data using the proposed approach improved the overall root mean square difference between modeled surface layer temperatures and the adjusted remotely sensed skin temperature observations from 5.6°C to 0.51°C (i.e., 91% improvement). In addition, the overall error in the water column temperature predictions when compared with in-situ observations also decreased from 1.95°C (before assimilation
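The analysis step of the ensemble Kalman filter referred to above can be sketched in a few lines (perturbed-observation form; the three-layer temperature state, ensemble size and error variance are illustrative, not the Eagle Creek configuration):

```python
import numpy as np

rng = np.random.default_rng(1)

def enkf_update(X, y_obs, H, r):
    """Perturbed-observation EnKF analysis step.
    X: (n_ens, n_state) forecast ensemble, y_obs: scalar observation,
    H: (n_state,) linear observation operator, r: obs error variance."""
    n_ens = X.shape[0]
    Hx = X @ H                                  # model equivalent of the obs
    A = X - X.mean(axis=0)                      # state anomalies
    hx_anom = Hx - Hx.mean()
    P_xh = A.T @ hx_anom / (n_ens - 1)          # state-obs cross-covariance
    P_hh = hx_anom @ hx_anom / (n_ens - 1)      # obs-space ensemble variance
    K = P_xh / (P_hh + r)                       # Kalman gain
    y_pert = y_obs + rng.normal(0.0, np.sqrt(r), n_ens)  # perturbed observations
    return X + np.outer(y_pert - Hx, K)         # analysis ensemble
```

Observing only the surface layer (H = [1, 0, 0]) still updates the deeper layers through the ensemble cross-covariances, which is how a skin-temperature observation can correct a whole water-column profile.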
The timing of the Black Sea flood event: Insights from modeling of glacial isostatic adjustment
Goldberg, Samuel L.; Lau, Harriet C. P.; Mitrovica, Jerry X.; Latychev, Konstantin
2016-10-01
We present a suite of gravitationally self-consistent predictions of sea-level change since Last Glacial Maximum (LGM) in the vicinity of the Bosphorus and Dardanelles straits that combine signals associated with glacial isostatic adjustment (GIA) and the flooding of the Black Sea. Our predictions are tuned to fit a relative sea level (RSL) record at the island of Samothrace in the north Aegean Sea and they include realistic 3-D variations in viscoelastic structure, including lateral variations in mantle viscosity and the elastic thickness of the lithosphere, as well as weak plate boundary zones. We demonstrate that 3-D Earth structure and the magnitude of the flood event (which depends on the pre-flood level of the lake) both have significant impact on the predicted RSL change at the location of the Bosphorus sill, and therefore on the inferred timing of the marine incursion. We summarize our results in a plot showing the predicted RSL change at the Bosphorus sill as a function of the timing of the flood event for different flood magnitudes up to 100 m. These results suggest, for example, that a flood event at 9 ka implies that the elevation of the sill was lowered through erosion by ∼14-21 m during, and after, the flood. In contrast, a flood event at 7 ka suggests erosion of ∼24-31 m at the sill since the flood. More generally, our results will be useful for future research aimed at constraining the details of this controversial, and widely debated geological event.
Real time adjustment of slow changing flow components in distributed urban runoff models
DEFF Research Database (Denmark)
Borup, Morten; Grum, M.; Mikkelsen, Peter Steen
2011-01-01
In many urban runoff systems infiltrating water contributes a substantial part of the total inflow, and therefore most urban runoff modelling packages include hydrological models for simulating the infiltrating inflow. This paper presents a method for deterministic updating of the hydrological model. This information is then used to update the states of the hydrological model. The method is demonstrated on the 20 km2 Danish urban catchment of Ballerup, which has a substantial amount of infiltration inflow after succeeding rain events, for a very rainy period of 17 days in August 2010. The results show big...
Verfaillie, Deborah; Déqué, Michel; Morin, Samuel; Lafaysse, Matthieu
2017-04-01
Projections of future climate change have been increasingly called for lately, as the reality of climate change has been gradually accepted and societies and governments have started to plan upcoming mitigation and adaptation policies. In mountain regions such as the Alps or the Pyrenees, where winter tourism and hydropower production are large contributors to the regional revenue, particular attention is brought to current and future snow availability. The question of the vulnerability of mountain ecosystems as well as the occurrence of climate-related hazards such as avalanches and debris-flows is also under consideration. In order to generate projections of snow conditions, however, downscaling global climate models (GCMs) by using regional climate models (RCMs) is not sufficient to capture the fine-scale processes and thresholds at play. In particular, the altitudinal resolution matters, since the phase of precipitation is mainly controlled by the temperature, which is altitude-dependent. Simulations from GCMs and RCMs moreover suffer from biases compared to local observations, due to their rather coarse spatial and altitudinal resolution, and often provide outputs at too coarse a time resolution to drive impact models. RCM simulations must therefore be adjusted using empirical-statistical downscaling and error correction methods before they can be used to drive specific models such as energy balance land surface models. In this study, time series of hourly temperature, precipitation, wind speed, humidity, and short- and longwave radiation were generated over the Pyrenees and the French Alps for the period 1950-2100, by using a new approach (named ADAMONT for ADjustment of RCM outputs to MOuNTain regions) based on quantile mapping applied to daily data, followed by time disaggregation accounting for weather patterns selection. We first introduce a thorough evaluation of the method using model runs from the ALADIN RCM driven by a global reanalysis over the
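The quantile-mapping core of such adjustment methods can be sketched as empirical CDF matching (daily values only; ADAMONT's weather-pattern-dependent hourly disaggregation is not shown, and the arrays below are synthetic):

```python
import numpy as np

def quantile_map(model_hist, obs_hist, model_fut):
    """Empirical quantile mapping: send each model value through the
    model-historical CDF, then read off the observed-historical quantile."""
    qs = np.linspace(0.0, 1.0, 101)
    m_q = np.quantile(model_hist, qs)      # model climate quantiles
    o_q = np.quantile(obs_hist, qs)        # observed climate quantiles
    ranks = np.interp(model_fut, m_q, qs)  # position of each value in model CDF
    return np.interp(ranks, qs, o_q)       # matching observed quantile
```

A uniformly warm-biased model series is mapped back onto the observed distribution; values outside the calibration range are clipped to the end quantiles by `np.interp`.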
Directory of Open Access Journals (Sweden)
Natalya Pya
2016-02-01
Background: Measurements of tree heights and diameters are essential in forest assessment and modelling. Tree heights are used for estimating timber volume, site index and other important variables related to forest growth and yield, succession and carbon budget models. However, the diameter at breast height (dbh) can be obtained more accurately, and at lower cost, than total tree height. Hence, generalized height-diameter (h-d) models that predict tree height from dbh, age and other covariates are needed. For a more flexible but biologically plausible estimation of covariate effects we use shape constrained generalized additive models as an extension of existing h-d model approaches. We use causal site parameters such as an index of aridity to enhance the generality and causality of the models and to enable predictions under projected changing climatic conditions. Methods: We develop unconstrained generalized additive models (GAMs) and shape constrained generalized additive models (SCAMs) for investigating the possible effects of tree-specific parameters, such as tree age and relative diameter at breast height, and site-specific parameters, such as index of aridity and sum of daily mean temperature during the vegetation period, on the h-d relationship of forests in Lower Saxony, Germany. Results: Some of the derived effects, e.g. the effects of age, index of aridity and sum of daily mean temperature, have a significantly non-linear pattern. The need for using SCAMs results from the fact that some of the model effects show partially implausible patterns, especially at the boundaries of the data ranges. The derived model predicts monotonically increasing tree height with increasing age and temperature sum and decreasing aridity and social rank of a tree within a stand. The definition of constraints leads to only a marginal or minor decline in model statistics like AIC. An observed structured spatial trend in tree height is modelled via a 2-dimensional surface
Estimate of influenza cases using generalized linear, additive and mixed models.
Oviedo, Manuel; Domínguez, Ángela; Pilar Muñoz, M
2015-01-01
We investigated the relationship between reported cases of influenza and several covariates in Catalonia (Spain). The covariates analyzed were population, age, date of report of influenza, and health region during 2010-2014, using data obtained from the SISAP program (Institut Catala de la Salut - Generalitat of Catalonia). Reported cases were related to the covariates in a descriptive analysis. Generalized Linear Models, Generalized Additive Models and Generalized Additive Mixed Models were used to estimate the evolution of the transmission of influenza. Additive models can estimate non-linear effects of the covariates by smooth functions, and mixed models can account for data dependence and variability in factor variables using correlation structures and random effects, respectively. The incidence rate of influenza was calculated as the incidence per 100 000 people. The mean rate was 13.75 (range 0-27.5) in the winter months (December, January, February) and 3.38 (range 0-12.57) in the remaining months. The statistical analysis showed that Generalized Additive Mixed Models adapted better to the temporal evolution of influenza (serial correlation 0.59) than classical linear models.
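The GLM building block can be sketched as a Poisson log-link regression fitted by iteratively reweighted least squares; an offset term is what puts such a model on the incidence-rate scale (simulated data, not the SISAP records):

```python
import numpy as np

def poisson_irls(X, y, offset=None, n_iter=25):
    """Poisson log-link GLM fitted by iteratively reweighted least squares.
    An offset (e.g. log population) puts the model on the rate scale."""
    if offset is None:
        offset = np.zeros(len(y))
    beta = np.zeros(X.shape[1])
    for _ in range(n_iter):
        eta = X @ beta + offset
        mu = np.exp(eta)
        W = mu                                   # Poisson weights: Var(y) = mu
        z = (eta - offset) + (y - mu) / mu       # working response
        WX = X * W[:, None]
        beta = np.linalg.solve(X.T @ WX, WX.T @ z)
    return beta
```

GAMs and GAMMs replace the linear columns of `X` with spline bases and add random effects, but the same IRLS machinery sits underneath.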
Modeling and simulation of M/M/c queuing pharmacy system with adjustable parameters
Rashida, A. R.; Fadzli, Mohammad; Ibrahim, Safwati; Goh, Siti Rohana
2016-02-01
This paper studies discrete event simulation (DES) as computer-based modelling that imitates the real system of a pharmacy unit. M/M/c queuing theory is used to model and analyse the characteristics of the queuing system at the pharmacy unit of Hospital Tuanku Fauziah, Kangar, in Perlis, Malaysia. The input of this model is based on statistical data collected over 20 working days in June 2014. Currently, patient waiting time at the pharmacy unit is more than 15 minutes. The actual operation of the pharmacy unit is a mixed queuing server with an M/M/2 queuing model, where the pharmacists are the servers. The DES approach and ProModel simulation software are used to simulate the queuing model and to propose improvements to the queuing system of this pharmacy. The waiting time for each server is analysed; Counters 3 and 4 have the highest waiting times, 16.98 and 16.73 minutes respectively. Three scenarios, M/M/3, M/M/4 and M/M/5, are simulated, and waiting times for the actual and experimental queuing models are compared. The simulation results show that adding a server (pharmacist) reduces patient waiting time considerably: average patient waiting time falls by almost 50% when one pharmacist is added to the counter. However, it is not necessary to fully utilize all counters: even though M/M/4 and M/M/5 produced further reductions in patient waiting time, this is ineffective since Counter 5 is rarely used.
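The waiting-time effect of adding a pharmacist can also be checked analytically with the Erlang C formula for an M/M/c queue; the arrival and service rates below are illustrative stand-ins, not the hospital's measured values:

```python
from math import factorial

def erlang_c_wait(lam, mu, c):
    """Mean time in queue W_q for an M/M/c system (Erlang C formula).
    lam: arrival rate, mu: service rate per server, c: number of servers."""
    a = lam / mu                    # offered load in Erlangs
    rho = a / c                     # utilisation; must be < 1 for stability
    if rho >= 1:
        raise ValueError("unstable queue: need lam < c * mu")
    tail = a ** c / (factorial(c) * (1 - rho))
    p_wait = tail / (sum(a ** k / factorial(k) for k in range(c)) + tail)
    return p_wait / (c * mu - lam)  # mean queueing delay
```

With an illustrative load of 10 arrivals/hour and 6 services/hour per pharmacist, going from 2 to 3 servers cuts W_q by an order of magnitude, while each further server yields rapidly diminishing returns, matching the paper's observation about under-used counters.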
Model-based Adjustment of Droplet Characteristic for 3D Electronic Printing
Directory of Open Access Journals (Sweden)
Lin Na
2017-01-01
The major challenge in 3D electronic printing is print resolution and accuracy. In this paper, a typical model, the lumped element modeling (LEM) method, is adopted to simulate the droplet jetting characteristics. This modeling method quickly yields the droplet velocity and volume with high accuracy. Experimental results show that LEM has a simple structure with sufficient simulation and prediction accuracy.
Directory of Open Access Journals (Sweden)
Miguel Angel Luque-Fernandez
2016-10-01
Background: In population-based cancer research, piecewise exponential regression models are used to derive adjusted estimates of excess mortality due to cancer using the Poisson generalized linear modelling framework. However, the assumption that the conditional mean and variance of the rate parameter given the set of covariates x_i are equal is strong and may fail to account for overdispersion given the variability of the rate parameter (the variance exceeds the mean). Using an empirical example, we aimed to describe simple methods to test and correct for overdispersion. Methods: We used a regression-based score test for overdispersion under the relative survival framework and proposed different approaches to correct for overdispersion, including quasi-likelihood, robust standard error estimation, negative binomial regression and flexible piecewise modelling. Results: All piecewise exponential regression models showed the presence of significant inherent overdispersion (p-value < 0.001). However, the flexible piecewise exponential model showed the smallest overdispersion parameter (3.2 versus 21.3 for non-flexible piecewise exponential models). Conclusion: We showed that there were no major differences between methods. However, using flexible piecewise regression modelling, with either quasi-likelihood or robust standard errors, was the best approach, as it deals with both overdispersion due to model misspecification and true (inherent) overdispersion.
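The idea of a score test for overdispersion can be sketched in its simplest, intercept-only Poisson form; this is a generic illustration in Python, not the relative-survival implementation used in the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

def overdispersion_score(y):
    """Score test statistic for Poisson overdispersion (intercept-only model).
    Under H0 (variance = mean) the statistic is approximately N(0, 1);
    large positive values indicate overdispersion."""
    mu = y.mean()
    return np.sum((y - mu) ** 2 - y) / np.sqrt(2 * len(y) * mu ** 2)

# gamma-Poisson (negative binomial) counts: variance exceeds the mean
y_over = rng.poisson(rng.gamma(shape=2.0, scale=2.5, size=5000))
# pure Poisson counts with the same mean: equidispersed
y_pois = rng.poisson(5.0, size=5000)
```

The statistic contrasts the squared residuals with the Poisson variance implied by the mean, which is exactly the comparison that fails (variance exceeds mean) in the overdispersed case.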
Adjusting particle-size distributions to account for aggregation in tephra-deposit model forecasts
Mastin, Larry G.; Van Eaton, Alexa R.; Durant, Adam J.
2016-07-01
Volcanic ash transport and dispersion (VATD) models are used to forecast tephra deposition during volcanic eruptions. Model accuracy is limited by the fact that fine-ash aggregates (clumps into clusters), thus altering patterns of deposition. In most models this is accounted for by ad hoc changes to model input, representing fine ash as aggregates with density ρagg, and a log-normal size distribution with median μagg and standard deviation σagg. Optimal values may vary between eruptions. To test the variance, we used the Ash3d tephra model to simulate four deposits: 18 May 1980 Mount St. Helens; 16-17 September 1992 Crater Peak (Mount Spurr); 17 June 1996 Ruapehu; and 23 March 2009 Mount Redoubt. In 192 simulations, we systematically varied μagg and σagg, holding ρagg constant at 600 kg m-3. We evaluated the fit using three indices that compare modeled versus measured (1) mass load at sample locations; (2) mass load versus distance along the dispersal axis; and (3) isomass area. For all deposits, under these inputs, the best-fit value of μagg ranged narrowly between ˜ 2.3 and 2.7φ (0.20-0.15 mm), despite large variations in erupted mass (0.25-50 Tg), plume height (8.5-25 km), mass fraction of fine ( water content between these eruptions. This close agreement suggests that aggregation may be treated as a discrete process that is insensitive to eruptive style or magnitude. This result offers the potential for a simple, computationally efficient parameterization scheme for use in operational model forecasts. Further research may indicate whether this narrow range also reflects physical constraints on processes in the evolving cloud.
Adjustment and Characterization of an Original Model of Chronic Ischemic Heart Failure in Pig
Directory of Open Access Journals (Sweden)
Laurent Barandon
2010-01-01
We present and characterize an original experimental model of chronic ischemic heart failure in the pig. Two ameroid constrictors were placed around the LAD and the circumflex artery. Two months after surgery, the pigs presented poor LV function associated with severe mitral valve insufficiency. Echocardiographic analysis showed substantial anomalies in radial and circumferential deformations, on both the anterior and lateral surfaces of the heart. These anomalies in function were coupled with anomalies of perfusion observed in echocardiography after injection of contrast medium. No evidence of myocardial infarction was observed on histological analysis. Our findings suggest that we were able to create and stabilize a chronic ischemic heart failure model in the pig. This model represents a useful tool for the development of new medical or surgical treatments in this field.
Structured Additive Regression Models: An R Interface to BayesX
Directory of Open Access Journals (Sweden)
Nikolaus Umlauf
2015-02-01
Structured additive regression (STAR) models provide a flexible framework for modeling possible nonlinear effects of covariates: they contain the well-established frameworks of generalized linear models and generalized additive models as special cases but also allow a wider class of effects, e.g., for geographical or spatio-temporal data, allowing for the specification of complex and realistic models. BayesX is a standalone software package for fitting a general class of STAR models. Based on a comprehensive open-source regression toolbox written in C++, BayesX uses Bayesian inference for estimating STAR models based on Markov chain Monte Carlo simulation techniques, a mixed model representation of STAR models, or stepwise regression techniques combining penalized least squares estimation with model selection. BayesX not only covers models for responses from univariate exponential families, but also models from less-standard regression situations such as models for multi-categorical responses with either ordered or unordered categories, continuous-time survival data, or continuous-time multi-state models. This paper presents a new fully interactive R interface to BayesX: the R package R2BayesX. With the new package, STAR models can be conveniently specified using R's formula language (with some extended terms), fitted using the BayesX binary, represented in R with objects of suitable classes, and finally printed/summarized/plotted. This makes BayesX much more accessible to users familiar with R and adds extensive graphics capabilities for visualizing fitted STAR models. Furthermore, R2BayesX complements the already impressive capabilities for semiparametric regression in R by a comprehensive toolbox comprising, in particular, more complex response types and alternative inferential procedures such as simulation-based Bayesian inference.
Terpstra, Teun; Lindell, Michael K.
2013-01-01
Although research indicates that adoption of flood preparations among Europeans is low, only a few studies have attempted to explain citizens' preparedness behavior. This article applies the Protective Action Decision Model (PADM) to explain flood preparedness intentions in the Netherlands. Survey data (N = 1,115) showed that…
Adjustment of Homeless Adolescents to a Crisis Shelter: Application of a Stress and Coping Model.
Dalton, Melanie M.; Pakenham, Kenneth I.
2002-01-01
Examined the usefulness of a stress and coping model of adaptation to a homeless shelter among 78 homeless adolescents who were interviewed and completed measures at shelter entrance and discharge. After controlling for relevant background variables, measures of coping resources, appraisal, and coping strategies showed relations with measures of…
Glacial isostatic adjustment associated with the Barents Sea ice sheet: A modelling inter-comparison
Auriac, A.; Whitehouse, P. L.; Bentley, M. J.; Patton, H.; Lloyd, J. M.; Hubbard, A.
2016-09-01
The 3D geometrical evolution of the Barents Sea Ice Sheet (BSIS), particularly during its late-glacial retreat phase, remains largely ambiguous due to the paucity of direct marine- and terrestrial-based evidence constraining its horizontal and vertical extent and chronology. One way of validating the numerous BSIS reconstructions previously proposed is to collate and apply them under a wide range of Earth models and to compare prognostic (isostatic) output through time with known relative sea-level (RSL) data. Here we compare six contrasting BSIS load scenarios via a spherical Earth system model and derive a best-fit χ2 parameter using RSL data from the four main terrestrial regions within the domain: Svalbard, Franz Josef Land, Novaya Zemlya and northern Norway. Poor χ2 values allow two load scenarios to be dismissed, leaving four that agree well with RSL observations. The remaining four scenarios optimally fit the RSL data when combined with Earth models that have an upper mantle viscosity of 0.2-2 × 1021 Pa s, while there is less sensitivity to the lithosphere thickness (ranging from 71 to 120 km) and lower mantle viscosity (spanning 1-50 × 1021 Pa s). GPS observations are also compared with predictions of present-day uplift across the Barents Sea. Key locations where relative sea-level and GPS data would prove critical in constraining future ice-sheet modelling efforts are also identified.
An improved canopy wind model for predicting wind adjustment factors and wildland fire behavior
W. J. Massman; J. M. Forthofer; M. A. Finney
2017-01-01
The ability to rapidly estimate wind speed beneath a forest canopy or near the ground surface in any vegetation is critical to practical wildland fire behavior models. The common metric of this wind speed is the "mid-flame" wind speed, UMF. However, the existing approach for estimating UMF has some significant shortcomings. These include the assumptions that...
The linear quadratic adjustment cost model and the demand for labour
DEFF Research Database (Denmark)
Engsted, Tom; Haldrup, Niels
1994-01-01
A new method is developed for estimating and testing the linear quadratic adjustment cost model when the underlying time series are non-stationary, and the method is applied to modelling labour demand in Danish industrial sectors.
A guide to generalized additive models in crop science using SAS and R
Directory of Open Access Journals (Sweden)
Josefine Liew
2015-06-01
Linear models and generalized linear models are well known and are used extensively in crop science. Generalized additive models (GAMs) are less well known. GAMs extend generalized linear models through the inclusion of smoothing functions of explanatory variables, e.g., spline functions, allowing the curves to bend to better describe the observed data. This article provides an introduction to GAMs in the context of crop science experiments. This is exemplified using a dataset consisting of four populations of perennial sow-thistle (Sonchus arvensis L.), originating from two regions, for which the emergence of shoots over time was compared.
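The smooth terms that distinguish a GAM from a GLM can be illustrated with a bare-bones unpenalized regression spline (truncated power basis; real GAM software adds a roughness penalty and automatic smoothness selection, which this sketch omits):

```python
import numpy as np

def fit_spline(x, y, knots, degree=3):
    """Unpenalized least-squares regression spline (truncated power basis):
    a minimal stand-in for the smooth terms a GAM adds to a linear predictor."""
    X = np.vander(x, degree + 1, increasing=True)            # 1, x, ..., x^degree
    trunc = [np.clip(x - k, 0.0, None) ** degree for k in knots]
    B = np.column_stack([X] + trunc)                         # spline basis
    beta, *_ = np.linalg.lstsq(B, y, rcond=None)
    return B @ beta
```

Letting the curve bend at the knots is what allows an emergence-over-time curve to be tracked where a straight line cannot.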
Modelling and control of Base Plate Loading subsystem for The Motorized Adjustable Vertical Platform
Norsahperi, N. M. H.; Ahmad, S.; Fuad, A. F. M.; Mahmood, I. A.; Toha, S. F.; Akmeliawati, R.; Darsivan, F. J.
2017-03-01
Malaysia's National Space Agency, ANGKASA, is an organization that conducts extensive research, especially on space. In 2011, ANGKASA built the Satellite Assembly, Integration and Test Centre (AITC) for spacecraft development and testing. A satellite undergoes numerous tests there, one of which is the thermal test in a Thermal Vacuum Chamber (TVC). The TVC is located in a cleanroom and on a platform. The only available facility for loading and unloading the satellite is an overhead crane, whose use can jeopardize the safety of the satellite. Therefore, the Motorized Adjustable Vertical Platform (MAVeP), capable of transferring the satellite into the TVC while operating under cleanroom conditions and in limited space, is proposed to facilitate the test. The MAVeP combines several mechanisms to produce horizontal and vertical motions, with the ability to transfer the satellite from the loading bay into the TVC. The integration of both motions to elevate and transfer heavy loads with high precision can deliver major contributions in various industries such as aerospace and automotive. The base plate subsystem produces the horizontal motion by converting the angular motion of the motor to linear motion using a rack and pinion mechanism. Generally, a system can be modelled by physical modelling from a schematic diagram or through system identification techniques. Both techniques are time-consuming and require comprehensive understanding of the system, which may be error-prone, especially for complex mechanisms. Therefore, a 3D virtual modelling technique has been implemented to represent the system in a real-world environment (i.e., including gravity) to simulate control performance. The main purpose of this technique is to provide a better model for analysing system performance and evaluating the dynamic behaviour of the system, with visualization of the system performance; a 3D prototype was designed and assembled in Solidworks
Dryginin, N. V.; Krasnoveikin, V. A.; Filippov, A. V.; Tarasov, S. Yu.; Rubtsov, V. E.
2016-11-01
Additive manufacturing by 3D printing is the most advanced and promising route to multicomponent composites. Polymer-based carbon-fiber-reinforced composites combine high mechanical properties with low weight. This paper presents the results of 3D modeling and experimental modal analysis of a polymer composite framework produced by additive manufacturing. For three oscillation modes, the modeling results agree with the experimental modal analysis performed using laser Doppler vibrometry.
Institute of Scientific and Technical Information of China (English)
Huan-bin Liu; Liu-quan Sun; Li-xing Zhu
2005-01-01
Many survival studies record the times to two or more distinct failures on each subject. The failures may be events of different natures or repetitions of the same kind of event. In this article, we consider the regression analysis of such multivariate failure time data under the additive hazards model. Simple weighted estimating functions for the regression parameters are proposed, and the asymptotic distribution theory of the resulting estimators is derived. In addition, a class of generalized Wald and generalized score statistics for hypothesis testing and model selection is presented, and the asymptotic properties of these statistics are examined.
Discriminative accuracy of genomic profiling comparing multiplicative and additive risk models.
Moonesinghe, Ramal; Khoury, Muin J; Liu, Tiebin; Janssens, A Cecile J W
2011-02-01
Genetic prediction of common diseases is based on testing multiple genetic variants with weak effect sizes. Standard logistic regression and Cox proportional hazards models that assess the combined effect of multiple variants on disease risk assume multiplicative joint effects of the variants, but this assumption may not be correct. The risk model chosen may affect the predictive accuracy of genomic profiling. We investigated the discriminative accuracy of genomic profiling by comparing additive and multiplicative risk models. We examined genomic profiles of 40 variants with genotype frequencies varying from 0.1 to 0.4 and relative risks varying from 1.1 to 1.5 in separate scenarios assuming a disease risk of 10%. The discriminative accuracy was evaluated by the area under the receiver operating characteristic curve. Predicted risks were more extreme at the lower and higher ends for the multiplicative risk model compared with the additive model. The discriminative accuracy was consistently higher for multiplicative risk models than for additive risk models. The differences in discriminative accuracy were negligible when the effect sizes were small, but grew when risk genotypes were common or when they had stronger effects. Unraveling the exact mode of biological interaction is important when effect sizes of genetic variants are at least moderate, to prevent incorrect estimation of risks.
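The contrast between the two joint-effect assumptions can be sketched numerically. Under stated assumptions (40 independent carrier indicators, frequencies and relative risks drawn from the ranges quoted in the abstract, both risk scales calibrated to a 10% population mean), a simulation along these lines shows the multiplicative model producing the more extreme tail risks:

```python
import numpy as np

rng = np.random.default_rng(0)
n, m = 20_000, 40
freq = rng.uniform(0.1, 0.4, m)                 # risk-genotype frequencies
rr = rng.uniform(1.1, 1.5, m)                   # per-variant relative risks
G = (rng.random((n, m)) < freq).astype(float)   # who carries which risk genotype

# Multiplicative joint effects: relative risks multiply across carried variants.
# Additive joint effects: excess relative risks add up.
score_mult = np.prod(np.where(G == 1.0, rr, 1.0), axis=1)
score_add = 1.0 + G @ (rr - 1.0)

# Calibrate both so the mean population risk is ~10%, capping risks at 1.
risk_mult = np.minimum(0.10 * score_mult / score_mult.mean(), 1.0)
risk_add = np.minimum(0.10 * score_add / score_add.mean(), 1.0)

def auc(scores, y):
    """Rank-based AUC: P(a random case scores above a random control)."""
    order = scores.argsort()
    ranks = np.empty(len(scores))
    ranks[order] = np.arange(1, len(scores) + 1)
    n1 = y.sum()
    n0 = len(y) - n1
    return (ranks[y == 1].sum() - n1 * (n1 + 1) / 2) / (n1 * n0)
```

Drawing disease status from each risk vector and scoring with `auc` reproduces the qualitative finding: the multiplicative risks are more spread out at the tails, which is what drives the higher discriminative accuracy.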
Pankoke, S.; Buck, B.; Woelfel, H. P.
1998-08-01
Long-term whole-body vibration can cause degeneration of the lumbar spine. Existing degeneration therefore has to be assessed, as do industrial working places, to prevent further damage. Hence, the mechanical stress in the lumbar spine, especially in the three lower vertebrae, has to be known. This stress can be expressed as internal forces. These internal forces cannot be evaluated experimentally, because force transducers cannot be implanted in the force lines for ethical reasons. Thus it is necessary to calculate the internal forces with a dynamic mathematical model of sitting man. A two-dimensional dynamic finite element model of sitting man is presented which allows calculation of these unknown internal forces. The model is based on an anatomic representation of the lower lumbar spine (L3-L5). This lumbar spine model is incorporated into a dynamic model of the upper torso with neck, head and arms, as well as a model of the body caudal to the lumbar spine with pelvis and legs. Additionally, a simple dynamic representation of the viscera is used. All these parts are modelled as rigid bodies connected by linear stiffnesses. Energy dissipation is modelled by assigning modal damping ratios to the calculated undamped eigenvalues. Geometry and inertial properties of the model are determined according to human anatomy. Stiffnesses of the spine model are derived from static in-vitro experiments in references [1] and [2]. The remaining stiffness parameters and the parameters for energy dissipation are determined by parameter identification to fit the measurements in reference [3]. The model, which is available in three different postures, allows one to adjust its parameters for body height and body mass to the values of the person for whom internal forces have to be calculated.
Pokhilko, Alexandra; Flis, Anna; Sulpice, Ronan; Stitt, Mark; Ebenhöh, Oliver
2014-03-04
In the light, photosynthesis provides carbon for metabolism and growth. In the dark, plant growth depends on carbon reserves that were accumulated during previous light periods. Many plants accumulate part of their newly fixed carbon as starch in their leaves during the day and remobilise it to support metabolism and growth at night. The daily rhythms of starch accumulation and degradation are dynamically adjusted to the changing light conditions such that starch is almost, but not totally, exhausted at dawn. This requires the allocation of a larger proportion of the newly fixed carbon to starch under low carbon conditions, and the use of information about the carbon status at the end of the light period and the length of the night to pace the rate of starch degradation. This regulation occurs in a circadian clock-dependent manner, through unknown mechanisms. We use mathematical modelling to explore possible diurnal mechanisms regulating the starch level. Our model combines the main reactions of carbon fixation, starch and sucrose synthesis, starch degradation and consumption of carbon by sink tissues. To describe the dynamic adjustment of starch to daily conditions, we introduce diurnal regulators of carbon fluxes, which modulate the activities of the key steps of starch metabolism. The sensing of the diurnal conditions is mediated in our model by the timer α and the "dark sensor" β, which integrate daily information about the light conditions and time of day through the circadian clock. Our data identify the β subunit of the SnRK1 kinase as a good candidate for the role of the dark-accumulated component β of our model. This novel approach to understanding starch kinetics through diurnal metabolic and circadian sensors allowed us to explain starch time courses in plants and to predict the kinetics of the proposed diurnal regulators under various genetic and environmental perturbations.
Ammar, Sami; Pernaudat, Guillaume; Trépanier, Jean-Yves
2017-08-01
The interdependence of surface tension and density ratio is a weakness of pseudo-potential based lattice Boltzmann models (LB). In this paper, we propose a 3D multi-relaxation time (MRT) model for multiphase flows at large density ratios. The proposed model is capable of adjusting the surface tension independently of the density ratio. We also present the 3D macroscopic equations recovered by the proposed forcing scheme. A high order of isotropy for the interaction force is used to reduce the amplitude of spurious currents. The proposed 3D-MRT model is validated by verifying Laplace's law and by analyzing its thermodynamic consistency and the oscillation period of a deformed droplet. The model is then applied to the simulation of the impact of a droplet on a dry surface. Impact dynamics are determined and the maximum spread factor calculated for different Reynolds and Weber numbers. The numerical results are in agreement with data published in the literature. The influence of surface wettability on the spread factor is also investigated. Finally, our 3D-MRT model is applied to the simulation of the impact of a droplet on a wet surface. The propagation of transverse waves is observed on the liquid surface.
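Verifying Laplace's law, as done above, means checking that the measured pressure jump across a droplet interface scales as surface tension over radius; a minimal helper (in lattice units, purely illustrative):

```python
def surface_tension_from_laplace(delta_p: float, radius: float) -> float:
    """Laplace's law for a 3D spherical droplet: delta_p = 2*sigma/R."""
    return delta_p * radius / 2.0

# In a simulation one measures delta_p for several droplet radii; obtaining
# a (near-)constant sigma across radii indicates the model obeys Laplace's law.
```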
A data-driven model for constraint of present-day glacial isostatic adjustment in North America
Simon, K. M.; Riva, R. E. M.; Kleinherenbrink, M.; Tangdamrongsub, N.
2017-09-01
Geodetic measurements of vertical land motion and gravity change are incorporated into an a priori model of present-day glacial isostatic adjustment (GIA) in North America via least-squares adjustment. The result is an updated GIA model wherein the final predicted signal is informed by both observational data, and prior knowledge (or intuition) of GIA inferred from models. The data-driven method allows calculation of the uncertainties of predicted GIA fields, and thus offers a significant advantage over predictions from purely forward GIA models. In order to assess the influence each dataset has on the final GIA prediction, the vertical land motion and GRACE-measured gravity data are incorporated into the model first independently (i.e., one dataset only), then simultaneously. The relative weighting of the datasets and the prior input is iteratively determined by variance component estimation in order to achieve the most statistically appropriate fit to the data. The best-fit model is obtained when both datasets are inverted and gives respective RMS misfits to the GPS and GRACE data of 1.3 mm/yr and 0.8 mm/yr equivalent water layer change. Non-GIA signals (e.g., hydrology) are removed from the datasets prior to inversion. The post-fit residuals between the model predictions and the vertical motion and gravity datasets, however, suggest particular regions where significant non-GIA signals may still be present in the data, including unmodeled hydrological changes in the central Prairies west of Lake Winnipeg. Outside of these regions of misfit, the posterior uncertainty of the predicted model provides a measure of the formal uncertainty associated with the GIA process; results indicate that this quantity is sensitive to the uncertainty and spatial distribution of the input data as well as that of the prior model information. In the study area, the predicted uncertainty of the present-day GIA signal ranges from ∼0.2-1.2 mm/yr for rates of vertical land motion, and
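The core of the data-driven update is an inverse-variance-weighted least-squares combination of prior model values and observations; a scalar sketch (the function name and numbers are illustrative, not the paper's):

```python
def adjust(x_prior: float, var_prior: float, y_obs: float, var_obs: float):
    """Least-squares combination of a prior rate and an observed rate.

    Returns the posterior estimate and its (reduced) posterior variance.
    """
    w_p, w_o = 1.0 / var_prior, 1.0 / var_obs
    x_post = (w_p * x_prior + w_o * y_obs) / (w_p + w_o)
    return x_post, 1.0 / (w_p + w_o)

# A prior GIA uplift rate of 1.0 mm/yr (variance 1.0) combined with a GPS
# rate of 3.0 mm/yr (variance 1.0) yields 2.0 mm/yr with variance 0.5.
```

The posterior variance is always smaller than either input variance, which is the sense in which the data-driven model "offers a significant advantage" over a purely forward model: it comes with a formal uncertainty.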
Directory of Open Access Journals (Sweden)
Montri Srirajlao
2010-01-01
Problem statement: Para rubber is an economically important tree crop in Northeast Thailand, playing both an economic and a social role. The objectives of this research were to study: (1) the economic, social and cultural lifestyle, and (2) the appropriate adjustment model, of farmers growing Para rubber in Northeast Thailand. Approach: The research area covered six provinces: Mahasarakam, Roi-ed, Khon Kaen, Nongkai, Udontani and Loei. The samples were selected by purposive sampling and comprised 90 experts, 60 practitioners and 60 general people. The instruments used for collecting data were: (1) an interview form, (2) an observation form, (3) focus group discussion and (4) a workshop, validated by triangulation. Data were analyzed according to the specified objectives and presented as descriptive analysis. Results: In the traditional period, farmers in Northeast Thailand earned their living by producing for themselves and sharing resources with each other, through rice farming, vegetable growing and gathering natural food at no cost. In the period of change, when prices of traditional industrial crops fell, farmers began to grow Para rubber instead, following promotion by the governmental industrial sector. Regarding economic, social and cultural changes, farmers with Para rubber plantations earned more revenue, but the market price and sales mechanism remained tied to the political situation. Regarding the adjustment pattern of Para rubber farmers in Northeast Thailand, farmers adjusted at the individual level through self-directed study, applying knowledge learned from the experience of successful growers while employed as rubber tappers in Southern Thailand, together with academic support and marketing that served farmers' needs. Conclusion/Recommendations: Para Rubber
Energy Technology Data Exchange (ETDEWEB)
Doligez, B.; Eschard, R. [Institut Francais du Petrole, Rueil Malmaison (France); Geffroy, F. [Centre de Geostatistique, Fontainebleau (France)] [and others
1997-08-01
The classical approach to constructing reservoir models is to start with a fine-scale geological model informed with petrophysical properties. Scaling-up techniques then yield a reservoir model compatible with fluid flow simulators. Geostatistical modelling techniques are widely used to build the geological models before scaling-up. These methods provide equiprobable images of the area under investigation which honor the well data and whose variability equals the variability computed from the data. At an appraisal phase, when few data are available, or when the wells are insufficient to describe all the heterogeneities and the behavior of the field, additional constraints are needed to obtain a more realistic geological model. For example, seismic data or stratigraphic models can provide average reservoir information with excellent areal coverage but poor vertical resolution. New advances in modelling techniques now allow this type of additional external information to be integrated in order to constrain the simulations. In particular, 2D or 3D seismic-derived information grids, or sand-shale ratio maps coming from stratigraphic models, can be used as external drifts to compute the geological image of the reservoir at the fine scale. Examples are presented to illustrate the use of these new tools, their impact on the final reservoir model, and their sensitivity to some key parameters.
Directory of Open Access Journals (Sweden)
Talerngsak Angkuraseranee
2010-05-01
The additive and dominance genetic variances of 5,801 Duroc reproductive and growth records were estimated using BLUPF90 PC-PACK. Estimates were obtained for number born alive (NBA), birth weight (BW), number weaned (NW), and weaning weight (WW). Data were analyzed using two mixed model equations. The first model included fixed effects and random effects identifying inbreeding depression, additive gene effects and permanent environment effects. The second model was similar to the first, but included the dominance genotypic effect. Heritability estimates of NBA, BW, NW and WW from the two models were 0.1558/0.1716, 0.1616/0.1737, 0.0372/0.0874 and 0.1584/0.1516, respectively. Proportions of dominance effect to total phenotypic variance from the dominance model were 0.1024, 0.1625, 0.0470, and 0.1536 for NBA, BW, NW and WW, respectively. Dominance effects were found to have a sizable influence on the litter size traits analyzed. Therefore, genetic evaluation with the dominance model (Model 2) is more appropriate than with the animal model (Model 1).
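The reported heritabilities and dominance proportions are ratios of estimated variance components to the total phenotypic variance; a minimal sketch with hypothetical component values (not the study's estimates):

```python
def variance_ratios(v_additive: float, v_dominance: float, v_residual: float):
    """Return (narrow-sense heritability h2, dominance proportion d2).

    Both are fractions of the total phenotypic variance.
    """
    v_pheno = v_additive + v_dominance + v_residual
    return v_additive / v_pheno, v_dominance / v_pheno

# e.g. additive 2.0, dominance 1.0, residual 7.0 -> h2 = 0.2, d2 = 0.1
```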
HR Department
2008-01-01
In accordance with decisions taken by the Finance Committee and Council in December 2007, salaries are adjusted with effect from 1 January 2008. Scale of basic salaries and scale of stipends paid to fellows (Annex R A 5 and R A 6 respectively): increased by 0.71% with effect from 1 January 2008. As a result of the stability of the Geneva consumer price index, the following elements do not increase: a)\tFamily Allowance, Child Allowance and Infant Allowance (Annex R A 3); b)\tReimbursement of education fees: maximum amounts of reimbursement (Annex R A 4.01) for the academic year 2007/2008. Related adjustments will be applied, wherever applicable, to Paid Associates and Students. As in the past, the actual percentage increase of each salary position may vary, due to the application of a constant step value and rounding effects. Human Resources Department Tel. 73566
New results in RR Lyrae modeling: convective cycles, additional modes and more
Molnár, L; Szabó, R; Plachy, E
2012-01-01
Recent theoretical and observational findings have breathed new life into the field of RR Lyrae stars. Ever more precise and complete measurements from the space asteroseismology missions revealed new details, such as period doubling and the presence of additional modes in these stars. Theoretical work also flourished: period doubling was explained, and an additional mode has been detected in hydrodynamic models as well. Although the most intriguing mystery, the Blazhko effect, remains unsolved, new findings indicate that the convective cycle model can effectively be ruled out for short- and medium-period modulations. On the other hand, the plausibility of the radial resonance model is increasing, as more and more resonances are detected both in models and in stars.
Coordinate Descent Methods for the Penalized Semiparametric Additive Hazards Model
Directory of Open Access Journals (Sweden)
Anders Gorst-Rasmussen
2012-04-01
For survival data with a large number of explanatory variables, lasso penalized Cox regression is a popular regularization strategy. However, a penalized Cox model may not always provide the best fit to data and can be difficult to estimate in high dimension because of its intrinsic nonlinearity. The semiparametric additive hazards model is a flexible alternative which is a natural survival analogue of the standard linear regression model. Building on this analogy, we develop a cyclic coordinate descent algorithm for fitting the lasso and elastic net penalized additive hazards model. The algorithm requires no nonlinear optimization steps and offers excellent performance and stability. An implementation is available in the R package ahaz. We demonstrate this implementation in a small timing study and in an application to real data.
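The reason no nonlinear optimization step is needed is that each coordinate update of a lasso-penalized quadratic objective has a closed-form soft-thresholding solution. A generic sketch of that machinery on the analogous linear-regression lasso (not the `ahaz` implementation itself):

```python
import numpy as np

def soft_threshold(z: float, lam: float) -> float:
    """Closed-form minimizer of (1/2)(b - z)^2 + lam*|b| (up to scaling)."""
    return np.sign(z) * max(abs(z) - lam, 0.0)

def lasso_cd(X, y, lam, n_sweeps=200):
    """Cyclic coordinate descent for (1/2n)*||y - X b||^2 + lam*||b||_1."""
    n, p = X.shape
    b = np.zeros(p)
    col_scale = (X ** 2).sum(axis=0) / n
    resid = y - X @ b
    for _ in range(n_sweeps):
        for j in range(p):
            resid += X[:, j] * b[j]        # drop coordinate j from the fit
            rho = X[:, j] @ resid / n      # partial covariance with residual
            b[j] = soft_threshold(rho, lam) / col_scale[j]
            resid -= X[:, j] * b[j]        # restore with the updated value
    return b
```

For the additive hazards model the quadratic form is built from at-risk integrals rather than `X.T @ X`, but the coordinate updates keep exactly this shape, which is what makes the algorithm fast and stable.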
Nonparametric Independence Screening in Sparse Ultra-High Dimensional Additive Models
Fan, Jianqing; Song, Rui
2011-01-01
A variable screening procedure via correlation learning was proposed by Fan and Lv (2008) to reduce dimensionality in sparse ultra-high dimensional models. Even when the true model is linear, the marginal regression can be highly nonlinear. To address this issue, we further extend correlation learning to marginal nonparametric learning. Our nonparametric independence screening, called NIS, is a specific member of the sure independence screening family. Several closely related variable screening procedures are proposed. Under nonparametric additive models, it is shown that under some mild technical conditions the proposed independence screening methods enjoy a sure screening property. The extent to which the dimensionality can be reduced by independence screening is also explicitly quantified. As a methodological extension, an iterative nonparametric independence screening (INIS) is also proposed to enhance the finite sample performance for fitting sparse additive models. The simulation results and a real data a...
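Marginal nonparametric screening can be sketched as scoring each covariate by the fit of a flexible marginal regression and keeping the best-ranked ones. In this illustrative sketch a polynomial basis stands in for the spline smoothers used in the NIS paper, and the function name is made up:

```python
import numpy as np

def nis_screen(X, y, degree=3, keep=5):
    """Rank covariates by marginal polynomial fit; return the top `keep` indices."""
    scores = []
    for j in range(X.shape[1]):
        B = np.vander(X[:, j], degree + 1)          # columns x^3, x^2, x, 1
        coef, *_ = np.linalg.lstsq(B, y, rcond=None)
        resid = y - B @ coef
        scores.append(1.0 - resid.var() / y.var())  # marginal R^2
    return np.argsort(scores)[::-1][:keep]
```

This illustrates the abstract's point about nonlinearity: a covariate entering through x^3 or x^2 has near-zero linear correlation with y, yet the nonparametric marginal fit still ranks it at the top.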
A Generic Model for Relative Adjustment Between Optical Sensors Using Rigorous Orbit Mechanics
Directory of Open Access Journals (Sweden)
B. Islam
2008-06-01
The classical calibration or space resection is a fundamental task in photogrammetry. Insufficient knowledge of the interior and exterior orientation parameters leads to unreliable results in the photogrammetric process. One of the earliest approaches used in photogrammetry was the plumb-line calibration method. This method is suitable for recovering the radial and decentering lens distortion coefficients, while the remaining interior (focal length and principal point coordinates) and exterior orientation parameters have to be determined by a complementary method. As the lens distortion is very small, it is not considered among the interior orientation parameters in the present rigorous sensor model. Several other available methods based on the photogrammetric collinearity equations consider the determination of the exterior orientation parameters, with no mention of the simultaneous determination of the interior orientation parameters. Normal space resection methods solve the problem using control points whose coordinates are known in both the image and object reference systems. The non-linearity of the model, the problems of point location in digital images, and the difficulty of identifying enough GPS-measured control points are the main drawbacks of the classical approaches. This paper presents a mathematical model based on the fundamental assumption of collinearity of three points of two along-track stereo imagery sensors and an independent object point. Under this condition it is possible to extract the exterior orientation (EO) parameters for a long strip and a single image together, both with and without control points. After extracting the EO parameters, the accuracy of the satellite data products is compared between the single-control-point and no-control-point cases.
Models of Quality-Adjusted Life Years when Health varies over Time: Survey and Analysis
DEFF Research Database (Denmark)
Hansen, Kristian Schultz; Østerdal, Lars Peter
2006-01-01
time trade-off (TTO) and standard gamble (SG) scores. We investigate deterministic and probabilistic models and consider five different families of discounting functions in all. This discussion includes questioning the SG method as the gold standard of the health state index, re-examining the role...... of the constant-proportional trade-off condition, revisiting the problem of double discounting of QALYs, and suggesting that it is not a matter of choosing between TTO and SG procedures as the combination of these two can be used to disentangle risk aversion from discounting. We find that caution must be taken...
Models of quality-adjusted life years when health varies over time
DEFF Research Database (Denmark)
Hansen, Kristian Schultz; Østerdal, Lars Peter Raahave
2006-01-01
time tradeoff (TTO) and standard gamble (SG) scores. We investigate deterministic and probabilistic models and consider five different families of discounting functions in all. The second part of the paper discusses four issues recurrently debated in the literature. This discussion includes questioning...... the SG method as the gold standard for estimation of the health state index, reexamining the role of the constantproportional tradeoff condition, revisiting the problem of double discounting of QALYs, and suggesting that it is not a matter of choosing between TTO and SG procedures as the combination...
Hambidge, K Michael; Miller, Leland V.; Westcott, Jamie E.; Krebs, Nancy F
2008-01-01
The quantity of total dietary zinc (Zn) and phytate are the principal determinants of the quantity of absorbed Zn. Recent estimates of Dietary Reference Intakes (DRI) for Zn by the Institute of Medicine (IOM) were based on data from low-phytate or phytate-free diets. The objective of this project was to estimate the effects of increasing quantities of dietary phytate on these DRI. We used a trivariate model of the quantity of Zn absorbed as a function of dietary Zn and phytate with updated pa...
The additive hazards model with high-dimensional regressors
DEFF Research Database (Denmark)
Martinussen, Torben
2009-01-01
This paper considers estimation and prediction in the Aalen additive hazards model in the case where the covariate vector is high-dimensional such as gene expression measurements. Some form of dimension reduction of the covariate space is needed to obtain useful statistical analyses. We study the...... model. A standard PLS algorithm can also be constructed, but it turns out that the resulting predictor can only be related to the original covariates via time-dependent coefficients...
Devising a model brand loyalty in tires industry: the adjustment role of customer perceived value
Directory of Open Access Journals (Sweden)
Davoud Feiz
2015-06-01
Today, brands receive close attention from companies and market agents. Factors such as customers' brand loyalty affect the brand and increase sales and profit. The present paper studies the impact of brand experience, trust and satisfaction on loyalty to the Barez Tire brand in the city of Kerman, and provides a model for this case. The research population consists of all Barez Tire consumers in Kerman. The sample size was 171, selected by simple random sampling. The data collection tool was a standard questionnaire, whose reliability was measured with Cronbach's alpha. The research is applied in purpose, and descriptive and correlational in its method of acquiring the needed data. To analyze the data, confirmatory factor analysis (CFA) and structural equation modeling (SEM) in SPSS and LISREL were used. The findings indicate that brand experience, brand trust, and brand satisfaction significantly affect loyalty to the Barez Tire brand in Kerman. Notably, the impact of these factors is higher when the moderating role of perceived value is considered.
Zhang, Xiaolong; Li, Liang; Pan, Deng; Cao, Chengmao; Song, Jian
2014-03-01
Current research on real-time observation of the vehicle roll steer angle and compliance steer angle (together referred to in this paper as the additional steer angle) mainly employs a linear vehicle dynamic model in which only the lateral acceleration of the vehicle body is considered. The observation accuracy of this method cannot meet the requirements of real-time vehicle stability control, especially under extreme driving conditions. This paper explores a solution based on experimental methods. First, a multi-body dynamic model of a passenger car is built with the ADAMS/Car software, and its dynamic accuracy is verified against the same vehicle's roadway test data from a steady static circular test. On this simulation platform, several factors influencing the additional steer angle under different driving conditions are quantitatively analyzed. The ε-SVR algorithm is then employed to build the additional steer angle prediction model, whose input vectors mainly comprise the sensor information of a standard electronic stability control (ESC) system. Typical slalom tests and FMVSS 126 tests are used to run simulations, train the model, and test its generalization performance. The test results show that the influence of lateral acceleration on the additional steer angle is maximal (magnitude up to 1°), followed by longitudinal acceleration-deceleration and road wave amplitude (magnitude up to 0.3°). Moreover, both the prediction accuracy and the real-time computation of the model can meet the control requirements of ESC. This research expands the methods for accurate observation of the additional steer angle under extreme driving conditions.
Changing dynamics: Time-varying autoregressive models using generalized additive modeling.
Bringmann, Laura F; Hamaker, Ellen L; Vigo, Daniel E; Aubert, André; Borsboom, Denny; Tuerlinckx, Francis
2017-09-01
In psychology, the use of intensive longitudinal data has steeply increased during the past decade. As a result, studying temporal dependencies in such data with autoregressive modeling is becoming common practice. However, standard autoregressive models are often suboptimal as they assume that parameters are time-invariant. This is problematic if changing dynamics (e.g., changes in the temporal dependency of a process) govern the time series. Often a change in the process, such as emotional well-being during therapy, is the very reason why it is interesting and important to study psychological dynamics. As a result, there is a need for an easily applicable method for studying such nonstationary processes that result from changing dynamics. In this article we present such a tool: the semiparametric TV-AR model. We show with a simulation study and an empirical application that the TV-AR model can approximate nonstationary processes well if there are at least 100 time points available and no unknown abrupt changes in the data. Notably, no prior knowledge of the processes that drive change in the dynamic structure is necessary. We conclude that the TV-AR model has significant potential for studying changing dynamics in psychology. (PsycINFO Database Record (c) 2017 APA, all rights reserved).
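The semiparametric TV-AR idea, an AR(1) whose coefficient varies smoothly over time, can be imitated with a polynomial time basis standing in for the regression splines that GAM software uses. This is an illustrative sketch under that simplification, not the authors' implementation:

```python
import numpy as np

def fit_tv_ar1(y, degree=3):
    """Fit y[t] = a(t)*y[t-1] + e[t], with a(t) a polynomial in rescaled time.

    Returns a callable a_hat(s) for s in [0, 1] (start to end of the series).
    """
    t = np.linspace(0.0, 1.0, len(y) - 1)
    X = np.column_stack([t ** k * y[:-1] for k in range(degree + 1)])
    coef, *_ = np.linalg.lstsq(X, y[1:], rcond=None)
    return lambda s: sum(c * s ** k for k, c in enumerate(coef))

# Simulate a process whose temporal dependency strengthens over time,
# e.g. emotional inertia changing during therapy.
rng = np.random.default_rng(0)
n = 5000
y = np.zeros(n)
for t in range(1, n):
    a_t = 0.2 + 0.5 * (t - 1) / (n - 2)   # true a(t) rises from 0.2 to 0.7
    y[t] = a_t * y[t - 1] + 0.1 * rng.standard_normal()

a_hat = fit_tv_ar1(y)
```

A standard AR(1) fit to this series would return a single compromise coefficient near the middle of the range, hiding exactly the change that is of interest.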
Vector generalized linear and additive models with an implementation in R
Yee, Thomas W
2015-01-01
This book presents a statistical framework that expands generalized linear models (GLMs) for regression modelling. The framework shared in this book allows analyses based on many semi-traditional applied statistics models to be performed as a coherent whole. This is possible through the approximately half-a-dozen major classes of statistical models included in the book and the software infrastructure component, which makes the models easily operable. The book’s methodology and accompanying software (the extensive VGAM R package) are directed at these limitations, and this is the first time the methodology and software are covered comprehensively in one volume. Since their advent in 1972, GLMs have unified important distributions under a single umbrella with enormous implications. The demands of practical data analysis, however, require a flexibility that GLMs do not have. Data-driven GLMs, in the form of generalized additive models (GAMs), are also largely confined to the exponential family. This book ...
Ward, Sophie L.; Neill, Simon P.; Scourse, James D.; Bradley, Sarah L.; Uehara, Katsuto
2016-11-01
The spatial and temporal distribution of relative sea-level change over the northwest European shelf seas has varied considerably since the Last Glacial Maximum, due to eustatic sea-level rise and a complex isostatic response to deglaciation of both near- and far-field ice sheets. Because of the complex pattern of relative sea level changes, the region is an ideal focus for modelling the impact of significant sea-level change on shelf sea tidal dynamics. Changes in tidal dynamics influence tidal range, the location of tidal mixing fronts, dissipation of tidal energy, shelf sea biogeochemistry and sediment transport pathways. Significant advancements in glacial isostatic adjustment (GIA) modelling of the region have been made in recent years, and earlier palaeotidal models of the northwest European shelf seas were developed using output from less well-constrained GIA models as input to generate palaeobathymetric grids. We use the most up-to-date and well-constrained GIA model for the region as palaeotopographic input for a new high resolution, three-dimensional tidal model (ROMS) of the northwest European shelf seas. With focus on model output for 1 ka time slices from the Last Glacial Maximum (taken as being 21 ka BP) to present day, we demonstrate that spatial and temporal changes in simulated tidal dynamics are very sensitive to relative sea-level distribution. The new high resolution palaeotidal model is considered a significant improvement on previous depth-averaged palaeotidal models, in particular where the outputs are to be used in sediment transport studies, where consideration of the near-bed stress is critical, and for constraining sea level index points.
An ice flow modeling perspective on bedrock adjustment patterns of the Greenland ice sheet
Directory of Open Access Journals (Sweden)
M. Olaizola
2012-11-01
Since the launch in 2002 of the Gravity Recovery and Climate Experiment (GRACE) satellites, several estimates of the mass balance of the Greenland ice sheet (GrIS) have been produced. To obtain ice mass changes, the GRACE data need to be corrected for the effect of deformation changes of the Earth's crust. Recently, a new method has been proposed in which ice mass changes and bedrock changes are solved simultaneously. Results show bedrock subsidence over almost the entirety of Greenland in combination with an ice mass loss that is only half of the currently standing estimates. This subsidence can be an elastic response, but it may also be a delayed response to past changes. In this study we test whether these subsidence patterns are consistent with ice-dynamical modelling results. We use a 3-D ice sheet-bedrock model with a surface mass balance forcing based on a mass balance gradient approach to study the pattern and magnitude of bedrock changes in Greenland. Different mass balance forcings are used. Simulations since the Last Glacial Maximum yield a bedrock delay with respect to the mass balance forcing of nearly 3000 yr and an average present-day uplift of 0.3 mm yr^{−1}. The spatial pattern of bedrock changes shows a small central subsidence as well as more intense uplift in the south. These results are not compatible with the gravity-based reconstructions, which show a subsidence with a maximum in central Greenland, thereby questioning whether the claim of a halving of the ice mass change is justified.
Adjustment of mathematical models and quality of soybean grains in the drying with high temperatures
Directory of Open Access Journals (Sweden)
Paulo C. Coradi
2016-04-01
The aim of this study was to evaluate the influence of the initial moisture content of soybeans and of the drying air temperature on drying kinetics and grain quality, and to find the mathematical model that best fits the experimental drying data, the effective diffusivity and the isosteric heat of desorption. The experimental design was completely randomized (CRD), in a factorial scheme (4 x 2): four drying temperatures (75, 90, 105 and 120 °C) and two initial moisture contents (25 and 19% d.b.), with three replicates. The initial moisture content of the product interferes with the drying time. The model of Wang and Singh proved to be the most suitable for describing the drying of soybeans over the drying air temperature range of 75, 90, 105 and 120 °C and initial moisture contents of 19 and 25% (d.b.). The effective diffusivity obtained from the drying of soybeans was highest (2.5 × 10^{-11} m^2 s^{-1}) at a temperature of 120 °C and a water content of 25% (d.b.). Drying of soybeans at higher temperatures (above 105 °C) and higher initial water content (25% d.b.) also increases the amount of energy (3894.57 kJ kg^{-1}), i.e., the isosteric heat of desorption, necessary to perform the process. Drying air temperature and different initial moisture contents affected the quality of the soybeans over the drying time (electrical conductivity of 540.35 µS cm^{-1} g^{-1}); however, they did not affect the final yield of the oil extracted from the soybean grains (15.69%).
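The Wang and Singh thin-layer drying model named in the abstract takes the form MR = 1 + a·t + b·t², where MR is the dimensionless moisture ratio. A minimal sketch, with illustrative coefficients rather than the paper's fitted values:

```python
def wang_singh_mr(t, a, b):
    """Wang and Singh model: moisture ratio MR = 1 + a*t + b*t**2 at drying time t."""
    return 1.0 + a * t + b * t * t

def moisture_content(t, a, b, m0, me):
    """Dry-basis moisture content recovered from MR = (M - Me) / (M0 - Me)."""
    return me + wang_singh_mr(t, a, b) * (m0 - me)

# Illustrative coefficients only: MR starts at 1 and decays as drying proceeds.
a, b = -0.012, 3.5e-5
mr_start = wang_singh_mr(0.0, a, b)
mr_later = wang_singh_mr(60.0, a, b)
```

By construction MR equals 1 at t = 0, so the model always reproduces the initial moisture content exactly; a and b carry the temperature dependence.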
Fast cloud adjustment to increasing CO2 in a superparameterized climate model
Wyant, Matthew C.; Bretherton, Christopher S.; Blossey, Peter N.; Khairoutdinov, Marat
2012-05-01
Two-year simulation experiments with a superparameterized climate model, SP-CAM, are performed to understand the fast tropical (30S-30N) cloud response to an instantaneous quadrupling of CO2 concentration with SST held fixed at present-day values. The greenhouse effect of the CO2 perturbation quickly warms the tropical land surfaces by an average of 0.5 K. This shifts rising motion, surface precipitation, and cloud cover at all levels from the ocean to the land, with only small net tropical-mean cloud changes. There is a widespread average reduction of about 80 m in the depth of the trade inversion capping the marine boundary layer (MBL) over the cooler subtropical oceans. One apparent contributing factor is CO2-enhanced downwelling longwave radiation, which reduces boundary-layer radiative cooling, a primary driver of turbulent entrainment through the trade inversion. A second contributor is a slight CO2-induced heating of the free troposphere above the MBL, which strengthens the trade inversion and also inhibits entrainment. There is a corresponding downward displacement of MBL clouds with a very slight decrease in mean cloud cover and albedo. Two-dimensional cloud-resolving model (CRM) simulations of this MBL response are run to steady state using composite SP-CAM simulated thermodynamic and wind profiles from a representative cool subtropical ocean regime, for the control and 4xCO2 cases. Simulations with a CRM grid resolution equal to that of SP-CAM are compared with much finer resolution simulations. The coarse-resolution simulations maintain a cloud fraction and albedo comparable to SP-CAM, but the fine-resolution simulations have a much smaller cloud fraction. Nevertheless, both CRM configurations simulate a reduction in inversion height comparable to SP-CAM. The changes in low cloud cover and albedo in the CRM simulations are small, but both simulations predict a slight reduction in low cloud albedo as in SP-CAM.
Deliyianni, Eleni; Gagatsis, Athanasios; Elia, Iliada; Panaoura, Areti
2016-01-01
The aim of this study was to propose and validate a structural model in fraction and decimal number addition, which is founded primarily on a synthesis of major theoretical approaches in the field of representations in Mathematics and also on previous research on the learning of fractions and decimals. The study was conducted among 1,701 primary…
Modeling the use of sulfate additives for potassium chloride destruction in biomass combustion
DEFF Research Database (Denmark)
Wu, Hao; Grell, Morten Nedergaard; Jespersen, Jacob Boll;
2013-01-01
was affected by the decomposition temperature. Based on the experimental data, a model was proposed to simulate the sulfation of KCl by the different sulfate additives, and the simulation results were compared with pilot-scale experiments conducted in a bubbling fluidized bed reactor. The simulation results...
Parametrically Guided Generalized Additive Models with Application to Mergers and Acquisitions Data.
Fan, Jianqing; Maity, Arnab; Wang, Yihui; Wu, Yichao
2013-01-01
Generalized nonparametric additive models present a flexible way to evaluate the effects of several covariates on a general outcome of interest via a link function. In this modeling framework, one assumes that the effect of each of the covariates is nonparametric and additive. However, in practice, often there is prior information available about the shape of the regression functions, possibly from pilot studies or exploratory analysis. In this paper, we consider such situations and propose an estimation procedure where the prior information is used as a parametric guide to fit the additive model. Specifically, we first posit a parametric family for each of the regression functions using the prior information (parametric guides). After removing these parametric trends, we then estimate the remainder of the nonparametric functions using a nonparametric generalized additive model, and form the final estimates by adding back the parametric trend. We investigate the asymptotic properties of the estimates and show that when a good guide is chosen, the asymptotic bias of the estimates can be reduced significantly while keeping the asymptotic variance the same as that of the unguided estimator. We observe the performance of our method via a simulation study and demonstrate our method by applying it to a real data set on mergers and acquisitions.
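The guide-then-smooth idea can be illustrated with a toy example. This sketch uses a linear guide and a crude moving-average smoother standing in for the paper's generalized additive model fit; the data and all parameters are synthetic:

```python
import math

def fit_linear_guide(xs, ys):
    """Least-squares slope and intercept, used as the parametric guide."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return slope, my - slope * mx

def moving_average(values, k):
    """Crude nonparametric smoother standing in for the additive-model fit."""
    out = []
    for i in range(len(values)):
        lo, hi = max(0, i - k), min(len(values), i + k + 1)
        out.append(sum(values[lo:hi]) / (hi - lo))
    return out

xs = [i / 20.0 for i in range(21)]                             # sorted design points
ys = [2.0 * x + 0.3 * math.sin(2 * math.pi * x) for x in xs]   # trend plus wiggle

# Step 1: fit the parametric guide.  Step 2: smooth what the guide missed.
# Step 3: add the guide back to form the final estimate.
slope, intercept = fit_linear_guide(xs, ys)
residuals = [y - (slope * x + intercept) for x, y in zip(xs, ys)]
smoothed = moving_average(residuals, 2)
guided_fit = [slope * x + intercept + s for x, s in zip(xs, smoothed)]

mse_guide_only = sum(r * r for r in residuals) / len(residuals)
mse_guided = sum((y - g) ** 2 for y, g in zip(ys, guided_fit)) / len(ys)
```

The nonparametric correction recovers structure the parametric guide cannot express, which is why the combined fit has a smaller error than the guide alone.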
Midrapidity inclusive densities in high energy pp collisions in additive quark model
Shabelski, Yu. M.; Shuvaev, A. G.
2016-08-01
High energy (CERN SPS and LHC) inelastic pp (p̄p) scattering is treated in the framework of the additive quark model together with Pomeron exchange theory. We extract the midrapidity inclusive density of the charged secondaries produced in a single quark-quark collision and investigate its energy dependence. Predictions for πp collisions are presented.
Additional interfacial force in lattice Boltzmann models for incompressible multiphase flows
Li, Q; Gao, Y J
2011-01-01
The existing lattice Boltzmann models for incompressible multiphase flows are mostly constructed with two distribution functions, one is the order parameter distribution function, which is used to track the interface between different phases, and the other is the pressure distribution function for solving the velocity field. In this brief report, it is shown that in these models the recovered momentum equation is inconsistent with the target one: an additional interfacial force is included in the recovered momentum equation. The effects of the additional force are investigated by numerical simulations of droplet splashing on a thin liquid film and falling droplet under gravity. In the former test, it is found that the formation and evolution of secondary droplets are greatly affected, while in the latter the additional force is found to increase the falling velocity and limit the stretch of the droplet.
Montandon, L. M.; Small, E.
2008-12-01
The green vegetation fraction (Fg) is an important climate and hydrologic model parameter. The commonly-used Fg model is a simple linear mixing of two NDVI end-members: bare soil NDVI (NDVIo) and full vegetation NDVI (NDVI∞). NDVI∞ is generally set as a percentile of the historical maximum NDVI for each land cover. This approach works well for areas where Fg reaches full cover (100%). Because many biomes never reach bare soil (Fg = 0), however, NDVIo is often set to a single invariant value for all land cover types. In general, it is selected among the lowest NDVI values observed over bare or desert areas, yielding an NDVIo close to zero. There are two issues with this approach: the large-scale variability of soil NDVI is ignored, and observations on a wide range of soils show that soil NDVI is often larger. Here we introduce and test two new approaches to computing Fg that take into account the spatial variability of soil NDVI. The first approach uses a global soil NDVI database and time series of MODIS NDVI data over the conterminous United States to constrain the possible soil NDVI values over each pixel. Fg is computed using the subset of the soils database that respects the linear mixing model condition NDVIo ≤ NDVIh, where NDVIh is the pixel's historical minimum. The second approach uses an empirical soil NDVI model that combines information on soil organic matter content and texture to infer soil NDVI. The U.S. General Soil Map (STATSGO2) database is used as input for spatial soil properties. Using in situ measurements of soil NDVI from sites that span a range of land cover types, we test both models and compare their performance to the standard Fg model. We show that our models adjust the temporal Fg estimates by 40-90% depending on the land cover type and the amplitude of the seasonal NDVI signal. Using MODIS NDVI and soil maps over the conterminous U.S., we also study the spatial distribution of Fg adjustments in February and June 2008. We show that the standard Fg method
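The two-end-member linear mixing model described in the abstract can be sketched directly; the NDVI values below are illustrative only:

```python
def green_vegetation_fraction(ndvi, ndvi_soil, ndvi_full):
    """Two-end-member linear mixing: Fg = (NDVI - NDVIo) / (NDVIinf - NDVIo),
    clipped to the physically meaningful range [0, 1]."""
    fg = (ndvi - ndvi_soil) / (ndvi_full - ndvi_soil)
    return min(1.0, max(0.0, fg))

# A larger assumed soil NDVI lowers the estimated green fraction for the same pixel.
fg_low_soil = green_vegetation_fraction(0.5, 0.05, 0.86)   # near-zero soil NDVI
fg_high_soil = green_vegetation_fraction(0.5, 0.20, 0.86)  # larger measured soil NDVI
```

The comparison of the two calls shows why treating NDVIo as a single invariant value near zero biases Fg upward wherever real soils have larger NDVI.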
Modeling Longitudinal Data with Generalized Additive Models: Applications to Single-Case Designs
Sullivan, Kristynn J.; Shadish, William R.
2013-01-01
Single case designs (SCDs) are short time series that assess intervention effects by measuring units repeatedly over time both in the presence and absence of treatment. For a variety of reasons, interest in the statistical analysis and meta-analysis of these designs has been growing in recent years. This paper proposes modeling SCD data with…
Song, Hyun Jin; Lee, Jun Ah; Han, Euna; Lee, Eui-Kyung
2015-09-01
The mortality and progression rates in osteosarcoma differ depending on the presence of metastasis. A decision model would be useful for estimating the long-term effectiveness of treatment from limited clinical trial data. The aim of this study was to explore the lifetime effectiveness of adding mifamurtide to chemotherapy for patients with metastatic and nonmetastatic osteosarcoma. The target population was osteosarcoma patients with or without metastasis. A Markov process model was used, with a lifetime horizon and a starting age of 13 years. There were five health states: disease-free (DF), recurrence, post-recurrence disease-free, post-recurrence disease progression, and death. Transition probabilities from the starting state, DF, were calculated from the INT-0133 clinical trials for chemotherapy with and without mifamurtide. Quality-adjusted life-years (QALY) increased upon addition of mifamurtide to chemotherapy by 10.5% (10.13 and 9.17 QALY with and without mifamurtide, respectively) and 45.2% (7.23 and 4.98 QALY with and without mifamurtide, respectively) relative to the lifetime effectiveness of chemotherapy alone in nonmetastatic and metastatic osteosarcoma, respectively. Life-years gained (LYG) increased by 10.1% (13.10 LYG with mifamurtide and 11.90 LYG without) in nonmetastatic patients and by 42.2% (9.43 LYG with mifamurtide and 6.63 LYG without) in metastatic osteosarcoma patients. The Markov model analysis showed that chemotherapy with mifamurtide improved lifetime effectiveness compared to chemotherapy alone in both nonmetastatic and metastatic osteosarcoma. The relative effectiveness of the therapy over a lifetime was higher in metastatic than in nonmetastatic osteosarcoma, whereas the absolute lifetime effectiveness was higher in nonmetastatic osteosarcoma.
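A Markov cohort model of this kind can be sketched as follows; the five health states match the description above, but the transition probabilities and utility weights are hypothetical placeholders, not values derived from INT-0133:

```python
# States: DF, recurrence, post-recurrence DF, post-recurrence progression, death.
# Annual transition probabilities (rows sum to 1) -- invented for illustration.
P = [
    [0.90, 0.08, 0.00, 0.00, 0.02],  # disease-free
    [0.00, 0.20, 0.50, 0.20, 0.10],  # recurrence
    [0.00, 0.05, 0.90, 0.03, 0.02],  # post-recurrence disease-free
    [0.00, 0.00, 0.00, 0.60, 0.40],  # post-recurrence progression
    [0.00, 0.00, 0.00, 0.00, 1.00],  # death (absorbing)
]
utility = [0.85, 0.60, 0.80, 0.40, 0.00]  # hypothetical QALY weight per state

def run_cohort(p, utility, cycles):
    """Propagate a cohort through the chain, accumulating life-years and QALYs."""
    dist = [1.0, 0.0, 0.0, 0.0, 0.0]       # everyone starts disease-free
    ly = qaly = 0.0
    for _ in range(cycles):
        ly += sum(dist[:-1])               # person-years in any alive state
        qaly += sum(d * u for d, u in zip(dist, utility))
        dist = [sum(dist[i] * p[i][j] for i in range(5)) for j in range(5)]
    return ly, qaly

ly, qaly = run_cohort(P, utility, cycles=70)  # roughly a lifetime horizon from age 13
```

Comparing two such runs, one with transition probabilities for each treatment arm, yields the LYG and QALY differences reported in the abstract.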
Formation and reduction of carcinogenic furan in various model systems containing food additives.
Kim, Jin-Sil; Her, Jae-Young; Lee, Kwang-Geun
2015-12-15
The aim of this study was to analyse and reduce furan in various model systems. Furan model systems consisting of monosaccharides (0.5 M glucose and ribose), amino acids (0.5 M alanine and serine) and/or 1.0 M ascorbic acid were heated at 121 °C for 25 min. The effects of food additives (each 0.1 M), namely metal ions (iron sulphate, magnesium sulphate, zinc sulphate and calcium sulphate), antioxidants (BHT and BHA) and sodium sulphite, on the formation of furan were measured. The level of furan formed in the model systems was 6.8-527.3 ng/ml. The level of furan in the glucose/serine and glucose/alanine model systems increased by 7-674% when food additives were added. In contrast, the level of furan decreased by 18-51% in the Maillard reaction model systems that included ribose and alanine/serine with food additives, except for zinc sulphate.
Parenting Styles and Adjustment Outcomes among College Students
Love, Keisha M.; Thomas, Deneia M.
2014-01-01
Research has demonstrated that parenting styles partially explain college students' academic adjustment. However, to account for academic adjustment more fully, additional contributors should be identified and tested. We examined the fit of a hypothesized model consisting of parenting styles, indicators of well-being, and academic adjustment…
NB-PLC channel modelling with cyclostationary noise addition & OFDM implementation for smart grid
Thomas, Togis; Gupta, K. K.
2016-03-01
Power line communication (PLC) technology can be a viable solution for future ubiquitous networks because it provides a cheaper alternative to the other wired technologies currently used for communication. In smart grids, PLC is used to support low-rate communication on the low voltage (LV) distribution network. In this paper, we propose a channel model for narrowband (NB) PLC in the frequency range 5 kHz to 500 kHz using ABCD parameters with cyclostationary noise addition. The behaviour of the channel was studied by adding an 11 kV/230 V transformer and by varying the load and the load location. Bit error rate (BER) versus signal-to-noise ratio (SNR) was plotted for the proposed model by employing OFDM. Our simulation results based on the proposed channel model show acceptable performance in terms of bit error rate versus signal-to-noise ratio, which enables the communication required for smart grid applications.
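Channel modelling with ABCD parameters amounts to cascading two-port matrices and converting the product into a transfer function. A minimal sketch, with illustrative per-unit-length line parameters rather than measured NB-PLC values:

```python
import cmath, math

def line_abcd(f, length, r, l, g, c):
    """ABCD matrix of a uniform line section from per-unit-length RLGC parameters."""
    w = 2.0 * math.pi * f
    z = complex(r, w * l)          # series impedance per metre
    y = complex(g, w * c)          # shunt admittance per metre
    gamma = cmath.sqrt(z * y)      # propagation constant
    z0 = cmath.sqrt(z / y)         # characteristic impedance
    gl = gamma * length
    return (cmath.cosh(gl), z0 * cmath.sinh(gl),
            cmath.sinh(gl) / z0, cmath.cosh(gl))

def cascade(m1, m2):
    """Multiply two ABCD two-port matrices (section 1 followed by section 2)."""
    a1, b1, c1, d1 = m1
    a2, b2, c2, d2 = m2
    return (a1 * a2 + b1 * c2, a1 * b2 + b1 * d2,
            c1 * a2 + d1 * c2, c1 * b2 + d1 * d2)

def transfer_function(abcd, zs, zl):
    """H(f) = V_load / V_source for source impedance zs and load zl."""
    a, b, c, d = abcd
    return zl / ((a * zl + b) + zs * (c * zl + d))

# Two cascaded 100 m sections at 100 kHz with illustrative RLGC values.
seg = line_abcd(100e3, 100.0, r=0.1, l=0.6e-6, g=1e-9, c=50e-12)
h = transfer_function(cascade(seg, seg), zs=50.0, zl=50.0)
```

Branches, transformers and load changes enter the same way: each is one more ABCD matrix in the cascade, so moving the load location only reorders the product.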
The atom-surface interaction potential for He-NaCl: A model based on pairwise additivity
Hutson, Jeremy M.; Fowler, P. W.
1986-08-01
The recently developed semi-empirical model of Fowler and Hutson is applied to the He-NaCl atom-surface interaction potential. Ab initio self-consistent field calculations of the repulsive interactions between He atoms and in-crystal Cl- and Na+ ions are performed. Dispersion coefficients involving in-crystal ions are also calculated. The atom-surface potential is constructed using a model based on pairwise additivity of atom-ion forces. With a small adjustment of the repulsive part, this potential gives good agreement with the experimental bound-state energies obtained from selective adsorption resonances in low-energy atom scattering experiments. Close-coupling calculations of the resonant scattering are performed, and good agreement with the experimental peak positions and intensity patterns is obtained. It is concluded that there are no bound states deeper than those observed in the selective adsorption experiments, and that the well depth of the He-NaCl potential is 6.0 ± 0.2 meV.
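The pairwise-additivity construction can be sketched by summing atom-ion pair terms over a single surface layer of lattice sites; the Born-Mayer-plus-dispersion pair form is standard, but every parameter and unit below is made up for illustration and does not reproduce the fitted He-NaCl potential:

```python
import math

def pair_potential(r, a, alpha, c6):
    """One atom-ion pair term: exponential repulsion plus r**-6 dispersion."""
    return a * math.exp(-alpha * r) - c6 / r ** 6

def surface_potential(z, sites, params):
    """Sum pair terms over lattice sites (x, y, depth, ion type) below the atom."""
    v = 0.0
    for x, y, d, ion in sites:
        r = math.sqrt(x * x + y * y + (z + d) ** 2)
        a, alpha, c6 = params[ion]
        v += pair_potential(r, a, alpha, c6)
    return v

# Made-up pair parameters and a rock-salt-like checkerboard of surface sites.
params = {"Na+": (500.0, 3.2, 2.0), "Cl-": (900.0, 2.8, 8.0)}
sites = [(i * 2.8, j * 2.8, 0.0, "Na+" if (i + j) % 2 == 0 else "Cl-")
         for i in range(-3, 4) for j in range(-3, 4)]

v_far = surface_potential(8.0, sites, params)    # dispersion-dominated tail
v_near = surface_potential(2.5, sites, params)   # repulsive wall
```

Summing independent pair terms is exactly what "pairwise additivity" means here; the small adjustment of the repulsive part mentioned in the abstract would correspond to tuning a or alpha.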
Experimental and modelling characterisation of adjustable hollow Micro-needle delivery systems.
Liu, Ting-Ting; Chen, Kai; Pan, Min
2017-09-06
Hollow micro-needles have seen limited use in practice because infusion into the skin is restricted by the tissue's resistance to flow, and the relationship between the infusion flow rate and the tissue resistance pressure is not well understood. A custom-made hollow micro-needle system was used in this study. The driving force and infusion flow rate were measured using a force transducer attached to an infusion pump. Evans blue dye was injected into air, polyacrylamide gel and in-vivo mouse skin at different flow rates. Two different micro-needle lengths were used for in-vivo infusion into the mouse. A model was derived to calculate the driving force of micro-needle infusion into air, and the results were compared to experimental data. The calculated driving forces match the experimental results at the different infusion flow rates. The pressure loss throughout the micro-needle delivery system was found to be two orders of magnitude smaller than the resistance pressure inside the gel and mouse skin, and the resistance pressure increased with increasing flow rate. A portion of liquid backflow was observed when the flow rate was relatively large, and the backflow was associated with a sharp increase in resistance pressure at higher flow rates. The current micro-needle delivery system is capable of administering liquid into mouse skin at a flow rate of up to 0.15 ml/min without causing significant backflow on the surface. The resistance pressure increases with increasing flow rate, restricting infusion at higher flow rates.
Open Source Software for Mapping Human Impacts on Marine Ecosystems with an Additive Model
Directory of Open Access Journals (Sweden)
Andy Stock
2016-06-01
This paper describes an easy-to-use open source software tool implementing a commonly used additive model (Halpern et al., 'Science', 2008) for mapping human impacts on marine ecosystems. The tool has been used to map the potential for cumulative human impacts in Arctic marine waters and can support future human impact mapping projects by (1) making the model easier to use; (2) making updates of model results straightforward when better input data become available; (3) storing input data and information about processing steps in a defined format, thus facilitating data sharing and reproduction of modelling results; and (4) supporting basic visualization of model inputs and outputs without the need for advanced technical skills. The tool, called EcoImpactMapper, was implemented in Java and is thus platform-independent. A tutorial, example data, the tool and the source code are available online.
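The additive model of Halpern et al. combines, for each grid cell, stressor intensities, ecosystem presence and vulnerability weights in a sum over stressor-ecosystem pairs. A minimal sketch with invented numbers (the real model also log-transforms and rescales the stressor layers first):

```python
def cumulative_impact(stressors, ecosystems, weights):
    """Additive impact score for one grid cell:
    sum over stressors i and ecosystems j of D_i * E_j * mu_ij."""
    total = 0.0
    for i, d in stressors.items():
        for j, e in ecosystems.items():
            total += d * e * weights.get((i, j), 0.0)
    return total

# Invented example values for a single cell.
stressors = {"shipping": 0.8, "fishing": 0.4}        # normalized intensities in [0, 1]
ecosystems = {"seagrass": 1.0, "soft_bottom": 1.0}   # presence/absence
weights = {("shipping", "seagrass"): 1.5, ("fishing", "seagrass"): 2.0,
           ("shipping", "soft_bottom"): 0.5, ("fishing", "soft_bottom"): 1.8}

score = cumulative_impact(stressors, ecosystems, weights)
```

Because the model is additive, updating one input layer only changes the terms that layer appears in, which is what makes point (2) above cheap to support.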
Estimation for an additive growth curve model with orthogonal design matrices
Hu, Jianhua; You, Jinhong; 10.3150/10-BEJ315
2012-01-01
An additive growth curve model with orthogonal design matrices is proposed in which observations may have different profile forms. The proposed model allows us to fit data and then estimate parameters in a more parsimonious way than the traditional growth curve model. Two-stage generalized least-squares estimators for the regression coefficients are derived where a quadratic estimator for the covariance of observations is taken as the first-stage estimator. Consistency, asymptotic normality and asymptotic independence of these estimators are investigated. Simulation studies and a numerical example are given to illustrate the efficiency and parsimony of the proposed model for model specifications in the sense of minimizing Akaike's information criterion (AIC).
DEFF Research Database (Denmark)
Scheike, Thomas Harder
2002-01-01
We use the additive risk model of Aalen (Aalen, 1980) as a model for the rate of a counting process. Rather than specifying the intensity, that is the instantaneous probability of an event conditional on the entire history of the relevant covariates and counting processes, we present a model...... for the rate function, i.e., the instantaneous probability of an event conditional on only a selected set of covariates. When the rate function for the counting process is of Aalen form we show that the usual Aalen estimator can be used and gives almost unbiased estimates. The usual martingale based variance...... estimator is incorrect and an alternative estimator should be used. We also consider the semi-parametric version of the Aalen model as a rate model (McKeague and Sasieni, 1994) and show that the standard errors that are computed based on an assumption of intensities are incorrect and give a different...
Borup, Morten; Grum, Morten; Linde, Jens Jørgen; Mikkelsen, Peter Steen
2016-08-01
Numerous studies have shown that radar rainfall estimates need to be adjusted against rain gauge measurements in order to be useful for hydrological modelling. In the current study we investigate whether adjustment can improve radar rainfall estimates to the point where they can be used for modelling overflows from urban drainage systems, and we furthermore investigate the importance of the aggregation period of the adjustment scheme. This is done by continuously adjusting X-band radar data based on the previous 5-30 min of rain data recorded by multiple rain gauges, and propagating the rainfall estimates through a hydraulic urban drainage model. The model is built entirely from physical data, without any calibration, to avoid bias towards any specific type of rainfall estimate. The performance is assessed by comparing measured and modelled water levels at a weir downstream of a highly impermeable, well-defined 64 ha urban catchment for nine overflow-generating rain events. The dynamically adjusted radar data perform best when the aggregation period is as small as 10-20 min, in which case they perform much better than statically adjusted radar data and data from rain gauges situated 2-3 km away.
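A simple form of gauge-based dynamic adjustment is a mean-field bias factor computed over a short trailing window; the scheme used in the study is more elaborate, but the aggregation-period idea can be sketched as:

```python
def adjustment_factor(gauge_series, radar_series, window):
    """Ratio of gauge to radar accumulation over the last `window` time steps."""
    g = sum(gauge_series[-window:])
    r = sum(radar_series[-window:])
    return g / r if r > 0 else 1.0   # fall back to no adjustment in dry periods

# Illustrative mm-per-time-step series; the radar underestimates here.
gauge = [0.0, 0.2, 0.5, 0.8, 0.6]
radar = [0.0, 0.1, 0.3, 0.5, 0.4]

factor = adjustment_factor(gauge, radar, window=3)
adjusted_latest = radar[-1] * factor
```

Shortening the window makes the factor track the current event more closely at the cost of noisier estimates, which is the trade-off behind the 10-20 min optimum reported above.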
EFFECT OF NANOPOWDER ADDITION ON THE FLEXURAL STRENGTH OF ALUMINA CERAMIC - A WEIBULL MODEL ANALYSIS
Directory of Open Access Journals (Sweden)
Daidong Guo
2016-05-01
Alumina ceramics were prepared either with micrometre-sized alumina powder (MAP) or with the addition of nanometre-sized alumina powder (NAP). The density, crystalline phase, flexural strength and fracture surface of the two ceramics were measured and compared. Emphasis has been put on the influence of the nanopowder addition on the flexural strength of the Al₂O₃ ceramic. An analysis based on the Weibull distribution model suggests that the distribution of the flexural strength of the NAP ceramic is more concentrated than that of the MAP ceramic. Therefore, the NAP ceramics will be more stable and reliable in real applications.
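A Weibull analysis of this kind typically estimates the Weibull modulus m (larger m means a more concentrated strength distribution) from the linearized failure-probability form ln(-ln(1 - F)) = m·ln σ - m·ln σ₀. A sketch with synthetic strength data, not the paper's measurements:

```python
import math

def weibull_fit(strengths):
    """Least-squares fit of Weibull modulus m and characteristic strength sigma0
    using the median-rank probability estimator F_i = (i - 0.5) / n."""
    s = sorted(strengths)
    n = len(s)
    xs, ys = [], []
    for i, sigma in enumerate(s, start=1):
        f = (i - 0.5) / n
        xs.append(math.log(sigma))
        ys.append(math.log(-math.log(1.0 - f)))
    mx, my = sum(xs) / n, sum(ys) / n
    m = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    sigma0 = math.exp(mx - my / m)
    return m, sigma0

# Synthetic data drawn exactly from a Weibull law with m = 10, sigma0 = 300 MPa,
# via the inverse CDF at the median-rank quantiles.
strengths = [300.0 * (-math.log(1.0 - (i - 0.5) / 20.0)) ** 0.1
             for i in range(1, 21)]
m_hat, s0_hat = weibull_fit(strengths)
```

On real flexural-strength data the points scatter around the fitted line, and a steeper slope (larger m_hat) corresponds to the "more concentrated" NAP distribution described above.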
Cheng, Guang
2014-02-01
We consider efficient estimation of the Euclidean parameters in a generalized partially linear additive model for longitudinal/clustered data when multiple covariates need to be modelled nonparametrically, and propose an estimation procedure based on a spline approximation of the nonparametric part of the model and on generalized estimating equations (GEE). Although the model under consideration is natural and useful in many practical applications, the literature on it is very limited because of the challenges in dealing with dependent data in nonparametric additive models. We show that the proposed estimators are consistent and asymptotically normal even if the covariance structure is misspecified. An explicit consistent estimate of the asymptotic variance is also provided. Moreover, we derive the semiparametric efficiency score and information bound under general moment conditions. By showing that our estimators achieve the semiparametric information bound, we effectively establish their efficiency in a stronger sense than what is typically considered for GEE. The derivation of our asymptotic results relies heavily on the empirical process tools that we develop for longitudinal/clustered data. Numerical results are used to illustrate the finite sample performance of the proposed estimators.
Hong, X; Harris, C J
2000-01-01
This paper introduces a new neurofuzzy model construction algorithm for nonlinear dynamic systems based upon basis functions that are Bézier-Bernstein polynomial functions. The approach is generalized in that it copes with n-dimensional inputs by utilising an additive decomposition construction to overcome the curse of dimensionality associated with high n. The construction algorithm also introduces univariate Bézier-Bernstein polynomial functions for the completeness of the generalized procedure. Like B-spline-expansion-based neurofuzzy systems, Bézier-Bernstein polynomial function based neurofuzzy networks hold desirable properties such as nonnegativity of the basis functions, unity of support, and interpretability of the basis functions as fuzzy membership functions, with the additional advantages of structural parsimony and a Delaunay input space partition, essentially overcoming the curse of dimensionality associated with conventional fuzzy and RBF networks. The new modelling network is based on an additive decomposition approach together with two separate basis function formation approaches for the univariate and bivariate Bézier-Bernstein polynomial functions used in model construction. The overall network weights are then learnt using conventional least-squares methods. Numerical examples are included to demonstrate the effectiveness of this new data-based modelling approach.
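The basis-function properties the paper relies on, nonnegativity and unity of support (partition of unity), are easy to verify for univariate Bernstein polynomials; this sketch shows only the basis, not the full neurofuzzy construction algorithm:

```python
from math import comb

def bernstein_basis(n, x):
    """All n+1 Bernstein basis functions of degree n evaluated at x in [0, 1]:
    B_{i,n}(x) = C(n, i) * x**i * (1 - x)**(n - i)."""
    return [comb(n, i) * x ** i * (1.0 - x) ** (n - i) for i in range(n + 1)]

basis = bernstein_basis(3, 0.4)   # cubic basis at an arbitrary point
```

Because the basis values are nonnegative and sum to one at every x, they can be read directly as fuzzy membership grades, which is the interpretability property claimed above.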
The effect of tailor-made additives on crystal growth of methyl paraben: Experiments and modelling
Cai, Zhihui; Liu, Yong; Song, Yang; Guan, Guoqiang; Jiang, Yanbin
2017-03-01
In this study, methyl paraben (MP) was selected as the model component, and acetaminophen (APAP), p-methyl acetanilide (PMAA) and acetanilide (ACET), which share the similar molecular structure as MP, were selected as the three tailor-made additives to study the effect of tailor-made additives on the crystal growth of MP. HPLC results indicated that the MP crystals induced by the three additives contained MP only. Photographs of the single crystals prepared indicated that the morphology of the MP crystals was greatly changed by the additives, but PXRD and single crystal diffraction results illustrated that the MP crystals were the same polymorph only with different crystal habits, and no new crystal form was found compared with other references. To investigate the effect of the additives on the crystal growth, the interaction between additives and facets was discussed in detail using the DFT methods and MD simulations. The results showed that APAP, PMAA and ACET would be selectively adsorbed on the growth surfaces of the crystal facets, which induced the change in MP crystal habits.
A flexible additive inflation scheme for treating model error in ensemble Kalman Filters
Sommer, Matthias; Janjic, Tijana
2017-04-01
Data assimilation algorithms require an accurate estimate of the uncertainty of the prior (background) field. However, the background error covariance derived from the ensemble of numerical model simulations does not adequately represent this uncertainty. This is partially due to the sampling error that arises from using a small number of ensemble members to represent the background error covariance, and partially a consequence of the fact that the model does not represent its own error. Several mechanisms have been introduced so far aiming at alleviating the detrimental effects of misrepresented ensemble covariances, allowing for the successful implementation of ensemble data assimilation techniques for atmospheric dynamics. One established approach in ensemble data assimilation is additive inflation, which perturbs each ensemble member with a sample from a given distribution. This results in a fixed rank of the model error covariance matrix. Here, a more flexible approach is suggested, in which the model error samples are treated as additional synthetic ensemble members that are used in the update step of data assimilation but are not forecast. In this way, the rank of the model error covariance matrix can be chosen independently of the ensemble size. The effect of this altered additive inflation method on the performance of the filter is analysed here in an idealized experiment. It is shown that the additional synthetic ensemble members can make it feasible to achieve convergence in an otherwise divergent setting of data assimilation. The use of this method also allows for a less stringent localization radius.
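The augmented-ensemble idea can be sketched in a few lines; this toy ignores the forecast model and the assimilation update entirely and only shows how synthetic members with a chosen model-error spread are appended, so that their number q (and hence the model-error rank) is independent of the forecast ensemble size:

```python
import random

random.seed(0)  # deterministic draws for the example

def augmented_ensemble(members, model_error_std, q):
    """Append q synthetic members: ensemble mean plus an independent
    Gaussian model-error draw in each state component."""
    n = len(members[0])
    mean = [sum(m[i] for m in members) / len(members) for i in range(n)]
    synthetic = [[mean[i] + random.gauss(0.0, model_error_std) for i in range(n)]
                 for _ in range(q)]
    # The analysis step would use all members; only the first len(members)
    # are forecast to the next cycle.
    return members + synthetic

ens = [[1.0, 2.0], [1.2, 1.8], [0.8, 2.2]]        # 3 forecast members, 2-D state
aug = augmented_ensemble(ens, model_error_std=0.1, q=4)
```

In a real filter the draws would come from a physically motivated model-error distribution rather than an isotropic Gaussian; the point is only that q extra columns raise the rank of the sample covariance without forecasting extra members.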
Braun, Alexander; Kuo, Chung-Yen; Shum, C. K.; Wu, Patrick; van der Wal, Wouter; Fotopoulos, Georgia
2008-10-01
Glacial Isostatic Adjustment (GIA) modelling in North America relies on relative sea level information which is primarily obtained from areas far away from the uplift region. The lack of accurate geodetic observations in the Great Lakes region, which is located in the transition zone between uplift and subsidence due to the deglaciation of the Laurentide ice sheet, has prevented more detailed studies of this former margin of the ice sheet. Recently, observations of vertical crustal motion from improved GPS network solutions and combined tide gauge and satellite altimetry solutions have become available. This study compares these vertical motion observations with predictions obtained from 70 different GIA models. The ice sheet margin is distinct from the centre and far field of the uplift because the sensitivity of the GIA process towards Earth parameters such as mantle viscosity is very different. Specifically, the margin area is most sensitive to the uppermost mantle viscosity and allows for better constraints of this parameter. The 70 GIA models compared herein have different ice loading histories (ICE-3/4/5G) and Earth parameters including lateral heterogeneities. The root-mean-square differences between the 6 best models and the two sets of observations (tide gauge/altimetry and GPS) are 0.66 and 1.57 mm/yr, respectively. Both sets of independent observations are highly correlated and show a very similar fit to the models, which indicates their consistent quality. Therefore, both data sets can be considered as a means for constraining and assessing the quality of GIA models in the Great Lakes region and the former margin of the Laurentide ice sheet.
Directory of Open Access Journals (Sweden)
Gabriela Prelipcean
2014-02-01
Full Text Available The recent crisis and turbulences have significantly changed consumer behavior, especially through access possibilities and satisfaction, but also through the new dynamic, flexible adjustment of the supply of goods and services. Access possibilities and consumer satisfaction should be analyzed in the broader context of corporate responsibility, including financial institutions. This contribution gives an answer to the current situation in Romania as an emerging country strongly affected by the global crisis. Empowering producers and harmonizing their interests with the interests of consumers require a significant revision of the quantitative models used to study long-term consumption-saving behavior, with a new model adapted to the current conditions in Romania in the post-crisis context. Based on the general idea of the model developed by Hai, Krueger and Postlewaite (2013), we propose a new way of exploiting the results, considering the dynamics of innovative adaptation based on Brownian motion, the integration of the cyclicality concept, stochastic shocks of the Lévy type, and extensive interaction with capital markets characterized by higher returns and volatility.
Test of the Additivity Principle for Current Fluctuations in a Model of Heat Conduction
Hurtado, Pablo I.; Garrido, Pedro L.
2009-06-01
The additivity principle allows one to compute the current distribution in many one-dimensional (1D) nonequilibrium systems. Using simulations, we confirm this conjecture in the 1D Kipnis-Marchioro-Presutti model of heat conduction for a wide current interval. The current distribution shows both Gaussian and non-Gaussian regimes, and obeys the Gallavotti-Cohen fluctuation theorem. We verify the existence of a well-defined temperature profile associated with a given current fluctuation. This profile is independent of the sign of the current, and this symmetry extends to higher-order profiles and spatial correlations. We also show that finite-time joint fluctuations of the current and the profile are described by the additivity functional. These results suggest the additivity hypothesis as a general and powerful tool to compute current distributions in many nonequilibrium systems.
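The Kipnis-Marchioro-Presutti dynamics referred to above can be sketched as a toy simulation: site energies are redistributed uniformly between random neighbouring pairs, and the end sites exchange energy with heat baths. This is a minimal illustration with made-up parameters, not the paper's measurement of current fluctuations; in the steady state the time-averaged energy profile interpolates between the two bath temperatures.

```python
import numpy as np

# Toy simulation of the 1D Kipnis-Marchioro-Presutti (KMP) model: site energies
# are pairwise redistributed uniformly at random, with the end sites exchanging
# energy with heat baths at temperatures TL and TR.  Parameters are illustrative.
def kmp_profile(n_sites=10, TL=2.0, TR=1.0, steps=200_000, seed=3):
    rng = np.random.default_rng(seed)
    e = np.ones(n_sites)                        # initial site energies
    acc = np.zeros(n_sites)
    for _ in range(steps):
        i = rng.integers(-1, n_sites)           # -1 / n_sites-1 are bath moves
        if i == -1:                             # left bath at temperature TL
            e[0] = rng.random() * (e[0] + rng.exponential(TL))
        elif i == n_sites - 1:                  # right bath at temperature TR
            e[-1] = rng.random() * (e[-1] + rng.exponential(TR))
        else:                                   # bulk: redistribute pair (i, i+1)
            tot = e[i] + e[i + 1]
            p = rng.random()
            e[i], e[i + 1] = p * tot, (1 - p) * tot
        acc += e
    return acc / steps                          # time-averaged energy profile

profile = kmp_profile()
```

With the hotter bath on the left, the averaged profile decreases from near TL to near TR, consistent with the well-defined temperature profile discussed in the abstract.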
Ingersoll, Thomas; Cole, Stephanie; Madren-Whalley, Janna; Booker, Lamont; Dorsey, Russell; Li, Albert; Salem, Harry
2016-01-01
Integrated Discrete Multiple Organ Co-culture (IDMOC) is emerging as an in-vitro alternative to in-vivo animal models for pharmacology studies. IDMOC allows dose-response relationships to be investigated at the tissue and organoid levels, yet, these relationships often exhibit responses that are far more complex than the binary responses often measured in whole animals. To accommodate departure from binary endpoints, IDMOC requires an expansion of analytic techniques beyond simple linear probit and logistic models familiar in toxicology. IDMOC dose-responses may be measured at continuous scales, exhibit significant non-linearity such as local maxima or minima, and may include non-independent measures. Generalized additive mixed-modeling (GAMM) provides an alternative description of dose-response that relaxes assumptions of independence and linearity. We compared GAMMs to traditional linear models for describing dose-response in IDMOC pharmacology studies. PMID:27110941
Use of additive technologies for practical working with complex models for foundry technologies
Olkhovik, E.; Butsanets, A. A.; Ageeva, A. A.
2016-07-01
The article presents the results of research on the application of additive technology (3D printing) for developing geometrically complex models of cast parts. Investment casting is a well-known and widely used technology for the production of complex parts. The work proposes the use of 3D printing for manufacturing model parts, which are then removed by thermal destruction. Traditional methods of tooling production for investment casting involve manual labor, which has problems with dimensional accuracy, or CNC machining, which is less used. Such a scheme has low productivity and demands considerable time. We have offered an alternative method that consists of printing the main units on a 3D printer (in PLA and ABS) and subsequently producing casting models from them. In this article, the main technological methods are considered and their problems are discussed. The dimensional accuracy of the models in comparison with investment casting technology is considered as the main aspect.
DEFF Research Database (Denmark)
Wu, Hao; Jespersen, Jacob Boll; Aho, Martti
2013-01-01
Potassium chloride, KCl, formed from critical ash-forming elements released during combustion may lead to severe ash deposition and corrosion problems in biomass-fired boilers. Ferric sulfate, Fe2(SO4)3, is an effective additive, which produces sulfur oxides (SO2 and SO3) to convert KCl to the less harmful K2SO4. In the present study the decomposition of ferric sulfate is studied in a fast-heating rate thermogravimetric analyzer (TGA), and a kinetic model is proposed to describe the decomposition process. The yields of SO2 and SO3 from ferric sulfate decomposition are investigated in a laboratory-scale tube reactor. It is revealed that approximately 40% of the sulfur is released as SO3, the remaining fraction being released as SO2. The proposed decomposition model of ferric sulfate is combined with a detailed gas phase kinetic model of KCl sulfation, and a simplified model of K2SO4 condensation…
Directory of Open Access Journals (Sweden)
Thomas Ingersoll
Full Text Available Integrated Discrete Multiple Organ Co-culture (IDMOC) is emerging as an in-vitro alternative to in-vivo animal models for pharmacology studies. IDMOC allows dose-response relationships to be investigated at the tissue and organoid levels, yet these relationships often exhibit responses that are far more complex than the binary responses often measured in whole animals. To accommodate departure from binary endpoints, IDMOC requires an expansion of analytic techniques beyond the simple linear, probit, and logistic models familiar in toxicology. IDMOC dose-responses may be measured at continuous scales, exhibit significant non-linearity such as local maxima or minima, and may include non-independent measures. Generalized additive mixed-modeling (GAMM) provides an alternative description of dose-response that relaxes assumptions of independence and linearity. We compared GAMMs to traditional linear models for describing dose-response in IDMOC pharmacology studies.
Modeling the Use of Sulfate Additives for Potassium Chloride Destruction in Biomass Combustion
DEFF Research Database (Denmark)
Wu, Hao; Pedersen, Morten Nedergaard; Jespersen, Jacob Boll;
2014-01-01
Potassium chloride, KCl, formed from biomass combustion may lead to ash deposition and corrosion problems in boilers. Sulfates are effective additives for converting KCl to the less harmful K2SO4 and HCl. In the present study, the rate constants for decomposition of ammonium sulfate and aluminum sulfate were obtained from experiments in a fast heating rate thermogravimetric analyzer. The yields of SO2 and SO3 from the decomposition were investigated in a tube reactor at 600–900 °C, revealing a constant distribution of about 15% SO2 and 85% SO3 from aluminum sulfate decomposition and a temperature-dependent distribution of SO2 and SO3 from ammonium sulfate decomposition. On the basis of these data as well as earlier results, a detailed chemical kinetic model for sulfation of KCl by a range of sulfate additives was established. Modeling results were compared to biomass combustion experiments in a bubbling…
Choosing components in the additive main effect and multiplicative interaction (AMMI) models
Directory of Open Access Journals (Sweden)
Dias Carlos Tadeu dos Santos
2006-01-01
Full Text Available The additive main effect and multiplicative interaction (AMMI) models allow analysts to detect interactions between rows and columns in a two-way table. However, many methods have been proposed in the literature to determine the number of multiplicative components to include in the AMMI model. These methods typically give different results for any particular data set, so the user needs some guidance as to which method to use. In this paper we compare four commonly used methods using simulated data based on real experiments, and provide some general recommendations.
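The AMMI decomposition described above can be sketched in a few lines: fit additive main effects to the two-way table, then take an SVD of the interaction residuals and keep a chosen number of multiplicative components. The function name and the synthetic table below are illustrative, and how to choose `n_components` is exactly the question the paper studies.

```python
import numpy as np

# Minimal sketch of the AMMI model for a genotype-by-environment two-way table Y:
# grand mean + row and column main effects + rank-k SVD of interaction residuals.
def ammi(Y, n_components):
    grand = Y.mean()
    row = Y.mean(axis=1) - grand                       # row (genotype) effects
    col = Y.mean(axis=0) - grand                       # column (environment) effects
    resid = Y - grand - row[:, None] - col[None, :]    # interaction residuals
    U, s, Vt = np.linalg.svd(resid, full_matrices=False)
    k = n_components
    interaction = (U[:, :k] * s[:k]) @ Vt[:k]          # rank-k interaction term
    fitted = grand + row[:, None] + col[None, :] + interaction
    explained = s**2 / (s**2).sum()                    # share of interaction SS
    return fitted, explained

rng = np.random.default_rng(0)
Y = rng.normal(size=(6, 4))                            # synthetic 6x4 table
fitted, explained = ammi(Y, n_components=2)
```

Because the residual matrix is double-centered, its rank is at most min(rows, columns) − 1, so keeping that many components reproduces the table exactly; the methods compared in the paper decide how many fewer to retain.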
Modeling of Ti-W Solidification Microstructures Under Additive Manufacturing Conditions
Rolchigo, Matthew R.; Mendoza, Michael Y.; Samimi, Peyman; Brice, David A.; Martin, Brian; Collins, Peter C.; LeSar, Richard
2017-07-01
Additive manufacturing (AM) processes have many benefits for the fabrication of alloy parts, including the potential for greater microstructural control and targeted properties than traditional metallurgy processes. To accelerate utilization of this process to produce such parts, an effective computational modeling approach to identify the relationships between material and process parameters, microstructure, and part properties is essential. Development of such a model requires accounting for the many factors in play during this process, including laser absorption, material addition and melting, fluid flow, various modes of heat transport, and solidification. In this paper, we start with a more modest goal, to create a multiscale model for a specific AM process, Laser Engineered Net Shaping (LENS™), which couples a continuum-level description of a simplified beam melting problem (coupling heat absorption, heat transport, and fluid flow) with a Lattice Boltzmann-cellular automata (LB-CA) microscale model of combined fluid flow, solute transport, and solidification. We apply this model to a binary Ti-5.5 wt pct W alloy and compare calculated quantities, such as dendrite arm spacing, with experimental results reported in a companion paper.
Samadhi, TMAA; Sumihartati, Atin
2016-02-01
The most critical stage in a garment industry is the sewing process, because it generally consists of a number of operations and a large number of sewing machines for each operation. Therefore, it requires a balancing method that can assign tasks to workstations with balanced workloads. Many studies on assembly line balancing assume a new assembly line, but in reality, due to demand fluctuations and demand increases, re-balancing is needed. To cope with those fluctuating demand changes, additional capacity can be obtained by investing in spare sewing machines and by paying for sewing services through outsourcing. This study develops an assembly line balancing (ALB) model on an existing line to cope with fluctuating demand changes. Capacity redesign is decided on if the fluctuating demand exceeds the available capacity, through a combination of investing in new machines and outsourcing, while minimizing the cost of idle capacity in the future. The objective of the model is to minimize the total cost of the assembly line, which consists of operating costs, machine costs, capacity-addition costs, loss costs due to idle capacity, and outsourcing costs. The model developed is based on integer programming. The model is tested on a set of data for one year of demand with an existing number of sewing machines of 41 units. The result shows that an additional maximum capacity of up to 76 units of machines is required when there is an increase of 60% over the average demand, at equal cost parameters.
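The capacity-redesign trade-off in the objective above can be illustrated with a one-variable sketch: meet a demand surge by buying machines and/or outsourcing, minimizing machine cost plus outsourcing cost plus idle-capacity cost. All figures are made-up; the paper formulates the full problem, with task assignment, as an integer program.

```python
# Illustrative sketch (not the paper's model): choose how many machines to buy
# for a demand surge, outsourcing any shortfall and paying for any idle capacity.
def best_plan(extra_demand, unit_capacity, machine_cost, outsource_cost, idle_cost):
    best = None
    for m in range(0, extra_demand // unit_capacity + 2):  # machines to buy
        produced = m * unit_capacity
        outsourced = max(extra_demand - produced, 0)       # shortfall -> outsource
        idle = max(produced - extra_demand, 0)             # surplus -> idle cost
        cost = m * machine_cost + outsourced * outsource_cost + idle * idle_cost
        if best is None or cost < best[0]:
            best = (cost, m, outsourced)
    return best

# Hypothetical numbers: a 1000-unit surge, 120 units per machine.
cost, machines, outsourced = best_plan(
    extra_demand=1000, unit_capacity=120, machine_cost=900,
    outsource_cost=12, idle_cost=2)
```

With these numbers the cheapest plan buys 8 machines and outsources the remaining 40 units; in the paper the same balance is struck jointly with workstation assignment.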
Directory of Open Access Journals (Sweden)
Hai Liu
2010-10-01
Full Text Available The zero-inflation problem is very common in ecological studies as well as other areas. Nonparametric regression with zero-inflated data may be studied via the zero-inflated generalized additive model (ZIGAM), which assumes that the zero-inflated responses come from a probabilistic mixture of zero and a regular component whose distribution belongs to the 1-parameter exponential family. With the further assumption that the probability of non-zero-inflation is some monotonic function of the mean of the regular component, we propose the constrained zero-inflated generalized additive model (COZIGAM) for analyzing zero-inflated data. When the hypothesized constraint obtains, the new approach provides a unified framework for modeling zero-inflated data, which is more parsimonious and efficient than the unconstrained ZIGAM. We have developed an R package, COZIGAM, which contains functions that implement an iterative algorithm for fitting ZIGAMs and COZIGAMs to zero-inflated data based on the penalized likelihood approach. Other functions included in the package are useful for model prediction and model selection. We demonstrate the use of the COZIGAM package via some simulation studies and a real application.
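The mixture idea behind ZIGAM can be illustrated in its simplest form: a zero-inflated Poisson with a constant inflation probability and constant Poisson mean, fitted by EM. This is a deliberately stripped-down sketch, not the package's method — ZIGAM/COZIGAM let both quantities depend on covariates through smooth functions.

```python
import numpy as np

# Simplified illustration of the zero-inflation mixture: EM for a zero-inflated
# Poisson with constant inflation probability pi and constant mean lam.
def zip_em(y, n_iter=200):
    y = np.asarray(y, dtype=float)
    pi, lam = 0.5, max(y.mean(), 1e-6)       # crude starting values
    for _ in range(n_iter):
        # E-step: posterior probability that each observed zero is structural
        p0 = pi + (1 - pi) * np.exp(-lam)
        z = np.where(y == 0, pi / p0, 0.0)
        # M-step: update mixture weight and Poisson mean
        pi = z.mean()
        lam = ((1 - z) * y).sum() / (1 - z).sum()
    return pi, lam

rng = np.random.default_rng(1)
structural = rng.random(5000) < 0.3          # true inflation probability 0.3
counts = np.where(structural, 0, rng.poisson(4.0, 5000))
pi_hat, lam_hat = zip_em(counts)
```

On this synthetic sample the EM estimates recover the true inflation probability and Poisson mean closely; the GAM versions replace the two scalars with penalized smooth functions of covariates.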
Medero, Rafael; García-Rodríguez, Sylvana; François, Christopher J; Roldán-Alzate, Alejandro
2017-03-21
Non-invasive hemodynamic assessment of total cavopulmonary connection (TCPC) is challenging due to the complex anatomy. Additive manufacturing (AM) is a suitable alternative for creating patient-specific in vitro models for flow measurements using four-dimensional (4D) Flow MRI. These in vitro systems have the potential to serve as validation for computational fluid dynamics (CFD), simulating different physiological conditions. This study investigated three different AM technologies, stereolithography (SLA), selective laser sintering (SLS) and fused deposition modeling (FDM), to determine differences in hemodynamics when measuring flow using 4D Flow MRI. The models were created using patient-specific MRI data from an extracardiac TCPC. These models were connected to a perfusion pump circulating water at three different flow rates. Data was processed for visualization and quantification of velocity, flow distribution, vorticity and kinetic energy. These results were compared between each model. In addition, the flow distribution obtained in vitro was compared to in vivo. The results showed significant difference in velocities measured at the outlets of the models that required internal support material when printing. Furthermore, an ultrasound flow sensor was used to validate flow measurements at the inlets and outlets of the in vitro models. These results were highly correlated to those measured with 4D Flow MRI. This study showed that commercially available AM technologies can be used to create patient-specific vascular models for in vitro hemodynamic studies at reasonable costs. However, technologies that do not require internal supports during manufacturing allow smoother internal surfaces, which makes them better suited for flow analyses. Copyright © 2017 Elsevier Ltd. All rights reserved.
The Additive Risk Model for Estimation of Effect of Haplotype Match in BMT Studies
DEFF Research Database (Denmark)
Scheike, Thomas; Martinussen, T; Zhang, MJ
2011-01-01
…leads to a missing data problem. We show how Aalen's additive risk model can be applied in this setting, with the benefit that the time-varying haplomatch effect can be easily studied. This problem has not been considered before, and the standard approach, where one would use the expectation-maximization (EM)… be developed using product-integration theory. Small sample properties are investigated using simulations in a setting that mimics the motivating haplomatch problem.
Application of power addition as modelling technique for flow processes: Two case studies
CSIR Research Space (South Africa)
de Wet, P
2010-05-01
Full Text Available Application of power addition as modelling technique for flow processes: Two case studies. Pierre de Wet (Council for Scientific & Industrial Research (CSIR), PO Box 320, Stellenbosch 7599, South Africa); J. Prieur du Plessis and Sonia Woudberg (Applied Mathematics)… research on precise, credible experimental practices is undeniable. The empirical equations derived from these investigations, which impart understanding of the underlying physics, are crucial for the development of computational routines and form an integral…
Development of a QTL-environment-based predictive model for node addition rate in common bean.
Zhang, Li; Gezan, Salvador A; Eduardo Vallejos, C; Jones, James W; Boote, Kenneth J; Clavijo-Michelangeli, Jose A; Bhakta, Mehul; Osorno, Juan M; Rao, Idupulapati; Beebe, Stephen; Roman-Paoli, Elvin; Gonzalez, Abiezer; Beaver, James; Ricaurte, Jaumer; Colbert, Raphael; Correll, Melanie J
2017-05-01
This work reports the effects of the genetic makeup, the environment and the genotype by environment interactions for node addition rate in an RIL population of common bean. This information was used to build a predictive model for node addition rate. To select a plant genotype that will thrive in targeted environments it is critical to understand the genotype by environment interaction (GEI). In this study, multi-environment QTL analysis was used to characterize node addition rate (NAR, node day^-1) on the main stem of the common bean (Phaseolus vulgaris L.). This analysis was carried out with field data of 171 recombinant inbred lines that were grown at five sites (Florida, Puerto Rico, two sites in Colombia, and North Dakota). Four QTLs (Nar1, Nar2, Nar3 and Nar4) were identified, one of which had significant QTL by environment interactions (QEI), that is, Nar2 with temperature. Temperature was identified as the main environmental factor affecting NAR, while day length and solar radiation played a minor role. Integration of sites as covariates into a QTL mixed site-effect model, and further replacing the site component with explanatory environmental covariates (i.e., temperature, day length and solar radiation), yielded a model that explained 73% of the phenotypic variation for NAR with a root mean square error of 16.25% of the mean. The QTL consistency and stability were examined through a tenfold cross-validation with different sets of genotypes, and these four QTLs were always detected with 50-90% probability. The final model was evaluated using the leave-one-site-out method to assess the influence of site on node addition rate. These analyses provided a quantitative measure of the effects on NAR of common beans exerted by the genetic makeup, the environment and their interactions.
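The leave-one-site-out evaluation mentioned above can be sketched generically: fit a model with each site held out in turn and score the held-out predictions. The covariates, their values, and the linear model below are illustrative stand-ins, not the paper's mixed QTL model or data.

```python
import numpy as np

# Sketch of leave-one-site-out evaluation: fit a linear model of node addition
# rate (NAR) on environmental covariates with each site held out in turn, and
# report the RMSE of the held-out predictions.  Data here are synthetic.
def leave_one_site_out_rmse(X, y, sites):
    preds = np.empty_like(y)
    for s in np.unique(sites):
        train, test = sites != s, sites == s
        beta, *_ = np.linalg.lstsq(X[train], y[train], rcond=None)
        preds[test] = X[test] @ beta
    return np.sqrt(np.mean((preds - y) ** 2))

rng = np.random.default_rng(2)
n = 250
sites = rng.integers(0, 5, n)                        # five sites, as in the study
temp = rng.normal(22, 4, n)                          # illustrative temperature
day_len = rng.normal(12, 1, n)                       # illustrative day length
X = np.column_stack([np.ones(n), temp, day_len])
y = X @ np.array([0.1, 0.02, 0.01]) + rng.normal(0, 0.05, n)  # synthetic NAR
rmse = leave_one_site_out_rmse(X, y, sites)
```

Holding out whole sites rather than random rows is the point: it measures how well the environmental covariates transfer to an unseen location.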
Mathematical modeling of a polyphenolic additive made from grape stone for meat products
Directory of Open Access Journals (Sweden)
Інна Олександрівна Літвінова
2015-07-01
Full Text Available In this article, the optimal parameters for obtaining "Maltovyn", an antioxidant polyphenol additive made from grape stone, are determined by the method of mathematical planning of multifactor experiments. The research was conducted according to the matrix of a D-optimal quadratic experimental plan. Results for the microwave extraction of phenolic compounds with maximum antioxidant activity were obtained. It was established that the selected model provides a set of parameter values that minimizes the divergence between calculated and experimental data.
Rain water transport and storage in a model sandy soil with hydrogel particle additives.
Wei, Y; Durian, D J
2014-10-01
We study rain water infiltration and drainage in a dry model sandy soil with superabsorbent hydrogel particle additives by measuring the mass of retained water for non-ponding rainfall using a self-built 3D laboratory set-up. In the pure model sandy soil, the retained water curve measurements indicate that instead of a stable horizontal wetting front that grows downward uniformly, a narrow fingered flow forms under the top layer of water-saturated soil. This rain water channelization phenomenon not only further reduces the available rain water in the plant root zone, but also affects the efficiency of soil additives, such as superabsorbent hydrogel particles. Our studies show that the shape of the retained water curve for a soil packing with hydrogel particle additives strongly depends on the location and the concentration of the hydrogel particles in the model sandy soil. By carefully choosing the particle size and distribution methods, we may use the swollen hydrogel particles to modify the soil pore structure, to clog or extend the water channels in sandy soils, or to build water reservoirs in the plant root zone.
The rise and fall of divorce - a sociological adjustment of Becker's model of the marriage market
DEFF Research Database (Denmark)
Andersen, Signe Hald; Hansen, Lars Gårn
Despite the strong and persistent influence of Gary Becker's marriage model, the model does not completely explain the observed correlation between married women's labor market participation and overall divorce rates. In this paper we show how a simple sociologically inspired extension of the model realigns the model's predictions with the observed trends. The extension builds on Becker's own claim that partners match on preference for partner specialization and, as a novelty, on additional sociological theory claiming that preference coordination tends to happen subconsciously. When we incorporate this aspect into Becker's model, the model provides predictions of divorce rates and causes that fit more closely with empirical observations. (JEL: J1)
Weighted triangulation adjustment
Anderson, Walter L.
1969-01-01
The variation of coordinates method is employed to perform a weighted least squares adjustment of horizontal survey networks. Geodetic coordinates are required for each fixed and adjustable station. A preliminary inverse geodetic position computation is made for each observed line. Weights associated with each observation equation for direction, azimuth, and distance are applied in the formation of the normal equations in the least squares adjustment. The number of normal equations that may be solved is twice the number of new stations and may not exceed 150. When the normal equations are solved, shifts are produced at adjustable stations. Previously computed correction factors are applied to the shifts, and a most probable geodetic position is found for each adjustable station. Final azimuths and distances are computed. These may be written onto magnetic tape for subsequent computation of state plane or grid coordinates. Input consists of punch cards containing project identification, program options, and position and observation information. Results listed include preliminary and final positions, residuals, observation equations, the solution of the normal equations showing magnitudes of shifts, and a plot of each adjusted and fixed station. During processing, data sets containing irrecoverable errors are rejected and the type of error is listed. The computer then resumes processing of additional data sets. Other conditions cause warning errors to be issued, and processing continues with the current data set.
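The core numerical step described above — forming and solving the weighted normal equations for the coordinate shifts — can be sketched as follows. The tiny design matrix is illustrative, not the program's actual direction/azimuth/distance observation rows.

```python
import numpy as np

# Weighted least squares adjustment step: each linearized observation equation
# contributes a row of A and an entry of b with weight w; the shifts dx solve
# the normal equations (A^T W A) dx = A^T W b.
def adjust(A, b, w):
    W = np.diag(w)
    N = A.T @ W @ A                       # normal equation matrix
    t = A.T @ W @ b                       # right-hand side
    shifts = np.linalg.solve(N, t)        # coordinate shifts at new stations
    residuals = b - A @ shifts
    return shifts, residuals

# Hypothetical three observations of two unknown coordinate shifts.
A = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [1.0, 1.0]])
b = np.array([0.02, -0.01, 0.012])
w = np.array([4.0, 4.0, 1.0])             # higher weight = more precise observation
shifts, residuals = adjust(A, b, w)
```

At the solution the weighted residuals are orthogonal to the columns of the design matrix, which is the defining property of the least squares shifts.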
GenoGAM: genome-wide generalized additive models for ChIP-Seq analysis.
Stricker, Georg; Engelhardt, Alexander; Schulz, Daniel; Schmid, Matthias; Tresch, Achim; Gagneur, Julien
2017-08-01
Chromatin immunoprecipitation followed by deep sequencing (ChIP-Seq) is a widely used approach to study protein-DNA interactions. Often, the quantities of interest are the differential occupancies relative to controls, between genetic backgrounds, treatments, or combinations thereof. Current methods for differential occupancy of ChIP-Seq data rely however on binning or sliding window techniques, for which the choice of the window and bin sizes are subjective. Here, we present GenoGAM (Genome-wide Generalized Additive Model), which brings the well-established and flexible generalized additive models framework to genomic applications using a data parallelism strategy. We model ChIP-Seq read count frequencies as products of smooth functions along chromosomes. Smoothing parameters are objectively estimated from the data by cross-validation, eliminating ad hoc binning and windowing needed by current approaches. GenoGAM provides base-level and region-level significance testing for full factorial designs. Application to a ChIP-Seq dataset in yeast showed increased sensitivity over existing differential occupancy methods while controlling for type I error rate. By analyzing a set of DNA methylation data and illustrating an extension to a peak caller, we further demonstrate the potential of GenoGAM as a generic statistical modeling tool for genome-wide assays. Software is available from Bioconductor: https://www.bioconductor.org/packages/release/bioc/html/GenoGAM.html . gagneur@in.tum.de. Supplementary information is available at Bioinformatics online.
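The objective smoothing-parameter selection that GenoGAM performs can be illustrated with a toy penalized smoother: fit for several penalties and keep the one with the lowest cross-validated error. This uses ridge-penalized Gaussian-bump features on synthetic data, not the spline basis or count likelihood of the actual package.

```python
import numpy as np

# Toy version of cross-validated smoothing: ridge-penalized least squares over
# a small basis, with the penalty chosen by 5-fold cross-validation.
def fit_ridge(X, y, lam):
    d = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ y)

rng = np.random.default_rng(4)
x = np.linspace(0, 1, 200)
y = np.sin(2 * np.pi * x) + rng.normal(0, 0.3, x.size)   # noisy smooth signal
# 12 Gaussian-bump basis functions (illustrative stand-in for a spline basis)
X = np.column_stack([np.exp(-((x - c) ** 2) / 0.02) for c in np.linspace(0, 1, 12)])

folds = np.arange(x.size) % 5                            # 5-fold split
lambdas = [1e-4, 1e-2, 1.0, 100.0]
cv = []
for lam in lambdas:
    err = 0.0
    for k in range(5):
        tr, te = folds != k, folds == k
        beta = fit_ridge(X[tr], y[tr], lam)
        err += np.sum((y[te] - X[te] @ beta) ** 2)
    cv.append(err / x.size)
best_lam = lambdas[int(np.argmin(cv))]
```

Cross-validation rejects the heaviest penalty (which oversmooths) without any hand-chosen bin or window size, which is the point the abstract makes against ad hoc binning.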
Institute of Scientific and Technical Information of China (English)
卢毓敏; 马胜生; 罗铭; 梁纳
2015-01-01
Objective: To observe the clinical effect of mid-periphery add lenses combined with accommodation training in the treatment of juvenile myopia. Methods: Eighty myopic children (160 eyes) attending our outpatient clinic between January 2014 and July 2015 were randomly divided into a treatment group and a comparison group, 40 patients (80 eyes) each. The treatment group was treated with mid-periphery add lenses combined with accommodation training; the comparison group wore conventional single-vision spectacles. Both groups were re-examined every 3 months after fitting, and after 1 year the indicators of myopia progression and accommodative function were observed and the treatment effects of the two groups were compared. Results: After 1 year of lens wear, uncorrected visual acuity, refraction and axial length in the treatment group remained stable, with no statistically significant change from baseline (P>0.05). In the comparison group, uncorrected visual acuity decreased while refraction and axial length increased; the differences from baseline were statistically significant (P<0.05). The difference between the two groups was statistically significant (P<0.01). Conclusion: Mid-periphery add lenses combined with accommodation training are effective for juvenile myopia; they can delay the progression of refractive error in myopic children, improve accommodative function, and control the development of myopia.
Betts, Lucy R; Rotenberg, Ken J; Trueman, Mark
2009-06-01
The study aimed to examine the relationship between self-knowledge of trustworthiness and young children's school adjustment. One hundred and seventy-three (84 male and 89 female) children from school years 1 and 2 in the United Kingdom (mean age 6 years 2 months) were tested twice over 1 year. Children's trustworthiness was assessed using: (a) self-reports at Time 1 and Time 2; (b) peer reports at Time 1 and Time 2; and (c) teacher reports at Time 2. School adjustment was assessed by child-rated school-liking and the Short-Form Teacher Rating Scale of School Adjustment (Short-Form TRSSA). Longitudinal quadratic relationships were found between school adjustment and children's self-knowledge, using peer-reported trustworthiness as a reference: more accurate self-knowledge of trustworthiness predicted increases in school adjustment. Comparable concurrent quadratic relationships were found between teacher-rated school adjustment and children's self-knowledge, using teacher-reported trustworthiness as a reference, at Time 2. The findings support the conclusion that young children's psychosocial adjustment is best accounted for by the realistic self-knowledge model (Colvin & Block, 1994).
Seo, Seulgi; Ka, Mi-Hyun; Lee, Kwang-Geun
2014-07-09
The effect of various food additives on the formation of carcinogenic 4(5)-methylimidazole (4-MI) in a caramel model system was investigated. The relationship between the levels of 4-MI and various pyrazines was studied. When glucose and ammonium hydroxide were heated, the amount of 4-MI was 556 ± 1.3 μg/mL, which increased to 583 ± 2.6 μg/mL by the addition of 0.1 M of sodium sulfite. When various food additives, such as 0.1 M of iron sulfate, magnesium sulfate, zinc sulfate, tryptophan, and cysteine were added, the amount of 4-MI was reduced to 110 ± 0.7, 483 ± 2.0, 460 ± 2.0, 409 ± 4.4, and 397 ± 1.7 μg/mL, respectively. The greatest reduction, 80%, occurred with the addition of iron sulfate. Among the 12 pyrazines, 2-ethyl-6-methylpyrazine with 4-MI showed the highest correlation (r = -0.8239).
Hocking, Matthew C.; Lochman, John E.
2005-01-01
This review paper examines the literature on psychosocial factors associated with adjustment to sickle cell disease and insulin-dependent diabetes mellitus in children through the framework of the transactional stress and coping (TSC) model. The transactional stress and coping model views adaptation to a childhood chronic illness as mediated by…
Minimax-optimal rates for sparse additive models over kernel classes via convex programming
Raskutti, Garvesh; Yu, Bin
2010-01-01
Sparse additive models are families of $d$-variate functions that have the additive decomposition $f^* = \sum_{j \in S} f^*_j$, where $S$ is an unknown subset of cardinality $s \ll d$. We consider the case where each component function $f^*_j$ lies in a reproducing kernel Hilbert space, and analyze a simple kernel-based convex program for estimating the unknown function $f^*$. Working within a high-dimensional framework that allows both the dimension $d$ and sparsity $s$ to scale, we derive convergence rates in the $L^2(\mathbb{P})$ and $L^2(\mathbb{P}_n)$ norms. These rates consist of two terms: a subset selection term of the order $\frac{s \log d}{n}$, corresponding to the difficulty of finding the unknown $s$-sized subset, and an estimation error term of the order $s$…
Utilization of sulfate additives in biomass combustion: fundamental and modeling aspects
DEFF Research Database (Denmark)
Wu, Hao; Jespersen, Jacob Boll; Grell, Morten Nedergaard;
2013-01-01
Sulfates, such as ammonium sulfate, aluminum sulfate and ferric sulfate, are effective additives for converting the alkali chlorides released from biomass combustion to the less harmful alkali sulfates. Optimization of the use of these additives requires knowledge of their decomposition rate...... and product distribution under high-temperature conditions. In the present work, the decomposition of ammonium sulfate, aluminum sulfate and ferric sulfate was studied in a fast-heating-rate thermogravimetric analyzer in order to derive a kinetic model describing the process. The yields of SO2 and SO3...... of the different sulfates indicated that ammonium sulfate clearly has the strongest sulfation power towards KCl at temperatures below 800 °C, whereas the sulfation power of ferric and aluminum sulfates clearly exceeds that of ammonium sulfate between 900 and 1000 °C. However, feeding gaseous SO3 was found to be most...
Topsoil organic carbon content of Europe, a new map based on a generalised additive model
de Brogniez, Delphine; Ballabio, Cristiano; Stevens, Antoine; Jones, Robert J. A.; Montanarella, Luca; van Wesemael, Bas
2014-05-01
There is an increasing demand for up-to-date spatially continuous organic carbon (OC) data for global environment and climatic modeling. Whilst the current map of topsoil organic carbon content for Europe (Jones et al., 2005) was produced by applying expert-knowledge based pedo-transfer rules on large soil mapping units, the aim of this study was to replace it by applying digital soil mapping techniques on the first European harmonised geo-referenced topsoil (0-20 cm) database, which arises from the LUCAS (land use/cover area frame statistical survey) survey. A generalized additive model (GAM) was calibrated on 85% of the dataset (ca. 17 000 soil samples), and a backward stepwise approach selected slope, land cover, temperature, net primary productivity, latitude and longitude as environmental covariates (500 m resolution). The validation of the model (applied to 15% of the dataset) gave an R2 of 0.27. We observed that most organic soils were under-predicted by the model and that soils of Scandinavia were also poorly predicted. The model showed an RMSE of 42 g kg-1 for mineral soils and of 287 g kg-1 for organic soils. The map of predicted OC content showed the lowest values in Mediterranean countries and in croplands across Europe, whereas the highest OC contents were predicted in wetlands, woodlands and in mountainous areas. The map of standard error of the OC model predictions showed high values in northern latitudes, wetlands, moors and heathlands, whereas low uncertainty was mostly found in croplands. A comparison of our results with the map of Jones et al. (2005) showed a general agreement on the prediction of mineral soils' OC content, most probably because the models use some common covariates, namely land cover and temperature. Our model however failed to predict values of OC content greater than 200 g kg-1, which we explain by the imposed unimodal distribution of our model, whose mean is tilted towards the majority of soils, which are mineral. Finally, average
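GAMs of this kind are typically fitted by backfitting: each smooth term is re-estimated against the partial residuals of the other terms until convergence. A minimal numpy sketch of that idea follows, with a crude running-mean smoother standing in for the penalized splines a production GAM would use; data and names are illustrative, not the LUCAS dataset:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500
x1 = rng.uniform(-2, 2, n)                       # e.g. a terrain covariate
x2 = rng.uniform(-2, 2, n)                       # e.g. a climate covariate
y = np.sin(x1) + 0.5 * x2**2 + rng.normal(0, 0.1, n)

def smooth(x, r, frac=0.15):
    """Running-mean smoother of residuals r against x (a stand-in for splines)."""
    order = np.argsort(x)
    k = max(3, int(frac * len(x)))
    sm = np.convolve(r[order], np.ones(k) / k, mode="same")
    out = np.empty_like(sm)
    out[order] = sm
    return out - sm.mean()        # centre each term so the intercept is identifiable

# Backfitting: cycle through predictors, smoothing partial residuals.
f1, f2 = np.zeros(n), np.zeros(n)
alpha = y.mean()
for _ in range(20):
    f1 = smooth(x1, y - alpha - f2)
    f2 = smooth(x2, y - alpha - f1)

resid = y - alpha - f1 - f2
print(round(resid.var() / y.var(), 3))  # fraction of variance left unexplained
```

The additive structure lets each covariate's nonlinear effect be estimated one dimension at a time, which is what makes GAMs tractable with many covariates.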
Han, Seung-Ryong; Guikema, Seth D; Quiring, Steven M
2009-10-01
Electric power is a critical infrastructure service after hurricanes, and rapid restoration of electric power is important in order to minimize losses in the impacted areas. However, rapid restoration of electric power after a hurricane depends on obtaining the necessary resources, primarily repair crews and materials, before the hurricane makes landfall and then appropriately deploying these resources as soon as possible after the hurricane. This, in turn, depends on having sound estimates of both the overall severity of the storm and the relative risk of power outages in different areas. Past studies have developed statistical, regression-based approaches for estimating the number of power outages in advance of an approaching hurricane. However, these approaches have either not been applicable for future events or have had lower predictive accuracy than desired. This article shows that a different type of regression model, a generalized additive model (GAM), can outperform the types of models used previously. This is done by developing and validating a GAM based on power outage data during past hurricanes in the Gulf Coast region and comparing the results from this model to the previously used generalized linear models.
Analysis of two-phase sampling data with semiparametric additive hazards models.
Sun, Yanqing; Qian, Xiyuan; Shou, Qiong; Gilbert, Peter B
2017-07-01
Under the case-cohort design introduced by Prentice (Biometrika 73:1-11, 1986), the covariate histories are ascertained only for the subjects who experience the event of interest (i.e., the cases) during the follow-up period and for a relatively small random sample from the original cohort (i.e., the subcohort). The case-cohort design has been widely used in clinical and epidemiological studies to assess the effects of covariates on failure times. Most statistical methods developed for the case-cohort design use the proportional hazards model, and few methods allow for time-varying regression coefficients. In addition, most methods disregard data from subjects outside of the subcohort, which can result in inefficient inference. Addressing these issues, this paper proposes an estimation procedure for the semiparametric additive hazards model with case-cohort/two-phase sampling data, allowing the covariates of interest to be missing for cases as well as for non-cases. A more flexible form of the additive model is considered that allows the effects of some covariates to be time varying while specifying the effects of others to be constant. An augmented inverse probability weighted estimation procedure is proposed. The proposed method allows utilizing the auxiliary information that correlates with the phase-two covariates to improve efficiency. The asymptotic properties of the proposed estimators are established. An extensive simulation study shows that the augmented inverse probability weighted estimation is more efficient than the widely adopted inverse probability weighted complete-case estimation method. The method is applied to analyze data from a preventive HIV vaccine efficacy trial.
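The inverse probability weighting at the core of such procedures can be illustrated on a much simpler task than the additive hazards model: estimating a mean when an expensive "phase-two" covariate is measured only on a subsample whose selection probability depends on phase-one data. Everything below is a hypothetical sketch, not the paper's estimator:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 20000
z = rng.normal(size=n)              # always-observed (phase-one) covariate
x = z + rng.normal(size=n)          # phase-two covariate, expensive to measure
# Selection into "phase two" depends on z, mimicking a two-phase design;
# the selection probability is known by design.
p = 1 / (1 + np.exp(-z))
sel = rng.random(n) < p

# The naive complete-case mean of x is biased upward (high-z units are
# over-represented); weighting each selected unit by 1/p removes the bias.
naive = x[sel].mean()
ipw = np.sum(x[sel] / p[sel]) / np.sum(1 / p[sel])
print(round(naive, 3), round(ipw, 3))
```

The true mean of x here is 0; the weighted (Hajek) estimate recovers it while the complete-case mean does not. The paper's augmentation term further exploits phase-one auxiliaries to shrink the variance of this kind of weighted estimator.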
AlRamadan, Abdullah S.
2015-10-01
The demand for fuels with high anti-knock quality has historically been rising, and will continue to increase with the development of downsized and turbocharged spark-ignition engines. Butanol isomers, such as 2-butanol and tert-butanol, have high octane ratings (RON of 105 and 107, respectively), and thus mixed butanols (68.8% by volume of 2-butanol and 31.2% by volume of tert-butanol) can be added to conventional petroleum-derived gasoline fuels to improve octane performance. In the present work, the effect of mixed butanols addition to gasoline surrogates has been investigated in a high-pressure shock tube facility. The ignition delay times of stoichiometric mixed butanols mixtures were measured at 20 and 40 bar over a temperature range of 800-1200 K. Next, 10 vol% and 20 vol% of mixed butanols (MB) were blended with two different toluene/n-heptane/iso-octane (TPRF) fuel blends having octane ratings of RON 90/MON 81.7 and RON 84.6/MON 79.3. These MB/TPRF mixtures were investigated in the shock tube at conditions similar to those mentioned above. A chemical kinetic model was developed to simulate the low- and high-temperature oxidation of mixed butanols and MB/TPRF blends. The proposed model is in good agreement with the experimental data, with some deviations at low temperatures. The effect of mixed butanols addition to TPRFs is marginal when examining the ignition delay times at high temperatures. However, when extended to lower temperatures (T < 850 K), the model shows that the mixed butanols addition to TPRFs causes the ignition delay times to increase and hence behaves like an octane booster at engine-like conditions. © 2015 The Combustion Institute.
Sun, Xiaowei; Li, Wei; Xie, Yulei; Huang, Guohe; Dong, Changjuan; Yin, Jianguang
2016-11-01
A model based on economic structure adjustment and pollutant mitigation was proposed and applied in Urumqi. Best-worst case analysis and scenario analysis were performed in the model to guarantee the accuracy of the parameters and to analyze the effect of changes in emission reduction styles. Results indicated that pollutant mitigation in the electric power industry, the iron and steel industry, and traffic relied mainly on technological transformation measures, engineering transformation measures and structural emission reduction measures, respectively; pollutant mitigation in the cement industry relied mainly on structural emission reduction measures and technological transformation measures; and pollutant mitigation in the thermal industry relied mainly on all four mitigation measures. They also indicated that structural emission reduction was a better measure for pollutant mitigation in Urumqi. The iron and steel industry contributed greatly to SO2, NOx and PM (particulate matter) emission reduction and should be given special attention. In addition, the scale of the iron and steel industry should be reduced with the decrease of SO2 mitigation amounts; the scales of traffic and the electric power industry should be reduced with the decrease of NOx mitigation amounts; and the scales of the cement industry and the iron and steel industry should be reduced with the decrease of PM mitigation amounts. The study can provide references for pollutant mitigation schemes to decision-makers for regional economic and environmental development in the 12th Five-Year Plan on National Economic and Social Development of Urumqi.
Directory of Open Access Journals (Sweden)
Mentz Graciela
2012-09-01
Full Text Available Abstract Background Accurate estimates of hypertension prevalence are critical for assessment of population health and for planning and implementing prevention and health care programs. While self-reported data is often more economically feasible and readily available compared to clinically measured HBP, these reports may underestimate clinical prevalence to varying degrees. Understanding the accuracy of self-reported data and developing prediction models that correct for underreporting of hypertension in self-reported data can be critical tools in the development of more accurate population level estimates, and in planning population-based interventions to reduce the risk of, or more effectively treat, hypertension. This study examines the accuracy of self-reported survey data in describing prevalence of clinically measured hypertension in two racially and ethnically diverse urban samples, and evaluates a mechanism to correct self-reported data in order to more accurately reflect clinical hypertension prevalence. Methods We analyze data from the Detroit Healthy Environments Partnership (HEP) Survey conducted in 2002 and the National Health and Nutrition Examination Survey (NHANES) 2001–2002, restricted to urban areas and participants 25 years and older. We re-calibrate measures of agreement within the HEP sample drawing upon parameter estimates derived from the NHANES urban sample, and assess the quality of the adjustment proposed within the HEP sample. Results Both self-reported and clinically assessed prevalence of hypertension were higher in the HEP sample (29.7 and 40.1, respectively) compared to the NHANES urban sample (25.7 and 33.8, respectively). In both urban samples, self-reported and clinically assessed prevalence is higher than that reported in the full NHANES sample in the same year (22.9 and 30.4, respectively). Sensitivity, specificity and accuracy between clinical and self-reported hypertension prevalence were ‘moderate to good’ within
Micellar Effects on Nucleophilic Addition Reaction and Applicability of Enzyme Catalysis Model
Directory of Open Access Journals (Sweden)
R. K. London Singh
2012-01-01
Full Text Available This study describes the effect of anionic and cationic micelles on the nucleophilic addition reaction of rosaniline hydrochloride (RH) with hydroxide under pseudo-first-order conditions. A strong inhibitory effect is observed with the SDS micelle, whereas CTAB catalysed the reaction. This is explained on the basis of electrostatic and hydrophobic interactions which operate simultaneously in the reaction system. The kinetic data obtained are quantitatively analysed by applying the positive cooperativity model of enzyme catalysis. Binding constants and the influence of counterions on the reaction have also been investigated.
An Empirical Research on the Model of the Right in Additional Allocation of Stocks
Institute of Scientific and Technical Information of China (English)
Anonymous
2002-01-01
How to define the value of the Right in Additional Allocation of Stocks (RAAS) plays an important role in stock markets, whether or not the shareholders exercise the right. Moreover, the valuation of the RAAS and the exercise price (K) are mutual cause and effect. Based on the literature on this subject, this paper presents a model valuing the RAAS per share. With the public information of the ShenZhen Stock Market, we simulate the RAAS's value of Shenwuye, which is a shareholding corp...
Directory of Open Access Journals (Sweden)
F. Menna
2015-04-01
Full Text Available The surveying and 3D modelling of objects that extend both below and above the water level, such as ships, harbour structures and offshore platforms, is still an open issue. Commonly, a combined and simultaneous survey is the adopted solution, with acoustic/optical sensors respectively underwater and in air (most common) or optical/optical sensors both below and above the water level. In both cases, the system must be calibrated, and a ship must be used and properly equipped, including a navigation system for the alignment of sequential 3D point clouds. Such a system is usually highly expensive and has been proven to work with stationary structures; for free-floating objects, on the other hand, it does not provide a very practical solution. In this contribution, a flexible, low-cost alternative for surveying floating objects is presented. The method is essentially based on photogrammetry, employed for surveying and modelling both the emerged and submerged parts of the object. Special targets, named Orientation Devices, are specifically designed and adopted for the successive alignment of the two photogrammetric models (underwater and in air). A typical scenario where the proposed procedure can be particularly suitable and effective is a ship after an accident whose damaged part is underwater and needs to be measured (Figure 1). The details of the mathematical procedure are provided in the paper, together with a critical explanation of the results obtained from applying the method to the survey of a small pleasure boat in floating condition.
Austin, Peter C.; Reeves, Mathew J.
2015-01-01
Background Hospital report cards, in which outcomes following the provision of medical or surgical care are compared across health care providers, are being published with increasing frequency. Essential to the production of these reports is risk-adjustment, which allows investigators to account for differences in the distribution of patient illness severity across different hospitals. Logistic regression models are frequently used for risk-adjustment in hospital report cards. Many applied researchers use the c-statistic (equivalent to the area under the receiver operating characteristic curve) of the logistic regression model as a measure of the credibility and accuracy of hospital report cards. Objectives To determine the relationship between the c-statistic of a risk-adjustment model and the accuracy of hospital report cards. Research Design Monte Carlo simulations were used to examine this issue. We examined the influence of three factors on the accuracy of hospital report cards: the c-statistic of the logistic regression model used for risk-adjustment, the number of hospitals, and the number of patients treated at each hospital. The parameters used to generate the simulated datasets came from analyses of patients hospitalized with a diagnosis of acute myocardial infarction in Ontario, Canada. Results The c-statistic of the risk-adjustment model had, at most, a very modest impact on the accuracy of hospital report cards, whereas the number of patients treated at each hospital had a much greater impact. Conclusions The c-statistic of a risk-adjustment model should not be used to assess the accuracy of a hospital report card. PMID:23295579
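For reference, the c-statistic is the probability that a randomly chosen case receives a higher predicted risk than a randomly chosen non-case (ties counting one half). A small self-contained sketch with simulated risks, illustrative only and unrelated to the Ontario data:

```python
import numpy as np

rng = np.random.default_rng(2)
# Hypothetical risk-adjustment model output: predicted probabilities and outcomes.
n = 5000
risk = rng.uniform(size=n)
died = rng.random(n) < risk          # outcome generated from the risk itself

def c_statistic(p, y):
    """Probability that a random case outranks a random non-case (AUC)."""
    pos, neg = p[y], p[~y]
    # Compare every case with every non-case; ties count one half.
    greater = (pos[:, None] > neg[None, :]).mean()
    ties = (pos[:, None] == neg[None, :]).mean()
    return greater + 0.5 * ties

print(round(c_statistic(risk, died), 3))
```

The study's point is that this single discrimination number says little about whether a report card correctly ranks hospitals, which depends far more on per-hospital sample sizes.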
Hybrid 2D-3D modelling of GTA welding with filler wire addition
Traidia, Abderrazak
2012-07-01
A hybrid 2D-3D model for the numerical simulation of Gas Tungsten Arc welding is proposed in this paper. It offers the possibility to predict the temperature field as well as the shape of the solidified weld joint for different operating parameters, with relatively good accuracy and reasonable computational cost. Also, an original approach to simulate the effect of immersing a cold filler wire in the weld pool is presented. The simulation results reveal two important observations. First, the weld pool depth is locally decreased in the presence of filler metal, which is due to the energy absorption by the cold feeding wire from the hot molten pool. In addition, the weld shape, maximum temperature and thermal cycles in the workpiece are relatively well predicted even when a 2D model for the arc plasma region is used. © 2012 Elsevier Ltd. All rights reserved.
Tarone, Aaron M; Foran, David R
2008-07-01
Forensic entomologists use blow fly development to estimate a postmortem interval. Although accurate, fly age estimates can be imprecise for older developmental stages and no standard means of assigning confidence intervals exists. Presented here is a method for modeling growth of the forensically important blow fly Lucilia sericata, using generalized additive models (GAMs). Eighteen GAMs were created to predict the extent of juvenile fly development, encompassing developmental stage, length, weight, strain, and temperature data, collected from 2559 individuals. All measures were informative, explaining up to 92.6% of the deviance in the data, though strain and temperature exerted negligible influences. Predictions made with an independent data set allowed for a subsequent examination of error. Estimates using length and developmental stage were within 5% of true development percent during the feeding portion of the larval life cycle, while predictions for postfeeding third instars were less precise, but within expected error.
Dynamic Behavior of Artificial Hodgkin-Huxley Neuron Model Subject to Additive Noise.
Kang, Qi; Huang, BingYao; Zhou, MengChu
2016-09-01
Motivated by neuroscience discoveries during the last few years, many studies consider pulse-coupled neural networks with spike timing as an essential component in information processing by the brain. There also exist some technical challenges in simulating networks of artificial spiking neurons. The existing studies use a Hodgkin-Huxley (H-H) model to describe the spiking dynamics and neuro-computational properties of each neuron, but they fail to address the effect of specific non-Gaussian noise on an artificial H-H neuron system. This paper aims to analyze how an artificial H-H neuron responds to the addition of different types of noise, using electrical current and subunit noise models. The spiking and bursting behavior of this neuron is also investigated through numerical simulations. In addition, through statistical analysis, the intensity of different kinds of noise distributions is discussed to obtain their relationship with the mean firing rate, interspike intervals, and stochastic resonance.
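The full stochastic H-H system is lengthy, but the additive current-noise mechanism can be sketched with a leaky integrate-and-fire stand-in integrated by the Euler-Maruyama method. All parameter values below are illustrative, not taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(3)
# Euler-Maruyama integration of a leaky integrate-and-fire neuron with an
# additive Gaussian current-noise term -- a stripped-down stand-in for the
# full Hodgkin-Huxley equations discussed in the paper.
dt, T = 0.1, 1000.0                        # time step and horizon (ms)
tau, v_rest, v_th, v_reset = 20.0, -65.0, -50.0, -65.0
I, sigma = 18.0, 3.0                       # mean drive and noise intensity

v = v_rest
spikes = []
t = 0.0
while t < T:
    # deterministic leak + drive, plus additive noise scaled by sqrt(dt)
    v += (-(v - v_rest) + I) / tau * dt + sigma * np.sqrt(dt) * rng.normal()
    if v >= v_th:                          # threshold crossing = spike, then reset
        spikes.append(t)
        v = v_reset
    t += dt

isi = np.diff(spikes)                      # interspike intervals
print(len(spikes), round(float(np.mean(isi)), 1))
```

Statistics such as the mean firing rate and the interspike-interval distribution, which the paper analyzes for the H-H neuron, can be read directly off `spikes` and `isi` here; swapping the Gaussian draw for a heavy-tailed one is how a non-Gaussian noise variant would be probed.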
Phase-Field Modeling of Microstructure Evolution in Electron Beam Additive Manufacturing
Gong, Xibing; Chou, Kevin
2015-05-01
In this study, the microstructure evolution in the powder-bed electron beam additive manufacturing (EBAM) process is studied using phase-field modeling. In essence, EBAM involves a rapid solidification process and the properties of a build partly depend on the solidification behavior as well as the microstructure of the build material. Thus, the prediction of microstructure evolution in EBAM is of importance for its process optimization. Phase-field modeling was applied to study the microstructure evolution and solute concentration of the Ti-6Al-4V alloy in the EBAM process. The effect of undercooling was investigated through the simulations; the greater the undercooling, the faster the dendrite grows. The microstructure simulations show multiple columnar-grain growths, comparable with experimental results for the tested range.
Chang, E C; Sanna, L J
2001-09-01
This study attempted to address limitations in the understanding of optimism and pessimism among middle-aged adults. Specifically, a model of affectivity as a mediator of the link between outcome expectancies and psychological adjustment (life satisfaction and depressive symptoms) was presented and examined in a sample of 237 middle-aged adults. Consistent with a mediation model, results of path analyses indicated that optimism and pessimism (particularly the former) had significant direct and indirect links (by means of positive and negative affectivity) with depressive symptoms and life satisfaction. These results add to the small but growing literature identifying optimism and pessimism as important concomitants of psychological adjustment in more mature adults.
Guarana Provides Additional Stimulation over Caffeine Alone in the Planarian Model
Moustakas, Dimitrios; Mezzio, Michael; Rodriguez, Branden R.; Constable, Mic Andre; Mulligan, Margaret E.; Voura, Evelyn B.
2015-01-01
The stimulant effect of energy drinks is primarily attributed to the caffeine they contain. Many energy drinks also contain other ingredients that might enhance the tonic effects of these caffeinated beverages. One of these additives is guarana. Guarana is a climbing plant native to the Amazon whose seeds contain approximately four times the amount of caffeine found in coffee beans. The mix of other natural chemicals contained in guarana seeds is thought to heighten the stimulant effects of guarana over caffeine alone. Yet, despite the growing use of guarana as an additive in energy drinks, and a burgeoning market for it as a nutritional supplement, the science examining guarana and how it affects other dietary ingredients is lacking. To appreciate the stimulant effects of guarana and other natural products, a straightforward model to investigate their physiological properties is needed. The planarian provides such a system. The locomotor activity and convulsive response of planarians with substance exposure has been shown to provide an excellent system to measure the effects of drug stimulation, addiction and withdrawal. To gauge the stimulant effects of guarana we studied how it altered the locomotor activity of the planarian species Dugesia tigrina. We report evidence that guarana seeds provide additional stimulation over caffeine alone, and document the changes to this stimulation in the context of both caffeine and glucose. PMID:25880065
Quantifying spatial disparities in neonatal mortality using a structured additive regression model.
Directory of Open Access Journals (Sweden)
Lawrence N Kazembe
Full Text Available BACKGROUND: Neonatal mortality contributes a large proportion towards early childhood mortality in developing countries, with considerable geographical variation at small areas within countries. METHODS: A geo-additive logistic regression model is proposed for quantifying small-scale geographical variation in neonatal mortality and for estimating risk factors of neonatal mortality. Random effects are introduced to capture spatial correlation and heterogeneity. The spatial correlation can be modelled using Markov random fields (MRF) when data are aggregated, while two-dimensional P-splines apply when exact locations are available; the unstructured spatial effects are assigned an independent Gaussian prior. Socio-economic and bio-demographic factors which may affect the risk of neonatal mortality are simultaneously estimated as fixed effects and as nonlinear effects for continuous covariates. The smooth effects of continuous covariates are modelled by second-order random walk priors. Modelling and inference use the empirical Bayesian approach via the penalized likelihood technique. The methodology is applied to analyse the likelihood of neonatal deaths, using data from the 2000 Malawi demographic and health survey. The spatial effects are quantified through MRF and two-dimensional P-spline priors. RESULTS: Findings indicate that both fixed and spatial effects are associated with neonatal mortality. CONCLUSIONS: Our study therefore suggests that the challenge of reducing neonatal mortality goes beyond addressing individual factors, and also requires understanding unmeasured covariates for potentially effective interventions.
Nonparametric Independence Screening in Sparse Ultra-High Dimensional Additive Models.
Fan, Jianqing; Feng, Yang; Song, Rui
2011-06-01
A variable screening procedure via correlation learning was proposed in Fan and Lv (2008) to reduce dimensionality in sparse ultra-high dimensional models. Even when the true model is linear, the marginal regression can be highly nonlinear. To address this issue, we further extend the correlation learning to marginal nonparametric learning. Our nonparametric independence screening is called NIS, a specific member of the sure independence screening. Several closely related variable screening procedures are proposed. Under general nonparametric models, it is shown that under some mild technical conditions, the proposed independence screening methods enjoy a sure screening property. The extent to which the dimensionality can be reduced by independence screening is also explicitly quantified. As a methodological extension, a data-driven thresholding and an iterative nonparametric independence screening (INIS) are also proposed to enhance the finite sample performance for fitting sparse additive models. The simulation results and a real data analysis demonstrate that the proposed procedure works well with moderate sample size and large dimension and performs better than competing methods.
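The screening idea can be sketched directly: score each variable by how well a marginal nonparametric fit explains the response, then retain only the top-ranked ones. In the sketch below a cubic polynomial fit stands in for the spline smoothers of NIS, and the data-generating setup and cutoff are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(4)
n, d = 400, 50
X = rng.uniform(-1, 1, size=(n, d))
# Sparse additive truth: only variables 0 and 1 matter, both nonlinearly.
y = np.sin(3 * X[:, 0]) + X[:, 1] ** 2 + rng.normal(0, 0.2, n)

def marginal_r2(x, y, deg=3):
    """R^2 of a marginal polynomial fit -- a crude stand-in for spline smoothing."""
    coef = np.polyfit(x, y, deg)
    resid = y - np.polyval(coef, x)
    return 1 - resid.var() / y.var()

scores = np.array([marginal_r2(X[:, j], y) for j in range(d)])
keep = np.argsort(scores)[::-1][:5]       # retain the top-ranked variables
print(sorted(keep.tolist()))
```

Note that the marginal fit for variable 1 is nonlinear even though its effect enters the model additively, which is exactly why the screening step must be nonparametric rather than a linear correlation.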
Combining neuroprotectants in a model of retinal degeneration: no additive benefit.
Directory of Open Access Journals (Sweden)
Fabiana Di Marco
Full Text Available The central nervous system undergoing degeneration can be stabilized, and in some models can be restored to function, by neuroprotective treatments. Photobiomodulation (PBM and dietary saffron are distinctive as neuroprotectants in that they upregulate protective mechanisms, without causing measurable tissue damage. This study reports a first attempt to combine the actions of PBM and saffron. Our working hypothesis was that the actions of PBM and saffron in protecting retinal photoreceptors, in a rat light damage model, would be additive. Results confirmed the neuroprotective potential of each used separately, but gave no evidence that their effects are additive. Detailed analysis suggests that there is actually a negative interaction between PBM and saffron when given simultaneously, with a consequent reduction of the neuroprotection. Specific testing will be required to understand the mechanisms involved and to establish whether there is clinical potential in combining neuroprotectants, to improve the quality of life of people affected by retinal pathology, such as age-related macular degeneration, the major cause of blindness and visual impairment in older adults.
Chan, H S
2000-09-01
A well-established experimental criterion for two-state thermodynamic cooperativity in protein folding is that the van't Hoff enthalpy DeltaH(vH) around the transition midpoint is equal, or very nearly so, to the calorimetric enthalpy DeltaH(cal) of the entire transition. This condition is satisfied by many small proteins. We use simple lattice models to provide a statistical mechanical framework to elucidate how this calorimetric two-state picture may be reconciled with the hierarchical multistate scenario emerging from recent hydrogen exchange experiments. We investigate the feasibility of using inverse Laplace transforms to recover the underlying density of states (i.e., enthalpy distribution) from calorimetric data. We find that the constraint imposed by DeltaH(vH)/DeltaH(cal) approximately 1 on densities of states of proteins is often more stringent than other "two-state" criteria proposed in recent theoretical studies. In conjunction with reasonable assumptions, the calorimetric two-state condition implies a narrow distribution of denatured-state enthalpies relative to the overall enthalpy difference between the native and the denatured conformations. This requirement does not always correlate with simple definitions of "sharpness" of a transition and has important ramifications for theoretical modeling. We find that protein models that assume capillarity cooperativity can exhibit overall calorimetric two-state-like behaviors. However, common heteropolymer models based on additive hydrophobic-like interactions, including highly specific two-dimensional Gō models, fail to produce proteinlike DeltaH(vH)/DeltaH(cal) approximately 1. A simple model is constructed to illustrate a proposed scenario in which physically plausible local and nonlocal cooperative terms, which mimic helical cooperativity and environment-dependent hydrogen bonding strength, can lead to thermodynamic behaviors closer to experiment. Our results suggest that proteinlike thermodynamic
Institute of Scientific and Technical Information of China (English)
郑蓉建; 周林成; 潘丰
2012-01-01
Fault monitoring of a bioprocess is important to ensure the safety of a reactor and maintain high product quality. It is difficult to build an accurate mechanistic model for a bioprocess, so fault monitoring based on a rich historical or online database is an effective alternative. A group of data can be resampled stochastically with the bootstrap method, improving the generalization capability of a model. In this paper, online fault monitoring using generalized additive models (GAMs) combined with the bootstrap is proposed for the glutamate fermentation process. GAMs and the bootstrap are first used to determine confidence intervals based on the online and off-line normal sampled data from glutamate fermentation experiments. GAMs are then used for online fault monitoring of time, dissolved oxygen, oxygen uptake rate, and carbon dioxide evolution rate. The method provides accurate fault alarms online and is helpful in providing useful information for removing faults and abnormal phenomena in the fermentation.
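The bootstrap step can be sketched as follows: resample the normal-operation data to obtain a confidence band for a monitored statistic, then flag online readings that fall outside the band. Variable names and values are illustrative, not from the glutamate experiments:

```python
import numpy as np

rng = np.random.default_rng(5)
# Hypothetical "normal operation" measurements of a fermentation variable
# (e.g. oxygen uptake rate); values are illustrative only.
normal_data = rng.normal(loc=30.0, scale=2.0, size=100)

# Bootstrap a (2.5%, 97.5%) band for the mean under normal operation.
B = 2000
boot_means = np.array([
    rng.choice(normal_data, size=normal_data.size, replace=True).mean()
    for _ in range(B)
])
lo, hi = np.percentile(boot_means, [2.5, 97.5])

def is_fault(x):
    """Flag an online reading that falls outside the bootstrap band."""
    return not (lo <= x <= hi)

print(round(lo, 2), round(hi, 2), is_fault(40.0))
```

In the paper the band is built around GAM predictions rather than a raw mean, but the resampling logic is the same: the bootstrap supplies the interval without requiring a mechanistic model of the process.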
Directory of Open Access Journals (Sweden)
IRNANDA AIKO FIFI DJUUNA
2010-07-01
Full Text Available Djuuna IAF, Abbott LK, Van Niel K (2010) Predicting infectivity of Arbuscular Mycorrhizal fungi from soil variables using Generalized Additive Models and Generalized Linear Models. Biodiversitas 11: 145-150. The objective of this study was to predict the infectivity of arbuscular mycorrhizal (AM) fungi from field soil based on soil properties and land use history using generalized additive models (GAMs) and generalized linear models (GLMs). A total of 291 soil samples from a farm in Western Australia near Wickepin were collected and used in this study. Nine soil properties, including elevation, pH, EC, total C, total N, P, K, microbial biomass carbon, and soil texture, and the land use history of the farm were used as independent variables, while the percentage of root length colonized (%RLC) was used as the dependent variable. GAMs parameterized for the percent of root length colonized suggested skewed quadratic responses to soil pH and microbial biomass carbon; cubic responses to elevation and soil K; and linear responses to soil P, EC and total C. The strength of the relationship between percent root length colonized by AM fungi and environmental variables showed that only elevation, total C and microbial biomass carbon had strong relationships. In general, the GAM and GLM models confirmed the strong relationship between infectivity of AM fungi (assessed in a glasshouse bioassay for soil collected in summer prior to the first rain of the season) and soil properties.
Speckman, McGlory
2016-01-01
Predicated on the principles of success and contextuality, this chapter shares an African perspective on a first-year adjustment programme, known as the First-Year Village, including its potential and the challenges of establishing it.
Quasi-additive estimates on the Hamiltonian for the one-dimensional long range Ising model
Littin, Jorge; Picco, Pierre
2017-07-01
In this work, we study the problem of obtaining quasi-additive bounds for the Hamiltonian of the long range Ising model, when the two-body interaction term decays proportionally to 1/d^(2-α), α ∈ (0,1). We revisit the paper by Cassandro et al. [J. Math. Phys. 46, 053305 (2005)], where they extend to the case α ∈ [0, ln3/ln2 - 1) the result of the existence of a phase transition by using a Peierls argument given by Fröhlich and Spencer [Commun. Math. Phys. 84, 87-101 (1982)] for α = 0. The main arguments of Cassandro et al. [J. Math. Phys. 46, 053305 (2005)] are based on a quasi-additive decomposition of the Hamiltonian in terms of hierarchical structures called triangles and contours, which are related to the original definition of contours introduced by Fröhlich and Spencer [Commun. Math. Phys. 84, 87-101 (1982)]. In this work, we study the existence of a quasi-additive decomposition of the Hamiltonian in terms of the contours defined in the work of Cassandro et al. [J. Math. Phys. 46, 053305 (2005)]. The most relevant result obtained is Theorem 4.3, where we show that there is a quasi-additive decomposition for the Hamiltonian in terms of contours when α ∈ [0,1), but not in terms of triangles. The fact that there cannot be a quasi-additive bound in terms of triangles leads to a very interesting maximization problem whose maximizer is related to a discrete Cantor set. As a consequence of the quasi-additive bounds, we prove that the Peierls argument of Cassandro et al. [J. Math. Phys. 46, 053305 (2005)] generalises to the whole interval α ∈ [0,1). We also state here the result of Cassandro et al. [Commun. Math. Phys. 327, 951-991 (2014)] about cluster expansions, which implies that Theorem 2.4, concerning interfaces, and Theorem 2.5, concerning n-point truncated correlation functions, in Cassandro et al. [Commun. Math. Phys. 327, 951-991 (2014)] are valid for all α ∈ [0,1) instead of only α ∈ [0, ln3/ln2 - 1).
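The interaction in question can be written down directly. The sketch below is illustrative only: it evaluates the long-range Ising Hamiltonian H = -Σ_{i<j} s_i s_j / |i-j|^(2-α) on a finite chain and checks that the energy cost of flipping a centered block of spins (a simple stand-in for a contour) is positive and grows with the block length for α = 0.5:

```python
import numpy as np

def energy(spins, alpha):
    """H = -sum_{i<j} s_i s_j / (j - i)^(2 - alpha) on a finite chain."""
    n = len(spins)
    E = 0.0
    for i in range(n):
        for j in range(i + 1, n):
            E -= spins[i] * spins[j] / (j - i) ** (2.0 - alpha)
    return E

n, alpha = 60, 0.5
ground = np.ones(n, dtype=int)
E0 = energy(ground, alpha)

def excitation_cost(block_len):
    """Energy cost of flipping a centered block of spins."""
    s = ground.copy()
    start = (n - block_len) // 2
    s[start:start + block_len] *= -1
    return energy(s, alpha) - E0

costs = [excitation_cost(L) for L in (2, 4, 8, 16)]
print([round(c, 3) for c in costs])
```

The growth of this cost with block length (roughly like L^α for α ∈ (0,1)) is what makes a Peierls-type argument possible in this regime.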
Institute of Scientific and Technical Information of China (English)
XIA Shenzhen; KE Changqing; ZHOU Xiaobing; ZHANG Jie
2016-01-01
The in situ sea surface salinity (SSS) measurements from a scientific cruise to the western zone of the southeast Indian Ocean covering 30°-60°S, 80°-120°E are used to assess the SSS retrieved from Aquarius (Aquarius SSS). Wind speed and sea surface temperature (SST) affect the SSS estimates based on passive microwave radiation within the mid- to low-latitude southeast Indian Ocean. The relationships among the in situ SSS, Aquarius SSS and wind-SST corrections are used to adjust the Aquarius SSS. The adjusted Aquarius SSS is compared with the SSS data from the MyOcean model. Results show that: (1) Before adjustment, the Aquarius SSS in most of the sea areas is higher than the MyOcean SSS, but lower in the low-temperature sea areas located south of 55°S and west of 98°E. The Aquarius SSS is generally higher by 0.42 on average for the southeast Indian Ocean. (2) After adjustment, the impact of high wind speeds is greatly counteracted and the overall accuracy of the retrieved salinity improves (the mean absolute error of the zonal mean is improved by 0.06, and the mean error is -0.05 compared with the MyOcean SSS). Near latitude 42°S, the adjusted SSS agrees well with the MyOcean SSS, with a difference of approximately 0.004.
Oil Shocks and Macroeconomic Adjustment: a DSGE modeling approach for the Case of Libya, 1970–2007
Directory of Open Access Journals (Sweden)
Issa Ali
2011-01-01
Full Text Available Libya experienced a substantial increase in oil revenue as a result of increased oil prices during the period of the late 1970s and early 1980s, and again after 2000. Recent increases in oil production and the price of oil, and their positive and negative macroeconomic impacts upon key macroeconomic variables, are of considerable contemporary importance to an oil dependent economy such as that of Libya. In this paper a dynamic macroeconomic model is developed for Libya to evaluate the effects of additional oil revenue, arising from positive oil production and oil price shocks, upon key macroeconomic variables, including the real exchange rate. It takes into consideration the impact of oil revenue upon the non-oil trade balance, foreign asset stock, physical capital stock, human capital stock, imported capital stock and non-oil production. Model simulation results indicate that additional oil revenue brings about: an increase in government revenue, increased government spending in the domestic economy, increased foreign asset stocks, increased output and wages in the non oil sector. However, increased oil revenue may also produce adverse consequences, particularly upon the non-oil trade balance, arising from a loss of competitiveness of non-oil tradable goods induced by an appreciation of the real exchange rate and increased imports stimulated by increased real income. Model simulation results also suggest that investment stimulating policy measures by government produce the most substantive benefits for the economy.
Impact of an additional chronic BDNF reduction on learning performance in an Alzheimer mouse model
Directory of Open Access Journals (Sweden)
Laura ePsotta
2015-03-01
Full Text Available There is increasing evidence that brain-derived neurotrophic factor (BDNF) plays a crucial role in AD pathology. A number of studies demonstrated that AD patients exhibit reduced BDNF levels in the brain and the blood serum, and in addition, several animal-based studies indicated a potential protective effect of BDNF against Aβ-induced neurotoxicity. In order to further investigate the role of BDNF in the etiology of AD, we created a novel mouse model by crossing a well-established AD mouse model (APP/PS1) with a mouse exhibiting a chronic BDNF deficiency (BDNF+/-). This new triple transgenic mouse model enabled us to further analyze the role of BDNF in AD in vivo. We reasoned that in case BDNF has a protective effect against AD pathology, an AD-like phenotype in our new mouse model should occur earlier and/or in more severity than in the APP/PS1 mice. Indeed, the behavioral analysis revealed that the APP/PS1-BDNF+/- mice show an earlier onset of learning impairments in a two-way active avoidance task in comparison to APP/PS1 and BDNF+/- mice. However, in the Morris water maze test we could not observe an overall aggravated impairment in spatial learning, and short-term memory in an object recognition task also remained intact in all tested mouse lines. In addition to the behavioral experiments, we analyzed the amyloid plaque pathology in the APP/PS1 and APP/PS1-BDNF+/- mice and observed a comparable plaque density in the two genotypes. Moreover, our results revealed a higher plaque density in prefrontal cortical compared to hippocampal brain regions. Our data reveal that higher cognitive tasks requiring the recruitment of cortical networks appear to be more severely affected in our new mouse model than learning tasks requiring mainly sub-cortical networks. Furthermore, our observation of an accelerated impairment in active avoidance learning in APP/PS1-BDNF+/- mice further supports the hypothesis that BDNF deficiency amplifies AD
Nonlinear feedback in a six-dimensional Lorenz Model: impact of an additional heating term
Directory of Open Access Journals (Sweden)
B.-W. Shen
2015-03-01
Full Text Available In this study, a six-dimensional Lorenz model (6DLM) is derived, based on a recent study using a five-dimensional Lorenz model (5DLM), in order to examine the impact of an additional mode and its accompanying heating term on solution stability. The new mode added to improve the representation of the streamfunction is referred to as a secondary streamfunction mode, while the two additional modes, which appear in both the 6DLM and 5DLM but not in the original LM, are referred to as secondary temperature modes. Two energy conservation relationships of the 6DLM are first derived in the dissipationless limit. The impact of the three additional modes on solution stability is examined by comparing numerical solutions and ensemble Lyapunov exponents of the 6DLM and 5DLM as well as the original LM. For the onset of chaos, the critical value of the normalized Rayleigh number (rc) is determined to be 41.1. The critical value is larger than that in the 3DLM (rc ~ 24.74), but slightly smaller than the one in the 5DLM (rc ~ 42.9). A stability analysis and numerical experiments obtained using generalized LMs, with or without simplifications, suggest the following: (1) negative nonlinear feedback in association with the secondary temperature modes, as first identified using the 5DLM, plays a dominant role in improving the solution stability of the 6DLM; (2) the additional heating term in association with the secondary streamfunction mode may destabilize the solution; and (3) overall feedback due to the secondary streamfunction mode is much smaller than the feedback due to the secondary temperature modes; therefore, the critical Rayleigh number of the 6DLM is comparable to that of the 5DLM. The 5DLM and 6DLM collectively suggest different roles for small-scale processes (i.e., stabilization vs. destabilization), consistent with the following statement by Lorenz (1972): If the flap of a butterfly's wings can be instrumental in generating a tornado, it
Modeling additional solar constraints on a human being inside a room
Energy Technology Data Exchange (ETDEWEB)
Thellier, Francoise; Monchoux, Francoise; Bonnis-Sassi, Michel; Lartigue, Berengere [Laboratoire Physique de l' Homme Appliquee a Son Environnement (PHASE), Universite Paul Sabatier, 118, route de Narbonne, F-31062 Toulouse Cedex 9 (France)
2008-04-15
Sun fluxes induce additional heterogeneous thermal constraints in buildings and may also lead to discomfort for the inhabitant. To calculate the local thermal sensation of a human being totally or partially situated in the sunlight, the solar radiation inside a room and its detailed distribution on parts of the human body are modeled. The present study focuses on the solar gains part of a complete modeling tool simulating an occupied building. The irradiated areas are calculated with a ray tracing method taking shadow into account. Solar fluxes are computed. Fluxes can be absorbed by each surface or reflected. The reflected fluxes are then absorbed at the next impact. A multi-node thermoregulation model (MARCL) represents the thermal behavior of the human body and all its heat exchanges with the environment. The thermal transient simulation of the whole occupied building is performed in TRNSYS simulation software. In the case presented here, the results show that, when a person is inside the building, the skin and clothing temperatures of the irradiated segments increase more or less depending on the segments but the global thermal equilibrium of the body is maintained thanks to strong physiological reactions. (author)
A Bayesian additive model for understanding public transport usage in special events.
Rodrigues, Filipe; Borysov, Stanislav; Ribeiro, Bernardete; Pereira, Francisco
2016-12-02
Public special events, like sports games, concerts and festivals, are well known to create disruptions in transportation systems, often catching the operators by surprise. Although these are usually planned well in advance, their impact is difficult to predict, even when organisers and transportation operators coordinate. The problem is compounded when several events happen concurrently. To solve these problems, costly processes, heavily reliant on manual search and personal experience, are usual practice in large cities like Singapore, London or Tokyo. This paper presents a Bayesian additive model with Gaussian process components that combines smart card records from public transport with context information about events that is continuously mined from the Web. We develop an efficient approximate inference algorithm using expectation propagation, which allows us to predict the total number of public transportation trips to the special event areas, thereby contributing to a more adaptive transportation system. Furthermore, for multiple concurrent event scenarios, the proposed algorithm is able to disaggregate gross trip counts into their most likely components related to specific events and routine behavior. Using real data from Singapore, we show that the presented model outperforms the best baseline model by up to 26% in R2 and also has explanatory power for its individual components.
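A single Gaussian-process regression component of the kind mentioned above can be sketched in plain NumPy. Everything here is illustrative (synthetic "trip count" data, hand-picked kernel hyperparameters); the paper's model additionally combines several such components additively and uses expectation propagation for inference:

```python
import numpy as np

rng = np.random.default_rng(0)

def rbf_kernel(X1, X2, length=1.0, var=900.0):
    """Squared-exponential covariance between two sets of 1-D inputs."""
    d = X1[:, None] - X2[None, :]
    return var * np.exp(-0.5 * (d / length) ** 2)

# Synthetic "trip count" profile: a smooth daily pattern plus noise
X = np.linspace(0.0, 10.0, 40)
y = 100.0 + 30.0 * np.sin(X) + rng.normal(0.0, 3.0, X.size)

noise_var = 3.0 ** 2
K = rbf_kernel(X, X) + noise_var * np.eye(X.size)

# GP posterior mean at new inputs: k(X*, X) @ K^{-1} @ y
X_star = np.array([np.pi / 2, np.pi])
mean = rbf_kernel(X_star, X) @ np.linalg.solve(K, y)
print(mean)
```

The additive structure of the full model lets gross trip counts be decomposed into per-event and routine components, with each component a GP like the one above.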
Saunders, James E; Barrs, David M; Gong, Wenfeng; Wilson, Blake S; Mojica, Karen; Tucci, Debara L
2015-09-01
Cochlear implantation (CI) is a common intervention for severe-to-profound hearing loss in high-income countries, but is not commonly available to children in low resource environments. Owing in part to the device costs, CI has been assumed to be less economical than deaf education for low resource countries. The purpose of this study is to compare the cost effectiveness of the two interventions for children with severe-to-profound sensorineural hearing loss (SNHL) in a model using disability adjusted life years (DALYs). Cost estimates were derived from published data, expert opinion, and known costs of services in Nicaragua. Individual costs and lifetime DALY estimates with a 3% discounting rate were applied to both interventions. Sensitivity analysis was implemented to evaluate the effect on the discounted cost of five key components: implant cost, audiology salary, speech therapy salary, number of children implanted per year, and device failure probability. The costs per DALY averted are $5,898 and $5,529 for CI and deaf education, respectively. Using standards set by the WHO, both interventions are cost effective. Sensitivity analysis shows that when all costs are set to maximum estimates, CI is still cost effective. Using a conservative DALY analysis, both CI and deaf education are cost-effective treatment alternatives for severe-to-profound SNHL. CI intervention costs are not only influenced by the initial surgery and device costs but also by rehabilitation costs and the lifetime maintenance, device replacement, and battery costs. The major CI cost differences in this low resource setting were increased initial training and infrastructure costs, but lower medical personnel and surgery costs.
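The DALY arithmetic behind such comparisons can be sketched as follows. The figures below are hypothetical placeholders, not the study's estimates; only the 3% discount rate is taken from the text:

```python
def discounted_dalys(dalys_per_year, years, rate=0.03):
    """Total DALYs averted, discounted at `rate` (3%, as in the study)."""
    return sum(dalys_per_year / (1.0 + rate) ** t for t in range(years))

# Hypothetical inputs for illustration only (not the study's actual estimates)
lifetime_cost_ci = 50_000.0          # surgery, device, rehab, maintenance
dalys_averted_ci = discounted_dalys(0.3, 50)

cost_per_daly = lifetime_cost_ci / dalys_averted_ci
print(f"cost per DALY averted: ${cost_per_daly:,.0f}")
```

Discounting matters here because both the DALYs averted and the maintenance, replacement, and battery costs accrue over the recipient's lifetime rather than up front.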
Li, Yan; Costanzo, Philip R; Putallaz, Martha
2010-01-01
The authors compared the associations among perceived maternal socialization goals (self-development, filial piety, and collectivism), perceived maternal parenting styles (authoritative, authoritarian, and training), and the social-emotional adjustment (self-esteem, academic self-efficacy, and depression) between Chinese and European American young adults. The mediation processes in which socialization goals relate to young adults' adjustment outcomes through parenting styles were examined. Results showed that European American participants perceived higher maternal self-development socialization goals, whereas Chinese participants perceived higher maternal collectivism socialization goals as well as more authoritarian parenting. Cross-cultural similarities were found in the associations between perceived maternal authoritative parenting and socioemotional adjustment (e.g., higher self-esteem and higher academic self-efficacy) across the two cultural groups. However, perceived maternal authoritarian and training parenting styles were found only to be related to Chinese participants' adjustment (e.g., higher academic self-efficacy and lower depression). The mediation analyses showed that authoritative parenting significantly mediated the positive associations between the self-development and collectivism goal and socioemotional adjustment for both cultural groups. Additionally, training parenting significantly mediated the positive association between the filial piety goal and young adults' academic self-efficacy for the Chinese group only. Findings of this study highlight the importance of examining parental socialization goals in cross-cultural parenting research.
Martínez-Rincón, Raúl O; Rivera-Pérez, Crisalejandra; Diambra, Luis; Noriega, Fernando G
2017-01-01
Juvenile hormone (JH) regulates development and reproductive maturation in insects. The corpora allata (CA) from female adult mosquitoes synthesize fluctuating levels of JH, which have been linked to the ovarian development and are influenced by nutritional signals. The rate of JH biosynthesis is controlled by the rate of flux of isoprenoids in the pathway, which is the outcome of a complex interplay of changes in precursor pools and enzyme levels. A comprehensive study of the changes in enzymatic activities and precursor pool sizes has been previously reported for the mosquito Aedes aegypti JH biosynthesis pathway. In the present studies, we used two different quantitative approaches to describe and predict how changes in the individual metabolic reactions in the pathway affect JH synthesis. First, we constructed generalized additive models (GAMs) that described the association between changes in specific metabolite concentrations with changes in enzymatic activities and substrate concentrations. Changes in substrate concentrations explained 50% or more of the model deviances in 7 of the 13 metabolic steps analyzed. Addition of information on enzymatic activities almost always improved the fitness of GAMs built solely based on substrate concentrations. GAMs were validated using experimental data that were not included when the model was built. In addition, a system of ordinary differential equations (ODE) was developed to describe the instantaneous changes in metabolites as a function of the levels of enzymatic catalytic activities. The results demonstrated the ability of the models to predict changes in the flux of metabolites in the JH pathway, and can be used in the future to design and validate experimental manipulations of JH synthesis.
Aggregation of gluten proteins in model dough after fibre polysaccharide addition.
Nawrocka, Agnieszka; Szymańska-Chargot, Monika; Miś, Antoni; Wilczewska, Agnieszka Z; Markiewicz, Karolina H
2017-09-15
FT-Raman spectroscopy, thermogravimetry and differential scanning calorimetry were used to study changes in the structure of gluten proteins and their thermal properties as influenced by four dietary fibre polysaccharides (microcrystalline cellulose, inulin, apple pectin and citrus pectin) during development of a model dough. The flour reconstituted from wheat starch and wheat gluten was mixed with the polysaccharides at five concentrations: 3%, 6%, 9%, 12% and 18%. The obtained results showed that all polysaccharides induced similar changes in the secondary structure of gluten proteins, concerning the formation of aggregates (1604 cm⁻¹), H-bonded parallel- and antiparallel-β-sheets (1690 cm⁻¹) and H-bonded β-turns (1664 cm⁻¹). These changes concerned mainly glutenins, since β-structures are characteristic of them. The observed structural changes confirmed the hypothesis of partial dehydration of the gluten network after polysaccharide addition. The gluten aggregation and dehydration processes were also reflected in the DSC results, while the TGA results showed that the gluten network remained thermally stable after polysaccharide addition. Copyright © 2017 Elsevier Ltd. All rights reserved.
Impact of biochar addition on thermal properties of a sandy soil: modelling approach
Usowicz, Boguslaw; Lipiec, Jerzy; Lukowski, Mateusz; Bis, Zbigniew; Marczewski, Wojciech; Usowicz, Jerzy
2017-04-01
Adding biochar can alter soil thermal properties, increase the water holding capacity and reduce the need for mineral fertilization. Biochar in the soil can determine the heat balance at the soil surface and the temperature distribution in the soil profile through changes in albedo and the thermal properties. In addition, amendment of soil with biochar results in improvement of water retention, fertility and pH, which are of importance in sandy and acid soils widely used in agriculture. In this study we evaluated the effects of wood-derived biochar (0, 10, 20, and 40 Mg ha-1), incorporated to a depth of 0-15 cm, on the thermal conductivity, heat capacity, thermal diffusivity and porosity of a sandy soil under field conditions. In addition, soil-biochar mixtures with various percentages of biochar were prepared to determine the thermal properties as a function of soil water status and density in the laboratory. It was shown that a small quantity of biochar added to the soil does not significantly affect the thermal properties of the soil. Increasing biochar concentration significantly enhanced porosity and decreased thermal conductivity and diffusivity, at rates depending on soil water status. The soil thermal conductivity and diffusivity varied widely and non-linearly with water content for different biochar contents and soil bulk densities. However, the heat capacity increased linearly with biochar addition and water content, and was greater at higher than at lower soil water contents. The measured and literature thermal data were compared with those obtained from the analytic model of Zhang et al. (2013) and the statistical-physical model (Usowicz et al., 2016) based on soil texture, biochar content, bulk density and water content.
UML modeling of a university teacher course-adjustment system
Institute of Scientific and Technical Information of China (English)
阎琦
2013-01-01
The university course-adjustment system is network application software used for teachers' course adjustment in institutions of higher education. During requirements analysis, the whole system was divided into six parts, including a teacher course-adjustment module, a teaching secretary module, an academic administration course-adjustment module, and a course information module. The Unified Modeling Language (UML) was used for object-oriented analysis and modeling, completing the static and dynamic modeling of the system. In the database design, an E-R diagram was used to establish the conceptual model of the database.
Dong, Wenming; Wan, Jiamin
2014-06-17
Many aquifers contaminated by U(VI)-containing acidic plumes are composed predominantly of quartz-sand sediments. The F-Area of the Savannah River Site (SRS) in South Carolina (USA) is an example. To predict U(VI) mobility and natural attenuation, we conducted U(VI) adsorption experiments using the F-Area plume sediments and reference quartz, goethite, and kaolinite. The sediments are composed of ∼96% quartz-sand and 3-4% fine fractions of kaolinite and goethite. We developed a new humic acid adsorption method for determining the relative surface area abundances of goethite and kaolinite in the fine fractions. This method is expected to be applicable to many other binary mineral pairs, and allows successful application of the component additivity (CA) approach based surface complexation modeling (SCM) at the SRS F-Area and other similar aquifers. Our experimental results indicate that quartz has stronger U(VI) adsorption ability per unit surface area than goethite and kaolinite at pH ≤ 4.0. Our modeling results indicate that the binary (goethite/kaolinite) CA-SCM under-predicts U(VI) adsorption to the quartz-sand dominated sediments at pH ≤ 4.0. The new ternary (quartz/goethite/kaolinite) CA-SCM provides excellent predictions. The contributions of quartz-sand, kaolinite, and goethite to U(VI) adsorption and the potential influences of dissolved Al, Si, and Fe are also discussed.
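The component-additivity bookkeeping can be illustrated with a toy calculation. A real CA-SCM combines surface-complexation reactions for each mineral; the sketch below only shows the surface-area-weighted additivity step, with invented Kd values:

```python
def ca_additive_kd(area_fractions, component_kd):
    """Component additivity: whole-sediment Kd as a surface-area-weighted sum."""
    assert abs(sum(area_fractions.values()) - 1.0) < 1e-9
    return sum(area_fractions[m] * component_kd[m] for m in area_fractions)

# Illustrative surface-area abundances and per-mineral Kd values at low pH
# (invented numbers; a real CA-SCM derives these from complexation constants)
area_fractions = {"quartz": 0.70, "kaolinite": 0.20, "goethite": 0.10}
kd_at_low_pH = {"quartz": 1.8, "kaolinite": 0.9, "goethite": 0.6}

kd_sediment = ca_additive_kd(area_fractions, kd_at_low_pH)
print(kd_sediment)
```

The study's key point maps onto this bookkeeping directly: omitting the quartz term (the binary goethite/kaolinite model) under-predicts the sediment value at pH ≤ 4.0, where quartz contributes strongly per unit surface area.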
Modelling of C2 addition route to the formation of C60
Khan, Sabih D
2016-01-01
To understand the phenomenon of fullerene growth during its synthesis, an attempt is made to model a minimum energy growth route using a semi-empirical quantum mechanics code. C2 addition leading to C60 was modelled and three main routes, i.e. cyclic ring growth, pentagon and fullerene road, were studied. The growth starts with linear chains and, at n = 10, ring structures begins to dominate. The rings continue to grow and, at some point n > 30, they transform into close-cage fullerenes and the growth is shown to progress by the fullerene road until C60 is formed. The computer simulations predict a transition from a C38 ring to fullerene. Other growth mechanisms could also occur in the energetic environment commonly encountered in fullerene synthesis, but our purpose was to identify a minimal energy route which is the most probable structure. Our results also indicate that, at n = 20, the corannulene structure is energetically more stable than the corresponding fullerene and graphene sheet, however a ring str...
Dimas, Leon S; Buehler, Markus J
2014-07-07
Flaws, imperfections and cracks are ubiquitous in material systems and are commonly the catalysts of catastrophic material failure. As stresses and strains tend to concentrate around cracks and imperfections, structures tend to fail far before large regions of material have ever been subjected to significant loading. Therefore, a major challenge in material design is to engineer systems that perform on par with pristine structures despite the presence of imperfections. In this work we integrate knowledge of biological systems with computational modeling and state of the art additive manufacturing to synthesize advanced composites with tunable fracture mechanical properties. Supported by extensive mesoscale computer simulations, we demonstrate the design and manufacturing of composites that exhibit deformation mechanisms characteristic of pristine systems, featuring flaw-tolerant properties. We analyze the results by directly comparing strain fields for the synthesized composites, obtained through digital image correlation (DIC), and the computationally tested composites. Moreover, we plot Ashby diagrams for the range of simulated and experimental composites. Our findings show good agreement between simulation and experiment, confirming that the proposed mechanisms have a significant potential for vastly improving the fracture response of composite materials. We elucidate the role of stiffness ratio variations of composite constituents as an important feature in determining the composite properties. Moreover, our work validates the predictive ability of our models, presenting them as useful tools for guiding further material design. This work enables the tailored design and manufacturing of composites assembled from inferior building blocks, that obtain optimal combinations of stiffness and toughness.
Generalized additive models reveal the intrinsic complexity of wood formation dynamics.
Cuny, Henri E; Rathgeber, Cyrille B K; Kiessé, Tristan Senga; Hartmann, Felix P; Barbeito, Ignacio; Fournier, Meriem
2013-04-01
The intra-annual dynamics of wood formation, which involves the passage of newly produced cells through three successive differentiation phases (division, enlargement, and wall thickening) to reach the final functional mature state, has traditionally been described in conifers as three delayed bell-shaped curves followed by an S-shaped curve. Here the classical view represented by the 'Gompertz function (GF) approach' was challenged using two novel approaches based on parametric generalized linear models (GLMs) and 'data-driven' generalized additive models (GAMs). These three approaches (GFs, GLMs, and GAMs) were used to describe seasonal changes in cell numbers in each of the xylem differentiation phases and to calculate the timing of cell development in three conifer species [Picea abies (L.), Pinus sylvestris L., and Abies alba Mill.]. GAMs outperformed GFs and GLMs in describing intra-annual wood formation dynamics, showing two left-skewed bell-shaped curves for division and enlargement, and a right-skewed bimodal curve for thickening. Cell residence times progressively decreased through the season for enlargement, whilst increasing late but rapidly for thickening. These patterns match changes in cell anatomical features within a tree ring, which allows the separation of earlywood and latewood into two distinct cell populations. A novel statistical approach is presented which renews our understanding of xylogenesis, a dynamic biological process in which the rate of cell production interplays with cell residence times in each developmental phase to create complex seasonal patterns.
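The GAM idea of summing smooth functions of each predictor can be sketched with a toy backfitting loop, using a kernel smoother as a stand-in for the penalized splines of a full GAM implementation (synthetic data; illustrative only):

```python
import numpy as np

rng = np.random.default_rng(1)

def smooth(x, y, bandwidth=0.5):
    """Nadaraya-Watson kernel smoother (a stand-in for penalized splines)."""
    w = np.exp(-0.5 * ((x[:, None] - x[None, :]) / bandwidth) ** 2)
    return (w @ y) / w.sum(axis=1)

# Synthetic additive data: y = f1(x1) + f2(x2) + noise
n = 300
x1 = rng.uniform(-2.0, 2.0, n)
x2 = rng.uniform(-2.0, 2.0, n)
y = np.sin(x1) + 0.5 * x2 ** 2 + rng.normal(0.0, 0.1, n)

# Backfitting: alternately smooth the partial residuals of each term
alpha = y.mean()
f1 = np.zeros(n)
f2 = np.zeros(n)
for _ in range(20):
    f1 = smooth(x1, y - alpha - f2)
    f1 -= f1.mean()
    f2 = smooth(x2, y - alpha - f1)
    f2 -= f2.mean()

fitted = alpha + f1 + f2
r2 = 1.0 - np.var(y - fitted) / np.var(y)
print(f"R^2 = {r2:.3f}")
```

Because each term is learned from the data rather than imposed, a GAM can recover skewed or bimodal seasonal curves of the kind described above, which a fixed parametric form such as the Gompertz function cannot.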
Healy, Martha F; Speroni, Karen Gabel; Eugenio, Kenneth R; Murphy, Patricia M
2012-04-01
Because of the renal elimination and increased risk for bleeding events at supratherapeutic doses of eptifibatide, the manufacturer recommends dosing adjustment in patients with renal dysfunction. Methods commonly used to estimate renal dysfunction in hospital settings may be inconsistent with those studied and recommended by the manufacturer. To compare hypothetical renal dosing adjustments of eptifibatide using both the recommended method and several other commonly used formulas for estimating kidney function. Sex, age, weight, height, serum creatinine, and estimated glomerular filtration rate (eGFR) were obtained retrospectively from the records of patients who received eptifibatide during a 12-month period. Renal dosing decisions were determined for each patient based on creatinine clearance (CrCl) estimates via the Cockcroft-Gault formula (CG) with actual body weight (ABW), ideal body weight (IBW) or adjusted weight (ADJW), and eGFR from the Modification of Diet in Renal Disease formula. Percent agreement and Cohen κ were calculated comparing dosing decisions for each formula to the standard CG-ABW. In this analysis of 179 patients, percent agreement as compared to CG-ABW varied (CG-IBW: 90.50%, CG-ADJW: 95.53%, and eGFR: 93.30%). All κ coefficients were categorized as good. In the 20% of patients receiving an adjusted dose by any of the methods, 68.6% could have received a dose different from that determined using the CG-ABW formula. In the patients with renal impairment (CrCl <50 mL/min) in this study, two thirds would have received an unnecessary 50% dose adjustment discordant from the manufacturer's recommendation. Because failure to adjust eptifibatide doses in patients with renal impairment has led to increased bleeding events, practitioners may be inclined to err on the side of caution. However, studies have shown that suboptimal doses of eptifibatide lead to suboptimal outcomes. Therefore, correct dosing of eptifibatide is important to both patient
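The dosing comparison above rests on standard renal-function formulas: Cockcroft-Gault with actual, ideal (Devine), or adjusted body weight. A sketch of those calculations, with a 50 mL/min adjustment threshold as stated in the abstract (function names and the male/female flag convention are ours, not the study's protocol):

```python
def cockcroft_gault(age_years, weight_kg, scr_mg_dl, female):
    """Cockcroft-Gault creatinine clearance (mL/min)."""
    crcl = (140 - age_years) * weight_kg / (72.0 * scr_mg_dl)
    return crcl * 0.85 if female else crcl

def ideal_body_weight(height_in, female):
    """Devine formula: base weight plus 2.3 kg per inch over 5 feet."""
    base = 45.5 if female else 50.0
    return base + 2.3 * max(0, height_in - 60)

def adjusted_weight(abw_kg, ibw_kg):
    """Common adjusted-weight convention: IBW + 40% of the excess."""
    return ibw_kg + 0.4 * (abw_kg - ibw_kg)

def eptifibatide_dose_reduced(crcl_ml_min):
    """Manufacturer recommends a reduced infusion when CrCl < 50 mL/min."""
    return crcl_ml_min < 50
```

Running the same patient through `cockcroft_gault` with ABW, IBW, and adjusted weight is exactly the kind of comparison the study performed.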
Directory of Open Access Journals (Sweden)
Haiyan Lang
2016-07-01
Conclusion: According to the syndrome differentiation criteria for the disease-syndrome combined model of ITP, the APS-injected animal model of ITP, replicated through the passive immune modeling method without additional conditions, possesses the characteristics of a disease-syndrome combined model. It provides an ideal tool for pharmacological experiments in traditional Chinese medicine.
Berends, Tijn; van de Wal, Roderik; de Boer, Bas; Bradley, Sarah
2016-04-01
ANICE is a 3-D ice-sheet-shelf model, which simulates ice dynamics on the continental scale. It uses a combination of the SIA and SSA approximations and here it is forced with benthic δ18O records using an inverse routine. It is coupled to SELEN, a model that solves the gravitationally self-consistent sea-level equation and the solid earth deformation of a spherically symmetric rotating Maxwell visco-elastic earth, accounting for all major GIA effects. The coupled ANICE-SELEN model thus captures ice-sea-level feedbacks and can be used to accurately simulate variations in local relative sea-level over geological time scales. In this study it is used to investigate the mass loss of the Laurentide ice-sheet during the last deglaciation, accounting in particular for the presence of the proglacial Lake Agassiz by way of its GIA effects and its effect on the ice sheet itself. We show that the mass of the water can have a significant effect on local relative sea-level through the same mechanisms as the ice-sheet - by perturbing the geoid and by deforming the solid earth. In addition we show that calving of the ice-shelf onto the lake could have had a strong influence on the behaviour of the deglaciation. In particular, when allowing lake calving, the ice-sheet retreats rapidly over the deepening bed of Hudson Bay during the deglaciation, resulting in a narrow ice dam over Hudson Strait. This dam collapses around 8.2 kyr, causing a global sea-level rise of approximately 1 meter - an observation that agrees well with field data (for example, LaJeunesse and St. Onge, 2008). Without lake calving the model predicts a drainage towards the Arctic Ocean in the north.
Otero, Joel J; Vijverman, An; Mommaerts, Maurice Y
2017-09-01
The goal of this study was to identify current European Union regulations governing hospital-based use of fused deposition modeling (FDM), as implemented via desktop three-dimensional (3D) printers. Literature and Internet sources were screened, searching for official documents, regulations/legislation, and views of specialized attorneys or consultants regarding European regulations for 3D printing or additive manufacturing (AM) in a healthcare facility. A detailed review was performed of the latest amendment (2016) of the European Parliament and Council legislation for medical devices and its classification scheme, which provides regularly updated guidelines for medical devices classified by type and duration of patient contact. As expected, regulations increase in accordance with the level (I-III) of classification. Custom-made medical devices are subject to different regulations than those controlling serially mass-produced items, as originally specified in 98/79/EC European Parliament and Council legislation (1993) and again recently amended (2016). Healthcare facilities undertaking in-house custom production are not obliged to fully follow the directives as stipulated, given an exception for this scenario (Article 4.4a, 98/79/EC). Patient treatment and diagnosis with the aid of customized 3D printing in a healthcare facility can be performed without fully meeting the European Parliament and Council legislation if the materials used are ISO 10993 certified and article 4.4a applies. Copyright © 2017 European Association for Cranio-Maxillo-Facial Surgery. Published by Elsevier Ltd. All rights reserved.
Eddy Current Tomography Based on a Finite Difference Forward Model with Additive Regularization
Trillon, A.; Girard, A.; Idier, J.; Goussard, Y.; Sirois, F.; Dubost, S.; Paul, N.
2010-02-01
Eddy current tomography is a nondestructive evaluation technique used for characterization of metal components. It is an inverse problem acknowledged as difficult to solve since it is both ill-posed and nonlinear. Our goal is to derive an inversion technique with improved tradeoff between quality of the results, computational requirements and ease of implementation. This is achieved by fully accounting for the nonlinear nature of the forward problem by means of a system of bilinear equations obtained through a finite difference modeling of the problem. The bilinear character of equations with respect to the electric field and the relative conductivity is taken advantage of through a simple contrast source inversion-like scheme. The ill-posedness is dealt with through the addition of regularization terms to the criterion, the form of which is determined according to computational constraints and the piecewise constant nature of the medium. Therefore an edge-preserving functional is selected. The performance of the resulting method is illustrated using 2D synthetic data examples.
Enhancement of colour stability of anthocyanins in model beverages by gum arabic addition.
Chung, Cheryl; Rojanasasithara, Thananunt; Mutilangi, William; McClements, David Julian
2016-06-15
This study investigated the potential of gum arabic to improve the stability of anthocyanins that are used in commercial beverages as natural colourants. The degradation of purple carrot anthocyanin in model beverage systems (pH 3.0) containing L-ascorbic acid proceeded with a first-order reaction rate during storage (40 °C for 5 days in light). The addition of gum arabic (0.05-5.0%) significantly enhanced the colour stability of anthocyanin, with the most stable systems observed at intermediate levels (1.5%). A further increase in concentration (>1.5%) reduced its efficacy due to a change in the conformation of the gum arabic molecules that hindered their exposure to the anthocyanins. Fluorescence quenching measurements showed that the anthocyanin could have interacted with the glycoprotein fractions of the gum arabic through hydrogen bonding, resulting in enhanced stability. Overall, this study provides valuable information about enhancing the stability of anthocyanins in beverage systems using natural ingredients.
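First-order degradation, as reported for the anthocyanin here, means the remaining concentration decays exponentially with a single rate constant. A small sketch of the kinetics (the rate constant below is hypothetical, not a value from the study):

```python
import math

def first_order_conc(c0, k_per_day, t_days):
    """Remaining concentration under first-order kinetics: C(t) = C0 * exp(-k t)."""
    return c0 * math.exp(-k_per_day * t_days)

def half_life(k_per_day):
    """Time for the concentration to halve: t_1/2 = ln(2) / k."""
    return math.log(2) / k_per_day
```

Comparing fitted `k` values with and without gum arabic is the natural way to quantify the stabilisation effect described above.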
Influence of the heterogeneous reaction HCl + HOCl on an ozone hole model with hydrocarbon additions
Elliott, Scott; Cicerone, Ralph J.; Turco, Richard P.; Drdla, Katja; Tabazadeh, Azadeh
1994-02-01
Injection of ethane or propane has been suggested as a means for reducing ozone loss within the Antarctic vortex because alkanes can convert active chlorine radicals into hydrochloric acid. In kinetic models of vortex chemistry including as heterogeneous processes only the hydrolysis and HCl reactions of ClONO2 and N2O5, parts per billion by volume levels of the light alkanes counteract ozone depletion by sequestering chlorine atoms. Introduction of the surface reaction of HCl with HOCl causes ethane to deepen baseline ozone holes and generally works to impede any mitigation by hydrocarbons. The increased depletion occurs because HCl + HOCl can be driven by HOx radicals released during organic oxidation. Following initial hydrogen abstraction by chlorine, alkane breakdown leads to a net hydrochloric acid activation as the remaining hydrogen atoms enter the photochemical system. Lowering the rate constant for reactions of organic peroxy radicals with ClO to 10-13 cm3 molecule-1 s-1 does not alter results, and the major conclusions are insensitive to the timing of the ethane additions. Ignoring the organic peroxy radical plus ClO reactions entirely restores remediation capabilities by allowing HOx removal independent of HCl. Remediation also returns if early evaporation of polar stratospheric clouds leaves hydrogen atoms trapped in aldehyde intermediates, but real ozone losses are small in such cases.
Statistical inference for the additive hazards model under outcome-dependent sampling.
Yu, Jichang; Liu, Yanyan; Sandler, Dale P; Zhou, Haibo
2015-09-01
Cost-effective study design and proper inference procedures for data from such designs are always of particular interest to study investigators. In this article, we propose a biased sampling scheme, an outcome-dependent sampling (ODS) design for survival data with right censoring under the additive hazards model. We develop a weighted pseudo-score estimator for the regression parameters for the proposed design and derive the asymptotic properties of the proposed estimator. We also provide some suggestions for using the proposed method by evaluating the relative efficiency of the proposed method against simple random sampling design and derive the optimal allocation of the subsamples for the proposed design. Simulation studies show that the proposed ODS design is more powerful than other existing designs and the proposed estimator is more efficient than other estimators. We apply our method to analyze a cancer study conducted at NIEHS, the Cancer Incidence and Mortality of Uranium Miners Study, to study the cancer risk associated with radon exposure.
Exposure as Duration and Distance in Telematics Motor Insurance Using Generalized Additive Models
Directory of Open Access Journals (Sweden)
Jean-Philippe Boucher
2017-09-01
Full Text Available In Pay-As-You-Drive (PAYD) automobile insurance, the premium is fixed based on the distance traveled, while in usage-based insurance (UBI) the driving patterns of the policyholder are also considered. In those schemes, drivers who drive more pay a higher premium compared to those with the same characteristics who drive only occasionally, because the former are more exposed to the risk of accident. In this paper, we analyze the simultaneous effect of the distance traveled and exposure time on the risk of accident by using Generalized Additive Models (GAM). We carry out an empirical application and show that the expected number of claims (1) stabilizes once a certain accumulated distance is reached and (2) is not proportional to the duration of the contract, which is in contradiction to insurance practice. Finally, we propose to use a rating system that takes into account simultaneously exposure time and distance traveled in the premium calculation. We think that this is the trend the automobile insurance market is going to follow with the emergence of telematics data.
Energy Technology Data Exchange (ETDEWEB)
Hilbert, Jacqueline; Berg, Holger (comps.)
2008-04-15
At the end of 2006, France proposed the introduction of a 'climatic tariff' into the international climate protection discussion. The 'climatic tariff' is meant to offset the extra costs that result from domestic production under environmental protection instruments, costs to which imported goods are not exposed, through import/export compensatory payments in the form of import duties and/or taxes on imported goods. The introduction of an import/export compensatory payment system aims to burden imported goods equivalently to domestic products in order to offset competitive disadvantages. In this contribution, the authors report on the possibilities and design problems of an import/export tax compensation scheme. The authors examine the validity of import/export compensation measures from the legal viewpoint of the World Trade Organization (Geneva, Switzerland), based on the General Agreement on Tariffs and Trade.
A complete generalized adjustment criterion
Perković, Emilija; Textor, Johannes; Kalisch, Markus; Maathuis, Marloes H.
2015-01-01
Covariate adjustment is a widely used approach to estimate total causal effects from observational data. Several graphical criteria have been developed in recent years to identify valid covariates for adjustment from graphical causal models. These criteria can handle multiple causes, latent confound
A Monte Carlo-adjusted goodness-of-fit test for parametric models describing spatial point patterns
Dao, Ngocanh
2014-04-03
Assessing the goodness-of-fit (GOF) for intricate parametric spatial point process models is important for many application fields. When the probability density of the statistic of the GOF test is intractable, a commonly used procedure is the Monte Carlo GOF test. Additionally, if the data comprise a single dataset, a popular version of the test plugs a parameter estimate in the hypothesized parametric model to generate data for the Monte Carlo GOF test. In this case, the test is invalid because the resulting empirical level does not reach the nominal level. In this article, we propose a method consisting of nested Monte Carlo simulations which has the following advantages: the bias of the resulting empirical level of the test is eliminated, hence the empirical levels can always reach the nominal level, and information about inhomogeneity of the data can be provided. We theoretically justify our testing procedure using Taylor expansions and demonstrate that it is correctly sized through various simulation studies. In our first data application, we discover, in agreement with Illian et al., that Phlebocarya filifolia plants near Perth, Australia, can follow a homogeneous Poisson clustered process that provides insight into the propagation mechanism of these plants. In our second data application, we find, in contrast to Diggle, that a pairwise interaction model provides a good fit to the micro-anatomy data of amacrine cells designed for analyzing the developmental growth of immature retina cells in rabbits. This article has supplementary material online. © 2013 American Statistical Association, Institute of Mathematical Statistics, and Interface Foundation of North America.
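The nested Monte Carlo idea can be illustrated on a toy Poisson GOF problem: the naive test plugs the estimated mean into the simulations, and the nested version calibrates the resulting p-value by repeating the whole plug-in test on datasets drawn from the fitted model. This is a schematic illustration only, not the authors' spatial point-process implementation:

```python
import math
import random
import statistics

def dispersion(data):
    # variance-to-mean ratio; approximately 1 under a Poisson model
    m = statistics.mean(data)
    return statistics.pvariance(data) / m if m > 0 else 0.0

def sample_poisson(lam, n, rng):
    # Knuth's multiplication method, adequate for small lam
    out = []
    for _ in range(n):
        limit, k, p = math.exp(-lam), 0, 1.0
        while True:
            p *= rng.random()
            if p <= limit:
                break
            k += 1
        out.append(k)
    return out

def plug_in_pvalue(data, n_sim, rng):
    # naive Monte Carlo GOF test: plug the estimate into the simulations
    lam_hat = statistics.mean(data)
    t_obs = dispersion(data)
    exceed = sum(
        dispersion(sample_poisson(lam_hat, len(data), rng)) >= t_obs
        for _ in range(n_sim))
    return (1 + exceed) / (1 + n_sim)

def nested_pvalue(data, n_outer, n_inner, rng):
    # outer Monte Carlo layer calibrates the plug-in p-value
    lam_hat = statistics.mean(data)
    p_obs = plug_in_pvalue(data, n_inner, rng)
    hits = sum(
        plug_in_pvalue(sample_poisson(lam_hat, len(data), rng), n_inner, rng) <= p_obs
        for _ in range(n_outer))
    return (1 + hits) / (1 + n_outer)
```

The key point of the article survives even in this toy form: the inner p-value is biased by the plug-in step, and the outer layer restores the nominal level.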
Energy Technology Data Exchange (ETDEWEB)
Morais, Keli Cristiane Correia; Ribeiro, Robert Luis Lara; Santos, Kassiana Ribeiro dos; Mariano, Andre Bellin [Mariano Center for Research and Development of Sustainable Energy (NPDEAS), Curitiba, PR (Brazil); Vargas, Jose Viriato Coelho [Departament of Mechanical Engineering, Federal University of Parana (UFPR) Curitiba, PR (Brazil)
2010-07-01
The Brazilian National Program for Bio fuel Production has been encouraging diversification of feedstock for biofuel production. One of the most promising alternatives is the use of microalgae biomass for biofuel production. The cultivation of microalgae is conducted in aquatic systems, therefore microalgae oil production does not compete with agricultural land. Microalgae have greater photosynthetic efficiency than higher plants and are efficient at fixing CO2. The challenge is to reduce production costs, which can be minimized by increasing productivity and oil biomass. Aiming to increase the production of microalgae biomass, mixotrophic cultivation with the addition of glycerol has been shown to be very promising. During the production of biodiesel from microalgae there is availability of glycerol as a side product of the transesterification reaction, which could be used as an organic carbon source for microalgae mixotrophic growth, resulting in increased biomass productivity. In this paper, to study the effect of glycerol in experimental conditions, the batch culture of the diatom Phaeodactylum tricornutum was performed in a 2-liter flask in a temperature and light intensity controlled room. During 16 days of cultivation, the number of cells per ml was counted periodically in a Neubauer chamber. The calculation of dry biomass in the control experiment (without glycerol) was performed every two days by vacuum filtration. In the dry biomass mixotrophic experiment with glycerol concentration of 1.5 M, the number of cells was assessed similarly on the 10th and 14th days of cultivation. Through a volume element methodology, a mathematical model was written to calculate the microalgae growth rate. An equation describing the influence of irradiation and nutrient concentration on microalgae growth was used. A simulation time of 16 days was used in the computations, with initial concentration of 0.1 g/L. In order to compare
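Growth laws of the kind described (irradiance- and nutrient-dependent) are commonly written as Monod-type rate equations. A minimal Euler-integration sketch of such a model, under assumed (hypothetical) parameter values rather than the paper's calibrated ones:

```python
def simulate_growth(x0, s0, i0, mu_max, k_s, k_i, y_xs, days, dt=0.1):
    """Euler integration of dX/dt = mu * X, with Monod limitation
    factors for nutrient S and irradiance I; substrate is consumed
    with biomass yield y_xs (g biomass per g substrate)."""
    x, s = x0, s0
    for _ in range(int(days / dt)):
        mu = mu_max * (s / (k_s + s)) * (i0 / (k_i + i0))
        x += mu * x * dt
        s = max(0.0, s - (mu * x / y_xs) * dt)
    return x

# 16-day run starting from 0.1 g/L, matching the abstract's setup;
# all rate and yield constants are illustrative only
final_biomass = simulate_growth(x0=0.1, s0=5.0, i0=100.0, mu_max=0.5,
                                k_s=0.5, k_i=50.0, y_xs=0.5, days=16)
```

Biomass grows roughly exponentially until the substrate term throttles the rate, which is the qualitative behaviour a glycerol-supplemented culture model needs to reproduce.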
Wei, Jingwen; Dong, Guangzhong; Chen, Zonghai
2017-10-01
With the rapid development of battery-powered electric vehicles, the lithium-ion battery plays a critical role in the reliability of the vehicle system. In order to provide timely management and protection for battery systems, it is necessary to develop a reliable battery model and accurate battery parameter estimation to describe battery dynamic behaviors. Therefore, this paper focuses on an on-board adaptive model for state-of-charge (SOC) estimation of lithium-ion batteries. Firstly, a first-order equivalent circuit battery model is employed to describe battery dynamic characteristics. Then, the recursive least squares algorithm and the off-line identification method are used to provide good initial values of model parameters to ensure filter stability and reduce the convergence time. Thirdly, an extended Kalman filter (EKF) is applied to estimate the battery SOC and model parameters on-line. Considering that the EKF is essentially a first-order Taylor approximation of the battery model and therefore contains inevitable model errors, a proportional integral-based error adjustment technique is employed to improve the performance of the EKF method and correct model parameters. Finally, the experimental results on lithium-ion batteries indicate that the proposed EKF with proportional integral-based error adjustment method can provide a robust and accurate battery model and on-line parameter estimation.
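The EKF step described above can be illustrated with a deliberately simplified scalar version: one state (SOC), a coulomb-counting process model, and a linearized OCV curve. All parameter values here are hypothetical; the paper's model additionally carries an RC polarization state and the proportional integral error correction:

```python
# hypothetical cell parameters (illustrative only)
CAPACITY_AS = 2.0 * 3600   # capacity in ampere-seconds (2 Ah)
R0 = 0.05                  # ohmic resistance (ohm)
A_OCV, B_OCV = 3.0, 1.0    # linearized OCV(soc) = A_OCV + B_OCV * soc

def terminal_voltage(soc, current):
    # discharge current is positive
    return A_OCV + B_OCV * soc - R0 * current

def ekf_step(soc, P, current, v_meas, dt, q_proc=1e-7, r_meas=1e-4):
    # predict: coulomb counting
    soc_pred = soc - current * dt / CAPACITY_AS
    P_pred = P + q_proc
    # update: measurement Jacobian H = d(voltage)/d(soc) = B_OCV
    H = B_OCV
    K = P_pred * H / (H * P_pred * H + r_meas)
    soc_new = soc_pred + K * (v_meas - terminal_voltage(soc_pred, current))
    P_new = (1 - K * H) * P_pred
    return soc_new, P_new

# simulate a constant-current discharge and track SOC from a wrong initial guess
true_soc, est_soc, P = 0.9, 0.5, 1.0
for _ in range(200):
    current, dt = 1.0, 1.0
    true_soc -= current * dt / CAPACITY_AS
    v = terminal_voltage(true_soc, current)
    est_soc, P = ekf_step(est_soc, P, current, v, dt)
```

With an exact measurement model the filter pulls the badly initialized estimate onto the true SOC within a few updates; real cells need the nonlinear OCV curve and the error-adjustment layer the paper proposes.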
Energy Technology Data Exchange (ETDEWEB)
Zhang, Xiaoying; Liu, Chongxuan; Hu, Bill X.; Hu, Qinhong
2016-07-01
This study statistically analyzed a grain-size based additivity model that has been proposed to scale reaction rates and parameters from laboratory to field. The additivity model assumed that reaction properties in a sediment including surface area, reactive site concentration, reaction rate, and extent can be predicted from field-scale grain size distribution by linearly adding reaction properties for individual grain size fractions. This study focused on the statistical analysis of the additivity model with respect to reaction rate constants using multi-rate uranyl (U(VI)) surface complexation reactions in a contaminated sediment as an example. Experimental data of rate-limited U(VI) desorption in a stirred flow-cell reactor were used to estimate the statistical properties of multi-rate parameters for individual grain size fractions. The statistical properties of the rate constants for the individual grain size fractions were then used to analyze the statistical properties of the additivity model to predict rate-limited U(VI) desorption in the composite sediment, and to evaluate the relative importance of individual grain size fractions to the overall U(VI) desorption. The result indicated that the additivity model provided a good prediction of the U(VI) desorption in the composite sediment. However, the rate constants were not directly scalable using the additivity model, and U(VI) desorption in individual grain size fractions have to be simulated in order to apply the additivity model. An approximate additivity model for directly scaling rate constants was subsequently proposed and evaluated. The result found that the approximate model provided a good prediction of the experimental results within statistical uncertainty. This study also found that a gravel size fraction (2-8mm), which is often ignored in modeling U(VI) sorption and desorption, is statistically significant to the U(VI) desorption in the sediment.
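The linear-additivity assumption itself is simple: weight each grain-size fraction's laboratory-measured property by its mass fraction and sum. A sketch (the numbers are hypothetical, not the sediment data):

```python
def composite_property(mass_fractions, fraction_values):
    """Field-scale property predicted by linearly adding the laboratory
    value measured for each grain-size fraction, weighted by mass."""
    if abs(sum(mass_fractions) - 1.0) > 1e-9:
        raise ValueError("mass fractions must sum to 1")
    return sum(f * v for f, v in zip(mass_fractions, fraction_values))

# e.g. a reactive property over three size fractions, including a
# gravel fraction that the abstract notes is often ignored
predicted = composite_property([0.5, 0.3, 0.2], [10.0, 5.0, 1.0])
```

The study's finding is precisely that this weighted sum works for reaction rates and extents but not for the rate constants themselves, which required the approximate scaling model.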
Carroll, Raymond
2009-04-23
We consider the efficient estimation of a regression parameter in a partially linear additive nonparametric regression model from repeated measures data when the covariates are multivariate. To date, while there is some literature in the scalar covariate case, the problem has not been addressed in the multivariate additive model case. Ours represents a first contribution in this direction. As part of this work, we first describe the behavior of nonparametric estimators for additive models with repeated measures when the underlying model is not additive. These results are critical when one considers variants of the basic additive model. We apply them to the partially linear additive repeated-measures model, deriving an explicit consistent estimator of the parametric component; if the errors are in addition Gaussian, the estimator is semiparametric efficient. We also apply our basic methods to a unique testing problem that arises in genetic epidemiology; in combination with a projection argument we develop an efficient and easily computed testing scheme. Simulations and an empirical example from nutritional epidemiology illustrate our methods.
Energy Technology Data Exchange (ETDEWEB)
Ricano Castillo, Juan Manuel; Palomares Gonzalez, Daniel [Instituto de Investigaciones Electricas, Cuernavaca (Mexico)
1989-12-31
The recursive least-squares technique is employed to obtain a multivariable model of the autoregressive moving-average type, needed for the design of a multivariable self-tuning controller. This article describes the technique employed and the results obtained, including the characterization of the model structure and the parametric estimation. Curves showing the speed of convergence toward the numerical values of the parameters are presented.
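A recursive least-squares parameter update of the kind used for such ARMA-type identification can be sketched in a few lines (pure-Python matrices; the construction of the regressor vector for the actual multivariable model is omitted, and the demo data are hypothetical):

```python
def rls_update(theta, P, x, y, lam=1.0):
    """One recursive least-squares step.
    theta: parameter estimates, P: covariance matrix,
    x: regressor vector, y: observation, lam: forgetting factor."""
    n = len(x)
    Px = [sum(P[i][j] * x[j] for j in range(n)) for i in range(n)]
    denom = lam + sum(x[i] * Px[i] for i in range(n))
    K = [Px[i] / denom for i in range(n)]          # gain vector
    err = y - sum(theta[i] * x[i] for i in range(n))
    theta = [theta[i] + K[i] * err for i in range(n)]
    P = [[(P[i][j] - K[i] * Px[j]) / lam for j in range(n)] for i in range(n)]
    return theta, P

# identify y = 2*x1 + 3*x2 from noiseless data
theta, P = [0.0, 0.0], [[1e6, 0.0], [0.0, 1e6]]
for x, y in [([1, 0], 2), ([0, 1], 3), ([1, 1], 5), ([2, 1], 7), ([1, 2], 8)]:
    theta, P = rls_update(theta, P, x, y)
```

The convergence-speed curves the article describes are plots of `theta` against the update index as data arrive.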
Utilization of sulfate additives in biomass combustion: fundamental and modeling aspects
DEFF Research Database (Denmark)
Wu, Hao; Jespersen, Jacob Boll; Grell, Morten Nedergaard
2013-01-01
Sulfates, such as ammonium sulfate, aluminum sulfate and ferric sulfate, are effective additives for converting the alkali chlorides released from biomass combustion to the less harmful alkali sulfates. Optimization of the use of these additives requires knowledge on their decomposition rate and ...
Mohammad Lagzian; Shamsoddin Nazemi; Fatemeh Dadmand
2012-01-01
Assessing the success of information systems within organizations has been identified as one of the most critical subjects of information system management in both public and private organizations. It is therefore important to measure the success of information systems from the user's perspective. The purpose of the current study was to evaluate the degree of information system success using the adjusted DeLone and McLean model in the field of financial information systems (FIS) in an Iranian Univ...
MODELING OF THE HEAT PUMP STATION ADJUSTABLE LOOP OF AN INTERMEDIATE HEAT-TRANSFER AGENT (Part I)
Directory of Open Access Journals (Sweden)
Sit B.
2009-08-01
Full Text Available This paper examines the equations of dynamics and statics of an adjustable intermediate loop of a carbon dioxide heat pump station. The heat pump station is part of a combined heat supply system. Control of the thermal capacity transferred from the low-potential heat source is realized by changing the speed of circulation of the liquid in the loop and by changing the area of the heat-transmitting surface, both in the evaporator and in the intermediate heat exchanger, depending on the operating parameter, for example, external air temperature and wind speed.
Directory of Open Access Journals (Sweden)
Gianola Daniel
2007-09-01
Full Text Available Multivariate linear models are increasingly important in quantitative genetics. In high-dimensional specifications, factor analysis (FA) may provide an avenue for structuring (co)variance matrices, thus reducing the number of parameters needed for describing (co)dispersion. We describe how FA can be used to model genetic effects in the context of a multivariate linear mixed model. An orthogonal common factor structure is used to model genetic effects under the Gaussian assumption, so that the marginal likelihood is multivariate normal with a structured genetic (co)variance matrix. Under standard prior assumptions, all fully conditional distributions have closed form, and samples from the joint posterior distribution can be obtained via Gibbs sampling. The model and the algorithm developed for its Bayesian implementation were used to describe five repeated records of milk yield in dairy cattle, and a one-common-factor FA model was compared with a standard multiple-trait model. The Bayesian Information Criterion favored the FA model.
2008-09-01
[Tabular residue: Holt-Winters exponential smoothing output (level, trend, season, predicted FTEs) by month for the Adult ICU and Medical/Surgical staffing models, based on acuity-adjusted total required FTEs.]
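The Holt-Winters model referenced in this record decomposes a monthly staffing series into level, trend, and season components. A compact sketch of additive triple exponential smoothing (the smoothing constants and series below are hypothetical):

```python
def holt_winters_additive(series, m, alpha, beta, gamma):
    """Smooth the series and return a one-step-ahead forecast.
    m = season length; alpha/beta/gamma smooth level/trend/season."""
    # simple initialization from the first two seasons
    mean1 = sum(series[:m]) / m
    mean2 = sum(series[m:2 * m]) / m
    level, trend = mean1, (mean2 - mean1) / m
    seas = [series[i] - mean1 for i in range(m)]
    for t in range(m, len(series)):
        x, s = series[t], seas[t % m]
        new_level = alpha * (x - s) + (1 - alpha) * (level + trend)
        trend = beta * (new_level - level) + (1 - beta) * trend
        seas[t % m] = gamma * (x - new_level) + (1 - gamma) * s
        level = new_level
    return level + trend + seas[len(series) % m]
```

Feeding in monthly acuity-adjusted FTE requirements and reading off the forecast is the workflow the table residue above appears to summarize.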
Energy Technology Data Exchange (ETDEWEB)
Zhang, Xiaoying; Liu, Chongxuan; Hu, Bill X.; Hu, Qinhong
2016-07-31
The additivity model assumed that field-scale reaction properties in a sediment including surface area, reactive site concentration, and reaction rate can be predicted from field-scale grain-size distribution by linearly adding reaction properties estimated in laboratory for individual grain-size fractions. This study evaluated the additivity model in scaling mass transfer-limited, multi-rate uranyl (U(VI)) surface complexation reactions in a contaminated sediment. Experimental data of rate-limited U(VI) desorption in a stirred flow-cell reactor were used to estimate the statistical properties of the rate constants for individual grain-size fractions, which were then used to predict rate-limited U(VI) desorption in the composite sediment. The result indicated that the additivity model with respect to the rate of U(VI) desorption provided a good prediction of U(VI) desorption in the composite sediment. However, the rate constants were not directly scalable using the additivity model. An approximate additivity model for directly scaling rate constants was subsequently proposed and evaluated. The result found that the approximate model provided a good prediction of the experimental results within statistical uncertainty. This study also found that a gravel-size fraction (2 to 8 mm), which is often ignored in modeling U(VI) sorption and desorption, is statistically significant to the U(VI) desorption in the sediment.
Directory of Open Access Journals (Sweden)
Fernando Augusto de Souza
2014-07-01
Full Text Available The aim of this research was to evaluate the influence of the number and position of nutrient levels used in dose-response trials on the estimation of the optimal level (OL) and on the goodness of fit of the models: quadratic polynomial (QP), exponential (EXP), linear response plateau (LRP), and quadratic response plateau (QRP). Data from dose-response trials conducted at FCAV-Unesp Jaboticabal were used, considering homogeneity of variances and normal distribution. The fit of the models was evaluated using the following statistics: adjusted coefficient of determination (R²adj), coefficient of variation (CV), and the sum of squared deviations (SSD). For the QP and EXP models, small changes in the placement and distribution of the levels caused large changes in the estimation of the OL. The LRP model was deeply influenced by the absence or presence of a level between the response and stabilization phases (the change from the straight line to the plateau). The QRP needed more levels in the response phase, plus the last level in the stabilization phase, to estimate the plateau correctly. It was concluded that the OL and the fit of the models depend on the positioning and number of the levels and on the specific characteristics of each model; levels placed near the true requirement and not too widely spaced are better for estimating the OL.
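The LRP model mentioned above has the form y = a + b*min(x, x0), where the breakpoint x0 is the optimal level. A sketch of fitting it by grid search over candidate breakpoints with ordinary least squares on the clipped dose (data and candidate grid are hypothetical):

```python
def fit_lrp(xs, ys, breakpoints):
    """Fit y = a + b*min(x, x0) by OLS at each candidate breakpoint x0;
    return (sse, a, b, x0) for the best-fitting breakpoint."""
    best = None
    n = len(xs)
    for x0 in breakpoints:
        z = [min(x, x0) for x in xs]         # clipped dose
        mz, my = sum(z) / n, sum(ys) / n
        szz = sum((zi - mz) ** 2 for zi in z)
        if szz == 0:
            continue
        b = sum((zi - mz) * (yi - my) for zi, yi in zip(z, ys)) / szz
        a = my - b * mz
        sse = sum((yi - (a + b * zi)) ** 2 for zi, yi in zip(z, ys))
        if best is None or sse < best[0]:
            best = (sse, a, b, x0)
    return best
```

The article's sensitivity finding shows up directly here: if no candidate dose sits near the true breakpoint, the grid search cannot place x0 correctly.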
Kor-Anantakul, Ounjai; Suntharasaj, Thitima; Suwanrath, Chitkasaem; Hanprasertpong, Tharangrut; Pranpanus, Savitree; Pruksanusak, Ninlapa; Janwadee, Suthiraporn; Geater, Alan
2017-01-01
To establish normative weight-adjusted models for the median levels of first trimester serum biomarkers for trisomy 21 screening in southern Thai women, and to compare these reference levels with Caucasian-specific and northern Thai models. A cross-sectional study was conducted in 1,150 women with normal singleton pregnancies to determine serum pregnancy-associated plasma protein-A (PAPP-A) and free β-human chorionic gonadotropin (β-hCG) concentrations in women from southern Thailand. The predicted median values were compared with published equations for Caucasians and northern Thai women. The best-fitting regression equations for the expected median serum levels of PAPP-A (mIU/L) and free β-hCG (ng/mL) according to maternal weight (Wt in kg) and gestational age (GA in days) were: [Formula: see text] and [Formula: see text] Both equations were selected with a statistically significant contribution (p < 0.05). Compared with the Caucasian model, the median values of PAPP-A were higher and the median values of free β-hCG were lower in the southern Thai women. Compared with the northern Thai models, the median values of both biomarkers were lower in southern Thai women. The study has successfully developed maternal-weight- and gestational-age-adjusted median normative models to convert the PAPP-A and free β-hCG levels into their Multiple of Median equivalents in southern Thai women. These models confirmed ethnic differences.
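Converting a measured marker level into a Multiple of Median (MoM) only requires the model's predicted median for the woman's weight and gestational age. The regression form and coefficients below are placeholders for illustration (the paper's fitted equations are not reproduced in the record):

```python
def predicted_median(weight_kg, ga_days, coeffs):
    """Hypothetical log-linear median model:
    log10(median) = c0 + c1*weight + c2*ga."""
    c0, c1, c2 = coeffs
    return 10.0 ** (c0 + c1 * weight_kg + c2 * ga_days)

def mom(observed, weight_kg, ga_days, coeffs):
    """Multiple of Median: observed level over the expected median."""
    return observed / predicted_median(weight_kg, ga_days, coeffs)
```

Population-specific coefficient sets are exactly what distinguishes the southern Thai, northern Thai, and Caucasian reference models compared above.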
Reichle, Rolf; Koster, Randal; DeLannoy, Gabrielle; Forman, Barton; Liu, Qing; Mahanama, Sarith; Toure, Ally
2011-01-01
The Modern-Era Retrospective analysis for Research and Applications (MERRA) is a state-of-the-art reanalysis that provides, in addition to atmospheric fields, global estimates of soil moisture, latent heat flux, snow, and runoff for 1979-present. This study introduces a supplemental and improved set of land surface hydrological fields ('MERRA-Land') generated by replaying a revised version of the land component of the MERRA system. Specifically, the MERRA-Land estimates benefit from corrections to the precipitation forcing with the Global Precipitation Climatology Project pentad product (version 2.1) and from revised parameters in the rainfall interception model, changes that effectively correct for known limitations in the MERRA land surface meteorological forcings. The skill (defined as the correlation coefficient of the anomaly time series) in land surface hydrological fields from MERRA and MERRA-Land is assessed here against observations and compared to the skill of the state-of-the-art ERA-Interim reanalysis. MERRA-Land and ERA-Interim root zone soil moisture skills (against in situ observations at 85 US stations) are comparable and significantly greater than that of MERRA. Throughout the northern hemisphere, MERRA and MERRA-Land agree reasonably well with in situ snow depth measurements (from 583 stations) and with snow water equivalent from an independent analysis. Runoff skill (against naturalized stream flow observations from 15 basins in the western US) of MERRA and MERRA-Land is typically higher than that of ERA-Interim. With a few exceptions, the MERRA-Land data appear more accurate than the original MERRA estimates and are thus recommended for those interested in using MERRA output for land surface hydrological studies.
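The skill metric used here, the correlation coefficient of the anomaly time series, can be sketched for a monthly series; the climatology removal below is a minimal version of the standard procedure, not MERRA's own processing:

```python
import numpy as np

def anomaly_correlation(model, obs, period=12):
    """Skill as defined in the study: correlation of the two anomaly
    time series, i.e. each monthly series minus its mean seasonal
    cycle (a simple per-calendar-month climatology)."""
    model = np.asarray(model, float)
    obs = np.asarray(obs, float)
    n = model.size - model.size % period          # whole years only
    m = model[:n].reshape(-1, period)
    o = obs[:n].reshape(-1, period)
    m_anom = (m - m.mean(axis=0)).ravel()          # remove climatology
    o_anom = (o - o.mean(axis=0)).ravel()
    return np.corrcoef(m_anom, o_anom)[0, 1]

# Synthetic 10-year monthly series sharing a seasonal cycle plus
# correlated interannual anomalies (all values made up).
rng = np.random.default_rng(1)
t = np.arange(120)
cycle = np.sin(2 * np.pi * t / 12)
signal = rng.normal(size=120)
skill = anomaly_correlation(cycle + signal,
                            cycle + 0.8 * signal + 0.3 * rng.normal(size=120))
```

Removing the seasonal cycle first matters: two series can correlate highly just by sharing a seasonal cycle, which says nothing about skill in representing interannual variability.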
Directory of Open Access Journals (Sweden)
Alexandra Kuznetsova
2016-01-01
Full Text Available The adjustment of the wind input source term in the numerical model WAVEWATCH III for a middle-sized water body is reported. For this purpose, a field experiment on the Gorky Reservoir was carried out: surface waves were measured along with the parameters of the airflow, including the wind speed in close proximity to the water surface. On the basis of the experimental results, a parameterization of the drag coefficient depending on the 10 m wind speed is proposed. This parameterization is used in WAVEWATCH III to adjust the wind input source term within the WAM 3 and Tolman and Chalikov parameterizations. The simulation of surface wind waves in WAVEWATCH III, tuned to the conditions of the middle-sized water body, is performed using three built-in parameterizations (WAM 3, Tolman and Chalikov, and WAM 4) and the adjusted wind input source term parameterizations. The applicability of the model to the middle-sized reservoir is verified by comparing the simulated data with the results of the field experiment. It is shown that the use of the proposed parameterization CD(U10) improves the agreement in the significant wave height HS between the field experiment and the numerical simulation.
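The abstract proposes a drag-coefficient parameterization CD(U10) but does not state its coefficients; the sketch below shows only the generic shape such a parameterization typically takes (growth with 10 m wind speed, capped at high winds), with made-up numbers:

```python
def drag_coefficient(u10, cd0=1.0e-3, slope=6.5e-5, u_cap=25.0):
    """Illustrative CD(U10): a drag coefficient that grows linearly
    with 10 m wind speed and saturates above u_cap. The coefficients
    actually fitted on the Gorky Reservoir data are in the paper;
    cd0, slope and u_cap here are placeholders."""
    u = min(u10, u_cap)
    return cd0 + slope * u

cd_light = drag_coefficient(5.0)    # light wind
cd_strong = drag_coefficient(20.0)  # strong wind
```

In a wave model the drag coefficient sets the momentum flux from wind to waves, so tuning CD(U10) to the measured near-surface airflow directly rescales the wind input source term.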
High-resolution and Monte Carlo additions to the SASKTRAN radiative transfer model
Directory of Open Access Journals (Sweden)
D. J. Zawada
2015-06-01
Full Text Available The Optical Spectrograph and InfraRed Imaging System (OSIRIS instrument on board the Odin spacecraft has been measuring limb-scattered radiance since 2001. The vertical radiance profiles measured as the instrument nods are inverted, with the aid of the SASKTRAN radiative transfer model, to obtain vertical profiles of trace atmospheric constituents. Here we describe two newly developed modes of the SASKTRAN radiative transfer model: a high-spatial-resolution mode and a Monte Carlo mode. The high-spatial-resolution mode is a successive-orders model capable of modelling the multiply scattered radiance when the atmosphere is not spherically symmetric; the Monte Carlo mode is intended for use as a highly accurate reference model. It is shown that the two models agree in a wide variety of solar conditions to within 0.2 %. As an example case for both models, Odin–OSIRIS scans were simulated with the Monte Carlo model and retrieved using the high-resolution model. A systematic bias of up to 4 % in retrieved ozone number density between scans where the instrument is scanning up or scanning down was identified. The bias is largest when the sun is near the horizon and the solar scattering angle is far from 90°. It was found that calculating the multiply scattered diffuse field at five discrete solar zenith angles is sufficient to eliminate the bias for typical Odin–OSIRIS geometries.
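A Monte Carlo model serves as a reference because its estimates are unbiased and converge to the exact answer as the photon count grows. A minimal illustration of that idea (unrelated to SASKTRAN's actual implementation) is direct transmission through a uniform slab, whose Monte Carlo estimate converges to the analytic Beer-Lambert value:

```python
import math
import random

def mc_direct_transmission(tau, n=200_000, seed=42):
    """Sample exponential free paths through a uniform slab of optical
    depth tau and count photons that cross without interacting. The
    estimate converges to exp(-tau) as n grows."""
    rng = random.Random(seed)
    # 1 - random() lies in (0, 1], so the log is always defined.
    hits = sum(1 for _ in range(n) if -math.log(1.0 - rng.random()) > tau)
    return hits / n

tau = 1.5
mc = mc_direct_transmission(tau)
analytic = math.exp(-tau)
```

The same logic, extended to scattering events and spherical geometry, is what makes a Monte Carlo radiative transfer mode a useful truth standard for validating faster deterministic models such as the successive-orders mode.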
Institute of Scientific and Technical Information of China (English)
徐庆元; 周小林; 曾志平; 杨小礼
2004-01-01
A new mechanics model that reveals the additional longitudinal force transmission between continuously welded rails and bridges is established; it takes into account the influence of the mutual relative displacements among the rail, the sleeper and the beam. An example is presented and numerical results are compared. The results show that the additional longitudinal forces calculated with the new model are smaller than those of the previous model, especially in the case of flexible-pier bridges. The new model is also suitable for analysing the additional longitudinal force transmission between rails and bridges for ballastless track with small-resistance fasteners, without taking sleeper displacement into account; compared with ballasted bridges, ballastless bridges show a much stronger additional longitudinal force transmission between the continuously welded rails and the bridges.
The rise and fall of divorce - a sociological adjustment of Becker's model of the marriage market
DEFF Research Database (Denmark)
Andersen, Signe Hald; Hansen, Lars Gårn
Despite the strong and persistent influence of Gary Becker’s marriage model, the model does not completely explain the observed correlation between married women’s labor market participation and overall divorce rates. In this paper we show how a simple sociologically inspired extension of the model...... this aspect into Becker’s model, the model provides predictions of divorce rates and causes that fit more closely with empirical observations. (JEL: J1)...
Chen, Shi; Liao, Xu; Ma, Hongsheng; Zhou, Longquan; Wang, Xingzhou; Zhuang, Jiancang
2017-04-01
The relative gravimeter, which generally uses a zero-length spring as the gravity sensor, is still the first choice for terrestrial gravity measurement because of its efficiency and low cost. Because the drift rate of the instrument can change with time and from meter to meter, estimating the drift rate requires returning to base stations or stations with known gravity values for repeated measurements at regular intervals of a few hours during a practical survey. However, in a campaign-style gravity survey over a large region, where stations are several to tens of kilometres apart, frequently returning for closure measurements greatly reduces survey efficiency and is extremely time-consuming. In this paper, we propose a new gravity data adjustment method that estimates the meter drift by means of Bayesian statistical inference. In our approach, we assume the drift rate is a smooth function of time. Trade-off parameters are used to control the fitting residuals, and we employ Akaike's Bayesian Information Criterion (ABIC) to estimate these trade-off parameters. Comparison and analysis of simulated data between the classical and Bayesian adjustments show that our method is robust and adapts to irregular, non-linear meter drift. Finally, we used this approach to process real campaign gravity data from North China. Our adjustment method recovers the time-varying drift-rate function of each meter and also detects abnormal meter drift during the survey. We also define an alternative error estimate for the inverted gravity value at each station on the basis of marginal distribution theory. Acknowledgment: This research is supported by the Science Foundation of the Institute of Geophysics, CEA, from the Ministry of Science and Technology of China (Nos. DQJB16A05; DQJB16B07), China National Special Fund for Earthquake
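The classical adjustment step that this paper generalizes can be sketched in a few lines; the Bayesian machinery (smooth time-varying drift with roughness penalty, trade-off parameters chosen by ABIC) is not reproduced here, and all numbers are hypothetical:

```python
import numpy as np

def constant_drift_rate(times_h, readings_mgal):
    """Classical drift adjustment: repeated readings at one base
    station are fit as g0 + r * t by least squares, and the slope r
    is the meter drift rate. The paper's Bayesian method replaces the
    constant r with a smooth function of time."""
    t = np.asarray(times_h, float)
    y = np.asarray(readings_mgal, float)
    r, g0 = np.polyfit(t, y, 1)  # slope first, then intercept
    return r, g0

# Synthetic base-station readings drifting at about +0.02 mGal/h.
times = [0.0, 2.0, 4.0, 6.0, 8.0]
reads = [1000.000, 1000.041, 1000.079, 1000.122, 1000.159]
rate, g0 = constant_drift_rate(times, reads)
```

The paper's point is that when closure visits are hours or days apart, a single constant slope is too rigid, so the drift is modelled as a smooth curve whose flexibility is chosen objectively via ABIC rather than by hand.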
Directory of Open Access Journals (Sweden)
Aschengrau Ann
2005-06-01
Full Text Available Abstract Background The availability of geographic information from cancer and birth defect registries has increased public demands for investigation of perceived disease clusters. Many neighborhood-level cluster investigations are methodologically problematic, while maps made from registry data often ignore latency and many known risk factors. Population-based case-control and cohort studies provide a stronger foundation for spatial epidemiology because potential confounders and disease latency can be addressed. Methods We investigated the association between residence and colorectal, lung, and breast cancer on upper Cape Cod, Massachusetts (USA using extensive data on covariates and residential history from two case-control studies for 1983–1993. We generated maps using generalized additive models, smoothing on longitude and latitude while adjusting for covariates. The resulting continuous surface estimates disease rates relative to the whole study area. We used permutation tests to examine the overall importance of location in the model and identify areas of increased and decreased risk. Results Maps of colorectal cancer were relatively flat. Assuming 15 years of latency, lung cancer was significantly elevated just northeast of the Massachusetts Military Reservation, although the result did not hold when we restricted to residences of longest duration. Earlier non-spatial epidemiology had found a weak association between lung cancer and proximity to gun and mortar positions on the reservation. Breast cancer hot spots tended to increase in magnitude as we increased latency and adjusted for covariates, indicating that confounders were partly hiding these areas. Significant breast cancer hot spots were located near known groundwater plumes and the Massachusetts Military Reservation. Discussion Spatial epidemiology of population-based case-control studies addresses many methodological criticisms of cluster studies and generates new exposure
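A kernel-smoothed risk surface gives the flavour of the mapping approach, although the study itself fits a generalized additive model smoothing on longitude and latitude with covariate adjustment and permutation tests; everything below (coordinates, rates, bandwidth) is synthetic:

```python
import numpy as np

def local_relative_risk(grid_pt, xy, case, bandwidth):
    """Kernel-smoothed stand-in for the paper's GAM surface: at a map
    point, subjects are weighted by a Gaussian kernel in (longitude,
    latitude) and the local case proportion is compared with the
    study-wide proportion. Covariate adjustment and permutation
    testing from the actual analysis are omitted."""
    d2 = ((xy - grid_pt) ** 2).sum(axis=1)
    w = np.exp(-d2 / (2 * bandwidth ** 2))
    local_rate = (w * case).sum() / w.sum()
    return local_rate / case.mean()

# Synthetic case-control data with a hypothetical hot spot near (8, 8).
rng = np.random.default_rng(2)
xy = rng.uniform(0, 10, size=(500, 2))
p = 0.3 + 0.4 * np.exp(-(((xy - 8.0) ** 2).sum(axis=1)) / 4.0)
case = (rng.random(500) < p).astype(float)

rr_hot = local_relative_risk(np.array([8.0, 8.0]), xy, case, 1.0)
rr_flat = local_relative_risk(np.array([2.0, 2.0]), xy, case, 1.0)
```

Evaluating such a surface on a grid yields the continuous relative-risk map described in the abstract; the permutation test then asks whether location improves the model at all, and which map regions depart significantly from the study-wide rate.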
Energy Technology Data Exchange (ETDEWEB)
Grotjans, H.
1998-04-01
In the current Software Engineering Module (SEM2), three additional test cases have been investigated, as listed in Chapter 2. For all test cases it has been shown that the computed results are grid-independent; this was done by systematic grid refinement studies. The main objective of the current SEM2 was the verification and validation of the new wall function implementation for the k-epsilon model and the SMC model. Analytical relations and experimental data have been used for comparison with the computational results, and the agreement is good. The correct implementation of the new wall function has therefore been demonstrated. As the results in this report have shown, a consistent grid refinement can be done for any test case. This is an important improvement for industrial applications, as no model-specific requirements must be considered during grid generation. (orig.)
Creating a Climate for Linguistically Responsive Instruction: The Case for Additive Models
Rao, Arthi B.; Morales, P. Zitlali
2015-01-01
As a state with a longstanding tradition of offering bilingual education, Illinois has a legislative requirement for native language instruction in earlier grades through a model called Transitional Bilingual Education (TBE). This model does not truly develop bilingualism, however, but rather offers native language instruction to English learners…
Conceptual performance model for deep in situ recycled pavements with cement and bitumen additives
CSIR Research Space (South Africa)
Steyn, WJvdM
2001-10-01
Full Text Available structure is monitored together with environmental parameters. Based on this information, and associated laboratory testing data, a model for the performance of these pavements is currently being developed. The model is currently based on the results of APT...
Bayes linear covariance matrix adjustment
Wilkinson, Darren J
1995-01-01
In this thesis, a Bayes linear methodology for the adjustment of covariance matrices is presented and discussed. A geometric framework for quantifying uncertainties about covariance matrices is set up, and an inner-product for spaces of random matrices is motivated and constructed. The inner-product on this space captures aspects of our beliefs about the relationship between covariance matrices of interest to us, providing a structure rich enough for us to adjust beliefs about unknown matrices in the light of data such as sample covariance matrices, exploiting second-order exchangeability and related specifications to obtain representations allowing analysis. Adjustment is associated with orthogonal projection, and illustrated with examples of adjustments for some common problems. The problem of adjusting the covariance matrices underlying exchangeable random vectors is tackled and discussed. Learning about the covariance matrices associated with multivariate time series dynamic linear models is shown to be a...
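The scalar/vector Bayes linear update that the thesis generalizes to spaces of covariance matrices can be stated compactly; the sketch below implements the standard adjusted expectation and variance (a textbook second-order formula, not the thesis's matrix-space construction):

```python
import numpy as np

def bayes_linear_adjust(EX, ED, VX, VD, CXD, d):
    """Bayes linear adjustment of beliefs about X given observed data d,
    using only second-order specifications:
        E_d(X)   = E(X) + Cov(X, D) Var(D)^{-1} (d - E(D))
        Var_d(X) = Var(X) - Cov(X, D) Var(D)^{-1} Cov(D, X)
    This is the orthogonal-projection step the thesis associates with
    adjustment."""
    VDinv = np.linalg.inv(VD)
    EX_d = EX + CXD @ VDinv @ (d - ED)
    VX_d = VX - CXD @ VDinv @ CXD.T
    return EX_d, VX_d

# One-dimensional example with prior variance 1, data variance 2,
# covariance 1, and observation d = 1 (all values illustrative).
EX = np.array([0.0]); ED = np.array([0.0])
VX = np.array([[1.0]]); VD = np.array([[2.0]])
CXD = np.array([[1.0]])
EX_d, VX_d = bayes_linear_adjust(EX, ED, VX, VD, CXD, np.array([1.0]))
```

The thesis's contribution is to make the "X" in this update a covariance matrix itself, with an inner product on random matrices so that sample covariance matrices play the role of the data D.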
Directory of Open Access Journals (Sweden)
Aiman Omer
2015-12-01
Full Text Available Bipedal humanoid robots are expected to play a major role in the future. Performing bipedal locomotion requires high energy due to the high torque that needs to be provided by its legs’ joints. Taking the WABIAN-2R as an example, it uses harmonic gears in its joint to increase the torque. However, using such a mechanism increases the weight of the legs and therefore increases energy consumption. Therefore, the idea of developing a mechanism with adjustable stiffness to be connected to the leg joint is introduced here. The proposed mechanism would have the ability to provide passive and active motion. The mechanism would be attached to the ankle pitch joint as an artificial tendon. Using computer simulations, the dynamical performance of the mechanism is analytically evaluated.
Estimation of direct effects for survival data by using the Aalen additive hazards model
DEFF Research Database (Denmark)
Martinussen, T.; Vansteelandt, S.; Gerster, M.
2011-01-01
We extend the definition of the controlled direct effect of a point exposure on a survival outcome, other than through some given, time-fixed intermediate variable, to the additive hazard scale. We propose two-stage estimators for this effect when the exposure is dichotomous and randomly assigned...
Modelling of flame propagation in the gasoline fuelled Wankel rotary engine with hydrogen additives
Fedyanov, E. A.; Zakharov, E. A.; Prikhodkov, K. V.; Levin, Y. V.
2017-02-01
Recently, hydrogen has been considered as an alternative fuel for vehicle power units. The Wankel engine is the most suitable for adaptation to hydrogen feeding. A hydrogen additive helps to decrease incompleteness of combustion in the volumes near the apex of the rotor. Results of theoretical research on the influence of hydrogen additives on flame propagation in the combustion chamber of the Wankel rotary engine are presented. The theoretical research shows that a blend of 70% gasoline with 30% hydrogen can complete combustion near the T-apex in both stoichiometric and lean mixtures. Maps of the flame front location versus the angle of rotor rotation and hydrogen fraction are obtained. Relations between the minimum required amount of hydrogen addition and engine speed are shown for engine modes close to the average city driving cycle. The amount of hydrogen addition that could be injected by a nozzle with different flow sections is calculated in order to analyze the capacity of the feed system.
Energy Technology Data Exchange (ETDEWEB)
Cobb, J.T. Jr.
1979-01-01
Six approaches to devolatilization modeling have been reviewed. Two have been selected for further evaluation: the Vand-type model of Anthony and Howard and the diffusion model of Russel et al. The first of these treats particles under kinetic control only. The second includes some mass transfer control along with kinetic control. Behavior of particles in the SYNTHANE process appears to be in the transition region between kinetic and mass transfer control. Work during the next quarter will focus on the temperature history of average particles in the carbonizer of the SYNTHANE process and on the methods by which the two devolatilization models chosen will be used to describe conversion in the SYNTHANE carbonizer.
Possibilities of Preoperative Medical Models Made by 3D Printing or Additive Manufacturing
Directory of Open Access Journals (Sweden)
Mika Salmi
2016-01-01
Full Text Available Most of the 3D printing applications of preoperative models have been focused on dental and craniomaxillofacial area. The purpose of this paper is to demonstrate the possibilities in other application areas and give examples of the current possibilities. The approach was to communicate with the surgeons with different fields about their needs related preoperative models and try to produce preoperative models that satisfy those needs. Ten different kinds of examples of possibilities were selected to be shown in this paper and aspects related imaging, 3D model reconstruction, 3D modeling, and 3D printing were presented. Examples were heart, ankle, backbone, knee, and pelvis with different processes and materials. Software types required were Osirix, 3Data Expert, and Rhinoceros. Different 3D printing processes were binder jetting and material extrusion. This paper presents a wide range of possibilities related to 3D printing of preoperative models. Surgeons should be aware of the new possibilities and in most cases help from mechanical engineering side is needed.
Bursting and spiking due to additional direct and stochastic currents in neuron models
Institute of Scientific and Technical Information of China (English)
Yang Zhuo-Qin; Lu Qi-Shao
2006-01-01
Neurons at rest can exhibit diverse firing patterns in response to various external deterministic and random stimuli, especially additional currents. In this paper, neuronal firing patterns from bursting to spiking, induced by additional direct and stochastic currents, are explored in rest states corresponding to two values of the parameter VK in the Chay neuron system. Three cases are considered by numerical simulation and fast/slow dynamics analysis: only the direct current present, only the stochastic current present, or both currents together. Meanwhile, several important bursting patterns observed in neuronal experiments, such as the period-1 "circle/homoclinic" bursting and the integer multiple "fold/homoclinic" bursting with one spike per burst, as well as the transition from integer multiple bursting to period-1 "circle/homoclinic" bursting and that from stochastic "Hopf/homoclinic" bursting to "Hopf/homoclinic" bursting, are investigated in detail.
Trejos, Ana María; Reyes, Lizeth; Bahamon, Marly Johana; Alarcón, Yolima; Gaviria, Gladys
2015-08-01
A study in five Colombian cities in 2006 confirmed the findings of other international studies: the majority of HIV-positive children do not know their diagnosis, and caregivers are reluctant to give this information because they believe the news will cause the child emotional distress. The primary purpose of this study was therefore to validate a disclosure model. We implemented a clinical model, referred to as "DIRE", hypothesized to have normalizing effects on the psychological adjustment and antiretroviral treatment adherence of HIV-seropositive children, using a quasi-experimental design. Tests were administered (a questionnaire assessing patterns of disclosure and non-disclosure of the HIV/AIDS diagnosis to children, for health professionals and participating caregivers; the Family Apgar; the EuroQol EQ-5D; the MOS Social Support Survey; a questionnaire on HIV/AIDS treatment information; and the Child Behavior Checklist CBCL/6-18 adapted for Latinos) before and after implementation of the model with 31 children (n=31), 30 caregivers (n=30) and 41 health professionals. Data processing was performed using the Statistical Package for the Social Sciences, version 21, applying parametric (Student's t) and non-parametric (Friedman) tests. No significant differences were found in treatment adherence (p=0.392); in psychological adjustment, significant positive differences were found at follow-up compared with baseline at 2 weeks (p=0.001), 3 months (p=0.000) and 6 months (p=0.000). The clinical model demonstrated effectiveness in normalizing psychological adjustment and maintaining treatment compliance. The process also generated confidence in caregivers and health professionals in this difficult task.
Yi, Yujun; Sun, Jie; Zhang, Shanghong; Yang, Zhifeng
2016-05-01
To date, a wide range of models have been applied to evaluate aquatic habitat suitability. In this study, three models, including the expert knowledge-based preference curve model (PCM), data-driven fuzzy logic model (DDFL), and generalized additive model (GAM), are used on a common data set to compare their effectiveness and accuracy. The true skill statistic (TSS) and the area under the receiver operating characteristics curve (AUC) are used to evaluate the accuracy of the three models. The results indicate that the two data-based methods (DDFL and GAM) yield better accuracy than the expert knowledge-based PCM, and the GAM yields the best accuracy. There are minor differences in the suitable ranges of the physical habitat variables obtained from the three models. The hydraulic habitat suitability index (HHSI) calculated by the PCM is the largest, followed by the DDFL and then the GAM. The results illustrate that data-based models can describe habitat suitability more objectively and accurately when there are sufficient data. When field data are lacking, combining expertise with data-based models is recommended. When field data are difficult to obtain, an expert knowledge-based model can be used as a replacement for the data-based methods.
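The two accuracy measures used for the model comparison, the true skill statistic and the area under the ROC curve, can be computed directly from their definitions; a self-contained sketch on a tiny made-up example:

```python
import numpy as np

def tss(y_true, y_pred):
    """True skill statistic: sensitivity + specificity - 1."""
    y_true = np.asarray(y_true)
    y_pred = np.asarray(y_pred)
    tp = np.sum((y_true == 1) & (y_pred == 1))
    fn = np.sum((y_true == 1) & (y_pred == 0))
    tn = np.sum((y_true == 0) & (y_pred == 0))
    fp = np.sum((y_true == 0) & (y_pred == 1))
    return tp / (tp + fn) + tn / (tn + fp) - 1.0

def auc(y_true, score):
    """AUC via the rank (Mann-Whitney) formulation: the probability
    that a random presence outscores a random absence, ties counting
    one half."""
    y_true = np.asarray(y_true)
    score = np.asarray(score, float)
    pos = score[y_true == 1]
    neg = score[y_true == 0]
    wins = (pos[:, None] > neg[None, :]).sum() \
        + 0.5 * (pos[:, None] == neg[None, :]).sum()
    return wins / (len(pos) * len(neg))

y = [1, 1, 1, 0, 0, 0]                  # observed presence/absence
s = [0.9, 0.8, 0.4, 0.5, 0.2, 0.1]      # model suitability scores
auc_val = auc(y, s)
tss_val = tss(y, [1 if x > 0.45 else 0 for x in s])
```

TSS needs a threshold on the suitability score, whereas AUC is threshold-free, which is why studies such as this one typically report both.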
Guarana Provides Additional Stimulation over Caffeine Alone in the Planarian Model
Dimitrios Moustakas; Michael Mezzio; Branden R Rodriguez; Mic Andre Constable; Mulligan, Margaret E.; Voura, Evelyn B.
2015-01-01
The stimulant effect of energy drinks is primarily attributed to the caffeine they contain. Many energy drinks also contain other ingredients that might enhance the tonic effects of these caffeinated beverages. One of these additives is guarana. Guarana is a climbing plant native to the Amazon whose seeds contain approximately four times the amount of caffeine found in coffee beans. The mix of other natural chemicals contained in guarana seeds is thought to heighten the stimulant effects of g...
Quanren Zeng; Zhenhai Xu; Yankang Tian; Yi Qin
2016-01-01
The development speed and application range of the additive manufacturing (AM) processes, such as selective laser melting (SLM), laser metal deposition (LMD) or laser-engineering net shaping (LENS), are ever-increasing in modern advanced manufacturing field for rapid manufacturing, tooling repair or surface enhancement of the critical metal components. LMD is based on a kind of directed energy deposition (DED) technology which ejects a strand of metal powders into a moving molten pool caused ...
Ten-year-old children strategies in mental addition: A counting model account.
Thevenot, Catherine; Barrouillet, Pierre; Castel, Caroline; Uittenhove, Kim
2016-01-01
For more than 30 years, it has been accepted that individuals from the age of 10 mainly retrieve the answers to simple additions from long-term memory, at least when the sum does not exceed 10. Nevertheless, recent studies challenge this assumption and suggest that expert adults use fast, compacted and unconscious procedures to solve very simple problems such as 3+2. If this is true, automated procedures should be rooted in earlier strategies and therefore observable in their non-compacted form in children. Thus, contrary to the dominant theoretical position, children's behaviors should not reflect retrieval. This is precisely what we observed in analyzing the response times of a sample of 42 10-year-old children who solved additions with operands from 1 to 9. Our results converge towards the conclusion that 10-year-old children still use counting procedures to solve non-tie problems involving operands from 2 to 4. Moreover, these counting procedures are revealed whatever the expertise of the children, who differ only in their speed of execution. Therefore, and contrary to the dominant position in the literature, according to which children's strategies evolve from counting to retrieval, the key change in the development of mental addition solving appears to be a shift from slow to quick counting procedures.
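The counting-model signature such analyses test for is a response time that grows with the counted quantity: if children count on from the larger operand, solution time should increase roughly linearly with the smaller operand. A sketch of that regression logic on made-up response times:

```python
import numpy as np

# Hypothetical per-problem mean response times (ms); under a counting
# account, RT should rise with min(a, b), the number of counted steps.
problems = [(3, 2), (4, 2), (2, 4), (3, 4), (4, 3), (2, 3)]
rts_ms = [1900, 2100, 2150, 2400, 2380, 1880]

min_ops = np.array([min(a, b) for a, b in problems], float)
slope, intercept = np.polyfit(min_ops, np.array(rts_ms, float), 1)
```

A clearly positive slope is evidence for counting procedures; a flat response-time profile across problem sizes would instead be the signature expected from direct retrieval.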
Energy Technology Data Exchange (ETDEWEB)
Rao, Rekha R. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Celina, Mathias C. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Giron, Nicholas Henry [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Long, Kevin Nicholas [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Russick, Edward M. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)
2015-01-01
We are developing computational models to help understand manufacturing processes, final properties and aging of structural foam, polyurethane PMDI. The resulting model predictions of density and cure gradients from the manufacturing process will be used as input to foam heat transfer and mechanical models. BKC 44306 PMDI-10 and BKC 44307 PMDI-18 are the most prevalent foams used in structural parts. Experiments needed to parameterize models of the reaction kinetics and the equations of motion during the foam blowing stages were described for BKC 44306 PMDI-10 in the first of this report series (Mondy et al. 2014). BKC 44307 PMDI-18 is a new foam that will be used to make relatively dense structural supports via overpacking. It uses a different catalyst than those in the BKC 44306 family of foams; hence, we expect that the reaction kinetics models must be modified. Here we detail the experiments needed to characterize the reaction kinetics of BKC 44307 PMDI-18 and suggest parameters for the model based on these experiments. In addition, the second part of this report describes data taken to provide input to the preliminary nonlinear viscoelastic structural response model developed for BKC 44306 PMDI-10 foam. We show that the standard cure schedule used by KCP does not fully cure the material, and, upon temperature elevation above 150°C, oxidation or decomposition reactions occur that alter the composition of the foam. These findings suggest that achieving a fully cured foam part with this formulation may not be possible through thermal curing. As such, viscoelastic characterization procedures developed for curing thermosets can provide only approximate material properties, since the state of the material continuously evolves during tests.
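Cure-kinetics models of the general kind parameterized from such experiments can be sketched as an nth-order Arrhenius rate law; the constants below are placeholders, not the fitted BKC 44307 PMDI-18 values:

```python
import math

def cure_extent(temp_K, times_s, A=1.0e5, Ea=60_000.0, n=1.0):
    """Illustrative nth-order cure kinetics: da/dt = k(T) (1 - a)^n
    with k = A exp(-Ea / (R T)), integrated by explicit Euler on the
    given time grid. A, Ea and n are hypothetical, standing in for
    values fitted from the characterization experiments."""
    R = 8.314  # J/(mol K)
    k = A * math.exp(-Ea / (R * temp_K))
    a, out, t_prev = 0.0, [], 0.0
    for t in times_s:
        dt = t - t_prev
        a = min(1.0, a + dt * k * (1.0 - a) ** n)
        out.append(a)
        t_prev = t
    return out

# Cure advances faster at higher temperature over the same 10-minute hold.
times = [i * 10.0 for i in range(1, 61)]
a_low = cure_extent(380.0, times)[-1]
a_high = cure_extent(420.0, times)[-1]
```

A model of this form, once parameterized, is what feeds the density and cure gradients into the downstream heat-transfer and mechanical models; the report's caveat is that above roughly 150°C side reactions alter the material, so a simple cure law stops being the whole story.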
Koivunoro, H; Schmitz, T; Hippeläinen, E; Liu, Y-H; Serén, T; Kotiluoto, P; Auterinen, I; Savolainen, S
2014-06-01
The mixed neutron-photon beam of the FiR 1 reactor is used for boron neutron capture therapy (BNCT) in Finland. A beam model has been defined for patient treatment planning and dosimetric calculations. The neutron beam model has been validated with activation foil measurements. The photon beam model has not been thoroughly validated against measurements, because the beam photon dose rate is low, at most only 2% of the total weighted patient dose at FiR 1. However, improving the photon dose detection accuracy is worthwhile, since the beam photon dose is of concern in beam dosimetry. In this study, we performed ionization chamber measurements with multiple build-up caps of different thicknesses to adjust the calculated photon spectrum of the FiR 1 beam model.
Adjusting Population Risk for Functional Health Status.
Fuller, Richard L; Hughes, John S; Goldfield, Norbert I
2016-04-01
Risk adjustment accounts for differences in population mix by reducing the likelihood of enrollee selection by managed care plans and providing a correction to otherwise biased reporting of provider or plan performance. Functional health status is not routinely included within risk-adjustment methods, but is believed by many to be a significant enhancement to risk adjustment for complex enrollees and patients. In this analysis a standardized measure of functional health was created using 3 different source functional assessment instruments submitted to the Medicare program on condition of payment. The authors use a 5% development sample of Medicare claims from 2006 and 2007, including functional health assessments, and develop a model of functional health classification comprising 9 groups defined by the interaction of self-care, mobility, incontinence, and cognitive impairment. The 9 functional groups were used to augment Clinical Risk Groups, a diagnosis-based patient classification system, and when using a validation set of 100% of Medicare data for 2010 and 2011, this study found the use of the functional health module to improve the fit of observed enrollee cost, measured by the R² statistic, by 5% across all Medicare enrollees. The authors observed complex nonlinear interactions across functional health domains when constructing the model and caution that functional health status needs careful handling when used for risk adjustment. The addition of functional health status within existing risk-adjustment models has the potential to improve equitable resource allocation in the financing of care costs for more complex enrollees if handled appropriately. (Population Health Management 2016;19:136-144).
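The kind of R² gain reported here, from augmenting a diagnosis-based model with categorical functional-health groups, can be illustrated on synthetic data. The group structure and cost model below are invented for illustration, not Medicare values:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 2000
# Synthetic enrollees: a diagnosis-based risk score plus a hidden functional-health group
risk = rng.normal(1.0, 0.3, n)
func_group = rng.integers(0, 3, n)            # 3 illustrative functional groups
cost = 5000 * risk + 2000 * func_group + rng.normal(0, 1500, n)

def r_squared(X, y):
    """Coefficient of determination for an OLS fit of y on X."""
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    return 1.0 - resid.var() / y.var()

X_base = np.column_stack([np.ones(n), risk])                       # diagnosis-only model
dummies = (func_group[:, None] == np.arange(3)).astype(float)[:, 1:]  # drop reference group
X_aug = np.column_stack([X_base, dummies])                         # + functional groups

print(round(r_squared(X_base, cost), 3), round(r_squared(X_aug, cost), 3))
```

When the functional groups carry cost information not captured by the diagnosis score, the augmented model's R² is strictly higher, which is the comparison the study performs on real claims data.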
Baqué, Michèle; Amendt, Jens
2013-01-01
Developmental data of juvenile blow flies (Diptera: Calliphoridae) are typically used to calculate the age of immature stages found on or around a corpse and thus to estimate a minimum post-mortem interval (PMI(min)). However, many of those data sets do not take into account that immature blow flies grow in a non-linear fashion. Linear models do not provide sufficiently reliable age estimates and may even lead to an erroneous determination of the PMI(min). In line with the Daubert standard and the need for improvements in forensic science, newer statistical tools such as smoothing methods and mixed models allow the modelling of non-linear relationships and expand the field of statistical analyses. The present study introduces the background and application of these statistical techniques by analysing a model which describes the development of the forensically important blow fly Calliphora vicina at different temperatures. The comparison of three statistical methods (linear regression, generalised additive modelling and generalised additive mixed modelling) clearly demonstrates that only the latter provided regression parameters that reflect the data adequately. We focus explicitly both on the exploration of the data (to assure their quality and to show the importance of checking them carefully prior to conducting the statistical tests) and on the validation of the resulting models. Hence, we present a common method for evaluating and testing forensic entomological data sets by using, for the first time, generalised additive mixed models.
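The core point, that a straight line underfits non-linear development data while a flexible smoother does not, can be sketched with a polynomial smoother as a minimal stand-in for the spline-based GAMs used in the study. The growth-curve shape and parameters below are illustrative, not C. vicina data:

```python
import numpy as np

rng = np.random.default_rng(1)
age_h = np.linspace(0, 120, 80)   # hours of development (illustrative scale)
# Logistic growth curve as a stand-in for non-linear larval development
length = 18.0 / (1.0 + np.exp(-(age_h - 50.0) / 12.0)) + rng.normal(0, 0.4, 80)

def rmse(pred):
    """Root mean squared error against the observed lengths."""
    return float(np.sqrt(np.mean((length - pred) ** 2)))

# Polynomial.fit rescales the domain internally, keeping the fit well conditioned
lin = np.polynomial.Polynomial.fit(age_h, length, 1)(age_h)     # linear model
smooth = np.polynomial.Polynomial.fit(age_h, length, 5)(age_h)  # flexible smoother stand-in

print(f"linear RMSE={rmse(lin):.2f}  smooth RMSE={rmse(smooth):.2f}")
```

On any sigmoidal growth curve the linear fit leaves large systematic residuals, which is exactly the bias in PMI(min) estimates that the authors warn about.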
Harrison, Sean; Tilling, Kate; Turner, Emma L; Lane, J Athene; Simpkin, Andrew; Davis, Michael; Donovan, Jenny; Hamdy, Freddie C; Neal, David E; Martin, Richard M
2016-12-01
Previous studies indicate a possible inverse relationship between prostate-specific antigen (PSA) and body mass index (BMI), and a positive relationship between PSA and age. We investigated the associations between age, BMI, PSA, and screen-detected prostate cancer to determine whether an age-BMI-adjusted PSA model would be clinically useful for detecting prostate cancer. Cross-sectional analysis nested within the UK ProtecT trial of treatments for localized cancer. Of 18,238 men aged 50-69 years, 9,457 men without screen-detected prostate cancer (controls) and 1,836 men with prostate cancer (cases) met inclusion criteria: no history of prostate cancer or diabetes; PSA BMI between 15 and 50 kg/m². Multivariable linear regression models were used to investigate the relationship between log-PSA, age, and BMI in all men, controlling for prostate cancer status. In the 11,293 included men, the median PSA was 1.2 ng/ml (IQR: 0.7-2.6); mean age 61.7 years (SD 4.9); and mean BMI 26.8 kg/m² (SD 3.7). There was a 5.1% decrease in PSA per 5 kg/m² increase in BMI (95% CI 3.4-6.8) and a 13.6% increase in PSA per 5-year increase in age (95% CI 12.0-15.1). Interaction tests showed no evidence for different associations between age, BMI, and PSA in men above and below 3.0 ng/ml (all p for interaction >0.2). The age-BMI-adjusted PSA model performed as well as an age-adjusted model based on National Institute for Health and Care Excellence (NICE) guidelines at detecting prostate cancer. Age and BMI were associated with small changes in PSA. An age-BMI-adjusted PSA model is no more clinically useful for detecting prostate cancer than current NICE guidelines. Future studies looking at the effect of different variables on PSA, independent of their effect on prostate cancer, may improve the discrimination of PSA for prostate cancer.
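Coefficients on the log-PSA scale translate into percent changes via 100·(exp(β·Δ) − 1). The sketch below reverse-engineers illustrative coefficients from the reported effect sizes; the betas are not the published model coefficients:

```python
import math

def pct_change(beta, delta):
    """Percent change in PSA for a `delta`-unit increase in a covariate,
    given its coefficient `beta` on the log-PSA scale."""
    return 100.0 * (math.exp(beta * delta) - 1.0)

# Coefficients reverse-engineered from the reported effects (illustrative only)
b_bmi = math.log(1 - 0.051) / 5   # matches -5.1% per 5 kg/m^2
b_age = math.log(1 + 0.136) / 5   # matches +13.6% per 5 years

print(f"{pct_change(b_bmi, 5):.1f}% per 5 kg/m^2, {pct_change(b_age, 5):.1f}% per 5 years")
```

The same transformation works for any log-linear regression: exponentiate the coefficient times the increment of interest, subtract one, and multiply by 100.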
Finite element modeling of deposition of ceramic material during SLM additive manufacturing
Directory of Open Access Journals (Sweden)
Chen Qiang
2016-01-01
A three-dimensional model for material deposition in Selective Laser Melting (SLM), with application to Al2O3-ZrO2 eutectic ceramic, is presented. As the material is transparent to the laser, dopants are added to increase the heat absorption efficiency. Based on the Beer-Lambert law, a volumetric heat source model taking into account the material absorption is derived. The Level Set method with multiphase homogenization is used to track the shape of the deposited bead, and the thermodynamics is coupled to calculate the melting-solidification path. The shrinkage during consolidation from powder to compact medium is modeled by a compressible Newtonian constitutive law. A semi-implicit formulation of surface tension is used, which permits a stable resolution to capture the gas-liquid interface. The formation of droplets is obtained and slight waves of the melt pool are observed. The influence of different process parameters on temperature distribution, melt pool profiles and bead shapes is discussed.
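A Beer-Lambert volumetric heat source of the kind derived here can be sketched as follows; the absorption coefficient and laser intensity are assumed values for illustration, not the paper's parameters:

```python
import numpy as np

def volumetric_heat_source(z, I0, alpha):
    """Beer-Lambert absorption: intensity decays as I(z) = I0*exp(-alpha*z),
    so the absorbed power per unit volume is q(z) = -dI/dz = alpha*I0*exp(-alpha*z)."""
    return alpha * I0 * np.exp(-alpha * z)

z = np.linspace(0.0, 200e-6, 201)   # depth into the powder bed, m (illustrative)
alpha = 2.0e4                        # absorption coefficient, 1/m (dopant-enhanced, assumed)
I0 = 1.0e9                           # surface laser intensity, W/m^2 (assumed)
q = volumetric_heat_source(z, I0, alpha)

# Consistency check: depth-integrated q equals the absorbed fraction of I0,
# analytically 1 - exp(-alpha * z_max)
absorbed = float(np.sum(0.5 * (q[1:] + q[:-1]) * np.diff(z)))
print(f"absorbed fraction = {absorbed / I0:.3f}")
```

In a finite element setting q(z) enters the energy equation as a source term per element, with the exponential evaluated along the local beam direction.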
Yoo, Hyung Chol; Miller, Matthew J; Yip, Pansy
2015-04-01
There is limited research examining psychological correlates of the uniquely racialized experience of the model minority stereotype faced by Asian Americans. The present study examined the factor structure and fit of the only published measure of the internalization of the model minority myth, the Internalization of the Model Minority Myth Measure (IM-4; Yoo et al., 2010), with a sample of 155 Asian American high school adolescents. We also examined the link between internalization of the model minority myth types (i.e., myth associated with achievement and myth associated with unrestricted mobility) and psychological adjustment (i.e., affective distress, somatic distress, performance difficulty, academic expectations stress), and the potential moderating effect of academic performance (cumulative grade point average). Results suggested the 2-factor model of the IM-4 had an acceptable fit to the data and supported the factor structure using confirmatory factor analyses. Internalizing the model minority myth of achievement related positively to academic expectations stress; however, internalizing the model minority myth of unrestricted mobility related negatively to academic expectations stress, both controlling for gender and academic performance. Finally, academic performance moderated both the link between the model minority myth associated with unrestricted mobility and affective distress, and the link between the model minority myth associated with achievement and performance difficulty. These findings highlight the complex ways in which the model minority myth relates to psychological outcomes.
Efectivity of Additive Spline for Partial Least Square Method in Regression Model Estimation
Directory of Open Access Journals (Sweden)
Ahmad Bilfarsah
2005-04-01
Additive Spline Partial Least Squares (ASPLS) is a generalization of the Partial Least Squares (PLS) method. The ASPLS method can accommodate nonlinearity and multicollinearity of the predictor variables. In principle, the ASPLS approach is characterized by two ideas: the first is to use parametric transformations of the predictors by spline functions; the second is to make the ASPLS components mutually uncorrelated, to preserve the properties of the linear PLS components. The performance of ASPLS compared with other PLS methods is illustrated with a fishery economics application, in particular tuna fish production.
Kim, Kyoung Min; Jang, Hak Chul; Lim, Soo
2016-07-01
Aging processes are inevitably accompanied by structural and functional changes in vital organs. Skeletal muscle, which accounts for 40% of total body weight, deteriorates quantitatively and qualitatively with aging. Skeletal muscle is known to play diverse crucial physical and metabolic roles in humans. Sarcopenia is a condition characterized by significant loss of muscle mass and strength. It is related to subsequent frailty and instability in the elderly population. Because muscle tissue is involved in multiple functions, sarcopenia is closely related to various adverse health outcomes. Along with increasing recognition of the clinical importance of sarcopenia, several international study groups have recently released their consensus on the definition and diagnosis of sarcopenia. In practical terms, various skeletal muscle mass indices have been suggested for assessing sarcopenia: appendicular skeletal muscle mass adjusted for height squared, weight, or body mass index. A different prevalence and different clinical implications of sarcopenia are highlighted by each definition. The discordances among these indices have emerged as an issue in defining sarcopenia, and a unifying definition for sarcopenia has not yet been attained. This review aims to compare these three operational definitions and to introduce an optimal skeletal muscle mass index that reflects the clinical implications of sarcopenia from a metabolic perspective.
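The three operational skeletal muscle mass indices compared in this review can be computed directly from appendicular skeletal muscle mass (ASM), height and weight. The subject values below are illustrative, not taken from the review:

```python
def sarcopenia_indices(asm_kg, height_m, weight_kg):
    """Three operational skeletal muscle mass indices used to define sarcopenia:
    ASM adjusted for height squared, for body weight, and for BMI."""
    bmi = weight_kg / height_m ** 2
    return {
        "ASM/height^2 (kg/m^2)": asm_kg / height_m ** 2,
        "ASM/weight (%)": 100.0 * asm_kg / weight_kg,
        "ASM/BMI (m^2)": asm_kg / bmi,
    }

# Illustrative subject (assumed values)
print(sarcopenia_indices(asm_kg=20.0, height_m=1.70, weight_kg=75.0))
```

Because the three denominators scale differently with body size, the same person can fall above one cutoff and below another, which is the discordance the review discusses.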
Stability properties of the Goodwin-Smith oscillator model with additional feedback
Taghvafard, Hadi; Proskurnikov, Anton V.; Cao, Ming
2016-01-01
The Goodwin oscillator is a simple yet instructive mathematical model, describing a wide range of self-controlled biological and biochemical processes, among them are self-inhibitory metabolic pathways and genetic circadian clocks. One of its most important applications is concerned with the hormona
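The classic Goodwin model referred to above is a three-variable negative feedback loop. A minimal forward-Euler sketch follows; the parameter values are assumed for illustration, and the Hill coefficient n > 8 is the classical oscillation condition in the symmetric case:

```python
def goodwin_step(x, y, z, dt, a=3.0, b=0.5, c=1.0, d=0.5, e=1.0, f=0.5, n=9):
    """One forward-Euler step of the classic Goodwin oscillator:
      dx/dt = a/(1 + z^n) - b*x   (mRNA, repressed by the end product z)
      dy/dt = c*x - d*y           (enzyme)
      dz/dt = e*y - f*z           (end product)
    Parameter values are illustrative; n > 8 gives sustained oscillations here."""
    dx = a / (1.0 + z ** n) - b * x
    dy = c * x - d * y
    dz = e * y - f * z
    return x + dt * dx, y + dt * dy, z + dt * dz

x, y, z = 0.1, 0.1, 0.1
traj = []
for _ in range(20000):          # integrate to t = 200 with dt = 0.01
    x, y, z = goodwin_step(x, y, z, dt=0.01)
    traj.append(z)
```

The Goodwin-Smith variant and the additional feedback studied in the paper modify this basic loop; the sketch only shows the unmodified core model.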
Additional disinfection with a modified salt solution in a root canal model
S.V. van der Waal; C.A.M. Oonk; S.H. Nieman; P.R. Wesselink; J.J. de Soet; W. Crielaard
2015-01-01
Objectives The aim of this study is to investigate the disinfecting properties of a modified salt solution (MSS) and calcium hydroxide (Ca(OH)2) in a non-direct-contact ex-vivo model. Methods Seventy-four single-canal roots infected with Enterococcus faecalis were treated with 1% sodium hypochlorite
Addition of a 5/cm Spectral Resolution Band Model Option to LOWTRAN5.
1980-10-01
Additive gamma frailty models with applications to competing risks in related individuals
DEFF Research Database (Denmark)
Eriksson, Frank; Scheike, Thomas
2015-01-01
Epidemiological studies of related individuals are often complicated by the fact that follow-up on the event type of interest is incomplete due to the occurrence of other events. We suggest a class of frailty models with cause-specific hazards for correlated competing events in related individuals...
Modeling the Stereoselectivity of the β-Amino Alcohol Promoted Addition of Dialkylzinc to Aldehydes
DEFF Research Database (Denmark)
Rasmussen, Torben; Norrby, Per-Ola
2003-01-01
The title reaction has been modeled by a Q2MM force field, allowing for rapid evaluation of several thousand TS conformations. For ten experimental systems taken from the literature, the pathway leading to the major enantiomer has been identified. Furthermore, several possible contributions to th...
Can an energy balance model provide additional constraints on how to close the energy imbalance?
Wohlfahrt, Georg; Widmoser, Peter
2013-02-15
Elucidating the causes of the energy imbalance, i.e. the phenomenon that eddy covariance latent and sensible heat fluxes fall short of the available energy, is an outstanding problem in micrometeorology. This paper tests the hypothesis that the full energy balance, through incorporation of additional independent measurements which determine the driving forces of and resistances to energy transfer, provides further insights into the causes of the energy imbalance and additional constraints on energy balance closure options. Eddy covariance and auxiliary data from three different biomes were used to test five contrasting closure scenarios. The main result of our study is that, except for nighttime, when fluxes were low and noisy, the full energy balance generally did not contain enough information to allow further insights into the causes of the imbalance or to constrain energy balance closure options. Up to four of the five tested closure scenarios performed similarly, and in up to 53% of all cases all of the tested closure scenarios resulted in plausible energy balance values. Our approach may, however, provide a sensible consistency check for eddy covariance energy flux measurements.
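Two commonly discussed closure options, Bowen-ratio-preserving scaling and attributing the whole residual to the latent heat flux, can be sketched as follows. The flux values are illustrative, and the five scenarios tested in the paper may differ from these two:

```python
def closure_scenarios(Rn, G, H, LE):
    """Energy balance residual and two common closure corrections.
    Rn: net radiation, G: ground heat flux, H/LE: sensible/latent heat (W/m^2)."""
    avail = Rn - G                    # available energy
    turb = H + LE                     # measured turbulent fluxes
    residual = avail - turb           # the energy imbalance
    # Option 1: Bowen-ratio preserving, scale H and LE by the same factor
    k = avail / turb
    bowen = (H * k, LE * k)
    # Option 2: assign the whole residual to LE (H assumed well measured)
    le_only = (H, LE + residual)
    return residual, bowen, le_only

res, (H_b, LE_b), (H_l, LE_l) = closure_scenarios(Rn=500.0, G=50.0, H=150.0, LE=210.0)
print(f"imbalance = {res:.0f} W/m^2")
```

Both corrections close the budget by construction; the paper's point is that the measured driving forces and resistances usually cannot discriminate between such options.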
In-line monitoring and reverse 3D model reconstruction in additive manufacturing
DEFF Research Database (Denmark)
Pedersen, David Bue; Hansen, Hans Nørgaard; Nielsen, Jakob Skov
2010-01-01
Additive manufacturing allows for close-to unrestrained geometrical freedom in part design. The ability to manufacture geometries of such complexity is however limited by the fact that it proves difficult to verify tolerances of these parts. Tolerances of features that are inaccessible with traditional measuring equipment such as Coordinate Measurement Machines (CMMs) cannot easily be verified. This paradox is addressed by the proposal of an in-line reverse engineering and 3D reconstruction method that allows for a true-to-scale reconstruction of a part that is being additively manufactured on 3D printing (3DP) or Selective Laser Sintering (SLS) equipment. The system will be implemented and tested on a 3DP machine with modifications developed at the author's university.
Directory of Open Access Journals (Sweden)
Li Karen
2008-12-01
Background: Widely used substitution models for proteins, such as the Jones-Taylor-Thornton (JTT) or Whelan and Goldman (WAG) models, are based on empirical amino acid interchange matrices estimated from databases of protein alignments that incorporate the average amino acid frequencies of the data set under examination (e.g. JTT + F). Variation in the evolutionary process between sites is typically modelled by a rates-across-sites distribution such as the gamma (Γ) distribution. However, sites in proteins also vary in the kinds of amino acid interchanges that are favoured, a feature that is ignored by standard empirical substitution matrices. Here we examine the degree to which the pattern of evolution at sites differs from that expected based on empirical amino acid substitution models and evaluate the impact of these deviations on phylogenetic estimation. Results: We analyzed 21 large protein alignments with two statistical tests designed to detect deviation of site-specific amino acid distributions from data simulated under the standard empirical substitution model: JTT + F + Γ. We found that the number of states at a given site is, on average, smaller and the frequencies of these states are less uniform than expected based on a JTT + F + Γ substitution model. With a four-taxon example, we show that phylogenetic estimation under the JTT + F + Γ model is seriously biased by a long-branch attraction artefact if the data are simulated under a model utilizing the observed site-specific amino acid frequencies from an alignment. Principal components analyses indicate the existence of at least four major site-specific frequency classes in these 21 protein alignments. Using a mixture model with these four separate classes of site-specific state frequencies plus a fifth class of global frequencies (the JTT + cF + Γ model), significant improvements in model fit for real data sets can be achieved. This simple mixture model also reduces the long
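The two site-specific quantities examined, the number of observed states and the uniformity of their frequencies, can be computed per alignment column. The effective-number-of-states measure below (exponentiated Shannon entropy) is one common choice, not necessarily the exact statistic used in the paper:

```python
import math
from collections import Counter

def site_state_stats(column):
    """Distinct amino acids and effective number of states (exp of Shannon
    entropy) for one alignment column; gap characters ('-') are ignored."""
    counts = Counter(aa for aa in column if aa != '-')
    n = sum(counts.values())
    freqs = [c / n for c in counts.values()]
    entropy = -sum(f * math.log(f) for f in freqs)
    return len(counts), math.exp(entropy)

# Toy alignment columns: one conserved, one maximally variable
conserved = "AAAAAAAAAG"
variable  = "ACDEFGHIKL"
print(site_state_stats(conserved))   # few states, low effective number
print(site_state_stats(variable))    # 10 states, effective number 10
```

Averaging such statistics over columns and comparing against the same statistics on data simulated under JTT + F + Γ is the kind of deviation test the Results section describes.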
Maloney, Kelly O.; Schmid, Matthias; Weller, Donald E.
2012-01-01
Issues with ecological data (e.g. non-normality of errors, nonlinear relationships and autocorrelation of variables) and modelling (e.g. overfitting, variable selection and prediction) complicate regression analyses in ecology. Flexible models, such as generalized additive models (GAMs), can address data issues, and machine learning techniques (e.g. gradient boosting) can help resolve modelling issues. Gradient boosted GAMs do both. Here, we illustrate the advantages of this technique using data on benthic macroinvertebrates and fish from 1573 small streams in Maryland, USA.
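Component-wise gradient boosting of an additive model can be sketched with univariate polynomial base learners standing in for the spline smoothers typically used (cf. the mboost framework in R). The data and settings below are illustrative, not the Maryland stream data:

```python
import numpy as np

def boosted_gam(X, y, n_iter=200, nu=0.1, degree=3):
    """Component-wise L2 gradient boosting: at each step, fit one univariate
    polynomial base learner to the residuals, pick the best-fitting predictor,
    and add a shrunken (nu) copy of it to the ensemble fit."""
    n, p = X.shape
    fit = np.full(n, y.mean())
    for _ in range(n_iter):
        r = y - fit                                   # residuals = negative gradient
        best = None
        for j in range(p):
            c = np.polyfit(X[:, j], r, degree)
            pred = np.polyval(c, X[:, j])
            sse = np.sum((r - pred) ** 2)
            if best is None or sse < best[0]:
                best = (sse, pred)
        fit = fit + nu * best[1]                      # accumulate the additive component
    return fit

rng = np.random.default_rng(2)
X = rng.uniform(-2, 2, size=(300, 3))
y = np.sin(X[:, 0]) + 0.5 * X[:, 1] ** 2 + rng.normal(0, 0.1, 300)  # X[:,2] is pure noise
fit = boosted_gam(X, y)
print(f"R^2 = {1 - np.var(y - fit) / np.var(y):.2f}")
```

Because only one predictor is updated per iteration, the procedure performs implicit variable selection: the noise predictor is rarely chosen, which is one of the modelling advantages the authors highlight.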
McKinney, Cliff; Renk, Kimberly
2008-01-01
Although parent-adolescent interactions have been examined, relevant variables have not been integrated into a multivariate model. As a result, this study examined a multivariate model of parent-late adolescent gender dyads in an attempt to capture important predictors in late adolescents' important and unique transition to adulthood. The sample…
Energy Technology Data Exchange (ETDEWEB)
Alves Junior, Iremar; Santos, Lucas dos; Potiens, Maria da Penha A.; Vivolo, Vitor, E-mail: iremarjr@usp.b, E-mail: lucas.se@usp.b, E-mail: mppalbu@ipen.b, E-mail: vivolo@ipen.b [Instituto de Pesquisas Energeticas e Nucleares (IPEN/CNEN-SP), Sao Paulo, SP (Brazil)
2011-10-26
This paper describes the dimensioning of the filter wheel components and the adequacy of additional filtration for the implementation of the OTW automated system, with complete replacement of the previously used filtration by a new set of machine-made filters, to be used for the qualities already implemented at the Instrument Calibration Laboratory of IPEN, Sao Paulo, Brazil. Subsequently, air kerma measurements were performed in each quality to be used as reference values.
A Model for Evaluation of Grain Sizes of Aluminum Alloys with Grain Refinement Additions
Institute of Scientific and Technical Information of China (English)
[No author listed]
2007-01-01
Based on the assumption that the nucleation substrates are activated by the constitutional undercooling generated by adjacent grain growth and solute distribution during the initial solidification, a model for calculating the grain size of aluminum alloys with grain refinement additions is developed, in which nucleation is dominated by two parameters, i.e. the growth restriction factor Q and the undercooling parameter P. The growth restriction factor Q is proportional to the initial rate of constitutional undercooling development and can be used directly as a criterion of grain refinement in alloys with strongly potent nucleation particles. The undercooling parameter P can be regarded as the maximum of the constitutional undercooling ΔTC. For weakly potent nucleation particles, the use of RGS would be more accurate. The experimental data on the grain refinement of pure aluminum and AlSi7 alloys coincide with the results predicted by the model.
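The growth restriction factor is conventionally computed as Q = Σ m_i·c0_i·(k_i − 1) over the alloying elements, where m is the liquidus slope, c0 the solute concentration and k the partition coefficient. A minimal sketch follows; the m, c0 and k values are handbook-style assumptions and should be checked against primary sources:

```python
def growth_restriction_factor(solutes):
    """Q = sum_i m_i * c0_i * (k_i - 1), with m the liquidus slope (K/wt%),
    c0 the solute concentration (wt%), and k the partition coefficient."""
    return sum(m * c0 * (k - 1.0) for m, c0, k in solutes)

# Illustrative handbook-style values for solutes in aluminum (assumed, verify)
Q = growth_restriction_factor([
    (33.3, 0.01, 7.8),   # Ti: strong growth restriction even at 0.01 wt%
    (-6.6, 7.0, 0.11),   # Si at 7 wt%, as in an AlSi7 alloy
])
print(f"Q = {Q:.1f} K")
```

Note that Q is positive for both solutes because m and (k − 1) always share the same sign, which is why Q can serve as a single additive grain refinement criterion.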