WorldWideScience

Sample records for adjustment model based

  1. Capital adjustment cost and bias in income based dynamic panel models with fixed effects

    OpenAIRE

    Yoseph Yilma Getachew; Keshab Bhattarai; Parantap Basu

    2012-01-01

    The fixed effects (FE) estimator of "conditional convergence" in income-based dynamic panel models can be biased downward when capital adjustment cost is present. Such a capital adjustment cost implies a rising marginal cost of investment, which can slow down convergence. The standard FE regression fails to take this capital adjustment cost into account and thus can overestimate the rate of convergence. Using a Ramsey model with long-run adjustment cost of capital, we characteriz...
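
The downward bias discussed here is the classic Nickell bias of the within (fixed effects) estimator in short dynamic panels. The sketch below is not from the paper; all parameters are illustrative. It simulates an AR(1) panel with unit fixed effects and shows the within estimate of a true persistence of 0.9 coming out well below 0.9, i.e. convergence looks faster than it is:

```python
import random

random.seed(0)

def simulate_fe_bias(n_units=200, t_periods=5, rho=0.9):
    """Simulate y_it = a_i + rho * y_i,t-1 + e_it and estimate rho
    with the within (fixed effects) estimator."""
    num = den = 0.0
    for _ in range(n_units):
        a = random.gauss(0, 1)
        y = [a / (1 - rho)]          # start near the unit's steady state
        for _ in range(t_periods):
            y.append(a + rho * y[-1] + random.gauss(0, 1))
        lag, cur = y[:-1], y[1:]
        lag_bar = sum(lag) / len(lag)
        cur_bar = sum(cur) / len(cur)
        # Demean within each unit, then pool the cross-products.
        num += sum((l - lag_bar) * (c - cur_bar) for l, c in zip(lag, cur))
        den += sum((l - lag_bar) ** 2 for l in lag)
    return num / den

rho_hat = simulate_fe_bias()
print(rho_hat)  # noticeably below the true value 0.9 (Nickell bias)
```

With only five time periods per unit, the demeaning mechanically correlates the lagged regressor with the error, producing the downward bias the abstract describes.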

  2. Droop Control with an Adjustable Complex Virtual Impedance Loop based on Cloud Model Theory

    DEFF Research Database (Denmark)

    Li, Yan; Shuai, Zhikang; Xu, Qinming

    2016-01-01

    A droop control framework with an adjustable virtual impedance loop, based on cloud model theory, is proposed in this paper. The proposed virtual impedance loop includes two terms: a negative virtual resistor and an adjustable virtual inductance. The negative virtual resistor term...... sometimes. Cloud model theory is applied to estimate the changing line impedance value online, relying on the relationship between the reactive power response and the changing line impedance. The proposed control strategy is verified by simulation of a low-voltage microgrid in Matlab....

  3. Contact angle adjustment in equation-of-state-based pseudopotential model.

    Science.gov (United States)

    Hu, Anjie; Li, Longjian; Uddin, Rizwan; Liu, Dong

    2016-05-01

    The single component pseudopotential lattice Boltzmann model has been widely applied in multiphase simulation due to its simplicity and stability. In many studies, it has been claimed that this model can be stable for density ratios larger than 1000. However, the application of the model is still limited to small density ratios when the contact angle is considered. The reason is that the original contact angle adjustment method influences the stability of the model. Moreover, simulation results in the present work show that, by applying the original contact angle adjustment method, the density distribution near the wall is artificially changed, and the contact angle is dependent on the surface tension. Hence, it is very inconvenient to apply this method with a fixed contact angle, and the accuracy of the model cannot be guaranteed. To solve these problems, a contact angle adjustment method based on geometric analysis is proposed and numerically compared with the original method. Simulation results show that, with our contact angle adjustment method, the stability of the model is highly improved when the density ratio is relatively large, and it is independent of the surface tension.
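
The geometric analysis is not spelled out in the abstract; a common, simple geometric check (an assumption on our part, not necessarily the authors' exact scheme) infers the contact angle of a simulated sessile droplet from its base radius and apex height, treating it as a spherical cap:

```python
import math

def contact_angle_from_cap(base_radius, height):
    """Contact angle (degrees) of a sessile droplet treated as a
    spherical cap with contact-line radius `base_radius` and apex
    height `height` above the wall."""
    # Radius of the sphere the cap belongs to.
    R = (base_radius ** 2 + height ** 2) / (2.0 * height)
    # cos(theta) = (R - h) / R covers both wetting (theta < 90 deg)
    # and non-wetting (theta > 90 deg) cases.
    return math.degrees(math.acos((R - height) / R))

print(contact_angle_from_cap(10.0, 10.0))  # hemispherical cap -> 90.0
```

Measuring the angle this way, from droplet geometry rather than from near-wall densities, avoids the density-distortion problem the abstract describes.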

  4. Adjustment Criterion and Algorithm in Adjustment Model with Uncertainty

    Directory of Open Access Journals (Sweden)

    SONG Yingchun

    2015-02-01

    Full Text Available Uncertainty often exists in the process of obtaining measurement data, which affects the reliability of parameter estimation. This paper establishes a new adjustment model in which uncertainty is incorporated into the function model as a parameter. A new adjustment criterion and its iterative algorithm are given based on the law of uncertainty propagation in the residual error, in which the maximum possible uncertainty is minimized. This paper also analyzes, with examples, the different adjustment criteria and the features of the optimal solutions of least-squares adjustment, uncertainty adjustment and total least-squares adjustment. Existing error theory is extended with a new observational data processing method that accounts for uncertainty.

  5. Price adjustment for traditional Chinese medicine procedures: Based on a standardized value parity model.

    Science.gov (United States)

    Wang, Haiyin; Jin, Chunlin; Jiang, Qingwu

    2017-11-20

    Traditional Chinese medicine (TCM) is an important part of China's medical system. Due to the prolonged low price of TCM procedures and the lack of an effective mechanism for dynamic price adjustment, the development of TCM has markedly lagged behind Western medicine. The World Health Organization (WHO) has emphasized the need to enhance the development of alternative and traditional medicine when creating national health care systems. The establishment of scientific and appropriate mechanisms to adjust the price of medical procedures in TCM is crucial to promoting the development of TCM. This study examined incorporating value indicators and data on basic manpower expended, time spent, technical difficulty, and degree of risk into the latest standards for the price of medical procedures in China, and it also offers a price adjustment model with the relative price ratio as a key index. This study examined 144 TCM procedures and found that prices of TCM procedures were mainly based on the value of medical care provided; on average, medical care provided accounted for 89% of the price. Current price levels were generally low: the current price accounted for 56% of the standardized value of a procedure, on average. Current prices covered a markedly lower share of the standardized value for acupuncture, moxibustion, special treatment with TCM, and comprehensive TCM procedures. This study selected a total of 79 procedures and adjusted them by priority. The relationship between the price of TCM procedures and the suggested price was significantly optimized. Price adjustment based on a standardized value parity model is a scientific and suitable method of price adjustment that can serve as a reference for other provinces and municipalities in China and for other countries and regions that mainly have fee-for-service (FFS) medical care.
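
As a rough arithmetic sketch of a parity-style adjustment (the procedure names, prices, and standardized values below are invented, and the actual model in the study is richer), each procedure's standardized value can be scaled into a suggested price using the price-to-value ratio of a benchmark procedure as the parity anchor:

```python
def suggest_prices(procedures, benchmark_code):
    """Scale each procedure's standardized value into a suggested price
    using the benchmark's price-to-value ratio as the parity anchor."""
    bench = procedures[benchmark_code]
    parity = bench["current_price"] / bench["standardized_value"]
    return {code: round(p["standardized_value"] * parity, 2)
            for code, p in procedures.items()}

# Hypothetical data: prices and standardized values per procedure.
procedures = {
    "acupuncture":  {"current_price": 20.0, "standardized_value": 50.0},
    "moxibustion":  {"current_price": 15.0, "standardized_value": 30.0},
    "benchmark_tx": {"current_price": 40.0, "standardized_value": 40.0},
}
suggested = suggest_prices(procedures, "benchmark_tx")
print(suggested)  # acupuncture rises to 50.0, recovering its full value
```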

  6. Economic analysis of coal price-electricity price adjustment in China based on the CGE model

    International Nuclear Information System (INIS)

    He, Y.X.; Zhang, S.L.; Yang, L.Y.; Wang, Y.J.; Wang, J.

    2010-01-01

    In recent years, the coal price has risen rapidly, which has also brought a sharp increase in the expenditures of thermal power plants in China. Meanwhile, the power production price and power retail price have not been adjusted accordingly, and a large number of thermal power plants have incurred losses. The power industry is a key industry in the national economy. As such, a thorough analysis and evaluation of the economic influence of the electricity price should be conducted before electricity price adjustment is carried out. This paper analyses the influence of coal price adjustment on the electric power industry, and the influence of electricity price adjustment on the macroeconomy in China, based on computable general equilibrium models. The conclusions are as follows: (1) a coal price increase raises the costs of the electric power industry, but the influence gradually diminishes as the coal price increases; and (2) an electricity price increase has an adverse influence on total output, Gross Domestic Product (GDP), and the Consumer Price Index (CPI). Electricity price increases have a contractionary effect on economic development; consequently, electricity price policy making must consider all factors to minimize their adverse influence.

  7. Remote Sensing-based Methodologies for Snow Model Adjustments in Operational Streamflow Prediction

    Science.gov (United States)

    Bender, S.; Miller, W. P.; Bernard, B.; Stokes, M.; Oaida, C. M.; Painter, T. H.

    2015-12-01

    Water management agencies rely on hydrologic forecasts issued by operational agencies such as NOAA's Colorado Basin River Forecast Center (CBRFC). The CBRFC has partnered with the Jet Propulsion Laboratory (JPL) under funding from NASA to incorporate research-oriented, remotely-sensed snow data into CBRFC operations and to improve the accuracy of CBRFC forecasts. The partnership has yielded valuable analysis of snow surface albedo as represented in JPL's MODIS Dust Radiative Forcing in Snow (MODDRFS) data, across the CBRFC's area of responsibility. When dust layers within a snowpack emerge, reducing the snow surface albedo, the snowmelt rate may accelerate. The CBRFC operational snow model (SNOW17) is a temperature-index model that lacks explicit representation of snowpack surface albedo. CBRFC forecasters monitor MODDRFS data for emerging dust layers and may manually adjust SNOW17 melt rates. A technique was needed for efficient and objective incorporation of the MODDRFS data into SNOW17. Initial development focused in Colorado, where dust-on-snow events frequently occur. CBRFC forecasters used retrospective JPL-CBRFC analysis and developed a quantitative relationship between MODDRFS data and mean areal temperature (MAT) data. The relationship was used to generate adjusted, MODDRFS-informed input for SNOW17. Impacts of the MODDRFS-SNOW17 MAT adjustment method on snowmelt-driven streamflow prediction varied spatially and with characteristics of the dust deposition events. The largest improvements occurred in southwestern Colorado, in years with intense dust deposition events. Application of the method in other regions of Colorado and in "low dust" years resulted in minimal impact. The MODDRFS-SNOW17 MAT technique will be implemented in CBRFC operations in late 2015, prior to spring 2016 runoff. 
Collaborative investigation of remote sensing-based adjustment methods for the CBRFC operational hydrologic forecasting environment will continue over the next several years.

  8. An Efficient Bundle Adjustment Model Based on Parallax Parametrization for Environmental Monitoring

    Science.gov (United States)

    Chen, R.; Sun, Y. Y.; Lei, Y.

    2017-12-01

    With the rapid development of Unmanned Aircraft Systems (UAS), more and more research fields have been successfully equipped with this mature technology, among them environmental monitoring. One difficult task is acquiring accurate positions of ground objects in order to reconstruct the scene more accurately. To handle this problem, we combine the bundle adjustment method from photogrammetry with parallax parametrization from computer vision to create a new method called APCP (aerial polar-coordinate photogrammetry). One notable advantage of this method over traditional ones is that a 3-dimensional point in space is represented by three angles (elevation angle, azimuth angle and parallax angle) rather than XYZ values. As the basis for APCP, bundle adjustment can be used to optimize the poses of the UAS sensors accurately and to reconstruct 3D models of the environment, thus serving as the criterion of accurate positioning for monitoring. To verify the effectiveness of the proposed method, we test on several UAV datasets obtained by non-metric digital cameras with large attitude angles, and we find that our method achieves roughly one to two times better efficiency with no loss of accuracy compared with traditional ones. The classical nonlinear optimization of the bundle adjustment model based on rectangular coordinates suffers from serious dependence on the initial values, which can prevent it from converging quickly or converging to a stable state. In contrast, the APCP method can deal with quite complex UAS monitoring conditions because it represents points in space with angles, including the condition that sequential images focusing on one object have a zero parallax angle. In brief, this paper presents the parametrization of 3D feature points based on APCP and derives a full bundle adjustment model and the corresponding nonlinear optimization problems based on this method. In addition, we analyze the influence of convergence and
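
The angular parametrization itself is straightforward to sketch. Assuming one main anchor camera and one associate anchor camera (a simplification of the APCP idea, not the authors' exact formulation), a 3D point maps to elevation and azimuth angles relative to the main anchor, plus the parallax angle subtended at the point by the two camera centers:

```python
import math

def to_parallax_params(point, main_cam, assoc_cam):
    """Encode a 3D point as (elevation, azimuth, parallax) angles
    relative to a main anchor camera and an associate anchor camera."""
    dx, dy, dz = (p - c for p, c in zip(point, main_cam))
    azimuth = math.atan2(dy, dx)
    elevation = math.atan2(dz, math.hypot(dx, dy))
    # Parallax angle: angle at the point subtended by the two camera centers.
    v1 = [c - p for c, p in zip(main_cam, point)]
    v2 = [c - p for c, p in zip(assoc_cam, point)]
    dot = sum(a * b for a, b in zip(v1, v2))
    n1 = math.sqrt(sum(a * a for a in v1))
    n2 = math.sqrt(sum(a * a for a in v2))
    parallax = math.acos(max(-1.0, min(1.0, dot / (n1 * n2))))
    return elevation, azimuth, parallax

# A point 100 units above the midpoint of a 10-unit camera baseline.
elev, azim, par = to_parallax_params((0.0, 0.0, 100.0),
                                     (-5.0, 0.0, 0.0),
                                     (5.0, 0.0, 0.0))
print(elev, azim, par)
```

Because a distant point simply gets a parallax angle near zero, this representation stays well conditioned where an XYZ parametrization becomes nearly unobservable.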

  9. Convexity Adjustments for ATS Models

    DEFF Research Database (Denmark)

    Murgoci, Agatha; Gaspar, Raquel M.

    . As a result we classify convexity adjustments into forward adjustments and swap adjustments. We then focus on affine term structure (ATS) models and, in this context, conjecture that convexity adjustments should be related to affine functionals. In the case of forward adjustments, we show how to obtain exact...

  10. ADJUSTMENT OF MORPHOMETRIC PARAMETERS OF WATER BASINS BASED ON DIGITAL TERRAIN MODELS

    Directory of Open Access Journals (Sweden)

    Krasil'nikov Vitaliy Mikhaylovich

    2012-10-01

    Full Text Available The authors argue that effective use of water resources requires accurate morphometric characteristics of water basins. Accurate parameters are needed to analyze their condition and to assure their appropriate control and operation. Today many water basins need their morphometric characteristics to be adjusted and properly stored. The procedure employed so far is based on contour lines depicted on topographic maps, as described in the procedural guidelines issued in respect of the «Application of water resource regulations governing the operation of waterworks facilities of power plants». The technology described there is obsolete given the availability of specialized software; the computer technique is based on a digital terrain model. The authors provide an overview of the technique as implemented at the Rybinsk and Gorkiy water basins. The digital terrain model generated from field data is used at the Gorkiy water basin, while the model based on maps and charts is applied at the Rybinsk water basin. The authors believe that the software technique can be applied to any other water basin, on the basis of the analysis and comparison of the morphometric characteristics of the two water basins.

  11. Optimal Scheme Selection of Agricultural Production Structure Adjustment - Based on DEA Model; Punjab (Pakistan)

    Institute of Scientific and Technical Information of China (English)

    Zeeshan Ahmad; Meng Jun; Muhammad Abdullah; Mazhar Nadeem Ishaq; Majid Lateef; Imran Khan

    2015-01-01

    This paper used the modern evaluation method of DEA (Data Envelopment Analysis) to assess the comparative efficiency of multiple schemes and, on that basis, to choose the optimal scheme of agricultural production structure adjustment. Based on the results of the DEA model, we analyzed the scale advantages of each candidate scheme and examined in depth the underlying reasons why some schemes were not DEA-efficient, which clarified the approach and methodology for improving those schemes. Finally, another method was proposed to rank the schemes and select the optimal one. The research is useful for guiding practice when adjustment of the agricultural production industrial structure is carried out.
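
Full DEA solves a linear program per decision-making unit; in the special case of a single input and a single output, CCR efficiency reduces to comparing output/input ratios against the best-practice frontier. The sketch below uses that special case with invented data, so it is only a toy illustration of the scheme-ranking idea:

```python
def ccr_efficiency(inputs, outputs):
    """Single-input, single-output CCR efficiency: each scheme's
    output/input ratio relative to the best-practice frontier."""
    ratios = [o / i for i, o in zip(inputs, outputs)]
    best = max(ratios)
    return [r / best for r in ratios]

# Hypothetical land input (hectares) and revenue output per scheme.
inputs  = [100.0, 80.0, 120.0, 90.0]
outputs = [50.0, 48.0, 54.0, 36.0]
eff = ccr_efficiency(inputs, outputs)
print(eff)  # the second scheme (ratio 0.6) is the DEA-efficient one
```

Schemes with efficiency below 1 are not DEA-efficient; the gap to 1 indicates how much their input use or output mix would need to improve.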

  12. Short term load forecasting technique based on the seasonal exponential adjustment method and the regression model

    International Nuclear Information System (INIS)

    Wu, Jie; Wang, Jianzhou; Lu, Haiyan; Dong, Yao; Lu, Xiaoxiao

    2013-01-01

    Highlights: ► The seasonal and trend items of the data series are forecasted separately. ► The seasonal item in the data series is verified by Kendall τ correlation testing. ► Different regression models are applied to trend item forecasting. ► We examine the superiority of the combined models by quartile value comparison. ► Paired-sample T tests are utilized to confirm the superiority of the combined models. - Abstract: For an energy-limited economic system, it is crucial to forecast load demand accurately. This paper is devoted to a 1-week-ahead daily load forecasting approach in which the load demand series is predicted using information from preceding days similar to the forecast day. As in many nonlinear systems, a seasonal item and a trend item coexist in load demand datasets. In this paper, the existence of the seasonal item in the load demand data series is first verified using the Kendall τ correlation test. Then, in the belief that forecasting the seasonal item and the trend item separately would improve accuracy, hybrid models combining the seasonal exponential adjustment method (SEAM) with regression methods are proposed, where SEAM and the regression models are employed for seasonal and trend item forecasting, respectively. Comparisons of the quartile values as well as the mean absolute percentage error values demonstrate that this forecasting technique can significantly improve accuracy, even though eleven different models are applied to trend item forecasting. The superior performance of this separate forecasting technique is further confirmed by paired-sample T tests
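
A minimal sketch of the seasonal-plus-trend decomposition idea (our own simplification: ratio-to-mean weekly indices and an ordinary least-squares trend, applied to synthetic data, not the paper's SEAM formulation) looks like this:

```python
def seasonal_indices(series, period=7):
    """Average ratio of each weekday slot to the overall mean."""
    mean = sum(series) / len(series)
    idx = []
    for s in range(period):
        slot = series[s::period]
        idx.append((sum(slot) / len(slot)) / mean)
    return idx

def linear_trend(series):
    """Ordinary least-squares line fit; returns (intercept, slope)."""
    n = len(series)
    xs = range(n)
    xbar = (n - 1) / 2.0
    ybar = sum(series) / n
    sxy = sum((x - xbar) * (y - ybar) for x, y in zip(xs, series))
    sxx = sum((x - xbar) ** 2 for x in xs)
    slope = sxy / sxx
    return ybar - slope * xbar, slope

def forecast_week(series, period=7):
    """Deseasonalize, fit the trend, then reseasonalize the forecast."""
    idx = seasonal_indices(series, period)
    deseason = [y / idx[t % period] for t, y in enumerate(series)]
    a, b = linear_trend(deseason)
    n = len(series)
    return [(a + b * (n + h)) * idx[(n + h) % period] for h in range(period)]

# Synthetic daily load: linear growth modulated by a weekly pattern.
pattern = [1.1, 1.0, 0.95, 0.9, 1.0, 1.05, 1.0]
history = [(100 + 0.5 * t) * pattern[t % 7] for t in range(28)]
print(forecast_week(history))
```

Any of the regression models the paper compares could replace `linear_trend` here; the seasonal indices stay fixed while the trend model varies.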

  13. An in-depth assessment of a diagnosis-based risk adjustment model based on national health insurance claims: the application of the Johns Hopkins Adjusted Clinical Group case-mix system in Taiwan

    Directory of Open Access Journals (Sweden)

    Weiner Jonathan P

    2010-01-01

    Full Text Available Abstract Background Diagnosis-based risk adjustment is becoming an important issue globally as a result of its implications for payment, high-risk predictive modelling and provider performance assessment. The Taiwanese National Health Insurance (NHI) programme provides universal coverage and maintains a single national computerized claims database, which enables the application of diagnosis-based risk adjustment. However, research regarding risk adjustment is limited. This study aims to examine the performance of the Adjusted Clinical Group (ACG) case-mix system using claims-based diagnosis information from the Taiwanese NHI programme. Methods A random sample of NHI enrollees was selected. Those continuously enrolled in 2002 were included for concurrent analyses (n = 173,234), while those in both 2002 and 2003 were included for prospective analyses (n = 164,562). Health status measures derived from 2002 diagnoses were used to explain the 2002 and 2003 health expenditure. A multivariate linear regression model was adopted after comparing the performance of seven different statistical models. Split-validation was performed in order to avoid overfitting. The performance measures were adjusted R2 and mean absolute prediction error of five types of expenditure at individual level, and predictive ratio of total expenditure at group level. Results The more comprehensive models performed better when used for explaining resource utilization. Adjusted R2 of total expenditure in concurrent/prospective analyses were 4.2%/4.4% in the demographic model, 15%/10% in the ACGs or ADGs (Aggregated Diagnosis Group) model, and 40%/22% in the models containing EDCs (Expanded Diagnosis Cluster). When predicting expenditure for groups based on expenditure quintiles, all models underpredicted the highest expenditure group and overpredicted the four other groups. For groups based on morbidity burden, the ACGs model had the best performance overall. Conclusions Given the

  14. An in-depth assessment of a diagnosis-based risk adjustment model based on national health insurance claims: the application of the Johns Hopkins Adjusted Clinical Group case-mix system in Taiwan.

    Science.gov (United States)

    Chang, Hsien-Yen; Weiner, Jonathan P

    2010-01-18

    Diagnosis-based risk adjustment is becoming an important issue globally as a result of its implications for payment, high-risk predictive modelling and provider performance assessment. The Taiwanese National Health Insurance (NHI) programme provides universal coverage and maintains a single national computerized claims database, which enables the application of diagnosis-based risk adjustment. However, research regarding risk adjustment is limited. This study aims to examine the performance of the Adjusted Clinical Group (ACG) case-mix system using claims-based diagnosis information from the Taiwanese NHI programme. A random sample of NHI enrollees was selected. Those continuously enrolled in 2002 were included for concurrent analyses (n = 173,234), while those in both 2002 and 2003 were included for prospective analyses (n = 164,562). Health status measures derived from 2002 diagnoses were used to explain the 2002 and 2003 health expenditure. A multivariate linear regression model was adopted after comparing the performance of seven different statistical models. Split-validation was performed in order to avoid overfitting. The performance measures were adjusted R2 and mean absolute prediction error of five types of expenditure at individual level, and predictive ratio of total expenditure at group level. The more comprehensive models performed better when used for explaining resource utilization. Adjusted R2 of total expenditure in concurrent/prospective analyses were 4.2%/4.4% in the demographic model, 15%/10% in the ACGs or ADGs (Aggregated Diagnosis Group) model, and 40%/22% in the models containing EDCs (Expanded Diagnosis Cluster). When predicting expenditure for groups based on expenditure quintiles, all models underpredicted the highest expenditure group and overpredicted the four other groups. For groups based on morbidity burden, the ACGs model had the best performance overall. 
Given the widespread availability of claims data and the superior explanatory
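
The group-level predictive ratio used as a performance measure above is simple to compute; the sketch below uses invented expenditure figures for a hypothetical top-quintile group:

```python
def predictive_ratio(predicted, actual):
    """Group-level predictive ratio: mean predicted over mean actual
    expenditure; a value below 1 means the group is underpredicted."""
    return (sum(predicted) / len(predicted)) / (sum(actual) / len(actual))

# Hypothetical highest-expenditure group: models typically underpredict it.
predicted = [900.0, 1100.0, 1000.0]
actual    = [1200.0, 1400.0, 1300.0]
print(predictive_ratio(predicted, actual))  # about 0.77, i.e. underprediction
```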

  15. Experimental Robot Model Adjustments Based on Force–Torque Sensor Information

    Directory of Open Access Journals (Sweden)

    Santiago Martinez

    2018-03-01

    Full Text Available The computational complexity of humanoid robot balance control is reduced through the application of simplified kinematics and dynamics models. However, these simplifications introduce errors that add to other inherent electro-mechanical inaccuracies and affect the robotic system. Linear control systems deal with these inaccuracies if they operate around a specific working point, but are less precise if they do not. This work presents a model improvement based on the Linear Inverted Pendulum Model (LIPM) to be applied in a non-linear control system. The aim is to minimize the control error and reduce robot oscillations for multiple working points. The new model, named the Dynamic LIPM (DLIPM), is used to plan the robot behavior with respect to changes in the balance status denoted by the zero moment point (ZMP). Thanks to the use of information from force–torque sensors, an experimental procedure has been applied to characterize the inaccuracies and introduce them into the new model. The experiments consist of balance perturbations similar to those of push-recovery trials, in which step-shaped ZMP variations are produced. The results show that the responses of the robot to balance perturbations are more precise and the mechanical oscillations are reduced without compromising robot dynamics.

  16. Model-based Adjustment of Droplet Characteristic for 3D Electronic Printing

    Directory of Open Access Journals (Sweden)

    Lin Na

    2017-01-01

    Full Text Available The major challenge in 3D electronic printing is print resolution and accuracy. In this paper, a typical model, the lumped element modeling (LEM) method, is adopted to simulate the droplet jetting characteristics. This modeling method can quickly obtain the droplet velocity and volume with high accuracy. Experimental results show that LEM has a simple structure with sufficient simulation and prediction accuracy.

  17. Adjusting for overdispersion in piecewise exponential regression models to estimate excess mortality rate in population-based research.

    Science.gov (United States)

    Luque-Fernandez, Miguel Angel; Belot, Aurélien; Quaresma, Manuela; Maringe, Camille; Coleman, Michel P; Rachet, Bernard

    2016-10-01

    In population-based cancer research, piecewise exponential regression models are used to derive adjusted estimates of excess mortality due to cancer using the Poisson generalized linear modelling framework. However, the assumption that the conditional mean and variance of the rate parameter given the set of covariates x_i are equal is strong and may fail to account for overdispersion given the variability of the rate parameter (the variance exceeds the mean). Using an empirical example, we aimed to describe simple methods to test and correct for overdispersion. We used a regression-based score test for overdispersion under the relative survival framework and proposed different approaches to correct for overdispersion, including a quasi-likelihood, robust standard errors estimation, negative binomial regression and flexible piecewise modelling. All piecewise exponential regression models showed the presence of significant inherent overdispersion. Flexible piecewise regression modelling, with either a quasi-likelihood or robust standard errors, was the best approach, as it deals with both overdispersion due to model misspecification and true or inherent overdispersion.
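
A quick diagnostic for overdispersion of the kind discussed here is the Pearson dispersion statistic. This sketch assumes the fitted means come from some Poisson-type model and uses invented counts; in a quasi-likelihood correction, standard errors would be inflated by the square root of this statistic:

```python
def pearson_dispersion(observed, fitted):
    """Pearson dispersion for a Poisson-type model:
    sum((O - E)^2 / E) divided by the residual degrees of freedom.
    Values well above 1 indicate overdispersion."""
    chi2 = sum((o - e) ** 2 / e for o, e in zip(observed, fitted))
    n_params = 2  # e.g. intercept + one covariate (illustrative)
    return chi2 / (len(observed) - n_params)

# Invented event counts and fitted Poisson means.
observed = [4, 0, 9, 1, 12, 2, 15, 0, 20, 3]
fitted   = [5.0, 4.0, 6.0, 5.0, 7.0, 6.0, 8.0, 7.0, 9.0, 8.0]
phi = pearson_dispersion(observed, fitted)
print(phi)  # well above 1: scale standard errors by sqrt(phi)
```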

  18. Effect of Treatment Education Based on the Roy Adaptation Model on Adjustment of Hemodialysis Patients.

    Science.gov (United States)

    Kacaroglu Vicdan, Ayse; Gulseven Karabacak, Bilgi

    2016-01-01

    The Roy Adaptation Model examines the individual in 4 modes: physiological mode, self-concept mode, role function mode, and interdependence mode. Hemodialysis treatment is associated with the Roy Adaptation Model as it involves areas of need for the individual with chronic renal disease. This research was conducted as a randomized controlled experiment with the aim of determining the effect of education given in accordance with the Roy Adaptation Model on the physiological, psychological, and social adaptation of individuals undergoing hemodialysis treatment. The study was conducted at a dialysis center in Konya-Aksehir in Turkey between July 1 and December 31, 2012. The sample was composed of 82 individuals: 41 experimental and 41 control. In the second interview, there was a decrease in the systolic blood pressures and body weights of the experimental group, an increase in their scores for functional performance and self-respect, and a decrease in their scores for psychosocial adaptation. In the control group, on the other hand, there was a decrease in the scores for self-respect and an increase in the scores for psychosocial adaptation. The 2 groups were compared in terms of adaptation variables and a difference was found in favor of the experimental group. The training that was provided and evaluated for individuals receiving hemodialysis according to the 4 modes of the Roy Adaptation Model increased physical, psychological, and social adaptation.

  19. On-board adaptive model for state of charge estimation of lithium-ion batteries based on Kalman filter with proportional integral-based error adjustment

    Science.gov (United States)

    Wei, Jingwen; Dong, Guangzhong; Chen, Zonghai

    2017-10-01

    With the rapid development of battery-powered electric vehicles, the lithium-ion battery plays a critical role in the reliability of the vehicle system. In order to provide timely management and protection for battery systems, it is necessary to develop a reliable battery model and accurate battery parameter estimation to describe battery dynamic behaviors. Therefore, this paper focuses on an on-board adaptive model for state-of-charge (SOC) estimation of lithium-ion batteries. Firstly, a first-order equivalent circuit battery model is employed to describe battery dynamic characteristics. Then, the recursive least squares algorithm and an off-line identification method are used to provide good initial values of the model parameters, to ensure filter stability and reduce the convergence time. Thirdly, an extended Kalman filter (EKF) is applied to estimate the battery SOC and model parameters on-line. Considering that the EKF is essentially a first-order Taylor approximation of the battery model and thus contains inevitable model errors, a proportional integral-based error adjustment technique is employed to improve the performance of the EKF method and correct the model parameters. Finally, experimental results on lithium-ion batteries indicate that the proposed EKF with proportional integral-based error adjustment provides a robust and accurate battery model and on-line parameter estimation.
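
A minimal one-state sketch of this approach can be written as follows. All parameters and the battery model here are illustrative simplifications, not the paper's: a linear open-circuit-voltage curve with a series resistance stands in for the first-order equivalent circuit, coulomb counting gives the prediction step, the EKF update corrects from the voltage residual, and a PI-style integral term accumulates the persistent residual:

```python
import random

random.seed(1)

# Simplified battery: OCV(soc) = 3.0 + 1.2 * soc, series resistance R0.
A_OCV, B_OCV, R0 = 3.0, 1.2, 0.05
Q_CAP = 3600.0            # capacity in ampere-seconds (1 Ah)

def terminal_voltage(soc, current):
    return A_OCV + B_OCV * soc - R0 * current

def ekf_pi_soc(currents, voltages, dt=1.0, soc0=0.5,
               q=1e-7, r=2.5e-5, ki=0.01):
    """One-state EKF for SOC with a PI-style integral correction
    of the persistent voltage residual."""
    soc, p, integ = soc0, 0.1, 0.0
    for i_k, v_k in zip(currents, voltages):
        # Predict: coulomb counting.
        soc -= i_k * dt / Q_CAP
        p += q
        # Update: linear measurement, H = d(v)/d(soc) = B_OCV.
        residual = v_k - terminal_voltage(soc, i_k)
        k_gain = p * B_OCV / (B_OCV * B_OCV * p + r)
        integ += residual * dt
        soc += k_gain * residual + ki * integ * dt / Q_CAP
        p *= (1.0 - k_gain * B_OCV)
    return soc

# Simulate a 1 A discharge from 90% SOC while the filter starts at 50%.
true_soc, currents, voltages = 0.9, [], []
for _ in range(600):
    i_k = 1.0
    true_soc -= i_k / Q_CAP
    currents.append(i_k)
    voltages.append(terminal_voltage(true_soc, i_k) + random.gauss(0, 0.005))
soc_est = ekf_pi_soc(currents, voltages)
print(soc_est)  # converges near the true SOC of about 0.733
```

Despite the 40-percentage-point initial error, the voltage feedback pulls the estimate onto the true trajectory; the integral term compensates for slow residual bias that a plain EKF gain would leave uncorrected.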

  20. Dynamic gauge adjustment of high-resolution X-band radar data for convective rain storms: Model-based evaluation against measured combined sewer overflow

    DEFF Research Database (Denmark)

    Borup, Morten; Grum, Morten; Linde, Jens Jørgen

    2016-01-01

    Numerous studies have shown that radar rainfall estimates need to be adjusted against rain gauge measurements in order to be useful for hydrological modelling. In the current study we investigate if adjustment can improve radar rainfall estimates to the point where they can be used for modelling overflows from urban drainage systems, and we furthermore investigate the importance of the aggregation period of the adjustment scheme. This is done by continuously adjusting X-band radar data based on the previous 5–30 min of rain data recorded by multiple rain gauges and propagating the rainfall estimates through a hydraulic urban drainage model. The model is built entirely from physical data, without any calibration, to avoid bias towards any specific type of rainfall estimate. The performance is assessed by comparing measured and modelled water levels at a weir downstream of a highly impermeable...
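
A common baseline for this kind of gauge adjustment is a mean-field bias factor computed over the aggregation window: total gauge rainfall divided by total radar rainfall at the gauge pixels, applied to the whole radar field. This is a simplification of the scheme described, and the rainfall numbers are invented:

```python
def gauge_adjustment_factor(gauge_sums, radar_sums):
    """Mean-field bias factor from the last aggregation window:
    total gauge rainfall over total radar rainfall at gauge pixels."""
    g, r = sum(gauge_sums), sum(radar_sums)
    return g / r if r > 0 else 1.0

def adjust_radar_field(field, factor):
    """Apply the bias factor uniformly to a 2D radar rainfall field."""
    return [[cell * factor for cell in row] for row in field]

# Last 30 min of rain at three gauges vs. the radar pixels above them.
gauges = [4.2, 3.8, 5.0]
radar_at_gauges = [3.0, 2.9, 3.6]
factor = gauge_adjustment_factor(gauges, radar_at_gauges)
adjusted = adjust_radar_field([[1.0, 2.0], [0.5, 0.0]], factor)
print(round(factor, 3), adjusted)
```

Lengthening or shortening the aggregation window trades responsiveness to storm dynamics against the sampling noise of the gauges, which is exactly the trade-off the study investigates.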

  1. The anchor-based minimal important change, based on receiver operating characteristic analysis or predictive modeling, may need to be adjusted for the proportion of improved patients.

    Science.gov (United States)

    Terluin, Berend; Eekhout, Iris; Terwee, Caroline B

    2017-03-01

    Patients have their individual minimal important changes (iMICs) as their personal benchmarks to determine whether a perceived health-related quality of life (HRQOL) change constitutes a (minimally) important change for them. We denote the mean iMIC in a group of patients as the "genuine MIC" (gMIC). The aims of this paper are (1) to examine the relationship between the gMIC and the anchor-based minimal important change (MIC), determined by receiver operating characteristic analysis or by predictive modeling; (2) to examine the impact of the proportion of improved patients on these MICs; and (3) to explore the possibility of adjusting the MIC for the influence of the proportion of improved patients. We ran multiple simulations of patient samples involved in anchor-based MIC studies, with different characteristics of HRQOL (change) scores and distributions of iMICs. In addition, a real data set is analyzed for illustration. The receiver operating characteristic-based and predictive modeling MICs equal the gMIC when the proportion of improved patients equals 0.5. The MIC is estimated higher than the gMIC when the proportion improved is greater than 0.5, and the MIC is estimated lower than the gMIC when the proportion improved is less than 0.5. Using an equation including the predictive modeling MIC, the log-odds of improvement, the standard deviation of the HRQOL change score, and the correlation between the HRQOL change score and the anchor results in an adjusted MIC reflecting the gMIC irrespective of the proportion of improved patients. Adjusting the predictive modeling MIC for the proportion of improved patients assures that the adjusted MIC reflects the gMIC. We assumed normal distributions and global perceived change scores that were independent of the follow-up score. Additionally, floor and ceiling effects were not taken into account. Copyright © 2017 Elsevier Inc. All rights reserved.

  2. Adaptive adjustment of interval predictive control based on combined model and application in shell brand petroleum distillation tower

    Science.gov (United States)

    Sun, Chao; Zhang, Chunran; Gu, Xinfeng; Liu, Bin

    2017-10-01

    Constraints on the optimization objective often cannot be met when predictive control is applied to an industrial production process, in which case the online predictive controller cannot find a feasible, let alone globally optimal, solution. To solve this problem, based on a Back Propagation-Auto Regressive with exogenous inputs (BP-ARX) combined control model, nonlinear programming is used to analyze the feasibility of constrained predictive control; a feasibility decision theorem for the optimization objective is proposed, and a solution method for the soft-constraint slack variables is given for the case where the optimization objective is infeasible. On this basis, for interval control requirements on the controlled variables, the solved slack variables are introduced and an adaptive weighted interval predictive control algorithm is proposed, achieving adaptive regulation of the optimization objective and automatic adjustment of the infeasible interval range, expanding the feasible region, and ensuring the feasibility of the interval optimization objective. Finally, the feasibility and effectiveness of the algorithm are validated through comparative simulation experiments.
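The role of the soft-constraint slack variables can be illustrated with a deliberately tiny one-dimensional example (the quadratic objective, the bound, and the penalty weight are invented for illustration; the paper's BP-ARX-based controller is far more elaborate):

```python
# Hypothetical 1-D illustration of soft-constraint slack: the unconstrained
# optimum of (x - 5)^2 violates the hard bound x <= 2, so a penalised slack
# eps relaxes the bound to x <= 2 + eps and restores feasibility.
RHO = 10.0  # slack penalty weight (assumed)

def cost(x, eps):
    return (x - 5.0) ** 2 + RHO * eps ** 2

# Brute-force search on a 0.01 grid over the relaxed feasible set.
best = min(
    ((cost(xi / 100.0, ei / 100.0), xi / 100.0, ei / 100.0)
     for xi in range(0, 801) for ei in range(0, 401)
     if xi <= 200 + ei),           # integer form of x <= 2 + eps
    key=lambda t: t[0],
)
_, x_opt, eps_opt = best

# On the active boundary x = 2 + eps the cost is (eps - 3)^2 + RHO*eps^2,
# minimised analytically at eps = 3 / (1 + RHO) ~= 0.27.
print(round(x_opt, 2), round(eps_opt, 2))  # -> 2.27 0.27
```

Raising RHO shrinks the slack toward zero, i.e. the soft constraint approaches the original hard constraint.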

  3. Motor models and transient analysis for high-temperature, superconductor switch-based adjustable speed drive applications. Final report

    International Nuclear Information System (INIS)

    Bailey, J.M.

    1996-06-01

    New high-temperature superconductor (HTSC) technology may allow development of an energy-efficient power electronics switch for adjustable speed drive (ASD) applications involving variable-speed motors, superconducting magnetic energy storage systems, and other power conversion equipment. This project developed a motor simulation module for determining optimal applications of HTSC-based power switches in ASD systems.

  4. Dynamic gauge adjustment of high-resolution X-band radar data for convective rain storms: Model-based evaluation against measured combined sewer overflow

    Science.gov (United States)

    Borup, Morten; Grum, Morten; Linde, Jens Jørgen; Mikkelsen, Peter Steen

    2016-08-01

    Numerous studies have shown that radar rainfall estimates need to be adjusted against rain gauge measurements in order to be useful for hydrological modelling. In the current study we investigate whether adjustment can improve radar rainfall estimates to the point where they can be used for modelling overflows from urban drainage systems, and we furthermore investigate the importance of the aggregation period of the adjustment scheme. This is done by continuously adjusting X-band radar data based on the previous 5-30 min of rain data recorded by multiple rain gauges and propagating the rainfall estimates through a hydraulic urban drainage model. The model is built entirely from physical data, without any calibration, to avoid bias towards any specific type of rainfall estimate. The performance is assessed by comparing measured and modelled water levels at a weir downstream of a highly impermeable, well-defined, 64 ha urban catchment, for nine overflow-generating rain events. The dynamically adjusted radar data perform best when the aggregation period is as small as 10-20 min, in which case they perform much better than statically adjusted radar data and data from rain gauges situated 2-3 km away.
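Continuous gauge adjustment of this kind amounts to a moving-window mean-field bias correction; a minimal sketch, assuming a simple ratio-of-accumulations factor (the study's actual scheme may differ in detail):

```python
# Schematic dynamic gauge adjustment: each radar value is scaled by a
# mean-field bias factor computed from the previous `window` minutes of
# gauge and radar accumulations (illustrative, not the study's code).
def adjusted_radar(radar, gauges, window):
    """radar, gauges: per-minute rainfall depths; returns adjusted series."""
    out = []
    for t, r in enumerate(radar):
        lo = max(0, t - window)
        radar_sum = sum(radar[lo:t])
        gauge_sum = sum(gauges[lo:t])
        factor = gauge_sum / radar_sum if radar_sum > 0 else 1.0
        out.append(r * factor)
    return out

# Radar underestimates the gauges by a factor of 2 throughout:
radar  = [1.0] * 30
gauges = [2.0] * 30
adj = adjusted_radar(radar, gauges, window=10)
print(adj[-1])  # -> 2.0 once the window is fully populated
```

A shorter window tracks fast bias changes during convective events, at the cost of noisier factors, which mirrors the 10-20 min optimum reported above.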

  5. Adjustment or updating of models

    Indian Academy of Sciences (India)

    25, Part 3, June 2000, pp. 235–245 … While the model is defined in terms of these spatial parameters … discussed in terms of 'model order' with concern focused on whether or not the … In other words, it is not easy to justify what the required …

  6. A rainfall disaggregation scheme for sub-hourly time scales: Coupling a Bartlett-Lewis based model with adjusting procedures

    Science.gov (United States)

    Kossieris, Panagiotis; Makropoulos, Christos; Onof, Christian; Koutsoyiannis, Demetris

    2018-01-01

    Many hydrological applications, such as flood studies, require long rainfall records at fine time scales, varying from daily down to a 1 min time step. However, in the real world there is limited availability of data at sub-hourly scales. To cope with this issue, stochastic disaggregation techniques are typically employed to produce possible, statistically consistent rainfall events that aggregate up to the field data collected at coarser scales. A methodology for the stochastic disaggregation of rainfall at fine time scales was recently introduced, combining the Bartlett-Lewis process to generate rainfall events with adjusting procedures that modify the lower-level variables (i.e., hourly) so as to be consistent with the higher-level ones (i.e., daily). In the present paper, we extend the aforementioned scheme, initially designed and tested for the disaggregation of daily rainfall into hourly depths, to any sub-hourly time scale. In addition, we take advantage of recent developments in Poisson-cluster processes, incorporating into the methodology a Bartlett-Lewis model variant that introduces dependence between cell intensity and duration in order to capture the variability of rainfall at sub-hourly time scales. The disaggregation scheme is implemented in an R package, named HyetosMinute, to support disaggregation from the daily down to the 1-min time scale. The applicability of the methodology was assessed on 5-min rainfall records collected in Bochum, Germany, comparing the performance of the above-mentioned model variant against the original Bartlett-Lewis process (non-random with 5 parameters). The analysis shows that the disaggregation process adequately reproduces the most important statistical characteristics of rainfall at a wide range of time scales, while the introduction of the model with dependent intensity-duration results in better performance in terms of skewness, rainfall extremes and dry proportions.
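The adjusting step can be illustrated by a proportional adjusting procedure, in which synthetic fine-scale depths are rescaled to match the observed coarse-scale total; this is a minimal sketch of one such procedure, not the HyetosMinute implementation:

```python
# Proportional adjusting: rescale synthetic fine-scale depths so they sum
# exactly to the observed coarse-scale depth Z (illustrative numbers).
def proportional_adjust(fine, Z):
    s = sum(fine)
    if s == 0:
        return [Z / len(fine)] * len(fine)  # spread uniformly if all-dry
    return [x * Z / s for x in fine]

synthetic_5min = [0.4, 1.1, 0.0, 0.7, 0.3, 0.5]   # sums to 3.0 mm
observed_30min = 4.5                               # coarse-scale depth (mm)
adj = proportional_adjust(synthetic_5min, observed_30min)
print([round(x, 2) for x in adj], round(sum(adj), 2))
# -> [0.6, 1.65, 0.0, 1.05, 0.45, 0.75] 4.5
```

Note that the rescaling preserves dry 5-min intervals and the relative shape of the synthetic hyetograph while enforcing mass consistency with the coarser scale.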

  7. Adjustment model of thermoluminescence experimental data

    International Nuclear Information System (INIS)

    Moreno y Moreno, A.; Moreno B, A.

    2002-01-01

    This model adjusts experimental thermoluminescence results according to the equation I(T) = Σᵢ aᵢ·exp(−(T − cᵢ)²/bᵢ), where aᵢ, bᵢ, cᵢ are the parameters of the i-th peak, each fitted to a Gaussian curve. The curve fits can be performed manually or analytically using macro functions and the Solver.xla add-in previously installed in the computational system. This work shows: 1. Experimental data from a LiF glow curve obtained from the Physics Institute of UNAM, for which the data adjustment model is operated via the macro. 2. A four-peak LiF curve based on Harshaw data, simulated in Microsoft Excel and discussed in previous works, as a reference without the macro. (Author)
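A glow curve of this multi-peak Gaussian form is straightforward to evaluate; the sketch below uses four illustrative peaks (the parameter values are invented, not the Harshaw LiF data):

```python
import math

# Multi-peak Gaussian glow-curve model I(T) = sum_i a_i*exp(-(T - c_i)**2 / b_i).
PEAKS = [  # (a_i, b_i, c_i): height, width parameter, centre (arb. units, K)
    (1.0, 200.0, 400.0),
    (2.5, 300.0, 450.0),
    (4.0, 250.0, 500.0),
    (1.5, 400.0, 560.0),
]

def glow(T):
    return sum(a * math.exp(-(T - c) ** 2 / b) for a, b, c in PEAKS)

# At a well-separated peak centre the total intensity is close to that
# peak's height plus small contributions from its neighbours.
print(round(glow(500.0), 2))  # -> 4.0 (the third peak dominates here)
```

Fitting aᵢ, bᵢ, cᵢ to measured data is then a least-squares problem, which is exactly what the spreadsheet Solver performs in the model described above.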

  8. Extendable linearised adjustment model for deformation analysis

    NARCIS (Netherlands)

    Hiddo Velsink

    2015-01-01

    Author supplied: "This paper gives a linearised adjustment model for the affine, similarity and congruence transformations in 3D that is easily extendable with other parameters to describe deformations. The model considers all coordinates stochastic. Full positive semi-definite covariance matrices

  9. Extendable linearised adjustment model for deformation analysis

    NARCIS (Netherlands)

    Velsink, H.

    2015-01-01

    This paper gives a linearised adjustment model for the affine, similarity and congruence transformations in 3D that is easily extendable with other parameters to describe deformations. The model considers all coordinates stochastic. Full positive semi-definite covariance matrices and correlation

  10. Premium adjustment: actuarial analysis on epidemiological models ...

    African Journals Online (AJOL)

    In this paper, we analyse insurance premium adjustment in the context of an epidemiological model where the insurer's future financial liability is greater than the premium from patients. In this situation, it becomes extremely difficult for the insurer since a negative reserve would severely increase its risk of insolvency, ...

  11. Burden of Six Healthcare-Associated Infections on European Population Health: Estimating Incidence-Based Disability-Adjusted Life Years through a Population Prevalence-Based Modelling Study.

    Directory of Open Access Journals (Sweden)

    Alessandro Cassini

    2016-10-01

    Full Text Available Estimating the burden of healthcare-associated infections (HAIs) compared to other communicable diseases is an ongoing challenge given the need for good quality data on the incidence of these infections and the involved comorbidities. Based on the methodology of the Burden of Communicable Diseases in Europe (BCoDE) project and 2011-2012 data from the European Centre for Disease Prevention and Control (ECDC) point prevalence survey (PPS) of HAIs and antimicrobial use in European acute care hospitals, we estimated the burden of six common HAIs. The included HAIs were healthcare-associated pneumonia (HAP), healthcare-associated urinary tract infection (HA UTI), surgical site infection (SSI), healthcare-associated Clostridium difficile infection (HA CDI), healthcare-associated neonatal sepsis, and healthcare-associated primary bloodstream infection (HA primary BSI). The burden of these HAIs was measured in disability-adjusted life years (DALYs). Evidence relating to the disease progression pathway of each type of HAI was collected through systematic literature reviews, in order to estimate the risks attributable to HAIs. For each of the six HAIs, gender and age group prevalence from the ECDC PPS was converted into incidence rates by applying the Rhame and Sudderth formula. We adjusted for reduced life expectancy within the hospital population using three severity groups based on McCabe score data from the ECDC PPS. We estimated that 2,609,911 new cases of HAI occur every year in the European Union and European Economic Area (EU/EEA). The cumulative burden of the six HAIs was estimated at 501 DALYs per 100,000 general population each year in EU/EEA. HAP and HA primary BSI were associated with the highest burden and represented more than 60% of the total burden, with 169 and 145 DALYs per 100,000 total population, respectively. HA UTI, SSI, HA CDI, and HA primary BSI ranked as the third to sixth syndromes in terms of burden of disease.

  12. Burden of Six Healthcare-Associated Infections on European Population Health: Estimating Incidence-Based Disability-Adjusted Life Years through a Population Prevalence-Based Modelling Study

    Science.gov (United States)

    Eckmanns, Tim; Abu Sin, Muna; Ducomble, Tanja; Harder, Thomas; Sixtensson, Madlen; Velasco, Edward; Weiß, Bettina; Kramarz, Piotr; Monnet, Dominique L.; Kretzschmar, Mirjam E.; Suetens, Carl

    2016-01-01

    Background: Estimating the burden of healthcare-associated infections (HAIs) compared to other communicable diseases is an ongoing challenge given the need for good quality data on the incidence of these infections and the involved comorbidities. Based on the methodology of the Burden of Communicable Diseases in Europe (BCoDE) project and 2011–2012 data from the European Centre for Disease Prevention and Control (ECDC) point prevalence survey (PPS) of HAIs and antimicrobial use in European acute care hospitals, we estimated the burden of six common HAIs. Methods and Findings: The included HAIs were healthcare-associated pneumonia (HAP), healthcare-associated urinary tract infection (HA UTI), surgical site infection (SSI), healthcare-associated Clostridium difficile infection (HA CDI), healthcare-associated neonatal sepsis, and healthcare-associated primary bloodstream infection (HA primary BSI). The burden of these HAIs was measured in disability-adjusted life years (DALYs). Evidence relating to the disease progression pathway of each type of HAI was collected through systematic literature reviews, in order to estimate the risks attributable to HAIs. For each of the six HAIs, gender and age group prevalence from the ECDC PPS was converted into incidence rates by applying the Rhame and Sudderth formula. We adjusted for reduced life expectancy within the hospital population using three severity groups based on McCabe score data from the ECDC PPS. We estimated that 2,609,911 new cases of HAI occur every year in the European Union and European Economic Area (EU/EEA). The cumulative burden of the six HAIs was estimated at 501 DALYs per 100,000 general population each year in EU/EEA. HAP and HA primary BSI were associated with the highest burden and represented more than 60% of the total burden, with 169 and 145 DALYs per 100,000 total population, respectively. HA UTI, SSI, HA CDI, and HA primary BSI ranked as the third to sixth syndromes in terms of burden of disease

  13. OPEC model : adjustment or new model

    International Nuclear Information System (INIS)

    Ayoub, A.

    1994-01-01

    Since the early eighties, the international oil industry has gone through major changes: new financial markets, reintegration, opening of the upstream, liberalization of investments, and privatization. This article provides answers to two major questions: What are the reasons for these changes? And do these changes announce the replacement of the OPEC model by a new model in which state intervention is weaker and national companies are more autonomous? This would imply a profound change in the political and institutional systems of oil producing countries. (Author)

  14. On the possibility of calibrating urban storm-water drainage models using gauge-based adjusted radar rainfall estimates

    OpenAIRE

    Ochoa-Rodriguez, S; Wang, L; Simoes, N; Onof, C; Maksimović, Č

    2013-01-01

    Traditionally, urban storm water drainage models have been calibrated using only raingauge data, which may result in overly conservative models due to the lack of spatial description of rainfall. With the advent of weather radars, radar rainfall estimates with higher temporal and spatial resolution have become increasingly available and have started to be used operationally for urban storm water model calibration and real time operation. Nonetheless,...

  15. Risk-adjusted capitation based on the Diagnostic Cost Group Model: an empirical evaluation with health survey information

    NARCIS (Netherlands)

    L.M. Lamers (Leida)

    1999-01-01

    textabstractOBJECTIVE: To evaluate the predictive accuracy of the Diagnostic Cost Group (DCG) model using health survey information. DATA SOURCES/STUDY SETTING: Longitudinal data collected for a sample of members of a Dutch sickness fund. In the Netherlands the sickness

  16. Evaluation and adjustment of description of denitrification in the DailyDayCent and COUP models based on N2 and N2O laboratory incubation system measurements

    Science.gov (United States)

    Grosz, Balázs; Well, Reinhard; Dannenmann, Michael; Dechow, René; Kitzler, Barbara; Michel, Kerstin; Reent Köster, Jan

    2017-04-01

    data-sets are needed in view of the extreme spatio-temporal heterogeneity of denitrification. DASIM will provide such data based on laboratory incubations, including measurement of N2O and N2 fluxes and determination of the relevant drivers. Here, we present how we will use these data to evaluate common biogeochemical process models (DailyDayCent, COUP) with respect to modeled NO, N2O and N2 fluxes from denitrification. The models are used with different settings. The first approximation is the basic "factory" setting of each model. The next step is to quantify how much the modeled results improve after the appropriate parameters are adjusted on the basis of the measured values, relative to the "factory" results. This better adjustment, together with well-controlled measured input and output parameters, could provide a better understanding of the likely shortcomings of the tested models, which will be a basis for future model improvement.

  17. R.M. Solow Adjusted Model of Economic Growth

    Directory of Open Access Journals (Sweden)

    Ion Gh. Rosca

    2007-05-01

    Full Text Available Besides the models of M. Keynes, R.F. Harrod, E. Domar, D. Romer, Ramsey-Cass-Koopmans etc., the R.M. Solow model belongs to the category of models that characterize economic growth. The paper proposes the study of an adjusted R.M. Solow model of economic growth, the adjustment consisting of adapting the model to the characteristics of the Romanian economy. The article is the first in a three-paper series dedicated to the macroeconomic modelling theme using the R.M. Solow model, followed by “Measurement of the economic growth and extensions of the R.M. Solow adjusted model” and “Evolution scenarios at the Romanian economy level using the R.M. Solow adjusted model”. The analysis part of the model is based on the study of the equilibrium in the continuous case, with some interpretations of the discrete one using the state diagram. The optimization problem at the economy level is also used; it is built up of a specified number of representative consumers and firms in order to reveal the interaction between these elements.
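The equilibrium analysis of a Solow-type model can be illustrated with a minimal discrete-time simulation; the parameter values below are generic textbook assumptions, not a calibration to the Romanian economy discussed in the paper:

```python
# Discrete-time Solow sketch: capital per effective worker evolves as
# k_{t+1} = (s*k_t**alpha + (1 - delta)*k_t) / ((1 + n)*(1 + g)).
alpha, s, delta, n, g = 0.33, 0.25, 0.05, 0.01, 0.02  # illustrative values

k = 1.0
for _ in range(2000):
    k = (s * k ** alpha + (1 - delta) * k) / ((1 + n) * (1 + g))

# Analytic steady state from s*k^alpha = ((1+n)(1+g) - (1-delta)) * k:
k_star = (s / ((1 + n) * (1 + g) - (1 - delta))) ** (1 / (1 - alpha))
print(round(k, 4), round(k_star, 4))
```

The iteration converges to the same steady state from any positive starting capital stock, which is the stability property the state-diagram analysis establishes.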

  18. [Factors affecting in-hospital mortality in patients with sepsis: Development of a risk-adjusted model based on administrative data from German hospitals].

    Science.gov (United States)

    König, Volker; Kolzter, Olaf; Albuszies, Gerd; Thölen, Frank

    2018-05-01

    Inpatient administrative data from hospitals is already used nationally and internationally in many areas of internal and public quality assurance in healthcare. For sepsis as the principal diagnosis, only a few published approaches are available for Germany. The aim of this investigation is to identify factors influencing in-hospital mortality by employing appropriate analytical methods, in order to improve the internal quality management of sepsis care. The analysis was based on data from 754,727 DRG cases of the CLINOTEL hospital network billed in 2015. The network then comprised 45 hospitals of all supply levels with the exception of university hospitals (range: 100 to 1,172 beds per hospital). Cases of sepsis were identified via the ICD codes of their principal diagnosis. Multiple logistic regression analysis was used to determine the factors influencing in-hospital mortality for this population. The model was developed using sociodemographic and other potential variables that could be derived from the DRG data set, taking into account current literature. The model obtained was validated with inpatient administrative data of 2016 (51 hospitals, 850,776 DRG cases). Following the definition of the inclusion criteria, 5,608 cases of sepsis (2016: 6,384 cases) were identified in 2015. A total of 12 significant and, over both years, stable factors were identified, including age, severity of sepsis, reason for hospital admission and various comorbidities. The AUC value of the model, as a measure of predictive power, is above 0.8 (H-L test p > 0.05, R² = 0.27), which is an excellent result. The CLINOTEL model of risk adjustment for in-hospital mortality can be used to determine the mortality probability of patients with sepsis as the principal diagnosis with a very high degree of accuracy, taking the case mix into account. Further studies are needed to confirm whether the model presented here proves its value in the internal quality assurance of hospitals
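The AUC used above to judge the risk model has a simple rank-based interpretation: the probability that a randomly chosen death receives a higher predicted risk than a randomly chosen survivor. A self-contained sketch on invented scores (the CLINOTEL model itself is not reproduced here):

```python
# Rank-based (Mann-Whitney) AUC on predicted risks; ties count one half.
def auc(risk_dead, risk_alive):
    wins = sum((d > a) + 0.5 * (d == a)
               for d in risk_dead for a in risk_alive)
    return wins / (len(risk_dead) * len(risk_alive))

# Illustrative predicted in-hospital mortality risks:
dead  = [0.9, 0.8, 0.75, 0.6, 0.4]
alive = [0.7, 0.5, 0.35, 0.3, 0.2, 0.1]
print(round(auc(dead, alive), 3))  # -> 0.9
```

An AUC of 0.5 means the model discriminates no better than chance; values above 0.8, as reported for the sepsis model, indicate strong discrimination.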

  19. Player Modeling for Intelligent Difficulty Adjustment

    Science.gov (United States)

    Missura, Olana; Gärtner, Thomas

    In this paper we aim at automatically adjusting the difficulty of computer games by clustering players into different types and supervised prediction of the type from short traces of gameplay. An important ingredient of video games is to challenge players by providing them with tasks of appropriate and increasing difficulty. How this difficulty should be chosen and increase over time strongly depends on the ability, experience, perception and learning curve of each individual player. It is a subjective parameter that is very difficult to set. Wrong choices can easily lead to players stopping to play the game as they get bored (if underburdened) or frustrated (if overburdened). An ideal game should be able to adjust its difficulty dynamically governed by the player’s performance. Modern video games utilise a game-testing process to investigate among other factors the perceived difficulty for a multitude of players. In this paper, we investigate how machine learning techniques can be used for automatic difficulty adjustment. Our experiments confirm the potential of machine learning in this application.

  20. Size-Adjustable Microdroplets Generation Based on Microinjection

    Directory of Open Access Journals (Sweden)

    Shibao Li

    2017-03-01

    Full Text Available Microinjection is a promising tool for microdroplet generation, yet it remains challenging due to the Laplace pressure at the micropipette opening. Here, we apply a simple and robust substrate-contacting microinjection method to microdroplet generation, presenting a size-adjustable microdroplet generation method based on a critical injection (CI) model. Firstly, the micropipette is set to a preset injection pressure. Secondly, the micropipette is moved down to contact the substrate; the Laplace pressure in the droplet is then no longer relevant and the liquid flows out immediately. The liquid flows out continuously until the micropipette is lifted, ending the substrate-contacting situation, which restores the Laplace pressure at the micropipette opening and terminates the liquid injection. We carried out five groups of experiments, capturing 1,600 images in each group and detecting the microdroplet radius in each image. From these we determined the relationship among microdroplet radius, radius of the micropipette opening, time, and pressure, and two further experiments were conducted to verify this relationship. To verify the effectiveness of the substrate-contacting method and the relationship, we conducted two experiments with six desired microdroplet radii each, adjusting the injection time at a given pressure and adjusting the injection pressure at a given time. Six arrays of microdroplets were obtained in each experiment. The results show that the standard errors of the microdroplet radii are less than 2% and the experimental errors fall in the range of ±5%. The average operating speed is 20 microdroplets/min and the minimum radius of the microdroplets is 25 μm. This method has a simple experimental setup that enables easy manipulation and lower cost.
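The Laplace pressure that the substrate-contacting step circumvents is easy to put a number on; the sketch below uses the 2γ/r form for a spherical meniscus, with the abstract's minimum radius of 25 μm and an assumed water-like surface tension (the paper does not report these exact values):

```python
# Back-of-the-envelope Laplace pressure at the micropipette opening,
# Delta_p = 2 * gamma / r for a spherical meniscus.
gamma = 0.072          # surface tension of water, N/m (assumed)
r = 25e-6              # droplet / opening radius, m (from the abstract)
delta_p = 2 * gamma / r
print(round(delta_p), "Pa")  # -> 5760 Pa
```

Pressures of this order (several kPa) at small radii explain why free-air ejection of small droplets is difficult and why removing the Laplace term via substrate contact helps.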

  1. Methodological aspects of journaling a dynamic adjusting entry model

    Directory of Open Access Journals (Sweden)

    Vlasta Kašparovská

    2011-01-01

    Full Text Available This paper expands the discussion of the importance and function of adjusting entries for loan receivables. Discussion of the cyclical development of adjusting entries, their negative impact on the business cycle and potential solutions has intensified during the financial crisis. These discussions are still ongoing and continue to be relevant to members of the professional public, banking regulators and representatives of international accounting institutions. The objective of this paper is to evaluate a method of journaling dynamic adjusting entries under current accounting law. It also expresses the authors’ opinions on the potential for consistently implementing basic accounting principles in journaling adjusting entries for loan receivables under a dynamic model.

  2. Diagnosis-Based Risk Adjustment for Medicare Capitation Payments

    Science.gov (United States)

    Ellis, Randall P.; Pope, Gregory C.; Iezzoni, Lisa I.; Ayanian, John Z.; Bates, David W.; Burstin, Helen; Ash, Arlene S.

    1996-01-01

    Using 1991-92 data for a 5-percent Medicare sample, we develop, estimate, and evaluate risk-adjustment models that utilize diagnostic information from both inpatient and ambulatory claims to adjust payments for aged and disabled Medicare enrollees. Hierarchical coexisting conditions (HCC) models achieve greater explanatory power than diagnostic cost group (DCG) models by taking account of multiple coexisting medical conditions. Prospective models predict average costs of individuals with chronic conditions nearly as well as concurrent models. All models predict medical costs far more accurately than the current health maintenance organization (HMO) payment formula. PMID:10172666

  3. Model for Adjustment of Aggregate Forecasts using Fuzzy Logic

    Directory of Open Access Journals (Sweden)

    Taracena–Sanz L. F.

    2010-07-01

    Full Text Available This research suggests a contribution to the implementation of forecasting models. The proposed model is developed with the aim of fitting the projection of demand to the environment of firms, and is based on three considerations that explain why demand forecasts often differ from reality: 1) one of the problems most difficult to model in forecasting is the uncertainty related to the information available; 2) the methods traditionally used by firms for the projection of demand are mainly based on past behavior of the market (historical demand); and 3) these methods do not consider in their analysis the factors that are influencing the observed behaviour. Therefore, the proposed model is based on the implementation of Fuzzy Logic, integrating the main variables that affect the behavior of market demand and that are not considered in the classical statistical methods. The model was applied to a bottler of carbonated beverages, and with the adjustment of the projection of demand a more reliable forecast was obtained.
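A fuzzy adjustment of this kind can be sketched with triangular memberships and a tiny rule base; the variable names, membership breakpoints, and correction percentages below are illustrative assumptions, not values from the bottling-plant case study:

```python
# Minimal fuzzy-logic forecast adjustment: a market signal in [0, 1] is
# fuzzified into low/medium/high and mapped to a percentage correction
# of the statistical base forecast (centroid-style defuzzification).
def tri(x, a, b, c):
    """Triangular membership function peaking at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def adjust_forecast(base_forecast, market_signal):
    mu = {"low": tri(market_signal, -0.5, 0.0, 0.5),
          "medium": tri(market_signal, 0.0, 0.5, 1.0),
          "high": tri(market_signal, 0.5, 1.0, 1.5)}
    correction = {"low": -0.10, "medium": 0.0, "high": 0.10}  # rule outputs
    num = sum(mu[k] * correction[k] for k in mu)
    den = sum(mu.values())
    return base_forecast * (1 + (num / den if den else 0.0))

print(round(adjust_forecast(1000.0, 0.75), 2))  # -> 1050.0
```

A moderately favourable signal (0.75) is partly "medium" and partly "high", so the base forecast is raised by 5% rather than the full 10%, which is the graded behaviour that distinguishes the fuzzy adjustment from a crisp rule.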

  4. Health-Based Capitation Risk Adjustment in Minnesota Public Health Care Programs

    Science.gov (United States)

    Gifford, Gregory A.; Edwards, Kevan R.; Knutson, David J.

    2004-01-01

    This article documents the history and implementation of health-based capitation risk adjustment in Minnesota public health care programs, and identifies key implementation issues. Capitation payments in these programs are risk adjusted using an historical, health plan risk score, based on concurrent risk assessment. Phased implementation of capitation risk adjustment for these programs began January 1, 2000. Minnesota's experience with capitation risk adjustment suggests that: (1) implementation can accelerate encounter data submission, (2) administrative decisions made during implementation can create issues that impact payment model performance, and (3) changes in diagnosis data management during implementation may require changes to the payment model. PMID:25372356

  5. A model-based approach to adjust microwave observations for operational applications: results of a campaign at Munich Airport in winter 2011/2012

    Directory of Open Access Journals (Sweden)

    J. Güldner

    2013-10-01

    Full Text Available In the frame of the project "LuFo iPort VIS", which focuses on the implementation of a site-specific visibility forecast, a field campaign was organised to offer detailed information to a numerical fog model. As part of additional observing activities, a 22-channel microwave radiometer profiler (MWRP) was operated at the Munich Airport site in Germany from October 2011 to February 2012 in order to provide vertical temperature and humidity profiles as well as cloud liquid water information. Independently of the model-related aims of the campaign, the MWRP observations were used to study their capability to work in operational meteorological networks. Over the past decade a growing number of MWRPs have been introduced and a user community (MWRnet) was established to encourage activities directed at the setup of an operational network. On that account, the comparability of observations from different network sites plays a fundamental role for any application in climatology and numerical weather forecast. In practice, however, systematic temperature and humidity differences (bias) between MWRP retrievals and co-located radiosonde profiles were observed and reported by several authors. This bias can be caused by instrumental offsets and by the absorption model used in the retrieval algorithms, as well as by applying a non-representative training data set. At the Lindenberg observatory, besides a neural network provided by the manufacturer, a measurement-based regression method was developed to reduce the bias. These regression operators are calculated on the basis of coincident radiosonde observations and MWRP brightness temperature (TB) measurements. However, MWRP applications in a network require comparable results at any site, even if no radiosondes are available.
The motivation of this work is directed to a verification of the suitability of the operational local forecast model COSMO-EU of the Deutscher Wetterdienst (DWD) for the calculation

  6. Storm Water Management Model Climate Adjustment Tool (SWMM-CAT)

    Science.gov (United States)

    The US EPA’s newest tool, the Stormwater Management Model (SWMM) – Climate Adjustment Tool (CAT) is meant to help municipal stormwater utilities better address potential climate change impacts affecting their operations. SWMM, first released in 1971, models hydrology and hydrauli...

  7. PERMINTAAN BERAS DI PROVINSI JAMBI (Penerapan Partial Adjustment Model

    Directory of Open Access Journals (Sweden)

    Wasi Riyanto

    2013-07-01

    Full Text Available The purpose of this study is to determine the effects of the price of rice, the price of wheat flour, population, income, and the previous year's rice demand on current rice demand, the elasticity of rice demand, and the rice demand forecast for Jambi Province. This study uses secondary time-series data covering the 22 years from 1988 to 2009. The variables are rice demand (Qdt), the price of rice (Hb), the price of wheat flour (Hg), population (Jp), income (PDRB), and the previous year's rice demand (Qdt-1). The study applies multiple regression and a dynamic Partial Adjustment Model, with rice demand as the dependent variable and the price of rice, the price of flour, population, income, and the previous year's rice demand as independent variables. The Partial Adjustment Model analysis showed that the effects of changes in the prices of rice and flour on rice demand are not significant. Population and the previous year's rice demand have positive and significant effects on rice demand, while income has a negative and significant effect. Rice demand is inelastic with respect to the price of rice, income, and the price of flour, because rice is not a normal good but a necessity, so there is no substitution of rice with other commodities in Jambi Province. Based on the analysis, it is recommended that the government control the rate of population increase, given that population is one of the factors affecting rice demand. The government is also expected to promote the consumption of non-rice staple foods to curb the growing demand for rice, and to develop the diversification of staple foods other than rice.
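A partial adjustment model is estimated as a regression with a lagged dependent variable, and the coefficient on the lag gives the adjustment speed λ = 1 − c. A self-contained sketch on noise-free synthetic data (the coefficients and the single exogenous driver are invented, not the Jambi series):

```python
# Partial-adjustment estimation: Qd_t = a + b*X_t + c*Qd_{t-1}.
def solve3(A, y):
    """Solve a 3x3 linear system by Gauss-Jordan elimination with pivoting."""
    M = [row[:] + [v] for row, v in zip(A, y)]
    for i in range(3):
        p = max(range(i, 3), key=lambda r: abs(M[r][i]))
        M[i], M[p] = M[p], M[i]
        for r in range(3):
            if r != i:
                f = M[r][i] / M[i][i]
                M[r] = [mr - f * mi for mr, mi in zip(M[r], M[i])]
    return [M[i][3] / M[i][i] for i in range(3)]

a_true, b_true, c_true = 2.0, 0.5, 0.6      # assumed "true" coefficients
X = [3.0 + (2 * t) % 5 for t in range(40)]  # exogenous driver (e.g. income)
Q = [10.0]                                   # initial demand level
for t in range(1, 40):
    Q.append(a_true + b_true * X[t] + c_true * Q[t - 1])  # noise-free data

rows = [(1.0, X[t], Q[t - 1]) for t in range(1, 40)]
ys = Q[1:]
# Ordinary least squares via the normal equations (Z'Z) beta = Z'y.
ZtZ = [[sum(r[i] * r[j] for r in rows) for j in range(3)] for i in range(3)]
Zty = [sum(r[i] * y for r, y in zip(rows, ys)) for i in range(3)]
a_hat, b_hat, c_hat = solve3(ZtZ, Zty)
print(round(a_hat, 3), round(b_hat, 3), round(1 - c_hat, 3))  # -> 2.0 0.5 0.4
```

In this framework the short-run effect of a regressor is its coefficient b, while the long-run effect is b divided by the adjustment speed λ, which is why a lag coefficient near one implies slow adjustment and large long-run responses.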

  8. Parenting Stress, Mental Health, Dyadic Adjustment: A Structural Equation Model

    Directory of Open Access Journals (Sweden)

    Luca Rollè

    2017-05-01

    Full Text Available Objective: In the 1st year of the post-partum period, parenting stress, mental health, and dyadic adjustment are important for the wellbeing of both parents and the child. However, there are few studies that analyze the relationship among these three dimensions. The aim of this study is to investigate the relationships between parenting stress, mental health (depressive and anxiety symptoms), and dyadic adjustment among first-time parents. Method: We studied 268 parents (134 couples) of healthy babies. At 12 months post-partum, both parents filled out, in a counterbalanced order, the Parenting Stress Index-Short Form, the Edinburgh Post-natal Depression Scale, the State-Trait Anxiety Inventory, and the Dyadic Adjustment Scale. Structural equation modeling was used to analyze the potential mediating effects of mental health on the relationship between parenting stress and dyadic adjustment. Results: Results showed a full mediation effect of mental health between parenting stress and dyadic adjustment. A multi-group analysis further found that the paths did not differ across mothers and fathers. Discussion: The results suggest that mental health is an important dimension that mediates the relationship between parenting stress and dyadic adjustment in the transition to parenthood.

  9. Color adjustable LED driver design based on PWM

    Science.gov (United States)

    Du, Yiying; Yu, Caideng; Que, Longcheng; Zhou, Yun; Lv, Jian

    2012-10-01

    The light-emitting diode (LED) is a cold light source that has developed rapidly in recent years. Its high luminous efficiency, long lifetime, high reliability, and lack of pollution satisfy consumer and everyday demands, and it is gradually replacing the traditional incandescent and fluorescent lamps. However, high cost and unstable drive circuits restrict its range of application. To popularize LED applications, we focus on improving the LED driver circuit. Building on the traditional LED drive circuit, we adopt a pre-set constant-current mode and introduce pulse width modulation (PWM) control to realize an adjustable 256-level gray-scale display. In this paper, based on human visual characteristics and the traditional PWM control method, we propose a new PWM control timing clock that alters the duty cycle of the PWM signal to realize a simple gamma correction. Consequently, the brightness accords with human visual characteristics.
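The duty-cycle-based gamma correction described in this record can be sketched as follows; the gamma exponent, PWM period, and function names are illustrative assumptions, not values from the paper.

```python
# Hypothetical sketch of duty-cycle-based gamma correction for 256 gray
# levels; the exponent, period, and names are illustrative assumptions,
# not values from the paper.
GAMMA = 2.2        # assumed perceptual correction exponent
LEVELS = 256       # 256-level gray scale from the abstract
PWM_PERIOD = 1024  # assumed timer ticks per PWM period

def duty_cycle(gray):
    """PWM high-time in ticks for an 8-bit gray level, gamma-corrected."""
    normalized = gray / (LEVELS - 1)
    corrected = normalized ** GAMMA  # dim mid-tones to match perception
    return round(corrected * PWM_PERIOD)

print(duty_cycle(0), duty_cycle(128), duty_cycle(255))
```

Mid-gray maps to well under half the PWM period, which is what makes perceived brightness track the gray level approximately linearly.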

  10. Husbands' perceptions of their wives' breast cancer coping efficacy: testing congruence models of adjustment.

    Science.gov (United States)

    Merluzzi, Thomas V; Martinez Sanchez, MaryAnn

    2018-01-01

    Recent reviews have reinforced the notion that having a supportive spouse can help with the process of coping with and adjusting to cancer. Congruence between spouses' perspectives has been proposed as one mechanism in that process, yet alternative models of congruence have not been examined closely. This study assessed alternative models of congruence in perceptions of coping and their mediating effects on adjustment to breast cancer. Seventy-two women in treatment for breast cancer and their husbands completed measures of marital adjustment, self-efficacy for coping, and adjustment to cancer. Karnofsky Performance Status was obtained from medical records. Wives completed a measure of self-efficacy for coping (wives' ratings of self-efficacy for coping [WSEC]) and husbands completed a measure based on their perceptions of their wives' coping efficacy (husbands' ratings of wives' self-efficacy for coping [HSEC]). Interestingly, the correlation between WSEC and HSEC was only 0.207; thus, they are relatively independent perspectives. Three models were tested to determine the nature of the relationship between WSEC and HSEC: a discrepancy model (WSEC - HSEC), an additive model (WSEC + HSEC), and a multiplicative model (WSEC × HSEC). The discrepancy model was not related to wives' adjustment; however, the additive (B = 0.205, P < 0.001) and multiplicative (B = 0.001, P < 0.001) models were significantly related to wives' adjustment. The additive model also mediated the relationship between performance status and adjustment. Husbands' perception of their wives' coping efficacy contributed marginally to their wives' adjustment, and the combination of WSEC and HSEC mediated the relationship between functional status and wives' adjustment, thus positively impacting wives' adjustment to cancer. Future research is needed to determine the quality of the differences between HSEC and WSEC in order to develop interventions to optimize the
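The three congruence scores this record compares are simple functions of the two ratings. A minimal sketch with made-up ratings (the values are illustrative, not study data):

```python
# The three congruence indices from the abstract, computed for hypothetical
# wife (WSEC) and husband (HSEC) coping-efficacy ratings; the numbers are
# illustrative, not study data.
wsec = [72, 85, 60]   # wives' self-ratings
hsec = [70, 90, 40]   # husbands' ratings of their wives

discrepancy    = [w - h for w, h in zip(wsec, hsec)]  # WSEC - HSEC
additive       = [w + h for w, h in zip(wsec, hsec)]  # WSEC + HSEC
multiplicative = [w * h for w, h in zip(wsec, hsec)]  # WSEC x HSEC

# Each index would then enter a regression predicting the wife's adjustment,
# as in the abstract's model comparison.
print(discrepancy, additive, multiplicative)
```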

  11. Aqua/Aura Updated Inclination Adjust Maneuver Performance Prediction Model

    Science.gov (United States)

    Boone, Spencer

    2017-01-01

    This presentation will discuss the updated Inclination Adjust Maneuver (IAM) performance prediction model that was developed for Aqua and Aura following the 2017 IAM series. This updated model uses statistical regression methods to identify potential long-term trends in maneuver parameters, yielding improved predictions when re-planning past maneuvers. The presentation has been reviewed and approved by Eric Moyer, ESMO Deputy Project Manager.

  12. Permintaan Beras di Provinsi Jambi (Penerapan Partial Adjustment Model

    Directory of Open Access Journals (Sweden)

    Wasi Riyanto

    2013-07-01

    The purpose of this study is to determine the effect of the price of rice, the price of wheat flour, population, population income, and the previous year's rice demand on rice demand, to estimate rice demand elasticities, and to forecast rice demand in Jambi Province. This study uses secondary data, comprising a 22-year time series from 1988 to 2009. The variables are rice demand (Qdt), the price of rice (Hb), the price of wheat flour (Hg), population (Jp), population income (PDRB), and the previous year's rice demand (Qdt-1). The methods of this study are multiple regression and a dynamic Partial Adjustment Model, where rice demand is the dependent variable and the price of rice, the price of flour, population, population income, and the previous year's rice demand are the independent variables. The Partial Adjustment Model analysis showed that changes in the prices of rice and flour have no significant effect on changes in rice demand. Population and the previous year's rice demand have a positive and significant effect on rice demand, while population income has a negative and significant effect. Demand for rice is inelastic with respect to the price of rice, population income, and the price of flour, because rice is not a normal good but a necessity, so there is no substitution of rice with other commodities in Jambi Province. Based on the analysis, it is recommended that the government control the rate of population growth, given that population is one of the factors affecting rice demand. The government should also begin to promote non-rice food consumption to curb the growing demand for rice. Finally, the government should develop diversification of staple foods other than rice. Keywords: Demand, Rice, Income, Population
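As a minimal sketch of the Partial Adjustment Model's mechanics (all coefficients below are invented for illustration, not the study's estimates): the coefficient on lagged demand implies a speed of adjustment, and short-run effects scale up into long-run effects.

```python
# Partial Adjustment Model:  Qd_t = a + b*X_t + lam*Qd_{t-1} + e_t,
# where lam = 1 - delta and delta is the fraction of the gap between actual
# and desired demand that is closed each period. All numbers are hypothetical.
lam = 0.6                  # hypothetical estimated coefficient on Qd_{t-1}
delta = 1 - lam            # speed of adjustment: 40% of the gap per year
b_short = -0.08            # hypothetical short-run price coefficient
b_long = b_short / delta   # implied long-run price effect

print(delta, round(b_long, 2))
```

The same scaling applies to elasticities: a short-run elasticity divided by delta gives the corresponding long-run elasticity.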

  13. Contrast Enhancement Algorithm Based on Gap Adjustment for Histogram Equalization

    Directory of Open Access Journals (Sweden)

    Chung-Cheng Chiu

    2016-06-01

    Image enhancement methods have been widely used to improve the visual effects of images. Owing to its simplicity and effectiveness, histogram equalization (HE) is one of the methods used for enhancing image contrast. However, HE may cause over-enhancement and feature loss, which lead to an unnatural look and loss of detail in the processed images. Researchers have proposed various HE-based methods to solve the over-enhancement problem; however, they have largely ignored the feature loss problem. Therefore, a contrast enhancement algorithm based on gap adjustment for histogram equalization (CegaHE) is proposed. It builds on a visual contrast enhancement algorithm based on histogram equalization (VCEA), which generates visually pleasing enhanced images, and improves on the enhancement effects of VCEA. CegaHE adjusts the gaps between two gray values based on an adjustment equation that takes the properties of human visual perception into consideration, to solve the over-enhancement problem. In addition, it alleviates the feature loss problem and further enhances the textures in the dark regions of the images to improve the quality of the processed images for human visual perception. Experimental results demonstrate that CegaHE is a reliable method for contrast enhancement and that it significantly outperforms VCEA and other methods.
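For reference, plain histogram equalization, the baseline whose gray-level gaps CegaHE adjusts, can be sketched in a few lines (the toy pixel values are illustrative):

```python
# Plain histogram equalization on a flat list of 8-bit gray values. This is
# the classic baseline, not the CegaHE gap-adjustment variant itself.
def equalize(pixels, levels=256):
    hist = [0] * levels
    for p in pixels:
        hist[p] += 1
    # cumulative distribution function
    cdf, total = [], 0
    for h in hist:
        total += h
        cdf.append(total)
    cdf_min = next(c for c in cdf if c > 0)  # first non-zero CDF value
    n = len(pixels)
    # classic HE transfer function mapping the CDF onto the full output range
    lut = [round((c - cdf_min) / (n - cdf_min) * (levels - 1)) if n > cdf_min else 0
           for c in cdf]
    return [lut[p] for p in pixels]

print(equalize([50, 50, 100, 100, 150, 200]))
```

Because the mapping depends only on the CDF, sparsely populated gray levels can be pushed far apart, which is exactly the over-enhancement that gap-adjustment methods aim to tame.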

  14. Contrast Enhancement Algorithm Based on Gap Adjustment for Histogram Equalization

    Science.gov (United States)

    Chiu, Chung-Cheng; Ting, Chih-Chung

    2016-01-01

    Image enhancement methods have been widely used to improve the visual effects of images. Owing to its simplicity and effectiveness histogram equalization (HE) is one of the methods used for enhancing image contrast. However, HE may result in over-enhancement and feature loss problems that lead to unnatural look and loss of details in the processed images. Researchers have proposed various HE-based methods to solve the over-enhancement problem; however, they have largely ignored the feature loss problem. Therefore, a contrast enhancement algorithm based on gap adjustment for histogram equalization (CegaHE) is proposed. It refers to a visual contrast enhancement algorithm based on histogram equalization (VCEA), which generates visually pleasing enhanced images, and improves the enhancement effects of VCEA. CegaHE adjusts the gaps between two gray values based on the adjustment equation, which takes the properties of human visual perception into consideration, to solve the over-enhancement problem. Besides, it also alleviates the feature loss problem and further enhances the textures in the dark regions of the images to improve the quality of the processed images for human visual perception. Experimental results demonstrate that CegaHE is a reliable method for contrast enhancement and that it significantly outperforms VCEA and other methods. PMID:27338412

  15. AUTOMATIC ADJUSTMENT OF WIDE-BASE GOOGLE STREET VIEW PANORAMAS

    Directory of Open Access Journals (Sweden)

    E. Boussias-Alexakis

    2016-06-01

    This paper focuses on the issue of sparse matching in cases of extremely wide-base panoramic images, such as those acquired by Google Street View in narrow urban streets. In order to effectively use affine point operators for bundle adjustment, panoramas must be suitably rectified to simulate affinity. To this end, a custom piecewise planar projection (triangular prism projection) is applied. On the assumption that the image baselines run parallel to the street façades, the estimated locations of the vanishing lines of the façade plane allow effectively removing projectivity and applying the ASIFT point operator to panorama pairs. Results from comparisons with multi-panorama adjustment, based on manually measured image points, and with ground truth indicate that such an approach, if further elaborated, may well provide a realistic answer to the matching problem in the case of demanding panorama configurations.

  16. Modeling wind adjustment factor and midflame wind speed for Rothermel's surface fire spread model

    Science.gov (United States)

    Patricia L. Andrews

    2012-01-01

    Rothermel's surface fire spread model was developed to use a value for the wind speed that affects surface fire, called midflame wind speed. Models have been developed to adjust 20-ft wind speed to midflame wind speed for sheltered and unsheltered surface fuel. In this report, Wind Adjustment Factor (WAF) model equations are given, and the BehavePlus fire modeling...

  17. The high-density lipoprotein-adjusted SCORE model worsens SCORE-based risk classification in a contemporary population of 30 824 Europeans

    DEFF Research Database (Denmark)

    Mortensen, Martin B; Afzal, Shoaib; Nordestgaard, Børge G

    2015-01-01

    .8 years of follow-up, 339 individuals died of CVD. In the SCORE target population (age 40-65; n = 30,824), fewer individuals were at baseline categorized as high risk (≥5% 10-year risk of fatal CVD) using SCORE-HDL compared with SCORE (10 vs. 17% in men, 1 vs. 3% in women). SCORE-HDL did not improve...... with SCORE, but deteriorated risk classification based on NRI. Future guidelines should consider lower decision thresholds and prioritize CVD morbidity and people above age 65....

  18. Engine control system having fuel-based adjustment

    Science.gov (United States)

    Willi, Martin L [Dunlap, IL; Fiveland, Scott B [Metamora, IL; Montgomery, David T [Edelstein, IL; Gong, Weidong [Dunlap, IL

    2011-03-15

    A control system for an engine having a cylinder is disclosed having an engine valve configured to affect a fluid flow of the cylinder, an actuator configured to move the engine valve, and an in-cylinder sensor configured to generate a signal indicative of a characteristic of fuel entering the cylinder. The control system also has a controller in communication with the actuator and the sensor. The controller is configured to determine the characteristic of the fuel based on the signal and selectively regulate the actuator to adjust a timing of the engine valve based on the characteristic of the fuel.

  19. Modeling and Control of the Redundant Parallel Adjustment Mechanism on a Deployable Antenna Panel

    Directory of Open Access Journals (Sweden)

    Lili Tian

    2016-10-01

    With the aim of developing multiple-input multiple-output (MIMO) coupling systems with a redundant parallel adjustment mechanism on the deployable antenna panel, a structural control integrated design methodology is proposed in this paper. First, the modal information is extracted from the finite element model of the antenna panel structure, and the mathematical model is established with the Hamilton principle. Second, a discrete Linear Quadratic Regulator (LQR) controller is added to the model in order to control the actuators and adjust the shape of the panel. Finally, the engineering practicality of the modeling and control method is verified through finite-element-based simulation.

  20. Capacitance-Based Frequency Adjustment of Micro Piezoelectric Vibration Generator

    Directory of Open Access Journals (Sweden)

    Xinhua Mao

    2014-01-01

    Micro piezoelectric vibration generators have wide application in the field of microelectronics. Their natural frequency is fixed once manufactured. However, resonance cannot occur when the natural frequency of a piezoelectric generator and the frequency of the vibration source are not consistent; the output voltage of the generator then declines sharply, and it cannot supply power to electronic devices normally. In order to make the natural frequency of the generator approach the frequency of the vibration source, capacitance-based frequency adjustment is adopted in this paper. Different capacitance adjustment schemes are designed for different locations of the adjustment layer, and the corresponding frequency adjustment models are established. The characteristics and effect of the capacitance adjustment have been simulated with these models. Experimental results show that the natural frequency of the generator varies from 46.5 Hz to 42.4 Hz as the bypass capacitance increases from 0 nF to 30 nF. The natural frequency of a piezoelectric vibration generator can thus be continuously adjusted by this method.

  1. Capacitance-based frequency adjustment of micro piezoelectric vibration generator.

    Science.gov (United States)

    Mao, Xinhua; He, Qing; Li, Hong; Chu, Dongliang

    2014-01-01

    Micro piezoelectric vibration generators have wide application in the field of microelectronics. Their natural frequency is fixed once manufactured. However, resonance cannot occur when the natural frequency of a piezoelectric generator and the frequency of the vibration source are not consistent; the output voltage of the generator then declines sharply, and it cannot supply power to electronic devices normally. In order to make the natural frequency of the generator approach the frequency of the vibration source, capacitance-based frequency adjustment is adopted in this paper. Different capacitance adjustment schemes are designed for different locations of the adjustment layer, and the corresponding frequency adjustment models are established. The characteristics and effect of the capacitance adjustment have been simulated with these models. Experimental results show that the natural frequency of the generator varies from 46.5 Hz to 42.4 Hz as the bypass capacitance increases from 0 nF to 30 nF. The natural frequency of a piezoelectric vibration generator can thus be continuously adjusted by this method.
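A back-of-envelope reading of the reported endpoints: over the measured range the frequency shift is roughly linear in the bypass capacitance (a simplification; the true dependence is nonlinear), giving a tuning sensitivity of about -0.14 Hz/nF.

```python
# Linear interpolation between the two operating points reported in the
# abstract: 46.5 Hz at 0 nF bypass capacitance, 42.4 Hz at 30 nF. This is a
# rough sketch, not the paper's frequency-adjustment model.
f0, f1 = 46.5, 42.4   # Hz, reported natural frequencies
c0, c1 = 0.0, 30.0    # nF, corresponding bypass capacitance

sensitivity = (f1 - f0) / (c1 - c0)  # roughly -0.14 Hz per nF

def approx_frequency(c_nf):
    """Interpolated natural frequency for a bypass capacitance in [0, 30] nF."""
    return f0 + sensitivity * c_nf

print(round(approx_frequency(15.0), 2))  # midpoint, about 44.45 Hz
```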

  2. Implication of Mauk Nursing Rehabilitation Model on Adjustment of Stroke Patients

    Directory of Open Access Journals (Sweden)

    Zeinab Ebrahimpour mouziraji

    2014-12-01

    Objectives: Stroke is a neurological syndrome with sudden or gradual onset caused by disruption of the cerebral blood vessels, with effects lasting 24 hours or more. Complications of stroke affect many aspects of the individual's life. According to the studies of De Spulveda and Chang, disability reduces effective adjustment. This study aimed to examine the adjustment of stroke patients based on the main concepts of the Mauk nursing rehabilitation model. Methods: In a quasi-experimental one-group pretest-posttest design, data were collected in the neurology clinic of Imam Khomeini hospital and a stroke patient rehabilitation center in Tehran (Tabassom). Data collection included demographic and adjustment questionnaires for stroke patients. The intervention, delivered to seven patients, comprised seven one-hour training sessions following the Mauk model. Data were analyzed in SPSS using paired t-tests and compared with previous results. Results: There were significant differences between the mean pretest and posttest scores on the stroke adjustment questionnaire. However, among the adjustment sub-scales, except for relationship with spouse and personal adjustment, there were no statistically significant differences between pretest and posttest. Discussion: The results indicated that training affected some aspects of adjustment of stroke patients, such as improving function and coping with complications and limitations. Nurses can help by implementing plans such as patient education in this regard.

  3. Capital Structure: Target Adjustment Model and a Mediation Moderation Model with Capital Structure as Mediator

    OpenAIRE

    Abedmajid, Mohammed

    2015-01-01

    This study consists of two models. Model 1 is conducted to check whether there is adjustment toward a target (optimal) capital structure, in the context of Turkish firms listed on the stock market, over the period 2003-2014. Model 2 captures the interaction between firm size, profitability, market value and capital structure using a moderation mediation model with capital structure as mediator. The results of Model 1 show that there is partial adjustment of the capital structure toward target levels. The results of

  4. Risk adjustment models for short-term outcomes after surgical resection for oesophagogastric cancer.

    Science.gov (United States)

    Fischer, C; Lingsma, H; Hardwick, R; Cromwell, D A; Steyerberg, E; Groene, O

    2016-01-01

    Outcomes for oesophagogastric cancer surgery are compared with the aim of benchmarking quality of care. Adjusting for patient characteristics is crucial to avoid biased comparisons between providers. The study objective was to develop a case-mix adjustment model for comparing 30- and 90-day mortality and anastomotic leakage rates after oesophagogastric cancer resections. The study reviewed existing models, considered expert opinion and examined audit data in order to select predictors that were subsequently used to develop a case-mix adjustment model for the National Oesophago-Gastric Cancer Audit, covering England and Wales. Models were developed on patients undergoing surgical resection between April 2011 and March 2013 using logistic regression. Model calibration and discrimination were quantified using a bootstrap procedure. Most existing risk models for oesophagogastric resections were methodologically weak, outdated or based on detailed laboratory data that are not generally available. In 4882 patients with oesophagogastric cancer used for model development, 30- and 90-day mortality rates were 2·3 and 4·4 per cent respectively, and 6·2 per cent of patients developed an anastomotic leak. The internally validated models, based on predictors selected from the literature, showed moderate discrimination (area under the receiver operating characteristic (ROC) curve 0·646 for 30-day mortality, 0·664 for 90-day mortality and 0·587 for anastomotic leakage) and good calibration. Based on available data, three case-mix adjustment models for postoperative outcomes in patients undergoing curative surgery for oesophagogastric cancer were developed. These models should be used for risk adjustment when assessing hospital performance in the National Health Service, and tested in other large health systems. © 2015 BJS Society Ltd Published by John Wiley & Sons Ltd.
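The discrimination statistic quoted here (area under the ROC curve) has a simple rank interpretation, sketched below with invented risks and outcomes rather than audit data:

```python
# Rank-based computation of the area under the ROC curve (the discrimination
# measure reported in the abstract): the probability that a randomly chosen
# case with the outcome has a higher predicted risk than one without. The
# risks and outcomes below are invented, not audit data.
def auc(risks, outcomes):
    pos = [r for r, y in zip(risks, outcomes) if y == 1]
    neg = [r for r, y in zip(risks, outcomes) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

risks    = [0.02, 0.10, 0.05, 0.30, 0.08]
outcomes = [0,    1,    0,    1,    0]
print(auc(risks, outcomes))  # 1.0 here: both events outrank every non-event
```

An AUC near 0.65, as reported, means the model ranks a random death above a random survivor only about two times in three, hence the paper's description of "moderate" discrimination.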

  5. Parallax adjustment algorithm based on Susan-Zernike moments

    Science.gov (United States)

    Deng, Yan; Zhang, Kun; Shen, Xiaoqin; Zhang, Huiyun

    2018-02-01

    Precise parallax detection through definition evaluation, together with adjustment of the assembly position of the objective lens or the reticle, is an important means of eliminating parallax in a telescope system, so that the imaging screen and the reticle are clearly focused at the same time. An adaptive definition evaluation function based on Susan-Zernike moments is proposed. First, the image is preprocessed by the Susan operator to find potential boundary edges. Then, the Zernike moments operator is used to determine the exact region of the reticle line with sub-pixel accuracy. The image definition is evaluated only in this region. The evaluation function consists of the gradient difference calculated by the Zernike moments operator. By adjusting the assembly position of the objective lens, the imaging screen and the reticle are brought simultaneously into the state of maximum definition, so the parallax can be eliminated. The experimental results show that the proposed definition evaluation function offers good focusing performance and stronger anti-interference ability than other commonly used definition evaluation functions.

  6. A Four-Part Model of Autonomy during Emerging Adulthood: Associations with Adjustment

    Science.gov (United States)

    Lamborn, Susie D.; Groh, Kelly

    2009-01-01

    We found support for a four-part model of autonomy that links connectedness, separation, detachment, and agency to adjustment during emerging adulthood. Based on self-report surveys of 285 American college students, expected associations among the autonomy variables were found. In addition, agency, as measured by self-reliance, predicted lower…

  7. Medicare and Medicaid Programs; CY 2018 Home Health Prospective Payment System Rate Update and CY 2019 Case-Mix Adjustment Methodology Refinements; Home Health Value-Based Purchasing Model; and Home Health Quality Reporting Requirements. Final rule.

    Science.gov (United States)

    2017-11-07

    This final rule updates the home health prospective payment system (HH PPS) payment rates, including the national, standardized 60-day episode payment rates, the national per-visit rates, and the non-routine medical supply (NRS) conversion factor, effective for home health episodes of care ending on or after January 1, 2018. This rule also: Updates the HH PPS case-mix weights using the most current, complete data available at the time of rulemaking; implements the third year of a 3-year phase-in of a reduction to the national, standardized 60-day episode payment to account for estimated case-mix growth unrelated to increases in patient acuity (that is, nominal case-mix growth) between calendar year (CY) 2012 and CY 2014; and discusses our efforts to monitor the potential impacts of the rebasing adjustments that were implemented in CY 2014 through CY 2017. In addition, this rule finalizes changes to the Home Health Value-Based Purchasing (HHVBP) Model and to the Home Health Quality Reporting Program (HH QRP). We are not finalizing the implementation of the Home Health Groupings Model (HHGM) in this final rule.

  8. Adjusting a cancer mortality-prediction model for disease status-related eligibility criteria

    Directory of Open Access Journals (Sweden)

    Kimmel Marek

    2011-05-01

    Background: Volunteering participants in disease studies tend to be healthier than the general population, partially due to specific enrollment criteria. Using modeling to accurately predict outcomes of cohort studies enrolling volunteers requires adjusting for the bias introduced in this way. Here we propose a new method to account for the effect on disease-specific mortality of a specific form of healthy volunteer bias, resulting from imposing disease status-related eligibility criteria, by explicitly modeling the length of the time interval between the moment when the subject becomes ineligible for the study and the outcome. Methods: Using survival time data from 1190 newly diagnosed lung cancer patients at MD Anderson Cancer Center, we model the time from clinical lung cancer diagnosis to death using an exponential distribution to approximate the length of this interval for a study where lung cancer death serves as the outcome. Incorporating this interval into our previously developed lung cancer risk model, we adjust for the effect of disease status-related eligibility criteria in predicting the number of lung cancer deaths in the control arm of CARET. The effect of the adjustment using the MD Anderson-derived approximation is compared with that based on SEER data. Results: Using the adjustment developed in conjunction with our existing lung cancer model, we are able to accurately predict the number of lung cancer deaths observed in the control arm of CARET. Conclusions: The resulting adjustment was accurate in predicting the lower rates of disease observed in the early years while maintaining reasonable prediction ability in the later years of the trial. This method could be used to adjust for, or predict, the duration and relative effect of any possible biases related to disease-specific eligibility criteria in modeling studies of volunteer-based cohorts.
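The exponential-interval idea in the Methods can be sketched as follows; the mean interval below is an illustrative placeholder, not the MD Anderson estimate.

```python
import math

# Sketch of the exponential approximation described in Methods: the interval
# from clinical diagnosis to death is modeled as Exponential(rate = 1/mean).
# The mean below is an illustrative placeholder, not the MD Anderson estimate.
mean_interval_years = 1.5
rate = 1.0 / mean_interval_years

def survival_beyond(t):
    """P(diagnosis-to-death interval exceeds t years)."""
    return math.exp(-rate * t)

# An eligibility rule excluding anyone diagnosed within the last 2 years
# would, under this model, correspond to the survival fraction below:
print(round(survival_beyond(2.0), 3))  # about 0.26
```

Weighting predicted deaths by this survival fraction is the kind of correction that reproduces the lower early-year mortality seen among screened volunteers.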

  9. A Unified Model of Geostrophic Adjustment and Frontogenesis

    Science.gov (United States)

    Taylor, John; Shakespeare, Callum

    2013-11-01

    Fronts, or regions with strong horizontal density gradients, are ubiquitous and dynamically important features of the ocean and atmosphere. In the ocean, fronts are associated with enhanced air-sea fluxes, turbulence, and biological productivity, while atmospheric fronts are associated with some of the most extreme weather events. Here, we describe a new mathematical framework for describing the formation of fronts, or frontogenesis. This framework unifies two classical problems in geophysical fluid dynamics, geostrophic adjustment and strain-driven frontogenesis, and provides a number of important extensions beyond previous efforts. The model solutions closely match numerical simulations during the early stages of frontogenesis, and provide a means to describe the development of turbulence at mature fronts.

  10. Emotional closeness to parents and grandparents: A moderated mediation model predicting adolescent adjustment.

    Science.gov (United States)

    Attar-Schwartz, Shalhevet

    2015-09-01

    Warm and emotionally close relationships with parents and grandparents have been found in previous studies to be linked with better adolescent adjustment. The present study, informed by Family Systems Theory and Intergenerational Solidarity Theory, uses a moderated mediation model analyzing the contribution of the dynamics of these intergenerational relationships to adolescent adjustment. Specifically, it examines the mediating role of emotional closeness to the closest grandparent in the relationship between emotional closeness to a parent (the offspring of the closest grandparent) and adolescent adjustment difficulties. The model also examines the moderating role of emotional closeness to parents in the relationship between emotional closeness to grandparents and adjustment difficulties. The study was based on a sample of 1,405 Jewish Israeli secondary school students (ages 12-18) who completed a structured questionnaire. It was found that emotional closeness to the closest grandparent was more strongly associated with reduced adjustment difficulties among adolescents with higher levels of emotional closeness to their parents. In addition, the relationship between adolescent adjustment and emotional closeness to parents was partially mediated by emotional closeness to grandparents. Examining the family conditions under which adolescents' relationships with grandparents are stronger and more beneficial for them can help elucidate variations in grandparent-grandchild ties and expand our understanding of the mechanisms that shape child outcomes. (c) 2015 APA, all rights reserved.

  11. Controlling chaos based on an adaptive adjustment mechanism

    International Nuclear Information System (INIS)

    Zheng Yongai

    2006-01-01

    In this paper, we extend the ideas and techniques developed by Huang [Huang W. Stabilizing nonlinear dynamical systems by an adaptive adjustment mechanism. Phys Rev E 2000;61:R1012-5] for controlling discrete-time chaotic systems using an adaptive adjustment mechanism to continuous-time chaotic systems. Two control approaches, namely the adaptive adjustment mechanism (AAM) and the modified adaptive adjustment mechanism (MAAM), are investigated. In both cases, sufficient conditions for the stabilization of chaotic systems are given analytically. Simulation results on the Chen chaotic system have verified the effectiveness of the proposed techniques
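For the discrete-time case that this record extends, Huang's adaptive adjustment mechanism replaces the map with a convex combination of the map and the identity. A minimal sketch on the chaotic logistic map (the gain value is a choice, not taken from the paper):

```python
# Sketch of the discrete-time adaptive adjustment mechanism (AAM) from
# Huang (2000), applied to the chaotic logistic map f(x) = 4x(1-x):
# iterate x_{n+1} = (1 - g)*f(x_n) + g*x_n. For a suitable gain g the
# previously unstable fixed point x* = 0.75 becomes attracting.
def f(x):
    return 4 * x * (1 - x)

g = 0.5   # stability needs |(1 - g)*f'(x*) + g| < 1, i.e. 1/3 < g < 1 here
x = 0.3
for _ in range(200):
    x = (1 - g) * f(x) + g * x
print(round(x, 6))  # settles on the fixed point 0.75
```

Note that the adjustment leaves the fixed points of f unchanged; it only alters their stability, which is the appeal of the mechanism.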

  12. Predicting Couples’ Happiness Based on Spiritual Intelligence and Lovemaking Styles: The Mediating Role of Marital adjustment

    Directory of Open Access Journals (Sweden)

    ZAHRA KERMANI MAMAZANDI

    2017-02-01

    The purpose of this study was to predict couples' happiness based on spiritual intelligence and lovemaking styles, with the mediating role of marital adjustment. To this end, 360 married male and female students living in the Tehran University dormitory were randomly selected and asked to answer the items of Sternberg's Love Questionnaire, King's Spiritual Intelligence Scale, the Oxford Happiness Questionnaire and Spanier's Marital Adjustment Questionnaire. Structural equation modeling (path analysis) was used for data analysis. The results of the path analysis showed that spiritual intelligence and lovemaking styles have direct effects on couples' happiness. Spiritual intelligence did not have an indirect effect on couples' happiness, whereas lovemaking styles had indirect effects on couples' happiness through marital adjustment. Altogether, the results of this research show that marital adjustment has a mediating role in predicting couples' happiness based on spiritual intelligence and lovemaking styles.

  13. The Implementation of an Integrative Model of Adventure-Based Counseling and Adlerian Play Therapy Value-Based Taught by Parents to Children to Increase Adjustment Ability of Preschool Children

    Directory of Open Access Journals (Sweden)

    Rita Eka Izzaty

    2016-11-01

    This study was conducted for two reasons. First, the preschool age is the foundation for subsequent development. Second, previous research findings show behavioral problems that affect subsequent development. This study aims to increase children's social ability by employing an Integrative Model of Adventure-Based Counseling and Adlerian Play Therapy, a counseling model that emphasizes the importance of play, providing opportunities for children to express their feelings in natural situations and gain insight into personality and environment, modified with the cultural values taught by parents to children. This study employed an action research design. The preliminary data collection consisted of a literature study, a survey of the cultural values taught, and the selection of a counseling model. The subjects were four preschool children (4-6 years old) with behavioral problems. The study was conducted in two cycles with several steps: planning, action, evaluation, and reflection. The findings show that the Integrative Model of Adventure-Based Counseling and Adlerian Play Therapy can increase children's social ability and decrease non-adaptive behavior. The reflection on employing the counseling model modified with cultural values taught by parents is that counselors must carefully consider the duration of the implementation, the children's condition, the counselors' condition, the type of play, and the purpose of each step, which must be detailed.

  14. Health-based risk adjustment: is inpatient and outpatient diagnostic information sufficient?

    Science.gov (United States)

    Lamers, L M

    Adequate risk adjustment is critical to the success of market-oriented health care reforms in many countries. Currently used risk adjusters based on demographic and diagnostic cost groups (DCGs) do not reflect expected costs accurately. This study examines the simultaneous predictive accuracy of inpatient and outpatient morbidity measures and prior costs. DCGs, pharmacy cost groups (PCGs), and prior year's costs improve the predictive accuracy of the demographic model substantially. DCGs and PCGs seem complementary in their ability to predict future costs. However, this study shows that the combination of DCGs and PCGs still leaves room for cream skimming.
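
The predictive-accuracy comparison described above can be sketched with synthetic data: a hypothetical cost model in which expected costs depend on a demographic variable plus DCG and PCG morbidity flags, with R² showing how much the morbidity measures add over a demographic-only model. All data and coefficients here are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5000
age = rng.uniform(20, 80, n)        # demographic risk adjuster
dcg = rng.integers(0, 2, n)         # inpatient diagnostic cost group flag
pcg = rng.integers(0, 2, n)         # pharmacy cost group flag
# hypothetical cost model: both morbidity measures add expected cost
cost = 100 + 5 * age + 400 * dcg + 300 * pcg + rng.normal(0, 200, n)

def r2(X, y):
    """R^2 of an ordinary least-squares fit with intercept."""
    X = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return 1 - (y - X @ beta).var() / y.var()

r2_demo = r2(age[:, None], cost)
r2_full = r2(np.column_stack([age, dcg, pcg]), cost)
print(f"demographic only: {r2_demo:.3f}, with DCGs and PCGs: {r2_full:.3f}")
```

Because the DCG and PCG flags carry independent cost information in this toy setup, the full model's R² exceeds the demographic-only one, mirroring the complementarity the abstract reports.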

  15. Singularity-sensitive gauge-based radar rainfall adjustment methods for urban hydrological applications

    Directory of Open Access Journals (Sweden)

    L.-P. Wang

    2015-09-01

    Full Text Available Gauge-based radar rainfall adjustment techniques have been widely used to improve the applicability of radar rainfall estimates to large-scale hydrological modelling. However, their use for urban hydrological applications is limited as they were mostly developed based upon Gaussian approximations and therefore tend to smooth off so-called "singularities" (features of a non-Gaussian field) that can be observed in the fine-scale rainfall structure. Overlooking the singularities could be critical, given that their distribution is highly consistent with that of local extreme magnitudes. This deficiency may cause large errors in the subsequent urban hydrological modelling. To address this limitation and improve the applicability of adjustment techniques at urban scales, a method is proposed herein which incorporates a local singularity analysis into existing adjustment techniques and allows the preservation of the singularity structures throughout the adjustment process. In this paper the proposed singularity analysis is incorporated into the Bayesian merging technique and the performance of the resulting singularity-sensitive method is compared with that of the original Bayesian (non singularity-sensitive) technique and the commonly used mean field bias adjustment. This test is conducted using as case study four storm events observed in the Portobello catchment (53 km2) (Edinburgh, UK) during 2011 and for which radar estimates, dense rain gauge and sewer flow records, as well as a recently calibrated urban drainage model were available. The results suggest that, in general, the proposed singularity-sensitive method can effectively preserve the non-normality in local rainfall structure, while retaining the ability of the original adjustment techniques to generate nearly unbiased estimates. Moreover, the ability of the singularity-sensitive technique to preserve the non-normality in rainfall estimates often leads to better reproduction of the urban

  16. Singularity-sensitive gauge-based radar rainfall adjustment methods for urban hydrological applications

    Science.gov (United States)

    Wang, L.-P.; Ochoa-Rodríguez, S.; Onof, C.; Willems, P.

    2015-09-01

    Gauge-based radar rainfall adjustment techniques have been widely used to improve the applicability of radar rainfall estimates to large-scale hydrological modelling. However, their use for urban hydrological applications is limited as they were mostly developed based upon Gaussian approximations and therefore tend to smooth off so-called "singularities" (features of a non-Gaussian field) that can be observed in the fine-scale rainfall structure. Overlooking the singularities could be critical, given that their distribution is highly consistent with that of local extreme magnitudes. This deficiency may cause large errors in the subsequent urban hydrological modelling. To address this limitation and improve the applicability of adjustment techniques at urban scales, a method is proposed herein which incorporates a local singularity analysis into existing adjustment techniques and allows the preservation of the singularity structures throughout the adjustment process. In this paper the proposed singularity analysis is incorporated into the Bayesian merging technique and the performance of the resulting singularity-sensitive method is compared with that of the original Bayesian (non singularity-sensitive) technique and the commonly used mean field bias adjustment. This test is conducted using as case study four storm events observed in the Portobello catchment (53 km2) (Edinburgh, UK) during 2011 and for which radar estimates, dense rain gauge and sewer flow records, as well as a recently calibrated urban drainage model were available. The results suggest that, in general, the proposed singularity-sensitive method can effectively preserve the non-normality in local rainfall structure, while retaining the ability of the original adjustment techniques to generate nearly unbiased estimates. Moreover, the ability of the singularity-sensitive technique to preserve the non-normality in rainfall estimates often leads to better reproduction of the urban drainage system
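
The mean field bias adjustment that both records use as a benchmark is the simplest of the gauge-based techniques: a single multiplicative factor matches the radar field to the gauge totals. A minimal sketch with synthetic fields, where the gauge values are generated with a known bias so the estimate can be checked:

```python
import numpy as np

rng = np.random.default_rng(1)
radar = rng.gamma(2.0, 2.0, size=(50, 50))    # radar rainfall field (mm)
gauge_ix = rng.integers(0, 50, size=(10, 2))  # pixels holding rain gauges
true_bias = 1.4                               # radar underestimates by this factor
gauge = radar[gauge_ix[:, 0], gauge_ix[:, 1]] * true_bias

# mean field bias: one multiplicative factor matching gauge totals
B = gauge.sum() / radar[gauge_ix[:, 0], gauge_ix[:, 1]].sum()
adjusted = radar * B
print(round(B, 2))  # → 1.4
```

A single field-wide factor corrects the overall magnitude but, as the abstract notes, cannot preserve fine-scale singularity structure; that is what the proposed singularity-sensitive merging adds.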

  17. Adjustable wideband reflective converter based on cut-wire metasurface

    International Nuclear Information System (INIS)

    Zhang, Linbo; Zhou, Peiheng; Chen, Haiyan; Lu, Haipeng; Xie, Jianliang; Deng, Longjiang

    2015-01-01

    We present the design, analysis, and measurement of a broadband reflective converter using a cut-wire (CW) metasurface. Based on the characteristics of LC resonances, the proposed reflective converter can rotate a linearly polarized (LP) wave into its cross-polarized wave at three resonance frequencies, or convert the LP wave to a circularly polarized (CP) wave at two other resonance frequencies. Furthermore, the broad-band properties of the polarization conversion can be sustained when the incident wave is a CP wave. The polarization states can be adjusted easily by changing the length and width of the CW. The measured results show that a polarization conversion ratio (PCR) over 85% can be achieved from 6.16 GHz to 16.56 GHz for both LP and CP incident waves. The origin of the polarization conversion is interpreted by the theory of microwave antennas, with equivalent impedance and electromagnetic (EM) field distributions. With its simple geometry and multiple broad frequency bands, the proposed converter has potential applications in the area of selective polarization control. (paper)
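
The polarization conversion ratio quoted above is conventionally computed from the co- and cross-polarized reflection coefficients as PCR = |r_cross|² / (|r_cross|² + |r_co|²). A tiny sketch with hypothetical coefficient values:

```python
def pcr(r_cross, r_co):
    """Polarization conversion ratio from complex reflection coefficients."""
    return abs(r_cross) ** 2 / (abs(r_cross) ** 2 + abs(r_co) ** 2)

# hypothetical coefficients near a cross-polarization resonance
print(round(pcr(0.95, 0.2), 3))  # → 0.958
```

A PCR above 0.85, as measured for this converter from 6.16 GHz to 16.56 GHz, means the reflected power is dominated by the cross-polarized component.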

  18. Adjusting the Adjusted X[superscript 2]/df Ratio Statistic for Dichotomous Item Response Theory Analyses: Does the Model Fit?

    Science.gov (United States)

    Tay, Louis; Drasgow, Fritz

    2012-01-01

    Two Monte Carlo simulation studies investigated the effectiveness of the mean adjusted X[superscript 2]/df statistic proposed by Drasgow and colleagues and, because of problems with the method, a new approach for assessing the goodness of fit of an item response theory model was developed. It has been previously recommended that mean adjusted…

  19. design and implementation of a microcontroller-based adjustable ...

    African Journals Online (AJOL)

    HOD

    1, DEPARTMENT OF COMPUTER ENGINEERING, UNIVERSITY OF BENIN, BENIN CITY, EDO STATE, NIGERIA ... ICs, to design a user friendly charger circuit that manually adjusts the voltage .... the multivibrator integrated circuit does it.

  20. Testing a social ecological model for relations between political violence and child adjustment in Northern Ireland.

    Science.gov (United States)

    Cummings, E Mark; Merrilees, Christine E; Schermerhorn, Alice C; Goeke-Morey, Marcie C; Shirlow, Peter; Cairns, Ed

    2010-05-01

    Relations between political violence and child adjustment are matters of international concern. Past research demonstrates the significance of community, family, and child psychological processes in child adjustment, supporting study of interrelations between multiple social ecological factors and child adjustment in contexts of political violence. Testing a social ecological model, 300 mothers and their children (M = 12.28 years, SD = 1.77) from Catholic and Protestant working class neighborhoods in Belfast, Northern Ireland, completed measures of community discord, family relations, and children's regulatory processes (i.e., emotional security) and outcomes. Historical political violence in neighborhoods based on objective records (i.e., politically motivated deaths) were related to family members' reports of current sectarian antisocial behavior and nonsectarian antisocial behavior. Interparental conflict and parental monitoring and children's emotional security about both the community and family contributed to explanatory pathways for relations between sectarian antisocial behavior in communities and children's adjustment problems. The discussion evaluates support for social ecological models for relations between political violence and child adjustment and its implications for understanding relations in other parts of the world.

  1. R.M. Solow Adjusted Model of Economic Growth

    Directory of Open Access Journals (Sweden)

    Ion Gh. Rosca

    2007-05-01

    The analysis part of the model is based on the study of the equilibrium in the continuous case, with some interpretations of the discrete one, using the state diagram. The optimization problem at the economic level is also used; it is built up of a specified number of representative consumers and firms in order to reveal the interaction between these elements.

  2. Disaster Hits Home: A Model of Displaced Family Adjustment after Hurricane Katrina

    Science.gov (United States)

    Peek, Lori; Morrissey, Bridget; Marlatt, Holly

    2011-01-01

    The authors explored individual and family adjustment processes among parents (n = 30) and children (n = 55) who were displaced to Colorado after Hurricane Katrina. Drawing on in-depth interviews with 23 families, this article offers an inductive model of displaced family adjustment. Four stages of family adjustment are presented in the model: (a)…

  3. Estimation of group means when adjusting for covariates in generalized linear models.

    Science.gov (United States)

    Qu, Yongming; Luo, Junxiang

    2015-01-01

    Generalized linear models are commonly used to analyze categorical data such as binary, count, and ordinal outcomes. Adjusting for important prognostic factors or baseline covariates in generalized linear models may improve estimation efficiency. The model-based mean for a treatment group produced by most software packages estimates the response at the mean covariate, not the mean response for this treatment group in the studied population. Although this is not an issue for linear models, the model-based group mean estimates in generalized linear models can be seriously biased estimates of the true group means. We propose a new method to estimate the group mean consistently, with the corresponding variance estimation. Simulations showed that the proposed method produces an unbiased estimator for the group means and provides the correct coverage probability. The proposed method was applied to analyze hypoglycemia data from clinical trials in diabetes. Copyright © 2014 John Wiley & Sons, Ltd.
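
The distinction at the heart of this abstract, the response at the mean covariate versus the mean response over the studied population, can be illustrated for a logistic model: because the inverse link is nonlinear, the two quantities differ. The coefficients and covariate distribution below are invented for illustration.

```python
import numpy as np

def expit(z):
    """Inverse logit link."""
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(2)
x = rng.normal(0.0, 2.0, 1000)      # baseline covariate for one treatment group
beta0, beta1 = -1.0, 1.5            # hypothetical fitted logistic coefficients

# "response at the mean covariate" -- what most packages report as the group mean
mean_at_xbar = expit(beta0 + beta1 * x.mean())
# mean response for the studied population: average the predicted responses
marginal_mean = expit(beta0 + beta1 * x).mean()

print(f"at mean covariate: {mean_at_xbar:.3f}, marginal: {marginal_mean:.3f}")
```

For a linear model the two numbers would coincide; here they differ by roughly 0.1 on the probability scale, which is the bias the proposed method removes.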

  4. Adjustment costs in a two-capital growth model

    Czech Academy of Sciences Publication Activity Database

    Duczynski, Petr

    2002-01-01

    Roč. 26, č. 5 (2002), s. 837-850 ISSN 0165-1889 R&D Projects: GA AV ČR KSK9058117 Institutional research plan: CEZ:AV0Z7085904 Keywords : adjustment costs * capital mobility * convergence * human capital Subject RIV: AH - Economics Impact factor: 0.738, year: 2002

  5. NWP-Based Adjustment of IMERG Precipitation for Flood-Inducing Complex Terrain Storms: Evaluation over CONUS

    Directory of Open Access Journals (Sweden)

    Xinxuan Zhang

    2018-04-01

    Full Text Available This paper evaluates the use of precipitation forecasts from a numerical weather prediction (NWP) model for near-real-time satellite precipitation adjustment, based on 81 flood-inducing heavy precipitation events in seven mountainous regions over the conterminous United States. The study is facilitated by the National Center for Atmospheric Research (NCAR) real-time ensemble forecasts (called model), the Integrated Multi-satellitE Retrievals for GPM (IMERG) near-real-time precipitation product (called raw IMERG) and the Stage IV multi-radar/multi-sensor precipitation product (called Stage IV), used as a reference. We evaluated four precipitation datasets (the model forecasts, raw IMERG, gauge-adjusted IMERG and model-adjusted IMERG) through comparisons against Stage IV at six-hourly and event length scales. The raw IMERG product consistently underestimated heavy precipitation in all study regions, while the domain average rainfall magnitudes exhibited by the model were fairly accurate. The model exhibited error in the locations of intense precipitation over inland regions, however, while the IMERG product generally showed correct spatial precipitation patterns. Overall, the model-adjusted IMERG product performed best over inland regions by taking advantage of the more accurate rainfall magnitude from NWP and the spatial distribution from IMERG. In coastal regions, although model-based adjustment effectively improved the performance of the raw IMERG product, the model forecast performed even better. The IMERG product could benefit from gauge-based adjustment as well, but the improvement from model-based adjustment was consistently more significant.
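
One simple form of model-based magnitude adjustment in the spirit of this study can be sketched as rescaling the satellite field so its domain total matches the NWP forecast while keeping the satellite's spatial pattern. The actual adjustment procedure in the paper is more elaborate; this is only an illustrative reduction with synthetic fields.

```python
import numpy as np

rng = np.random.default_rng(3)
imerg = rng.gamma(2.0, 1.0, size=(40, 40))   # satellite field: good pattern, low magnitude
nwp_total = 1.5 * imerg.sum()                # NWP domain total (more accurate magnitude)

# rescale the satellite field so its domain total matches the NWP forecast,
# preserving the satellite spatial pattern
adjusted = imerg * (nwp_total / imerg.sum())
print(np.isclose(adjusted.sum(), nwp_total))  # → True
```

The rescaled field keeps IMERG's spatial distribution exactly (every pixel is multiplied by the same factor) while taking its magnitude from the NWP forecast, which is the combination the abstract credits for the best inland performance.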

  6. Block adjustment of airborne InSAR based on interferogram phase and POS data

    Science.gov (United States)

    Yue, Xijuan; Zhao, Yinghui; Han, Chunming; Dou, Changyong

    2015-12-01

    High-precision surface elevation information over large areas can be obtained efficiently by airborne Interferometric Synthetic Aperture Radar (InSAR) systems, which have recently become an important tool for acquiring remote sensing data and performing mapping in areas where surveying and mapping by spaceborne satellite or field work is difficult. Based on a study of the three-dimensional (3D) positioning model using interferogram phase and Position and Orientation System (POS) data, and of the block adjustment error model, a block adjustment method to produce a seamless wide-area mosaic product from airborne InSAR data is proposed in this paper. The effect of 6 parameters, including the trajectory and attitude of the aircraft, baseline length and inclination angle, slant range, and interferometric phase, on the 3D positioning accuracy is quantitatively analyzed. Using data acquired in a field campaign conducted in Mianyang County, Sichuan Province, China in June 2011, a seamless mosaic Digital Elevation Model (DEM) product was generated from 76 images in 4 flight strips by the proposed block adjustment model. The residuals of ground control points (GCPs), the absolute positioning accuracy of check points (CPs), and the relative positioning accuracy of tie points (TPs) in both the same and adjacent strips were assessed. The experimental results suggest that the DEM and Digital Orthophoto Map (DOM) products generated from airborne InSAR data with sparse GCPs can meet the mapping accuracy requirement at a scale of 1:10 000.

  7. Spoken language identification based on the enhanced self-adjusting extreme learning machine approach

    Science.gov (United States)

    Tiun, Sabrina; AL-Dhief, Fahad Taha; Sammour, Mahmoud A. M.

    2018-01-01

    Spoken Language Identification (LID) is the process of determining and classifying natural language from a given content and dataset. Typically, data must be processed to extract useful features to perform LID. The extracting features for LID, based on literature, is a mature process where the standard features for LID have already been developed using Mel-Frequency Cepstral Coefficients (MFCC), Shifted Delta Cepstral (SDC), the Gaussian Mixture Model (GMM) and ending with the i-vector based framework. However, the process of learning based on extract features remains to be improved (i.e. optimised) to capture all embedded knowledge on the extracted features. The Extreme Learning Machine (ELM) is an effective learning model used to perform classification and regression analysis and is extremely useful to train a single hidden layer neural network. Nevertheless, the learning process of this model is not entirely effective (i.e. optimised) due to the random selection of weights within the input hidden layer. In this study, the ELM is selected as a learning model for LID based on standard feature extraction. One of the optimisation approaches of ELM, the Self-Adjusting Extreme Learning Machine (SA-ELM) is selected as the benchmark and improved by altering the selection phase of the optimisation process. The selection process is performed incorporating both the Split-Ratio and K-Tournament methods, the improved SA-ELM is named Enhanced Self-Adjusting Extreme Learning Machine (ESA-ELM). The results are generated based on LID with the datasets created from eight different languages. The results of the study showed excellent superiority relating to the performance of the Enhanced Self-Adjusting Extreme Learning Machine LID (ESA-ELM LID) compared with the SA-ELM LID, with ESA-ELM LID achieving an accuracy of 96.25%, as compared to the accuracy of SA-ELM LID of only 95.00%. PMID:29672546

  8. Spoken language identification based on the enhanced self-adjusting extreme learning machine approach.

    Science.gov (United States)

    Albadr, Musatafa Abbas Abbood; Tiun, Sabrina; Al-Dhief, Fahad Taha; Sammour, Mahmoud A M

    2018-01-01

    Spoken Language Identification (LID) is the process of determining and classifying natural language from a given content and dataset. Typically, data must be processed to extract useful features to perform LID. The extracting features for LID, based on literature, is a mature process where the standard features for LID have already been developed using Mel-Frequency Cepstral Coefficients (MFCC), Shifted Delta Cepstral (SDC), the Gaussian Mixture Model (GMM) and ending with the i-vector based framework. However, the process of learning based on extract features remains to be improved (i.e. optimised) to capture all embedded knowledge on the extracted features. The Extreme Learning Machine (ELM) is an effective learning model used to perform classification and regression analysis and is extremely useful to train a single hidden layer neural network. Nevertheless, the learning process of this model is not entirely effective (i.e. optimised) due to the random selection of weights within the input hidden layer. In this study, the ELM is selected as a learning model for LID based on standard feature extraction. One of the optimisation approaches of ELM, the Self-Adjusting Extreme Learning Machine (SA-ELM) is selected as the benchmark and improved by altering the selection phase of the optimisation process. The selection process is performed incorporating both the Split-Ratio and K-Tournament methods, the improved SA-ELM is named Enhanced Self-Adjusting Extreme Learning Machine (ESA-ELM). The results are generated based on LID with the datasets created from eight different languages. The results of the study showed excellent superiority relating to the performance of the Enhanced Self-Adjusting Extreme Learning Machine LID (ESA-ELM LID) compared with the SA-ELM LID, with ESA-ELM LID achieving an accuracy of 96.25%, as compared to the accuracy of SA-ELM LID of only 95.00%.
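
The basic ELM that SA-ELM and ESA-ELM build on is compact: input-to-hidden weights are drawn at random and only the output weights are solved, by least squares. A minimal sketch on a toy two-class problem standing in for language-ID feature vectors; the optimisation of the random layer, which is what SA-ELM and ESA-ELM add, is omitted.

```python
import numpy as np

rng = np.random.default_rng(4)

def elm_train(X, Y, hidden=100):
    """Basic ELM: random input weights, least-squares output weights."""
    W = rng.normal(size=(X.shape[1], hidden))
    b = rng.normal(size=hidden)
    H = np.tanh(X @ W + b)            # hidden-layer activations
    beta = np.linalg.pinv(H) @ Y      # Moore-Penrose solution for output weights
    return W, b, beta

def elm_predict(X, W, b, beta):
    return np.tanh(X @ W + b) @ beta

# toy two-class problem standing in for language-ID feature vectors
X = rng.normal(size=(200, 8))
Y = (X[:, 0] + X[:, 1] > 0).astype(float)[:, None]
W, b, beta = elm_train(X, Y)
acc = ((elm_predict(X, W, b, beta) > 0.5) == (Y > 0.5)).mean()
print(f"training accuracy: {acc:.2f}")
```

The random selection of the input-layer weights is exactly the step the abstract flags as sub-optimal; SA-ELM and ESA-ELM search over such random layers rather than accepting a single draw.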

  9. FLC based adjustable speed drives for power quality enhancement

    Directory of Open Access Journals (Sweden)

    Sukumar Darly

    2010-01-01

    Full Text Available This study describes a new approach based on a fuzzy algorithm to suppress the current harmonic content in the output of an inverter. Inverter systems using fuzzy controllers provide ride-through capability during voltage sags, reduce harmonics, improve the power factor, and offer high reliability, less electromagnetic interference noise, low common-mode noise, and an extended output voltage range. A feasibility test is implemented by building a model of a three-phase impedance-source inverter, which is designed and controlled on the basis of the proposed considerations. It is verified from a practical point of view that these new approaches are more effective and acceptable for minimizing harmonic distortion and improving power quality. Due to the complex algorithm, their realization often calls for a compromise between cost and performance. The proposed optimizing strategies may be applied in variable-frequency dc-ac inverters, UPSs, and ac drives.

  10. Radar adjusted data versus modelled precipitation: a case study over Cyprus

    Directory of Open Access Journals (Sweden)

    M. Casaioli

    2006-01-01

    Full Text Available In the framework of the European VOLTAIRE project (Fifth Framework Programme), simulations of relatively heavy precipitation events which occurred over the island of Cyprus were performed by means of numerical atmospheric models. One of the aims of the project was indeed the comparison of modelled rainfall fields with multi-sensor observations. Thus, for the 5 March 2003 event, the 24-h accumulated precipitation BOlogna Limited Area Model (BOLAM) forecast was compared with the available observations reconstructed from ground-based radar data and estimated from rain gauge data. Since radar data may be affected by errors depending on the distance from the radar, these data can be range-adjusted using other sensors. In this case, the Precipitation Radar aboard the Tropical Rainfall Measuring Mission (TRMM) satellite was used to adjust the ground-based radar data with a two-parameter scheme. Thus, in this work, two observational fields were employed: the rain gauge gridded analysis and the observational analysis obtained by merging the range-adjusted radar and rain gauge fields. In order to verify the modelled precipitation, both non-parametric skill scores and the contiguous rain area (CRA) analysis were applied. Skill score results show some differences when using the two observational fields. CRA results are instead quite in agreement, showing that in general a 0.27° eastward shift optimizes the forecast with respect to the two observational analyses. This result is also supported by a subjective inspection of the shifted forecast field, whose gross features agree with the analysis pattern more than those of the non-shifted forecast. However, some questions, especially regarding the effect of other range adjustment techniques, remain open and need to be addressed in future work.

  11. Player Modeling Using HOSVD towards Dynamic Difficulty Adjustment in Videogames

    OpenAIRE

    Anagnostou , Kostas; Maragoudakis , Manolis

    2012-01-01

    Part 3: Second International Workshop on Computational Intelligence in Software Engineering (CISE 2012); International audience; In this work, we propose and evaluate a Higher Order Singular Value Decomposition (HOSVD) of a tensor as a means to classify player behavior and adjust game difficulty dynamically. Applying this method to player data collected during a plethora of game sessions resulted in a reduction of the dimensionality of the classification problem and a robust classification of...
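
A plain HOSVD of the kind applied to the player-session data can be computed from the SVDs of the mode unfoldings. A minimal sketch on a random players × features × sessions tensor (shapes invented for illustration), verifying that the core tensor and factor matrices reconstruct the original:

```python
import numpy as np

def unfold(T, mode):
    """Mode-k unfolding: move axis k to the front and flatten the rest."""
    return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)

def hosvd(T):
    """Higher Order SVD: one factor matrix per mode, plus the core tensor."""
    Us = [np.linalg.svd(unfold(T, k), full_matrices=False)[0] for k in range(T.ndim)]
    core = T
    for k, U in enumerate(Us):
        core = np.moveaxis(np.tensordot(U.T, np.moveaxis(core, k, 0), axes=1), 0, k)
    return core, Us

# toy player-session tensor: players x features x sessions
rng = np.random.default_rng(5)
T = rng.normal(size=(6, 4, 5))
core, Us = hosvd(T)

# reconstruct: multiply the core by each factor matrix along its mode
R = core
for k, U in enumerate(Us):
    R = np.moveaxis(np.tensordot(U, np.moveaxis(R, k, 0), axes=1), 0, k)
print(np.allclose(R, T))  # → True
```

Truncating the factor matrices to their leading columns is what yields the dimensionality reduction the abstract describes before classifying player behavior.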

  12. Structural Adjustment Policy Experiments: The Use of Philippine CGE Models

    OpenAIRE

    Cororaton, Caesar B.

    1994-01-01

    This paper reviews the general structure of the following computable general equilibrium (CGE) models: the APEX model, Habito’s second version of the PhilCGE model, Cororaton’s CGE model, and Bautista’s first CGE model. These models are chosen as they represent the range of recently constructed CGE models of the Philippine economy. They also represent two schools of thought in CGE modeling: the well-defined neoclassical, Walrasian, general equilibrium school, where the market-clearing variable...

  13. Risk-adjusted Outcomes of Clinically Relevant Pancreatic Fistula Following Pancreatoduodenectomy: A Model for Performance Evaluation.

    Science.gov (United States)

    McMillan, Matthew T; Soi, Sameer; Asbun, Horacio J; Ball, Chad G; Bassi, Claudio; Beane, Joal D; Behrman, Stephen W; Berger, Adam C; Bloomston, Mark; Callery, Mark P; Christein, John D; Dixon, Elijah; Drebin, Jeffrey A; Castillo, Carlos Fernandez-Del; Fisher, William E; Fong, Zhi Ven; House, Michael G; Hughes, Steven J; Kent, Tara S; Kunstman, John W; Malleo, Giuseppe; Miller, Benjamin C; Salem, Ronald R; Soares, Kevin; Valero, Vicente; Wolfgang, Christopher L; Vollmer, Charles M

    2016-08-01

    To evaluate surgical performance in pancreatoduodenectomy using clinically relevant postoperative pancreatic fistula (CR-POPF) occurrence as a quality indicator. Accurate assessment of surgeon and institutional performance requires (1) standardized definitions for the outcome of interest and (2) a comprehensive risk-adjustment process to control for differences in patient risk. This multinational, retrospective study of 4301 pancreatoduodenectomies involved 55 surgeons at 15 institutions. Risk for CR-POPF was assessed using the previously validated Fistula Risk Score, and pancreatic fistulas were stratified by International Study Group criteria. CR-POPF variability was evaluated and hierarchical regression analysis assessed individual surgeon and institutional performance. There was considerable variability in both CR-POPF risk and occurrence. Factors increasing the risk for CR-POPF development included increasing Fistula Risk Score (odds ratio 1.49 per point) and another factor with an odds ratio of 3.30; performance outliers were identified at the surgeon and institutional levels. Of the top 10 surgeons (≥15 cases) for non-risk-adjusted performance, only 6 remained in this high-performing category following risk adjustment. This analysis of pancreatic fistulas following pancreatoduodenectomy demonstrates considerable variability in both the risk and occurrence of CR-POPF among surgeons and institutions. Disparities in patient risk between providers reinforce the need for comprehensive, risk-adjusted modeling when assessing performance based on procedure-specific complications. Furthermore, beyond inherent patient risk factors, surgical decision-making influences fistula outcomes.

  14. A metallic solution model with adjustable parameter for describing ternary thermodynamic properties from its binary constituents

    International Nuclear Information System (INIS)

    Fang Zheng; Qiu Guanzhou

    2007-01-01

    A metallic solution model with an adjustable parameter k has been developed to predict the thermodynamic properties of ternary systems from those of their three constituent binaries. In the present model, the excess Gibbs free energy of a ternary mixture is expressed as a weighted probability sum of those of the binaries, and the value of k is determined based on the assumption that ternary interaction generally strengthens the mixing effects in metallic solutions with weak interactions, making the Gibbs free energy of mixing of the ternary system more negative than before the interaction is considered. This point is never considered in currently reported models, which differ only in their geometrical definition of the molar values of the components, a definition that does not involve thermodynamic principles and is completely empirical. The current model describes experimental results very well and, by adjusting the k value, also agrees with models widely used in the literature. Three ternary systems, Mg-Cu-Ni, Zn-In-Cd, and Cd-Bi-Pb, are recalculated to demonstrate the method of determining k and the precision of the model. The results of the calculations, especially those for the Mg-Cu-Ni system, are better than those predicted by the current models in the literature.

  15. Using Green's Functions to initialize and adjust a global, eddying ocean biogeochemistry general circulation model

    Science.gov (United States)

    Brix, H.; Menemenlis, D.; Hill, C.; Dutkiewicz, S.; Jahn, O.; Wang, D.; Bowman, K.; Zhang, H.

    2015-11-01

    The NASA Carbon Monitoring System (CMS) Flux Project aims to attribute changes in the atmospheric accumulation of carbon dioxide to spatially resolved fluxes by utilizing the full suite of NASA data, models, and assimilation capabilities. For the oceanic part of this project, we introduce ECCO2-Darwin, a new ocean biogeochemistry general circulation model based on combining the following pre-existing components: (i) a full-depth, eddying, global-ocean configuration of the Massachusetts Institute of Technology general circulation model (MITgcm), (ii) an adjoint-method-based estimate of ocean circulation from the Estimating the Circulation and Climate of the Ocean, Phase II (ECCO2) project, (iii) the MIT ecosystem model "Darwin", and (iv) a marine carbon chemistry model. Air-sea gas exchange coefficients and initial conditions of dissolved inorganic carbon, alkalinity, and oxygen are adjusted using a Green's Functions approach in order to optimize modeled air-sea CO2 fluxes. Data constraints include observations of carbon dioxide partial pressure (pCO2) for 2009-2010, global air-sea CO2 flux estimates, and the seasonal cycle of the Takahashi et al. (2009) Atlas. The model sensitivity experiments (or Green's Functions) include simulations that start from different initial conditions as well as experiments that perturb air-sea gas exchange parameters and the ratio of particulate inorganic to organic carbon. The Green's Functions approach yields a linear combination of these sensitivity experiments that minimizes model-data differences. The resulting initial conditions and gas exchange coefficients are then used to integrate the ECCO2-Darwin model forward. Despite the small number (six) of control parameters, the adjusted simulation is significantly closer to the data constraints (37% cost function reduction, i.e., reduction in the model-data difference, relative to the baseline simulation) and to independent observations (e.g., alkalinity). The adjusted air-sea gas
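
The Green's Functions step described above reduces to a linear least-squares problem: find the combination of sensitivity experiments that minimizes the model-data misfit. A minimal sketch with random stand-ins for the baseline misfit and the sensitivity responses (the real system has six control parameters and physical data constraints such as pCO2 observations):

```python
import numpy as np

rng = np.random.default_rng(6)
baseline = rng.normal(size=100)    # baseline model-minus-data misfit vector
G = rng.normal(size=(100, 2))      # misfit response to each sensitivity experiment

# choose parameter adjustments eta minimizing || baseline + G @ eta ||^2
eta, *_ = np.linalg.lstsq(G, -baseline, rcond=None)
adjusted = baseline + G @ eta
print(np.sum(adjusted**2) < np.sum(baseline**2))  # → True
```

By construction the least-squares solution cannot increase the cost function, which is why even a handful of control parameters can yield the reported 37% reduction when the sensitivity experiments span useful directions.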

  16. Modeling of an Adjustable Beam Solid State Light

    Data.gov (United States)

    National Aeronautics and Space Administration — This proposal is for the development of a computational model of a prototype variable beam light source using optical modeling software, Zemax OpticStudio ®. The...

  17. Family support and acceptance, gay male identity formation, and psychological adjustment: a path model.

    Science.gov (United States)

    Elizur, Y; Ziv, M

    2001-01-01

    While heterosexist family undermining has been demonstrated to be a developmental risk factor in the life of persons with same-gender orientation, the issue of protective family factors is both controversial and relatively neglected. In this study of Israeli gay males (N = 114), we focused on the interrelations of family support, family acceptance and family knowledge of gay orientation, and gay male identity formation, and their effects on mental health and self-esteem. A path model was proposed based on the hypotheses that family support, family acceptance, family knowledge, and gay identity formation have an impact on psychological adjustment, and that family support has an effect on gay identity formation that is mediated by family acceptance. The assessment of gay identity formation was based on an established stage model that was streamlined for cross-cultural practice by defining three basic processes of same-gender identity formation: self-definition, self-acceptance, and disclosure (Elizur & Mintzer, 2001). The testing of our conceptual path model demonstrated an excellent fit with the data. An alternative model that hypothesized effects of gay male identity on family acceptance and family knowledge did not fit the data. Interpreting these results, we propose that the main effect of family support/acceptance on gay identity is related to the process of disclosure, and that both general family support and family acceptance of same-gender orientation play a significant role in the psychological adjustment of gay men.

  18. Assessing climate change effects on long-term forest development: adjusting growth, phenology, and seed production in a gap model

    NARCIS (Netherlands)

    Meer, van der P.J.; Jorritsma, I.T.M.; Kramer, K.

    2002-01-01

    The sensitivity of forest development to climate change is assessed using a gap model. Process descriptions in the gap model of growth, phenology, and seed production were adjusted for climate change effects using a detailed process-based growth model and a regression analysis. Simulation runs over

  19. 5 CFR 9901.312 - Maximum rates of base salary and adjusted salary.

    Science.gov (United States)

    2010-01-01

    ... 5 Administrative Personnel 3 2010-01-01 2010-01-01 false Maximum rates of base salary and adjusted salary. 9901.312 Section 9901.312 Administrative Personnel DEPARTMENT OF DEFENSE HUMAN RESOURCES....312 Maximum rates of base salary and adjusted salary. (a) Subject to § 9901.105, the Secretary may...

  20. Research on Environmental Adjustment of Cloud Ranch Based on BP Neural Network PID Control

    Science.gov (United States)

    Ren, Jinzhi; Xiang, Wei; Zhao, Lin; Wu, Jianbo; Huang, Lianzhen; Tu, Qinggang; Zhao, Heming

    2018-01-01

    To gradually replace traditional manual ranch management with intelligent management, this paper proposes a pasture environment control system based on a cloud server, and presents a PID control algorithm based on a BP neural network to better control temperature and humidity in the pasture environment. First, the pasture temperature and humidity (the controlled object) are modeled to obtain a transfer function. Then both the traditional PID algorithm and the BP-neural-network-based PID algorithm are applied to this transfer function. The resulting step-tracking curves show that the PID controller based on a BP neural network is clearly superior in settling time, error, and other measures. By computing reasonable control parameters for temperature and humidity, this algorithm can be readily used in the cloud service platform.
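    The control loop described above can be sketched as a discrete PID regulator. The block below is a minimal illustration on a made-up first-order temperature plant, with fixed gains standing in for the BP-network tuning described in the record; every constant is illustrative, not from the paper.

```python
# Minimal discrete PID loop on a hypothetical first-order temperature plant.
# The record tunes Kp/Ki/Kd online with a BP neural network; this sketch
# keeps the gains fixed, and all constants below are illustrative.

def simulate_pid(kp=2.0, ki=0.5, kd=0.1, setpoint=25.0, steps=200, dt=0.1):
    temp = 15.0              # initial pasture temperature (deg C)
    ambient = 15.0
    integral = 0.0
    prev_err = setpoint - temp
    for _ in range(steps):
        err = setpoint - temp
        integral += err * dt
        deriv = (err - prev_err) / dt
        u = kp * err + ki * integral + kd * deriv   # heater command
        prev_err = err
        # first-order plant: relaxes toward ambient, driven by the input
        temp += dt * (-0.5 * (temp - ambient) + 0.5 * u)
    return temp

print(simulate_pid())   # approaches the 25.0 setpoint
```

    In the paper's scheme, the three gains would be updated each step by the BP network instead of held constant.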

  1. Bundle Block Adjustment of Airborne Three-Line Array Imagery Based on Rotation Angles

    Directory of Open Access Journals (Sweden)

    Yongjun Zhang

    2014-05-01

    Full Text Available In the midst of the rapid developments in electronic instruments and remote sensing technologies, airborne three-line array sensors and their applications are being widely promoted and plentiful research related to data processing and high precision geo-referencing technologies is under way. The exterior orientation parameters (EOPs, which are measured by the integrated positioning and orientation system (POS of airborne three-line sensors, however, have inevitable systematic errors, so the level of precision of direct geo-referencing is not sufficiently accurate for surveying and mapping applications. Consequently, a few ground control points are necessary to refine the exterior orientation parameters, and this paper will discuss bundle block adjustment models based on the systematic error compensation and the orientation image, considering the principle of an image sensor and the characteristics of the integrated POS. Unlike the models available in the literature, which mainly use a quaternion to represent the rotation matrix of exterior orientation, three rotation angles are directly used in order to effectively model and eliminate the systematic errors of the POS observations. Very good experimental results have been achieved with several real datasets that verify the correctness and effectiveness of the proposed adjustment models.
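    The three rotation angles parameterize the exterior-orientation rotation matrix directly. As a sketch (the omega–phi–kappa composition order below is one common photogrammetric convention, assumed here rather than taken from the paper):

```python
import math

# Exterior-orientation rotation matrix from three rotation angles.
# R = Rx(omega) * Ry(phi) * Rz(kappa) is one common photogrammetric
# convention; the paper may use a different composition order.

def rot_x(a):
    c, s = math.cos(a), math.sin(a)
    return [[1, 0, 0], [0, c, -s], [0, s, c]]

def rot_y(a):
    c, s = math.cos(a), math.sin(a)
    return [[c, 0, s], [0, 1, 0], [-s, 0, c]]

def rot_z(a):
    c, s = math.cos(a), math.sin(a)
    return [[c, -s, 0], [s, c, 0], [0, 0, 1]]

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def rotation(omega, phi, kappa):
    return matmul(rot_x(omega), matmul(rot_y(phi), rot_z(kappa)))

R = rotation(0.01, -0.02, 0.03)   # small POS-style angle errors (radians)
```

    Systematic angular offsets in the POS observations can then be modeled as additive corrections to the three angles, which is the motivation the record gives for avoiding a quaternion parameterization.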

  2. Novel DC Bias Suppression Device Based on Adjustable Parallel Resistances

    DEFF Research Database (Denmark)

    Wang, Zhixun; Xie, Zhicheng; Liu, Chang

    2018-01-01

    resistances is designed. The mathematical model for global optimal switching of CBDs is established by field-circuit coupling method with the equivalent resistance network of ac system along with the location of substations and ground electrodes. The optimal switching scheme to minimize the global maximum dc...

  3. Demography-adjusted tests of neutrality based on genome-wide SNP data

    KAUST Repository

    Rafajlović, Marina

    2014-08-01

    Tests of the neutral evolution hypothesis are usually built on the standard model which assumes that mutations are neutral and the population size remains constant over time. However, it is unclear how such tests are affected if the last assumption is dropped. Here, we extend the unifying framework for tests based on the site frequency spectrum, introduced by Achaz and Ferretti, to populations of varying size. Key ingredients are the first two moments of the site frequency spectrum. We show how these moments can be computed analytically if a population has experienced two instantaneous size changes in the past. We apply our method to data from ten human populations gathered in the 1000 genomes project, estimate their demographies and define demography-adjusted versions of Tajima's D, Fay & Wu's H, and Zeng's E. Our results show that demography-adjusted test statistics facilitate the direct comparison between populations and that most of the differences among populations seen in the original unadjusted tests can be explained by their underlying demographies. Upon carrying out whole-genome screens for deviations from neutrality, we identify candidate regions of recent positive selection. We provide track files with values of the adjusted and unadjusted tests for upload to the UCSC genome browser. © 2014 Elsevier Inc.
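    Tests of this family reduce to weighted sums over the site frequency spectrum. As an unadjusted baseline, Tajima's D can be computed from an unfolded spectrum with the standard constant-population-size moments; the paper's contribution is replacing those moments with demography-adjusted ones, which is not done in this sketch.

```python
import math

# Standard (constant-size) Tajima's D from an unfolded site frequency
# spectrum xi[1..n-1]. The demography adjustment of the paper would replace
# the a/b/c/e normalizing constants below; this is the unadjusted baseline.

def tajimas_d(xi, n):
    S = sum(xi)                                  # segregating sites
    a1 = sum(1.0 / i for i in range(1, n))
    a2 = sum(1.0 / i**2 for i in range(1, n))
    # mean pairwise diversity computed from the spectrum
    pi = sum(x * i * (n - i) for i, x in enumerate(xi, start=1)) / (n * (n - 1) / 2)
    b1 = (n + 1) / (3.0 * (n - 1))
    b2 = 2.0 * (n**2 + n + 3) / (9.0 * n * (n - 1))
    c1 = b1 - 1.0 / a1
    c2 = b2 - (n + 2) / (a1 * n) + a2 / a1**2
    e1 = c1 / a1
    e2 = c2 / (a1**2 + a2)
    return (pi - S / a1) / math.sqrt(e1 * S + e2 * S * (S - 1))

# Under neutrality E[xi_i] is proportional to 1/i, so a spectrum of exactly
# that shape gives D ~ 0:
n = 20
xi = [10.0 / i for i in range(1, n)]
print(tajimas_d(xi, n))   # approximately zero for the neutral-shaped spectrum
```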

  4. An Adjusted Discount Rate Model for Fuel Cycle Cost Estimation

    Energy Technology Data Exchange (ETDEWEB)

    Kim, S. K.; Kang, G. B.; Ko, W. I. [Korea Atomic Energy Research Institute, Daejeon (Korea, Republic of)

    2013-10-15

    Owing to the diverse nuclear fuel cycle options available, including direct disposal, it is necessary to select the optimum nuclear fuel cycle in consideration of the political and social environments as well as the technical stability and economic efficiency of each country. Economic efficiency is therefore one of the significant evaluation standards. In particular, because nuclear fuel cycle costs may vary between countries, and the estimated cost usually stands in for the real cost, any existing uncertainty needs to be removed where possible when evaluating economic efficiency, so as to produce reliable cost information. Many countries still do not have reprocessing facilities, and no globally commercialized HLW (high-level waste) repository is available. A nuclear fuel cycle cost estimation model is therefore inevitably subject to uncertainty. This paper analyzes the uncertainty arising in nuclear fuel cycle cost evaluation from the viewpoint of the cost estimation model. Compared with the same discount rate model, the nuclear fuel cycle cost under a different discount rate model is lower, because the generation quantity in the denominator of the cost equation is also discounted. That is, if the discount rate is reduced for the back-end processes of the nuclear fuel cycle, the calculated fuel cycle cost is also reduced. Overall, the cost from the same discount rate model was found to be overestimated relative to the different discount rate model.

  5. An Adjusted Discount Rate Model for Fuel Cycle Cost Estimation

    International Nuclear Information System (INIS)

    Kim, S. K.; Kang, G. B.; Ko, W. I.

    2013-01-01

    Owing to the diverse nuclear fuel cycle options available, including direct disposal, it is necessary to select the optimum nuclear fuel cycle in consideration of the political and social environments as well as the technical stability and economic efficiency of each country. Economic efficiency is therefore one of the significant evaluation standards. In particular, because nuclear fuel cycle costs may vary between countries, and the estimated cost usually stands in for the real cost, any existing uncertainty needs to be removed where possible when evaluating economic efficiency, so as to produce reliable cost information. Many countries still do not have reprocessing facilities, and no globally commercialized HLW (high-level waste) repository is available. A nuclear fuel cycle cost estimation model is therefore inevitably subject to uncertainty. This paper analyzes the uncertainty arising in nuclear fuel cycle cost evaluation from the viewpoint of the cost estimation model. Compared with the same discount rate model, the nuclear fuel cycle cost under a different discount rate model is lower, because the generation quantity in the denominator of the cost equation is also discounted. That is, if the discount rate is reduced for the back-end processes of the nuclear fuel cycle, the calculated fuel cycle cost is also reduced. Overall, the cost from the same discount rate model was found to be overestimated relative to the different discount rate model.
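    The cost metric at issue is a levelized (discounted) unit cost: discounted cash flows divided by discounted generation. The sketch below contrasts a single common discount rate with a model that applies a different rate to back-end flows; all cash flows, generation figures, and rates are invented for illustration, not data from the record.

```python
# Levelized fuel-cycle cost: PV(costs) / PV(generation). All numbers below
# are made up for illustration; only the structure of the calculation
# (separate discount rates per cost stage) reflects the record.

def pv(flows, rate):
    """Present value of {year: amount} cash flows."""
    return sum(amount / (1.0 + rate) ** year for year, amount in flows.items())

front_end = {0: 500.0}                       # fabrication etc. (M$, year 0)
back_end = {50: 300.0}                       # disposal decades later (M$)
generation = {t: 8.0 for t in range(1, 41)}  # TWh per operating year

def levelized_cost(front_rate, back_rate, gen_rate):
    cost = pv(front_end, front_rate) + pv(back_end, back_rate)
    return cost / pv(generation, gen_rate)

same = levelized_cost(0.05, 0.05, 0.05)      # single common discount rate
mixed = levelized_cost(0.05, 0.03, 0.05)     # lower rate on back-end flows
```

    With these illustrative numbers, lowering only the back-end discount rate raises the present value of the distant disposal cost and hence the levelized result; which model comes out cheaper depends on where the rate change is applied, which is exactly the sensitivity the record analyzes.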

  6. Dynamically adjustable foot-ground contact model to estimate ground reaction force during walking and running.

    Science.gov (United States)

    Jung, Yihwan; Jung, Moonki; Ryu, Jiseon; Yoon, Sukhoon; Park, Sang-Kyoon; Koo, Seungbum

    2016-03-01

    Human dynamic models have been used to estimate joint kinetics during various activities. Kinetics estimation is in demand in sports and clinical applications where data on external forces, such as the ground reaction force (GRF), are not available. The purpose of this study was to estimate the GRF during gait by utilizing distance- and velocity-dependent force models between the foot and ground in an inverse-dynamics-based optimization. Ten males were tested as they walked at four different speeds on a force plate-embedded treadmill system. The full-GRF model whose foot-ground reaction elements were dynamically adjusted according to vertical displacement and anterior-posterior speed between the foot and ground was implemented in a full-body skeletal model. The model estimated the vertical and shear forces of the GRF from body kinematics. The shear-GRF model with dynamically adjustable shear reaction elements according to the input vertical force was also implemented in the foot of a full-body skeletal model. Shear forces of the GRF were estimated from body kinematics, vertical GRF, and center of pressure. The estimated full GRF had the lowest root mean square (RMS) errors at the slow walking speed (1.0m/s) with 4.2, 1.3, and 5.7% BW for anterior-posterior, medial-lateral, and vertical forces, respectively. The estimated shear forces were not significantly different between the full-GRF and shear-GRF models, but the RMS errors of the estimated knee joint kinetics were significantly lower for the shear-GRF model. Providing COP and vertical GRF with sensors, such as an insole-type pressure mat, can help estimate shear forces of the GRF and increase accuracy for estimation of joint kinetics. Copyright © 2016 Elsevier B.V. All rights reserved.
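    A distance- and velocity-dependent foot–ground element of the general kind described can be sketched as a nonlinear spring–damper. The Hunt–Crossley-style force law and all constants below are illustrative assumptions, not the dynamically adjusted model fitted in the study.

```python
# Generic distance- and velocity-dependent vertical contact element.
# Hunt-Crossley-style form; stiffness, damping, and exponent are
# illustrative, not the dynamically adjusted values from the study.

def vertical_contact_force(penetration, pen_rate, k=2.5e5, c=1.0, e=1.5):
    """penetration (m): depth of the foot point below the ground plane;
    pen_rate (m/s): positive while the foot is compressing the ground."""
    if penetration <= 0.0:
        return 0.0                     # foot not in contact
    force = k * penetration**e * (1.0 + c * pen_rate)
    return max(force, 0.0)             # contact cannot pull the foot down
```

    Summing such elements under the foot, and resolving shear analogously from sliding speed, yields the GRF estimate that the inverse-dynamics optimization then reconciles with body kinematics.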

  7. Empiric model for mean generation time adjustment factor for classic point kinetics equations

    Energy Technology Data Exchange (ETDEWEB)

    Goes, David A.B.V. de; Martinez, Aquilino S.; Goncalves, Alessandro da C., E-mail: david.goes@poli.ufrj.br, E-mail: aquilino@lmp.ufrj.br, E-mail: alessandro@con.ufrj.br [Coordenacao de Pos-Graduacao e Pesquisa de Engenharia (COPPE/UFRJ), Rio de Janeiro, RJ (Brazil). Departamento de Engenharia Nuclear

    2017-11-01

    Point reactor kinetics equations are the simplest way to observe the time behavior of neutron production in a nuclear reactor. These equations are derived from the neutron transport equation using an approximation known as Fick's law, leading to a set of first-order differential equations. The main objective of this study is to revisit the classic point kinetics equations in order to bring their results closer to the case in which the time variation of the neutron currents is taken into account. The computational modeling used for the calculations is based on the finite difference method. The results obtained with this model are compared with the reference model, and an empirical adjustment factor is then determined that modifies the point reactor kinetics equations to match the real scenario. (author)

  8. Empiric model for mean generation time adjustment factor for classic point kinetics equations

    International Nuclear Information System (INIS)

    Goes, David A.B.V. de; Martinez, Aquilino S.; Goncalves, Alessandro da C.

    2017-01-01

    Point reactor kinetics equations are the simplest way to observe the time behavior of neutron production in a nuclear reactor. These equations are derived from the neutron transport equation using an approximation known as Fick's law, leading to a set of first-order differential equations. The main objective of this study is to revisit the classic point kinetics equations in order to bring their results closer to the case in which the time variation of the neutron currents is taken into account. The computational modeling used for the calculations is based on the finite difference method. The results obtained with this model are compared with the reference model, and an empirical adjustment factor is then determined that modifies the point reactor kinetics equations to match the real scenario. (author)
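    The classic equations being adjusted are, for one effective delayed-neutron group, a pair of coupled first-order ODEs. A minimal forward-Euler integration with generic textbook parameters (not the paper's reference model, and without its adjustment factor) looks like:

```python
# One-delayed-group point kinetics:
#   dn/dt = ((rho - beta)/Lambda) * n + lam * C
#   dC/dt = (beta/Lambda) * n - lam * C
# Parameters are generic textbook values; the record's empirical adjustment
# factor for time-varying neutron currents is not included here.

def point_kinetics(rho, beta=0.0065, Lambda=1e-4, lam=0.08,
                   t_end=1.0, dt=1e-4):
    n = 1.0                        # relative neutron density
    C = beta * n / (lam * Lambda)  # precursors start in equilibrium
    for _ in range(int(t_end / dt)):
        dn = ((rho - beta) / Lambda) * n + lam * C
        dC = (beta / Lambda) * n - lam * C
        n += dn * dt
        C += dC * dt
    return n

print(point_kinetics(0.0))     # critical: density stays at 1.0
print(point_kinetics(0.001))   # positive reactivity: density rises above 1.0
```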

  9. PACE and the Medicare+Choice risk-adjusted payment model.

    Science.gov (United States)

    Temkin-Greener, H; Meiners, M R; Gruenberg, L

    2001-01-01

    This paper investigates the impact of the Medicare principal inpatient diagnostic cost group (PIP-DCG) payment model on the Program of All-Inclusive Care for the Elderly (PACE). Currently, more than 6,000 Medicare beneficiaries who are nursing home certifiable receive care from PACE, a program poised for expansion under the Balanced Budget Act of 1997. Overall, our analysis suggests that applying the PIP-DCG model to the PACE program would reduce Medicare payments to PACE by 38% on average. The PIP-DCG payment model bases its risk adjustment on inpatient diagnoses and does not adequately capture the risk of caring for a population with functional impairments.

  10. Spherical Model Integrating Academic Competence with Social Adjustment and Psychopathology.

    Science.gov (United States)

    Schaefer, Earl S.; And Others

    This study replicates and elaborates a three-dimensional, spherical model that integrates research findings concerning social and emotional behavior, psychopathology, and academic competence. Kindergarten teachers completed an extensive set of rating scales on 100 children, including the Classroom Behavior Inventory and the Child Adaptive Behavior…

  11. A Flexure-Based Mechanism for Precision Adjustment of National Ignition Facility Target Shrouds in Three Rotational Degrees of Freedom

    International Nuclear Information System (INIS)

    Boehm, K.-J.; Gibson, C. R.; Hollaway, J. R.; Espinoza-Loza, F.

    2016-01-01

    This study presents the design of a flexure-based mount allowing adjustment in three rotational degrees of freedom (DOFs) through high-precision set-screw actuators. The requirements of the application called for small but controlled angular adjustments for mounting a cantilevered beam. The proposed design is based on an array of parallel beams to provide sufficiently high stiffness in the translational directions while allowing angular adjustment through the actuators. A simplified physical model in combination with standard beam theory was applied to estimate the deflection profile and maximum stresses in the beams. A finite element model was built to calculate the stresses and beam profiles for scenarios in which the flexure is simultaneously actuated in more than one DOF.
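    For the parallel-beam arrays described, standard beam theory gives each fixed-guided beam a lateral stiffness of 12EI/L³, and beams in parallel add. A small sketch (the dimensions and modulus below are illustrative, not the NIF design values):

```python
# Translational stiffness of a parallel-beam flexure stage from standard
# beam theory: each fixed-guided beam contributes k = 12*E*I/L**3, and
# parallel beams add. Dimensions and modulus are illustrative.

def beam_stiffness(E, width, thickness, length):
    I = width * thickness**3 / 12.0      # rectangular section, bending
    return 12.0 * E * I / length**3      # fixed-guided lateral stiffness

def stage_stiffness(n_beams, **beam):
    return n_beams * beam_stiffness(**beam)

k1 = stage_stiffness(1, E=200e9, width=0.01, thickness=0.001, length=0.05)
k4 = stage_stiffness(4, E=200e9, width=0.01, thickness=0.001, length=0.05)
```

    Because I scales with thickness³ but only linearly with width, thin wide blades are compliant about the adjustment axes while remaining stiff in translation, which is the design trade-off the array exploits.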

  12. Automatic parameter estimation of multicompartmental neuron models via minimization of trace error with control adjustment.

    Science.gov (United States)

    Brookings, Ted; Goeritz, Marie L; Marder, Eve

    2014-11-01

    We describe a new technique to fit conductance-based neuron models to intracellular voltage traces from isolated biological neurons. The biological neurons are recorded in current-clamp with pink (1/f) noise injected to perturb the activity of the neuron. The new algorithm finds a set of parameters that allows a multicompartmental model neuron to match the recorded voltage trace. Attempting to match a recorded voltage trace directly has a well-known problem: mismatch in the timing of action potentials between biological and model neuron is inevitable and results in poor phenomenological match between the model and data. Our approach avoids this by applying a weak control adjustment to the model to promote alignment during the fitting procedure. This approach is closely related to the control theoretic concept of a Luenberger observer. We tested this approach on synthetic data and on data recorded from an anterior gastric receptor neuron from the stomatogastric ganglion of the crab Cancer borealis. To test the flexibility of this approach, the synthetic data were constructed with conductance models that were different from the ones used in the fitting model. For both synthetic and biological data, the resultant models had good spike-timing accuracy. Copyright © 2014 the American Physiological Society.

  13. Relevance of the c-statistic when evaluating risk-adjustment models in surgery.

    Science.gov (United States)

    Merkow, Ryan P; Hall, Bruce L; Cohen, Mark E; Dimick, Justin B; Wang, Edward; Chow, Warren B; Ko, Clifford Y; Bilimoria, Karl Y

    2012-05-01

    The measurement of hospital quality based on outcomes requires risk adjustment. The c-statistic is a popular tool used to judge model performance, but can be limited, particularly when evaluating specific operations in focused populations. Our objectives were to examine the interpretation and relevance of the c-statistic when used in models with increasingly similar case mix and to consider an alternative perspective on model calibration based on a graphical depiction of model fit. From the American College of Surgeons National Surgical Quality Improvement Program (2008-2009), patients were identified who underwent a general surgery procedure, and procedure groups were increasingly restricted: colorectal-all, colorectal-elective cases only, and colorectal-elective cancer cases only. Mortality and serious morbidity outcomes were evaluated using logistic regression-based risk adjustment, and model c-statistics and calibration curves were used to compare model performance. During the study period, 323,427 general, 47,605 colorectal-all, 39,860 colorectal-elective, and 21,680 colorectal cancer patients were studied. Mortality ranged from 1.0% in general surgery to 4.1% in the colorectal-all group, and serious morbidity ranged from 3.9% in general surgery to 12.4% in the colorectal-all procedural group. As case mix was restricted, c-statistics progressively declined from the general to the colorectal cancer surgery cohorts for both mortality and serious morbidity (mortality: 0.949 to 0.866; serious morbidity: 0.861 to 0.668). Calibration was evaluated graphically by examining predicted vs observed number of events over risk deciles. For both mortality and serious morbidity, there was no qualitative difference in calibration identified between the procedure groups. 
In the present study, we demonstrate how the c-statistic can become less informative and, in certain circumstances, can lead to incorrect model-based conclusions, as case mix is restricted and patients become
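    The c-statistic discussed here is the probability that a randomly chosen patient with the event receives a higher predicted risk than one without. A minimal pairwise implementation, computed directly from that definition:

```python
# c-statistic (concordance) from its definition: the fraction of
# event/non-event pairs in which the event case has the higher predicted
# risk, with ties counted as one half.

def c_statistic(outcomes, risks):
    events = [r for y, r in zip(outcomes, risks) if y == 1]
    nonevents = [r for y, r in zip(outcomes, risks) if y == 0]
    concordant = ties = 0
    for re in events:
        for rn in nonevents:
            if re > rn:
                concordant += 1
            elif re == rn:
                ties += 1
    return (concordant + 0.5 * ties) / (len(events) * len(nonevents))

print(c_statistic([0, 0, 1, 1], [0.1, 0.4, 0.35, 0.8]))  # 0.75
```

    As the abstract illustrates, restricting case mix narrows the spread of predicted risks across patients, which mechanically lowers this rank-based statistic even when model calibration is unchanged.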

  14. Adjustment of regional regression models of urban-runoff quality using data for Chattanooga, Knoxville, and Nashville, Tennessee

    Science.gov (United States)

    Hoos, Anne B.; Patel, Anant R.

    1996-01-01

    Model-adjustment procedures were applied to the combined data bases of storm-runoff quality for Chattanooga, Knoxville, and Nashville, Tennessee, to improve predictive accuracy for storm-runoff quality for urban watersheds in these three cities and throughout Middle and East Tennessee. Data for 45 storms at 15 different sites (five sites in each city) constitute the data base. Comparison of observed values of storm-runoff load and event-mean concentration with the values predicted by the regional regression models for 10 constituents shows prediction errors as large as 806,000 percent. Model-adjustment procedures, which combine the regional model predictions with local data, are applied to improve predictive accuracy. Standard error of estimate after model adjustment ranges from 67 to 322 percent. Calibration results may be biased due to sampling error in the Tennessee data base. The relatively large values of standard error of estimate for some of the constituent models, although representing a significant reduction (at least 50 percent) in prediction error compared with estimation using unadjusted regional models, may be unacceptable for some applications. The user may wish to collect additional local data for these constituents and repeat the analysis, or calibrate an independent local regression model.
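    One model-adjustment procedure of this kind regresses locally observed values on the regional-model predictions and then applies the fitted line as a correction. A generic ordinary-least-squares sketch (the actual procedures include several variants, e.g. fits in log space, and the data below are hypothetical):

```python
# Regression-against-predicted model adjustment: fit observed = a + b*pred
# on local data, then correct new regional-model predictions with the line.
# Generic sketch; the published procedures include several variants.

def fit_adjustment(predicted, observed):
    n = len(predicted)
    mx = sum(predicted) / n
    my = sum(observed) / n
    sxx = sum((x - mx) ** 2 for x in predicted)
    sxy = sum((x - mx) * (y - my) for x, y in zip(predicted, observed))
    b = sxy / sxx
    a = my - b * mx
    return a, b

def adjust(a, b, regional_prediction):
    return a + b * regional_prediction

# hypothetical local storm loads: the regional model over-predicts by ~2x
a, b = fit_adjustment([10.0, 20.0, 40.0], [5.0, 10.0, 20.0])
```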

  15. A new multivariate zero-adjusted Poisson model with applications to biomedicine.

    Science.gov (United States)

    Liu, Yin; Tian, Guo-Liang; Tang, Man-Lai; Yuen, Kam Chuen

    2018-05-25

    Recently, although advances have been made in modeling multivariate count data, existing models still have several limitations: (i) the multivariate Poisson log-normal model (Aitchison and Ho) cannot be used to fit multivariate count data with excess zero-vectors; (ii) the multivariate zero-inflated Poisson (ZIP) distribution (Li et al., 1999) cannot be used to model zero-truncated/deflated count data and is difficult to apply in high-dimensional cases; (iii) the Type I multivariate zero-adjusted Poisson (ZAP) distribution (Tian et al., 2017) can only model multivariate count data with a special correlation structure in which the correlations between components are all positive or all negative. In this paper, we first introduce a new multivariate ZAP distribution, based on a multivariate Poisson distribution, which allows a more flexible dependency structure between components: some correlation coefficients can be positive while others are negative. We then develop its important distributional properties and provide efficient statistical inference methods for the multivariate ZAP model with or without covariates. Two real data examples in biomedicine are used to illustrate the proposed methods. © 2018 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  16. An evaluation of bias in propensity score-adjusted non-linear regression models.

    Science.gov (United States)

    Wan, Fei; Mitra, Nandita

    2018-03-01

    Propensity score methods are commonly used to adjust for observed confounding when estimating the conditional treatment effect in observational studies. One popular method, covariate adjustment of the propensity score in a regression model, has been empirically shown to be biased in non-linear models. However, no compelling underlying theoretical reason has been presented. We propose a new framework to investigate bias and consistency of propensity score-adjusted treatment effects in non-linear models that uses a simple geometric approach to forge a link between the consistency of the propensity score estimator and the collapsibility of non-linear models. Under this framework, we demonstrate that adjustment of the propensity score in an outcome model results in the decomposition of observed covariates into the propensity score and a remainder term. Omission of this remainder term from a non-collapsible regression model leads to biased estimates of the conditional odds ratio and conditional hazard ratio, but not for the conditional rate ratio. We further show, via simulation studies, that the bias in these propensity score-adjusted estimators increases with larger treatment effect size, larger covariate effects, and increasing dissimilarity between the coefficients of the covariates in the treatment model versus the outcome model.

  17. Adjusting the Stems Regional Forest Growth Model to Improve Local Predictions

    Science.gov (United States)

    W. Brad Smith

    1983-01-01

    A simple procedure using double sampling is described for adjusting growth in the STEMS regional forest growth model to compensate for subregional variations. Predictive accuracy of the STEMS model (a distance-independent, individual tree growth model for Lake States forests) was improved by using this procedure.

  18. Risk-adjusted capitation funding models for chronic disease in Australia: alternatives to casemix funding.

    Science.gov (United States)

    Antioch, K M; Walsh, M K

    2002-01-01

    Under Australian casemix funding arrangements that use Diagnosis-Related Groups (DRGs) the average price is policy based, not benchmarked. Cost weights are too low for State-wide chronic disease services. Risk-adjusted Capitation Funding Models (RACFM) are feasible alternatives. A RACFM was developed for public patients with cystic fibrosis treated by an Australian Health Maintenance Organization (AHMO). Adverse selection is of limited concern since patients pay solidarity contributions via the Medicare levy with no premium contributions to the AHMO. Sponsors paying premium subsidies are the State of Victoria and the Federal Government. Cost per patient is the dependent variable in the multiple regression. Data on DRG 173 (cystic fibrosis) patients were assessed for heteroskedasticity, multicollinearity, structural stability and functional form. Stepwise linear regression excluded non-significant variables. Significant variables were 'emergency' (1276.9), 'outlier' (6377.1), 'complexity' (3043.5), 'procedures' (317.4) and the constant payment (4492.7) (R(2)=0.21, SE=3598.3, F=14.39, with the regression overall statistically significant). The model explained 21% of the variance in cost per patient. The payment rate is adjusted by a best-practice annual admission rate per patient. The model is a blended RACFM for in-patient, out-patient, Hospital In The Home, and Fee-For-Service Federal payments for drugs and medical services; lump sum lung transplant payments and risk sharing through cost (loss) outlier payments. State and Federally funded home and palliative services are 'carved out'. The model, which has national application via Coordinated Care Trials and by Australian States for RACFMs, may be instructive for Germany, which plans to use Australian DRGs for casemix funding. The capitation alternative for chronic disease can improve equity, allocative efficiency and distributional justice. The use of Diagnostic Cost Groups (DCGs) is a promising alternative classification system for capitation arrangements.
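    The fitted payment relation can be written straight from the reported coefficients. As a sketch, treating 'emergency', 'outlier', and 'complexity' as 0/1 indicators and 'procedures' as a count, which is an assumption about variable coding not stated in the abstract:

```python
# Per-patient cost prediction from the regression coefficients reported in
# the abstract. The 0/1-indicator and count coding is an assumption; the
# abstract does not state how the variables were coded.

COEFFICIENTS = {
    "emergency": 1276.9,
    "outlier": 6377.1,
    "complexity": 3043.5,
    "procedures": 317.4,
}
CONSTANT = 4492.7

def predicted_cost(**features):
    return CONSTANT + sum(COEFFICIENTS[name] * value
                          for name, value in features.items())

print(predicted_cost())                           # constant only: 4492.7
print(predicted_cost(emergency=1, procedures=2))  # emergency case, 2 procedures
```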

  19. Specifications for adjusted cross section and covariance libraries based upon CSEWG fast reactor and dosimetry benchmarks

    International Nuclear Information System (INIS)

    Weisbin, C.R.; Marable, J.H.; Collins, P.J.; Cowan, C.L.; Peelle, R.W.; Salvatores, M.

    1979-06-01

    The present work proposes a specific plan of cross section library adjustment for fast reactor core physics analysis using information from fast reactor and dosimetry integral experiments and from differential data evaluations. This detailed exposition of the proposed approach is intended mainly to elicit review and criticism from scientists and engineers in the research, development, and design fields. This major attempt to develop useful adjusted libraries is based on the established benchmark integral data, accurate and well documented analysis techniques, sensitivities, and quantified uncertainties for nuclear data, integral experiment measurements, and calculational methodology. The adjustments to be obtained using these specifications are intended to produce an overall improvement in the least-squares sense in the quality of the data libraries, so that calculations of other similar systems using the adjusted data base with any credible method will produce results without much data-related bias. The adjustments obtained should provide specific recommendations to the data evaluation program to be weighed in the light of newer measurements, and also a vehicle for observing how the evaluation process is converging. This report specifies the calculational methodology to be used, the integral experiments to be employed initially, and the methods and integral experiment biases and uncertainties to be used. The sources of sensitivity coefficients, as well as the cross sections to be adjusted, are detailed. The formulae for sensitivity coefficients for fission spectral parameters are developed. A mathematical formulation of the least-squares adjustment problem is given, including biases and uncertainties in methods.
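    The least-squares adjustment named here has a standard closed form: given prior cross sections with covariance, sensitivities of the integral responses, and measured integral values with their uncertainties, the posterior is a generalized-least-squares update. A one-parameter, one-experiment sketch with invented numbers:

```python
# One-parameter generalized least-squares data adjustment: prior cross
# section x0 with variance var_x, integral measurement m with variance
# var_m, and sensitivity s (calculated response c = s * x). All numbers
# are illustrative, not from an evaluated library.

def gls_adjust(x0, var_x, s, m, var_m):
    gain = var_x * s / (s * s * var_x + var_m)   # Kalman-style gain
    x_adj = x0 + gain * (m - s * x0)             # adjusted cross section
    var_adj = var_x - gain * s * var_x           # reduced posterior variance
    return x_adj, var_adj

x_adj, var_adj = gls_adjust(x0=1.00, var_x=0.04, s=2.0, m=2.3, var_m=0.01)
```

    The adjusted calculated response s*x_adj lies between the prior prediction (2.0) and the measurement (2.3), and the posterior variance shrinks below the prior's: the "overall improvement in the least-squares sense" that the specifications target, here in its smallest possible instance.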

  20. Steps in the construction and verification of an explanatory model of psychosocial adjustment

    Directory of Open Access Journals (Sweden)

    Arantzazu Rodríguez-Fernández

    2016-06-01

    Full Text Available The aim of the present study was to empirically test an explanatory model of psychosocial adjustment during adolescence, with psychosocial adjustment during this stage being understood as a combination of school adjustment (or school engagement) and subjective well-being. According to the hypothesized model, psychosocial adjustment depends on self-concept and resilience, which in turn act as mediators of the influence of perceived social support (from family, peers and teachers) on this adjustment. Participants were 1250 secondary school students (638 girls and 612 boys) aged between 12 and 15 years (Mean = 13.72; SD = 1.09). The results provided evidence of: (a) the influence of all three types of perceived support on subjects' resilience and self-concept, with perceived family support being particularly important in this respect; (b) the influence of the support received from teachers on school adjustment and of support received from the family on psychological wellbeing; and (c) the absence of any direct influence of peer support on psychosocial adjustment, although indirect influence was observed through the psychological variables studied. These results are discussed from an educational perspective and in terms of future research.

  2. Effect of the spray volume adjustment model on the efficiency of fungicides and residues in processing tomato

    Energy Technology Data Exchange (ETDEWEB)

    Ratajkiewicz, H.; Kierzek, R.; Raczkowski, M.; Hołodyńska-Kulas, A.; Łacka, A.; Wójtowicz, A.; Wachowiak, M.

    2016-11-01

    This study compared the effects of a proportionate spray volume (PSV) adjustment model and a fixed model (300 L/ha) on the infestation of processing tomato with potato late blight (Phytophthora infestans (Mont.) de Bary) (PLB) and on azoxystrobin and chlorothalonil residues in fruits in three consecutive seasons. The fungicides were applied in an alternating system, with or without two spreader adjuvants. The proportionate spray volume adjustment model was based on the number of leaves on plants and a spray volume index. The modified Quick, Easy, Cheap, Effective, Rugged, and Safe (QuEChERS) method was optimized and validated for extraction of azoxystrobin and chlorothalonil residues. Gas chromatography with a nitrogen-phosphorus detector and an electron capture detector was used for the analysis of the fungicides. The results showed that higher fungicide residues were associated with lower infestation of tomato with PLB. The PSV adjustment model resulted in lower infestation of tomato than the fixed model (300 L/ha) when fungicides were applied at half the dose without adjuvants. Higher expected spray interception into the tomato canopy with the PSV system was recognized as the reason for the better control of PLB. The spreader adjuvants did not have a positive effect on the biological efficacy of either spray volume application system. The results suggest that the PSV adjustment model can be used to determine the spray volume for fungicide application in processing tomato crops. (Author)
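The proportionate spray volume idea can be sketched numerically. The abstract bases the PSV model on leaf count and a "spray volume index" but gives no formula, so the per-leaf volume index below is a hypothetical assumption for illustration only:

```python
def proportionate_spray_volume(leaves_per_plant, plants_per_ha, volume_index_ml_per_leaf):
    """Spray volume (L/ha) scaled to canopy size.

    Hypothetical formulation: the index is assumed here to be
    millilitres of spray allotted per leaf.
    """
    total_leaves = leaves_per_plant * plants_per_ha
    return total_leaves * volume_index_ml_per_leaf / 1000.0  # mL -> L

# Early vs late season: a small canopy needs far less than a fixed 300 L/ha.
early = proportionate_spray_volume(8, 30000, 0.5)   # 120.0 L/ha
late = proportionate_spray_volume(60, 30000, 0.5)   # 900.0 L/ha
```

Under this assumption the applied volume tracks canopy growth instead of staying at the fixed 300 L/ha throughout the season.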

  3. Effect of the spray volume adjustment model on the efficiency of fungicides and residues in processing tomato

    Directory of Open Access Journals (Sweden)

    Henryk Ratajkiewicz

    2016-08-01

    Full Text Available This study compared the effects of a proportionate spray volume (PSV adjustment model and a fixed model (300 L/ha on the infestation of processing tomato with potato late blight (Phytophthora infestans (Mont. de Bary (PLB and azoxystrobin and chlorothalonil residues in fruits in three consecutive seasons. The fungicides were applied in alternating system with or without two spreader adjuvants. The proportionate spray volume adjustment model was based on the number of leaves on plants and spray volume index. The modified Quick, Easy, Cheap, Effective, Rugged, and Safe (QuEChERS method was optimized and validated for extraction of azoxystrobin and chlorothalonil residue. Gas chromatography with a nitrogen and phosphorus detector and an electron capture detector were used for the analysis of fungicides. The results showed that higher fungicidal residues were connected with lower infestation of tomato with PLB. PSV adjustment model resulted in lower infestation of tomato than the fixed model (300 L/ha when fungicides were applied at half the dose without adjuvants. Higher expected spray interception into the tomato canopy with the PSV system was recognized as the reasons of better control of PLB. The spreader adjuvants did not have positive effect on the biological efficacy of spray volume application systems. The results suggest that PSV adjustment model can be used to determine the spray volume for fungicide application for processing tomato crop.

  4. A unit root test based on smooth transitions and nonlinear adjustment

    OpenAIRE

    Hepsag, Aycan

    2017-01-01

    In this paper, we develop a new unit root testing procedure which jointly accounts for structural breaks and nonlinear adjustment. The structural breaks are modelled by means of a logistic smooth transition function, and nonlinear adjustment is modelled by means of an ESTAR model. The empirical size of the test is quite close to the nominal one and, in terms of power, the new unit root test is generally superior to the alternative test. The new unit root test presents good size properties and does...
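The two nonlinear ingredients named above can be sketched as simple functions; `gamma`, `tau`, and `theta` are illustrative parameter names, not the author's notation:

```python
import math

def logistic_transition(t, T, gamma=20.0, tau=0.5):
    """Logistic smooth transition G(t) in (0, 1): models a gradual
    structural break centred at fraction tau of a sample of size T;
    gamma controls the speed of the transition."""
    return 1.0 / (1.0 + math.exp(-gamma * (t / T - tau)))

def estar_transition(deviation, theta=1.0):
    """Exponential transition of an ESTAR model: near 0 close to
    equilibrium (near-unit-root behaviour), approaching 1 for large
    deviations (faster mean reversion)."""
    return 1.0 - math.exp(-theta * deviation ** 2)
```

At the break midpoint `logistic_transition` equals 0.5, and `estar_transition` grows monotonically with the squared deviation, which is what lets the adjustment speed depend on distance from equilibrium.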

  5. Adjusting kinematics and kinetics in a feedback-controlled toe walking model

    Directory of Open Access Journals (Sweden)

    Olenšek Andrej

    2012-08-01

    Full Text Available Abstract Background In clinical gait assessment, the correct interpretation of gait kinematics and kinetics has a decisive impact on the success of the therapeutic programme. Due to the vast amount of information from which primary anomalies should be identified and separated from secondary compensatory changes, as well as the biomechanical complexity and redundancy of the human locomotion system, this task is considerably challenging and requires the attention of an experienced interdisciplinary team of experts. Ongoing research in the field of biomechanics suggests that mathematical modeling may facilitate this task. This paper explores the possibility of generating a family of toe walking gait patterns by systematically changing selected parameters of a feedback-controlled model. Methods From the selected clinical case of toe walking we identified typical toe walking characteristics and encoded them as a set of gait-oriented control objectives to be achieved in a feedback-controlled walking model. They were defined as fourth-order polynomials and imposed via feedback control at the within-step control level. At the between-step control level, stance leg lengthening velocity at the end of the single support phase was adaptively adjusted after each step so as to facilitate gait velocity control. Each time the gait velocity settled at the desired value, selected intra-step gait characteristics were modified by adjusting the polynomials so as to mimic the effect of a typical therapeutic intervention, inhibitory casting. Results By systematically adjusting the set of control parameters we were able to generate a family of gait kinematic and kinetic patterns that exhibit similar principal toe walking characteristics, as they were recorded by means of an instrumented gait analysis system in the selected clinical case of toe walking. We further acknowledge that they to some extent follow similar improvement tendencies as those which one can

  6. Internet-Based Self-Help Intervention for ICD-11 Adjustment Disorder: Preliminary Findings.

    Science.gov (United States)

    Eimontas, Jonas; Rimsaite, Zivile; Gegieckaite, Goda; Zelviene, Paulina; Kazlauskas, Evaldas

    2018-06-01

    Adjustment disorder is one of the most frequently diagnosed mental disorders. However, there is a lack of studies of specialized internet-based psychosocial interventions for adjustment disorder. We aimed to analyze the outcomes of an internet-based unguided self-help psychosocial intervention, BADI, for adjustment disorder in a two-armed randomized controlled trial with a waiting-list control group. In total, 284 adult participants were randomized in this study. We measured adjustment disorder as a primary outcome, and psychological well-being as a secondary outcome, at pre-intervention (T1) and one month after the intervention (T2). We found a medium effect size of the intervention on adjustment disorder symptoms for the completer sample. The intervention was effective for those participants who used it at least once in a 30-day period. Our results revealed the potential of an unguided internet-based self-help intervention for adjustment disorder. However, the high dropout rate in the study limits generalization of the intervention's outcomes to completers.

  7. A Water Hammer Protection Method for Mine Drainage System Based on Velocity Adjustment of Hydraulic Control Valve

    Directory of Open Access Journals (Sweden)

    Yanfei Kou

    2016-01-01

    Full Text Available Water hammer analysis is fundamental to the design of pipeline systems for water distribution networks. The main characteristics of mine drainage systems are limited space and the high cost of changing equipment and pipelines. In order to solve the protection problem of valve-closing water hammer in mine drainage systems, a water hammer protection method based on velocity adjustment of an HCV (Hydraulic Control Valve) is proposed in this paper. The mathematical model of water hammer fluctuations is established based on the characteristic line method. Then, boundary conditions of water hammer control for the mine drainage system are determined and a simplified model is established. The optimal adjustment strategy is solved from the mathematical model of multistage valve-closing. Taking a mine drainage system as an example, comparisons between simulation and experimental results show that the proposed method and the optimized valve-closing strategy are effective.

  8. Modeling Quality-Adjusted Life Expectancy Loss Resulting from Tobacco Use in the United States

    Science.gov (United States)

    Kaplan, Robert M.; Anderson, John P.; Kaplan, Cameron M.

    2007-01-01

    Purpose: To describe the development of a model for estimating the effects of tobacco use upon Quality Adjusted Life Years (QALYs) and to estimate the impact of tobacco use on health outcomes for the United States (US) population using the model. Method: We obtained estimates of tobacco consumption from 6 years of the National Health Interview…

  9. Evaluation of the Stress Adjustment and Adaptation Model among Families Reporting Economic Pressure

    Science.gov (United States)

    Vandsburger, Etty; Biggerstaff, Marilyn A.

    2004-01-01

    This research evaluates the Stress Adjustment and Adaptation Model (double ABCX model), examining the effects of resiliency resources on family functioning when families experience economic pressure. Families (N = 128) with incomes at or below the poverty line from a rural area of a southern state completed measures of perceived economic pressure,…

  10. Development and Validation of Perioperative Risk-Adjustment Models for Hip Fracture Repair, Total Hip Arthroplasty, and Total Knee Arthroplasty.

    Science.gov (United States)

    Schilling, Peter L; Bozic, Kevin J

    2016-01-06

    Comparing outcomes across providers requires risk-adjustment models that account for differences in case mix. The burden of data collection from the clinical record can make risk-adjusted outcomes difficult to measure. The purpose of this study was to develop risk-adjustment models for hip fracture repair (HFR), total hip arthroplasty (THA), and total knee arthroplasty (TKA) that weigh adequacy of risk adjustment against data-collection burden. We used data from the American College of Surgeons National Surgical Quality Improvement Program to create derivation cohorts for HFR (n = 7000), THA (n = 17,336), and TKA (n = 28,661). We developed logistic regression models for each procedure using age, sex, American Society of Anesthesiologists (ASA) physical status classification, comorbidities, laboratory values, and vital signs-based comorbidities as covariates, and validated the models with use of data from 2012. The derivation models' C-statistics for mortality were 80%, 81%, 75%, and 92% and for adverse events were 68%, 68%, 60%, and 70% for HFR, THA, TKA, and combined procedure cohorts. Age, sex, and ASA classification accounted for a large share of the explained variation in mortality (50%, 58%, 70%, and 67%) and adverse events (43%, 45%, 46%, and 68%). For THA and TKA, these three variables were nearly as predictive as models utilizing all covariates. HFR model discrimination improved with the addition of comorbidities and laboratory values; among the important covariates were functional status, low albumin, high creatinine, disseminated cancer, dyspnea, and body mass index. Model performance was similar in validation cohorts. Risk-adjustment models using data from health records demonstrated good discrimination and calibration for HFR, THA, and TKA. It is possible to provide adequate risk adjustment using only the most predictive variables commonly available within the clinical record. This finding helps to inform the trade-off between model performance and data
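The C-statistic used to judge these models is the probability that a randomly chosen case with the event receives a higher predicted risk than a randomly chosen case without it. A minimal sketch with illustrative data (not the study's):

```python
def c_statistic(risks, outcomes):
    """C-statistic (equivalently, area under the ROC curve) computed
    by pairwise comparison of predicted risks; ties count one half."""
    events = [r for r, y in zip(risks, outcomes) if y == 1]
    nonevents = [r for r, y in zip(risks, outcomes) if y == 0]
    pairs = len(events) * len(nonevents)
    concordant = 0.0
    for e in events:
        for n in nonevents:
            if e > n:
                concordant += 1.0
            elif e == n:
                concordant += 0.5
    return concordant / pairs

# Four admissions with model-predicted risks and observed 0/1 outcomes:
print(c_statistic([0.9, 0.8, 0.3, 0.1], [1, 0, 1, 0]))  # 0.75
```

A value of 0.5 is chance-level discrimination; the 80-92% mortality C-statistics reported above indicate good separation of cases by predicted risk.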

  11. Conference Innovations in Derivatives Market : Fixed Income Modeling, Valuation Adjustments, Risk Management, and Regulation

    CERN Document Server

    Grbac, Zorana; Scherer, Matthias; Zagst, Rudi

    2016-01-01

    This book presents 20 peer-reviewed chapters on current aspects of derivatives markets and derivative pricing. The contributions, written by leading researchers in the field as well as experienced authors from the financial industry, present the state of the art in: • Modeling counterparty credit risk: credit valuation adjustment, debit valuation adjustment, funding valuation adjustment, and wrong way risk. • Pricing and hedging in fixed-income markets and multi-curve interest-rate modeling. • Recent developments concerning contingent convertible bonds, the measuring of basis spreads, and the modeling of implied correlations. The recent financial crisis has cast tremendous doubts on the classical view on derivative pricing. Now, counterparty credit risk and liquidity issues are integral aspects of a prudent valuation procedure and the reference interest rates are represented by a multitude of curves according to their different periods and maturities. A panel discussion included in the book (featuring D...

  12. Group-based parent training programmes for improving emotional and behavioural adjustment in young children.

    Science.gov (United States)

    Barlow, Jane; Bergman, Hanna; Kornør, Hege; Wei, Yinghui; Bennett, Cathy

    2016-08-01

    Emotional and behavioural problems in children are common. Research suggests that parenting has an important role to play in helping children to become well-adjusted, and that the first few months and years are especially important. Parenting programmes may have a role to play in improving the emotional and behavioural adjustment of infants and toddlers, and this review examined their effectiveness with parents and carers of young children. 1. To establish whether group-based parenting programmes are effective in improving the emotional and behavioural adjustment of young children (maximum mean age of three years and 11 months); and 2. To assess whether parenting programmes are effective in the primary prevention of emotional and behavioural problems. In July 2015 we searched CENTRAL (the Cochrane Library), Ovid MEDLINE, Embase (Ovid), and 10 other databases. We also searched two trial registers and handsearched reference lists of included studies and relevant systematic reviews. Two reviewers independently assessed the records retrieved by the search. We included randomised controlled trials (RCTs) and quasi-RCTs of group-based parenting programmes that had used at least one standardised instrument to measure emotional and behavioural adjustment in children. One reviewer extracted data and a second reviewer checked the extracted data. We presented the results for each outcome in each study as standardised mean differences (SMDs) with 95% confidence intervals (CIs). Where appropriate, we combined the results in a meta-analysis using a random-effects model. We used the GRADE (Grades of Recommendations, Assessment, Development, and Evaluation) approach to assess the overall quality of the body of evidence for each outcome. 
We identified 22 RCTs and two quasi-RCTs evaluating the effectiveness of group-based parenting programmes in improving the emotional and behavioural adjustment of children aged up to three years and 11 months (maximum mean age three years 11 months

  13. EKF-GPR-Based Fingerprint Renovation for Subset-Based Indoor Localization with Adjusted Cosine Similarity.

    Science.gov (United States)

    Yang, Junhua; Li, Yong; Cheng, Wei; Liu, Yang; Liu, Chenxi

    2018-01-22

    Received Signal Strength Indicator (RSSI) localization using fingerprints has become a prevailing approach for indoor localization. However, the fingerprint-collecting work is repetitive and time-consuming, and after the original fingerprint radio map is built, it is laborious to keep it up to date. In this paper, we describe a Fingerprint Renovation System (FRS) based on crowdsourcing, which avoids the use of manual labour to obtain the up-to-date fingerprint status. An Extended Kalman Filter (EKF) and Gaussian Process Regression (GPR) in FRS are combined to calculate the current state based on the original fingerprint radio map. In this system, a subset-acquisition method is also introduced to reduce the heavy computation caused by too many reference points (RPs). Meanwhile, adjusted cosine similarity (ACS) is employed in the online phase to solve the issue of outliers produced by cosine similarity. Both experiments and analytical simulation in a real Wireless Fidelity (Wi-Fi) environment indicate that our system delivers significant performance improvements. The results show that FRS improves accuracy by 19.6% in the surveyed area compared to the un-renovated radio map. Moreover, the proposed subset algorithm reduces computation.
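One common formulation of adjusted cosine similarity, consistent with the outlier problem described above, centres each RSSI vector on its own mean before taking the cosine, so a constant per-device offset cancels out. The vectors below are illustrative, not the paper's data:

```python
import math

def adjusted_cosine_similarity(a, b):
    """Cosine similarity after subtracting each vector's own mean,
    which suppresses constant RSSI offsets (e.g. device gain
    differences) that plain cosine similarity treats as signal."""
    ma = sum(a) / len(a)
    mb = sum(b) / len(b)
    da = [x - ma for x in a]
    db = [x - mb for x in b]
    num = sum(x * y for x, y in zip(da, db))
    den = math.sqrt(sum(x * x for x in da)) * math.sqrt(sum(y * y for y in db))
    return num / den if den else 0.0

# A constant per-device offset leaves ACS unchanged:
rss = [-60, -70, -55, -80]
offset = [x + 7 for x in rss]
print(adjusted_cosine_similarity(rss, offset))  # ~1.0
```

Plain cosine similarity on raw (negative dBm) vectors would also score the shifted vector highly, but unlike ACS it degrades once offsets vary across access points.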


  15. Using multilevel modeling to assess case-mix adjusters in consumer experience surveys in health care.

    Science.gov (United States)

    Damman, Olga C; Stubbe, Janine H; Hendriks, Michelle; Arah, Onyebuchi A; Spreeuwenberg, Peter; Delnoij, Diana M J; Groenewegen, Peter P

    2009-04-01

    Ratings on the quality of healthcare from the consumer's perspective need to be adjusted for consumer characteristics to ensure fair and accurate comparisons between healthcare providers or health plans. Although multilevel analysis is already considered an appropriate method for analyzing healthcare performance data, it has rarely been used to assess case-mix adjustment of such data. The purpose of this article is to investigate whether multilevel regression analysis is a useful tool to detect case-mix adjusters in consumer assessment of healthcare. We used data on 11,539 consumers from 27 Dutch health plans, which were collected using the Dutch Consumer Quality Index health plan instrument. We conducted multilevel regression analyses of consumers' responses nested within health plans to assess the effects of consumer characteristics on consumer experience. We compared our findings to the results of another methodology: the impact factor approach, which combines the predictive effect of each case-mix variable with its heterogeneity across health plans. Both multilevel regression and impact factor analyses showed that age and education were the most important case-mix adjusters for consumer experience and ratings of health plans. With the exception of age, case-mix adjustment had little impact on the ranking of health plans. On both theoretical and practical grounds, multilevel modeling is useful for adequate case-mix adjustment and analysis of performance ratings.

  16. A Newly Designed Fiber-Optic Based Earth Pressure Transducer with Adjustable Measurement Range

    Directory of Open Access Journals (Sweden)

    Hou-Zhen Wei

    2018-03-01

    Full Text Available A novel fiber-optic based earth pressure sensor (FPS) with an adjustable measurement range and high sensitivity is developed to measure earth pressures for civil infrastructures. The new FPS combines a cantilever beam with fiber Bragg grating (FBG) sensors and a flexible membrane. Compared with a traditional pressure transducer with a dual diaphragm design, the proposed FPS has a larger measurement range and shows high accuracy. The working principles, parameter design, fabrication methods, and laboratory calibration tests are explained in this paper. A theoretical solution is derived to obtain the relationship between the applied pressure and the strain of the FBG sensors. In addition, a finite element model is established to analyze the mechanical behavior of the membrane and the cantilever beam and thereby obtain optimal parameters. The cantilever beam is 40 mm long, 15 mm wide, and 1 mm thick. The whole FPS has a diameter of 100 mm and a thickness of 30 mm. The sensitivity of the FPS is 0.104 kPa/με. In addition, automatic temperature compensation can be achieved. The FPS’s sensitivity, physical properties, and response to applied pressure are extensively examined through modeling and experiments. The results show that the proposed FPS has numerous potential applications in soil pressure measurement.

  17. On adjustment for auxiliary covariates in additive hazard models for the analysis of randomized experiments

    DEFF Research Database (Denmark)

    Vansteelandt, S.; Martinussen, Torben; Tchetgen, E. J Tchetgen

    2014-01-01

    We consider additive hazard models (Aalen, 1989) for the effect of a randomized treatment on a survival outcome, adjusting for auxiliary baseline covariates. We demonstrate that the Aalen least-squares estimator of the treatment effect parameter is asymptotically unbiased, even when the hazard's dependence on time or on the auxiliary covariates is misspecified, and even away from the null hypothesis of no treatment effect. We furthermore show that adjustment for auxiliary baseline covariates does not change the asymptotic variance of the estimator of the effect of a randomized treatment. We conclude that, in view of its robustness against model misspecification, Aalen least-squares estimation is attractive for evaluating treatment effects on a survival outcome in randomized experiments, and the primary reasons to consider baseline covariate adjustment in such settings could be interest in subgroup...

  18. Variational assimilation of streamflow into operational distributed hydrologic models: effect of spatiotemporal adjustment scale

    Science.gov (United States)

    Lee, H.; Seo, D.-J.; Liu, Y.; Koren, V.; McKee, P.; Corby, R.

    2012-01-01

    State updating of distributed rainfall-runoff models via streamflow assimilation is subject to overfitting because large dimensionality of the state space of the model may render the assimilation problem seriously under-determined. To examine the issue in the context of operational hydrology, we carry out a set of real-world experiments in which streamflow data is assimilated into gridded Sacramento Soil Moisture Accounting (SAC-SMA) and kinematic-wave routing models of the US National Weather Service (NWS) Research Distributed Hydrologic Model (RDHM) with the variational data assimilation technique. Study basins include four basins in Oklahoma and five basins in Texas. To assess the sensitivity of data assimilation performance to dimensionality reduction in the control vector, we used nine different spatiotemporal adjustment scales, where state variables are adjusted in a lumped, semi-distributed, or distributed fashion and biases in precipitation and potential evaporation (PE) are adjusted hourly, 6-hourly, or kept time-invariant. For each adjustment scale, three different streamflow assimilation scenarios are explored, where streamflow observations at basin interior points, at the basin outlet, or at both interior points and the outlet are assimilated. The streamflow assimilation experiments with nine different basins show that the optimum spatiotemporal adjustment scale varies from one basin to another and may be different for streamflow analysis and prediction in all of the three streamflow assimilation scenarios. The most preferred adjustment scale for seven out of nine basins is found to be the distributed, hourly scale, despite the fact that several independent validation results at this adjustment scale indicated the occurrence of overfitting. Basins with highly correlated interior and outlet flows tend to be less sensitive to the adjustment scale and could benefit more from streamflow assimilation. 
In comparison to outlet flow assimilation, interior flow

  19. Direct risk standardisation: a new method for comparing casemix adjusted event rates using complex models.

    Science.gov (United States)

    Nicholl, Jon; Jacques, Richard M; Campbell, Michael J

    2013-10-29

    Comparison of outcomes between populations or centres may be confounded by any casemix differences and standardisation is carried out to avoid this. However, when the casemix adjustment models are large and complex, direct standardisation has been described as "practically impossible", and indirect standardisation may lead to unfair comparisons. We propose a new method of directly standardising for risk rather than standardising for casemix which overcomes these problems. Using a casemix model which is the same model as would be used in indirect standardisation, the risk in individuals is estimated. Risk categories are defined, and event rates in each category for each centre to be compared are calculated. A weighted sum of the risk category specific event rates is then calculated. We have illustrated this method using data on 6 million admissions to 146 hospitals in England in 2007/8 and an existing model with over 5000 casemix combinations, and a second dataset of 18,668 adult emergency admissions to 9 centres in the UK and overseas and a published model with over 20,000 casemix combinations and a continuous covariate. Substantial differences between conventional directly casemix standardised rates and rates from direct risk standardisation (DRS) were found. Results based on DRS were very similar to Standardised Mortality Ratios (SMRs) obtained from indirect standardisation, with similar standard errors. Direct risk standardisation using our proposed method is as straightforward as using conventional direct or indirect standardisation, always enables fair comparisons of performance to be made, can use continuous casemix covariates, and was found in our examples to have similar standard errors to the SMR. It should be preferred when there is a risk that conventional direct or indirect standardisation will lead to unfair comparisons.
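The proposed procedure — estimate each admission's risk with the casemix model, bin admissions into risk categories, then take a weighted sum of the category-specific event rates — can be sketched as follows; the bin edges and reference weights below are illustrative assumptions, not the paper's:

```python
def direct_risk_standardised_rate(risks, events, bin_edges, weights):
    """Direct risk standardisation: weighted sum of risk-category-
    specific event rates for one centre.

    risks: model-predicted risk per admission (0..1)
    events: observed 0/1 outcomes
    bin_edges: ascending right edges of the risk categories; the last
               edge must cover the largest risk
    weights: reference-population share of each category (sums to 1)
    """
    k = len(bin_edges)
    counts = [0] * k
    event_counts = [0] * k
    for r, y in zip(risks, events):
        i = next(j for j, edge in enumerate(bin_edges) if r <= edge)
        counts[i] += 1
        event_counts[i] += y
    rate = 0.0
    for j in range(k):
        if counts[j]:
            rate += weights[j] * event_counts[j] / counts[j]
    return rate

# Two risk categories (<=0.1 and <=1.0), reference weights 0.8 / 0.2:
r = direct_risk_standardised_rate([0.05, 0.05, 0.5, 0.5],
                                  [0, 1, 1, 1],
                                  [0.1, 1.0],
                                  [0.8, 0.2])
print(r)  # ~0.6
```

Because every centre's category rates are combined with the same reference weights, centres are compared on a common risk distribution regardless of how complex the underlying casemix model is.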

  20. Modeling and Predicting the EUR/USD Exchange Rate: The Role of Nonlinear Adjustments to Purchasing Power Parity

    OpenAIRE

    Jesús Crespo Cuaresma; Anna Orthofer

    2010-01-01

    Reliable medium-term forecasts are essential for forward-looking monetary policy decision-making. Traditionally, predictions of the exchange rate tend to be linked to the equilibrium concept implied by the purchasing power parity (PPP) theory. In particular, the traditional benchmark for exchange rate models is based on a linear adjustment of the exchange rate to the level implied by PPP. In the presence of aggregation effects, transaction costs or uncertainty, however, economic theory predicts...

  1. Method Based on Confidence Radius to Adjust the Location of Mobile Terminals

    DEFF Research Database (Denmark)

    García-Fernández, Juan Antonio; Jurado-Navas, Antonio; Fernández-Navarro, Mariano

    2017-01-01

    The present paper details a technique for intelligently adjusting the position estimates of any user equipment given by different geolocation/positioning methods in a wireless radiofrequency communication network based on different strategies (observed time difference of arrival, angle of ar...

  2. Prior use of durable medical equipment as a risk adjuster for health-based capitation

    NARCIS (Netherlands)

    R.C. van Kleef (Richard); R.C.J.A. van Vliet (René)

    2010-01-01

    textabstractThis paper examines a new risk adjuster for capitation payments to Dutch health plans, based on the prior use of durable medical equipment (DME). The essence is to classify users of DME in a previous year into clinically homogeneous classes and to apply the resulting classification as a

  3. Adjustments of microwave-based measurements on coal moisture using natural radioactivity techniques

    Energy Technology Data Exchange (ETDEWEB)

    Prieto-Fernandez, I.; Luengo-Garcia, J.C.; Alonso-Hidalgo, M.; Folgueras-Diaz, B. [University of Oviedo, Gijon (Spain)

    2006-01-07

    The use of nonconventional on-line measurements of moisture and ash content in coal is presented. The background research is briefly reviewed. The possibilities of adjusting microwave-based moisture measurements using natural radioactive techniques, and vice versa, are proposed. The results obtained from the simultaneous analysis of moisture and ash content as well as the correlation improvements are shown.

  4. A Threshold Model of Social Support, Adjustment, and Distress after Breast Cancer Treatment

    Science.gov (United States)

    Mallinckrodt, Brent; Armer, Jane M.; Heppner, P. Paul

    2012-01-01

    This study examined a threshold model that proposes that social support exhibits a curvilinear association with adjustment and distress, such that support in excess of a critical threshold level has decreasing incremental benefits. Women diagnosed with a first occurrence of breast cancer (N = 154) completed survey measures of perceived support…

  5. Development of a model for case-mix adjustment of pressure ulcer prevalence rates.

    NARCIS (Netherlands)

    Bours, G.J.J.W.; Halfens, J.; Berger, M.P.; Abu-Saad, H.H.; Grol, R.P.T.M.

    2003-01-01

    BACKGROUND: Acute care hospitals participating in the Dutch national pressure ulcer prevalence survey use the results of this survey to compare their outcomes and assess their quality of care regarding pressure ulcer prevention. The development of a model for case-mix adjustment is essential for the

  6. 10 km running performance predicted by a multiple linear regression model with allometrically adjusted variables.

    Science.gov (United States)

    Abad, Cesar C C; Barros, Ronaldo V; Bertuzzi, Romulo; Gagliardi, João F L; Lima-Silva, Adriano E; Lambert, Mike I; Pires, Flavio O

    2016-06-01

    The aim of this study was to verify the power of VO2max, peak treadmill running velocity (PTV), and running economy (RE), unadjusted or allometrically adjusted, in predicting 10 km running performance. Eighteen male endurance runners performed: 1) an incremental test to exhaustion to determine VO2max and PTV; 2) a constant submaximal run at 12 km·h-1 on an outdoor track for RE determination; and 3) a 10 km running race. Unadjusted (VO2max, PTV and RE) and adjusted variables (VO2max^0.72, PTV^0.72 and RE^0.60) were investigated through independent multiple regression models to predict 10 km running race time. There were no significant correlations between 10 km running time and either the adjusted or unadjusted VO2max. Significant correlations (p < 0.05) were found for the remaining variables (r > 0.84, power > 0.88). The allometrically adjusted predictive model was composed of PTV^0.72 and RE^0.60 and explained 83% of the variance in 10 km running time with a standard error of the estimate (SEE) of 1.5 min. The unadjusted model, composed of PTV alone, accounted for 72% of the variance in 10 km running time (SEE of 1.9 min). Both regression models provided powerful estimates of 10 km running time; however, the unadjusted PTV may provide an uncomplicated estimation.
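Allometric adjustment expresses an absolute physiological variable relative to body mass raised to a fractional exponent rather than to mass itself. A minimal sketch, assuming the 0.72 exponent above is applied to body mass scaling (the numbers are illustrative, not the study's):

```python
def allometric_adjust(value_absolute, body_mass_kg, exponent):
    """Express an absolute variable per kg of body mass raised to an
    allometric exponent, e.g. VO2max in mL·min^-1 per kg^0.72, which
    avoids penalising heavier athletes as much as simple per-kg
    ratio scaling does."""
    return value_absolute / body_mass_kg ** exponent

# A 70 kg runner with an absolute VO2max of 4200 mL/min:
vo2max_ratio = allometric_adjust(4200, 70, 1.0)    # 60.0 mL·kg^-1·min^-1
vo2max_allo = allometric_adjust(4200, 70, 0.72)    # ~197 mL·kg^-0.72·min^-1
```

The same transformation applied with the exponents 0.72 and 0.60 to PTV and RE yields the adjusted predictors used in the regression models above.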

  7. Towards an Integrated Conceptual Model of International Student Adjustment and Adaptation

    Science.gov (United States)

    Schartner, Alina; Young, Tony Johnstone

    2016-01-01

    Despite a burgeoning body of empirical research on "the international student experience", the area remains under-theorized. The literature to date lacks a guiding conceptual model that captures the adjustment and adaptation trajectories of this unique, growing, and important sojourner group. In this paper, we therefore put forward a…

  8. Testing an Attachment Model of Latina/o College Students' Psychological Adjustment

    Science.gov (United States)

    Garriott, Patton O.; Love, Keisha M.; Tyler, Kenneth M.; Thomas, Deneia M.; Roan-Belle, Clarissa R.; Brown, Carrie L.

    2010-01-01

    The present study examined the influence of attachment relationships on the psychological adjustment of Latina/o university students (N = 80) attending predominantly White institutions of higher education. A path analysis conducted to test a hypothesized model of parent and peer attachment, self-esteem, and psychological distress indicated that…

  9. Health economic modeling of the potential cost saving effects of Neurally Adjusted Ventilatory Assist.

    Science.gov (United States)

    Hjelmgren, Jonas; Bruce Wirta, Sara; Huetson, Pernilla; Myrén, Karl-Johan; Göthberg, Sylvia

    2016-02-01

    Asynchrony between patient and ventilator breaths is associated with increased duration of mechanical ventilation (MV). Neurally Adjusted Ventilatory Assist (NAVA) controls MV through an esophageal reading of diaphragm electrical activity via a nasogastric tube mounted with electrode rings. NAVA has been shown to decrease asynchrony in comparison to pressure support ventilation (PSV). The objective of this study was to conduct a health economic evaluation of NAVA compared with PSV. We developed a model based on an indirect link between improved synchrony with NAVA versus PSV and fewer days spent on MV in synchronous patients. Unit costs for MV were obtained from the Swedish intensive care unit register, and used in the model along with NAVA-specific costs. The importance of each parameter (proportion of asynchronous patients, costs, and average MV duration) for the overall results was evaluated through sensitivity analyses. Base case results showed that 21% of patients ventilated with NAVA were asynchronous versus 52% of patients receiving PSV. This equals an absolute difference of 31% and an average of 1.7 days less on MV and a total cost saving of US$7886 (including NAVA catheter costs). A breakeven analysis suggested that NAVA was cost effective compared with PSV given an absolute difference in the proportion of asynchronous patients greater than 2.5% (49.5% versus 52% asynchronous patients with NAVA and PSV, respectively). The base case results were stable to changes in parameters, such as difference in asynchrony, duration of ventilation and daily intensive care unit costs. This study showed economically favorable results for NAVA versus PSV. Our results show that only a minor decrease in the proportion of asynchronous patients with NAVA is needed for investments to pay off and generate savings. Future studies need to confirm this result by directly relating improved synchrony to the number of days on MV. © The Author(s), 2015.
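The model's core arithmetic can be sketched as follows: the absolute reduction in the proportion of asynchronous patients translates into fewer average days on MV, which is priced at a daily ICU/MV cost net of the NAVA catheter cost. The function name and every numeric input except the 52%/21% asynchrony proportions are assumed values for this sketch, not the study's figures.

```python
# Per-patient saving sketch: extra synchronous patients * days saved
# if synchronous * daily cost of mechanical ventilation, minus the
# NAVA-specific cost. days_saved_if_sync, daily_mv_cost and nava_cost
# are illustrative assumptions.
def nava_saving(p_async_psv, p_async_nava, days_saved_if_sync,
                daily_mv_cost, nava_cost):
    extra_sync = p_async_psv - p_async_nava      # absolute difference
    avg_days_saved = extra_sync * days_saved_if_sync
    return avg_days_saved * daily_mv_cost - nava_cost

# Base-case-style asynchrony proportions (52% PSV vs 21% NAVA).
saving = nava_saving(0.52, 0.21, days_saved_if_sync=5.5,
                     daily_mv_cost=5000.0, nava_cost=300.0)
print(f"saving per patient: ${saving:,.0f}")
```

Setting the saving to zero and solving for the asynchrony difference gives the breakeven point the abstract describes (here nava_cost / (days_saved_if_sync * daily_mv_cost)).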

  10. Rational Multi-curve Models with Counterparty-risk Valuation Adjustments

    DEFF Research Database (Denmark)

    Crépey, Stéphane; Macrina, Andrea; Nguyen, Tuyet Mai

    2016-01-01

    We develop a multi-curve term structure set-up in which the modelling ingredients are expressed by rational functionals of Markov processes. We calibrate to London Interbank Offer Rate swaptions data and show that a rational two-factor log-normal multi-curve model is sufficient to match market da...... with regulatory obligations. In order to compute counterparty-risk valuation adjustments, such as credit valuation adjustment, we show how default intensity processes with rational form can be derived. We flesh out our study by applying the results to a basis swap contract....... with accuracy. We elucidate the relationship between the models developed and calibrated under a risk-neutral measure Q and their consistent equivalence class under the real-world probability measure P. The consistent P-pricing models are applied to compute the risk exposures which may be required to comply...

  11. Data analysis-based autonomic bandwidth adjustment in software defined multi-vendor optical transport networks.

    Science.gov (United States)

    Li, Yajie; Zhao, Yongli; Zhang, Jie; Yu, Xiaosong; Jing, Ruiquan

    2017-11-27

    Network operators generally provide dedicated lightpaths for customers to meet the demand for high-quality transmission. Considering the variation of traffic load, customers usually rent peak bandwidth that exceeds their practical average traffic requirement. In this case, bandwidth provisioning is unmetered and customers have to pay according to peak bandwidth. Supposing that network operators could keep track of traffic load and allocate bandwidth dynamically, bandwidth could be provided as a metered service and customers would pay for the bandwidth that they actually use. To achieve cost-effective bandwidth provisioning, this paper proposes an autonomic bandwidth adjustment scheme based on data analysis of the traffic load. The scheme is implemented in a software defined networking (SDN) controller and is demonstrated in a field trial of multi-vendor optical transport networks. The field trial shows that the proposed scheme can track the traffic load and realize autonomic bandwidth adjustment. In addition, a simulation experiment is conducted to evaluate the performance of the proposed scheme. We also investigate the impact of different parameters on autonomic bandwidth adjustment. Simulation results show that the step size and adjustment period have significant influences on bandwidth savings and packet loss. A small step size and adjustment period can bring more benefits by tracking traffic variation with high accuracy. For network operators, the scheme can serve as technical support for realizing bandwidth as a metered service in the future.
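The role of the step size and adjustment period can be illustrated with a toy control loop: once per period, the allocated bandwidth moves one step toward the measured load plus a headroom margin (the margin limits packet loss while scaling down saves bandwidth). The function, parameter names, and values are all assumptions for this sketch, not the paper's algorithm.

```python
# One adjustment period: move `allocated` one `step` toward the
# measured traffic plus headroom. A small step tracks traffic more
# accurately; a large step over- or under-shoots.
def adjust_bandwidth(allocated, measured, step, headroom=1.1):
    target = measured * headroom          # margin above current load
    if allocated < target:
        return allocated + step           # scale up toward the target
    if allocated - step >= target:
        return allocated - step           # scale down, saving bandwidth
    return allocated                      # within one step of target

# Simulated traffic samples (Gb/s), one per adjustment period.
traffic = [2.0, 2.4, 3.1, 2.8, 1.9, 1.5]
alloc = 4.0
for load in traffic:
    alloc = adjust_bandwidth(alloc, load, step=0.5)
    print(f"load={load:.1f}  allocated={alloc:.1f}")
```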

  12. Energy-Saving Performance of Flap-Adjustment-Based Centrifugal Fan

    Directory of Open Access Journals (Sweden)

    Genglin Chen

    2018-01-01

    The current paper mainly focuses on finding a more appropriate way to enhance fan performance at off-design conditions. The centrifugal fan (CF) based on flap adjustment (FA) has been investigated through theoretical, experimental, and finite element methods. To identify the more advantageous of the two adjustments, we carried out a comparative analysis of FA and leading-adjustment (LA) in terms of aerodynamic performance, including the adjusted blade angle, total pressure, efficiency, system efficiency, adjustment efficiency, and energy-saving rate. The contribution of this paper is the integrated performance curve of the CF. The results showed that the effects of FA and LA on the economic performance and energy savings of the fan varied with the blade angles. FA was feasible and more sensitive than LA. Moreover, the CF with FA offered a wider flow range of high economic characteristics than LA. Finally, as the operating flow range extends, the energy-saving rate of the fan with FA improves.

  13. Adjustment modes in the trajectory of progressive multiple sclerosis: a qualitative study and conceptual model.

    Science.gov (United States)

    Bogosian, Angeliki; Morgan, Myfanwy; Bishop, Felicity L; Day, Fern; Moss-Morris, Rona

    2017-03-01

    We examined cognitive and behavioural challenges and adaptations for people with progressive multiple sclerosis (MS) and developed a preliminary conceptual model of changes in adjustment over time. Using theoretical sampling, 34 semi-structured interviews were conducted with people with MS. Participants were between 41 and 77 years of age. Thirteen were diagnosed with primary progressive MS and 21 with secondary progressive MS. Data were analysed using a grounded theory approach. Participants described initially bracketing the illness off and carrying on their usual activities, but this became problematic as the condition progressed, and they employed different adjustment modes to cope with increased disabilities. Some scaled back their activities to live a more comfortable life, others identified new activities or adapted old ones, whereas at times people disengaged from the adjustment process altogether and resigned themselves to their condition. Relationships with partners, emotional reactions, the environment and perceptions of the environment influenced adjustment, while people were often flexible and shifted among modes. Adjusting to a progressive condition is a fluid process. Future interventions can be tailored to address modifiable factors at different stages of the condition and may involve addressing emotional reactions, concealing/revealing the condition, and perceptions of the environment.

  14. POSITIONING BASED ON INTEGRATION OF MULTI-SENSOR SYSTEMS USING KALMAN FILTER AND LEAST SQUARE ADJUSTMENT

    Directory of Open Access Journals (Sweden)

    M. Omidalizarandi

    2013-09-01

    Sensor fusion combines data from different sources in order to build a more accurate model. In this research, different sensors (optical speed sensor, Bosch sensor, odometer, XSENS, Silicon and GPS receiver) have been utilized to obtain different kinds of datasets, to implement the multi-sensor system and to compare the accuracy of each sensor with the others. The scope of this research is to estimate the current position and orientation of the van. The van's position can also be estimated by integrating its velocity and direction over time. To make these components work together, an interface is needed that can bridge them in a data acquisition module. The interface in this research has been developed in the LabVIEW software environment. Data have been transferred to the PC via an A/D convertor (LabJack). In order to synchronize all the sensors, the calibration parameters of each sensor are determined in a preparatory step. Each sensor delivers results in a sensor-specific coordinate system, with a different location on the object, a different definition of the coordinate axes, and different dimensions and units. Different test scenarios (straight-line approach and circle approach) with different algorithms (Kalman filter, least squares adjustment) have been examined and the results of the different approaches are compared.
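The Kalman filter mentioned in the title can be sketched in one dimension: a constant-velocity state (position, velocity) is predicted forward each step and corrected with a noisy position measurement, the kind of reading an odometer or GPS receiver provides. The matrices, noise levels, and measurement values are illustrative assumptions, not the paper's calibration.

```python
import numpy as np

# 1-D constant-velocity Kalman filter over noisy position measurements.
dt = 1.0
F = np.array([[1.0, dt], [0.0, 1.0]])   # state transition (pos, vel)
H = np.array([[1.0, 0.0]])              # we observe position only
Q = 0.01 * np.eye(2)                    # process noise covariance
R = np.array([[0.25]])                  # measurement noise covariance

x = np.array([0.0, 0.0])                # initial state estimate
P = np.eye(2)                           # initial state covariance

measurements = [1.1, 2.0, 2.9, 4.2, 5.0]  # noisy positions of the van
for z in measurements:
    # Predict step
    x = F @ x
    P = F @ P @ F.T + Q
    # Update step
    y = z - H @ x                       # innovation
    S = H @ P @ H.T + R                 # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)      # Kalman gain
    x = x + (K @ y)
    P = (np.eye(2) - K @ H) @ P

print(f"estimated position={x[0]:.2f}, velocity={x[1]:.2f}")
```

The filter recovers both position and velocity from position-only measurements, which is why it suits dead-reckoning-style fusion of speed and GPS data.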

  15. An adjustable multi-scale single beam acoustic tweezers based on ultrahigh frequency ultrasonic transducer.

    Science.gov (United States)

    Chen, Xiaoyang; Lam, Kwok Ho; Chen, Ruimin; Chen, Zeyu; Yu, Ping; Chen, Zhongping; Shung, K Kirk; Zhou, Qifa

    2017-11-01

    This paper reports the fabrication, characterization, and microparticle manipulation capability of an adjustable multi-scale single beam acoustic tweezers (SBAT) that can flexibly change the size of its "tweezers", like ordinary metal tweezers, with a single-element ultrahigh frequency (UHF) ultrasonic transducer. The measured resonant frequency of the developed transducer, 526 MHz, is the highest reported for piezoelectric single crystal based ultrasonic transducers. This focused UHF ultrasonic transducer exhibits a wide bandwidth (95.5% at -10 dB) due to the high attenuation of high-frequency ultrasound waves, which allows the SBAT to be excited effectively over a wide range of excitation frequencies, from 150 to 400 MHz, using the "piezoelectric actuator" model. By controlling the excitation frequency, the wavelength of the ultrasound emitted from the SBAT can be changed to selectively manipulate single microparticles of different sizes (3-100 μm) using only one transducer. This concept of flexibly changing the "tweezers" size is introduced into the study of SBATs for the first time. At the same time, it was found that the incident ultrasound wavelength plays an important role in the lateral trapping and manipulation of microparticles of different sizes. Biotechnol. Bioeng. 2017;114: 2637-2647. © 2017 Wiley Periodicals, Inc.

  16. Evolution Scenarios at the Romanian Economy Level, Using the R.M. Solow Adjusted Model

    Directory of Open Access Journals (Sweden)

    Stelian Stancu

    2008-06-01

    Besides the models of M. Keynes, R.F. Harrod, E. Domar, D. Romer, the Ramsey-Cass-Koopmans model, etc., the R.M. Solow model belongs to the category of models characterizing economic growth. The paper presents the adjusted R.M. Solow model with specific simulation characteristics and an economic growth scenario. Considering these aspects, the values obtained at the economy level from the simulations are presented: the ratio of capital to output volume, output volume per employee (equal to the current labour efficiency), as well as the labour efficiency value.

  17. Online dynamic equalization adjustment of high-power lithium-ion battery packs based on the state of balance estimation

    International Nuclear Information System (INIS)

    Wang, Shunli; Shang, Liping; Li, Zhanfeng; Deng, Hu; Li, Jianchao

    2016-01-01

    Highlights: • A novel concept (SOB, State of Balance) is proposed for the LIB pack equalization. • Core parameter detection and filtering is analyzed to identify the LIB pack behavior. • The electrical UKF model is adopted for the online dynamic estimation. • The equalization target model is built based on the optimum preference. • Comprehensive imbalance state calculation is implemented for the adjustment. - Abstract: A novel concept named state of balance (SOB) is proposed, and its online dynamic estimation method is presented for high-power lithium-ion battery (LIB) packs. Based on this, online dynamic equalization adjustment is realized, aiming to protect the operational safety of the power supply application. A core parameter detection method based on a specific moving-average algorithm is studied, because manufacturing variability and other factors give the individual cells non-identical characteristics that affect the performance of the high-power LIB pack. The SOB estimation method is derived in detail; a dual filter consisting of an unscented Kalman filter (UKF), an equivalent circuit model (ECM) and open circuit voltage (OCV) is used to predict the SOB state. This benefits energy operation, as the energy performance state can be evaluated online prior to the adjustment method based on terminal voltage consistency. Energy equalization is realized based on credibility reasoning together with the equalization model building process. Experiments covering core parameter detection, SOB estimation and equalization adjustment were carried out and the results analyzed. The experimental results show that the Coulomb efficiency is greater than 95%. The cell voltage measurement error is less than 5 mV and the terminal voltage measurement error of the LIB pack is less than 1% FS. The measurement error of the battery discharge and charge

  18. Directional variance adjustment: bias reduction in covariance matrices based on factor analysis with an application to portfolio optimization.

    Directory of Open Access Journals (Sweden)

    Daniel Bartz

    Robust and reliable covariance estimates play a decisive role in financial and many other applications. An important class of estimators is based on factor models. Here, we show by extensive Monte Carlo simulations that covariance matrices derived from the statistical Factor Analysis model exhibit a systematic error, which is similar to the well-known systematic error of the spectrum of the sample covariance matrix. Moreover, we introduce the Directional Variance Adjustment (DVA) algorithm, which diminishes the systematic error. In a thorough empirical study for the US, European, and Hong Kong stock market we show that our proposed method leads to improved portfolio allocation.

  19. Directional variance adjustment: bias reduction in covariance matrices based on factor analysis with an application to portfolio optimization.

    Science.gov (United States)

    Bartz, Daniel; Hatrick, Kerr; Hesse, Christian W; Müller, Klaus-Robert; Lemm, Steven

    2013-01-01

    Robust and reliable covariance estimates play a decisive role in financial and many other applications. An important class of estimators is based on factor models. Here, we show by extensive Monte Carlo simulations that covariance matrices derived from the statistical Factor Analysis model exhibit a systematic error, which is similar to the well-known systematic error of the spectrum of the sample covariance matrix. Moreover, we introduce the Directional Variance Adjustment (DVA) algorithm, which diminishes the systematic error. In a thorough empirical study for the US, European, and Hong Kong stock market we show that our proposed method leads to improved portfolio allocation.

  20. Directional Variance Adjustment: Bias Reduction in Covariance Matrices Based on Factor Analysis with an Application to Portfolio Optimization

    Science.gov (United States)

    Bartz, Daniel; Hatrick, Kerr; Hesse, Christian W.; Müller, Klaus-Robert; Lemm, Steven

    2013-01-01

    Robust and reliable covariance estimates play a decisive role in financial and many other applications. An important class of estimators is based on factor models. Here, we show by extensive Monte Carlo simulations that covariance matrices derived from the statistical Factor Analysis model exhibit a systematic error, which is similar to the well-known systematic error of the spectrum of the sample covariance matrix. Moreover, we introduce the Directional Variance Adjustment (DVA) algorithm, which diminishes the systematic error. In a thorough empirical study for the US, European, and Hong Kong stock market we show that our proposed method leads to improved portfolio allocation. PMID:23844016
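The factor-model covariance estimator these records build on can be sketched generically: estimate loadings from the top-k eigenvectors of the sample covariance and put the residual (idiosyncratic) variance on the diagonal. The DVA bias correction itself is the papers' contribution and is not reproduced here; the data and parameters below are synthetic.

```python
import numpy as np

# Synthetic returns driven by k common factors plus noise.
rng = np.random.default_rng(0)
n_obs, n_assets, k = 200, 10, 3
B_true = rng.normal(size=(n_assets, k))
returns = (rng.normal(size=(n_obs, k)) @ B_true.T
           + 0.5 * rng.normal(size=(n_obs, n_assets)))

S = np.cov(returns, rowvar=False)            # sample covariance
vals, vecs = np.linalg.eigh(S)               # eigenvalues ascending
top = vecs[:, -k:] * np.sqrt(vals[-k:])      # PCA-style loading estimate
common = top @ top.T                         # common-factor part
psi = np.diag(np.diag(S - common))           # idiosyncratic variances
sigma_fa = common + psi                      # factor-model covariance

# By construction the structured estimate matches S on the diagonal,
# while off-diagonal entries are regularized through the k factors.
print(np.allclose(np.diag(sigma_fa), np.diag(S)))
```

The systematic error the papers analyze lives in estimators of exactly this shape, which is why a directional variance correction can improve downstream portfolio allocation.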

  1. Multidirectional flexible force sensors based on confined, self-adjusting carbon nanotube arrays

    Science.gov (United States)

    Lee, J.-I.; Pyo, Soonjae; Kim, Min-Ook; Kim, Jongbaeg

    2018-02-01

    We demonstrate a highly sensitive force sensor based on self-adjusting carbon nanotube (CNT) arrays. Aligned CNT arrays are directly synthesized on silicon microstructures by a space-confined growth technique, which enables facile self-adjusting contact. To afford flexibility and softness, the patterned microstructures with the integrated CNTs are embedded in polydimethylsiloxane structures. The sensing mechanism is based on variations in the contact resistance between the facing CNT arrays under the applied force. By finite element analysis, proper dimensions and positions for each component are determined. Further, high sensitivities of the proposed sensors, up to 15.05%/mN, were confirmed experimentally. Multidirectional sensing capability could also be achieved by designing multiple sets of sensing elements in a single sensor. The sensors show long-term operational stability, owing to the unique properties of the constituent CNTs, such as outstanding mechanical durability and elasticity.

  2. Adjustable Parameter-Based Distributed Fault Estimation Observer Design for Multiagent Systems With Directed Graphs.

    Science.gov (United States)

    Zhang, Ke; Jiang, Bin; Shi, Peng

    2017-02-01

    In this paper, a novel adjustable parameter (AP)-based distributed fault estimation observer (DFEO) is proposed for multiagent systems (MASs) with a directed communication topology. First, a relative output estimation error is defined based on the communication topology of the MASs. Then a DFEO with an AP is constructed with the purpose of improving the accuracy of fault estimation. Based on H∞ and H2 performance with pole placement, a multiconstrained design is given to calculate the gain of the DFEO. Finally, simulation results are presented to illustrate the feasibility and effectiveness of the proposed DFEO design with an AP.

  3. Estimation of emission adjustments from the application of four-dimensional data assimilation to photochemical air quality modeling

    International Nuclear Information System (INIS)

    Mendoza-Dominguez, A.; Russell, A.G.

    2001-01-01

    Four-dimensional data assimilation applied to photochemical air quality modeling is used to suggest adjustments to the emissions inventory of the Atlanta, Georgia metropolitan area. In this approach, a three-dimensional air quality model, coupled with direct sensitivity analysis, develops spatially and temporally varying concentration and sensitivity fields that account for chemical and physical processing, and receptor analysis is used to adjust source strengths. Proposed changes to domain-wide NOx, volatile organic compound (VOC) and CO emissions from anthropogenic sources and to VOC emissions from biogenic sources were estimated, as well as modifications to sources based on their spatial location (urban vs. rural areas). In general, domain-wide anthropogenic VOC emissions were increased to approximately twice their base case level to best match observations, domain-wide anthropogenic NOx and biogenic VOC emissions (BEIS2 estimates) remained close to their base case values, and domain-wide CO emissions were decreased. Adjustments for anthropogenic NOx emissions increased their level of uncertainty when adjustments were computed for mobile and area sources (or urban and rural sources) separately, due in part to the poor spatial resolution of the observation field of nitrogen-containing species. Estimated changes to CO emissions also suffer from the poor spatial resolution of the measurements. Results suggest that rural anthropogenic VOC emissions appear to be severely underpredicted. The FDDA approach was also used to investigate the speciation profiles of VOC emissions, and the results warrant revision of these profiles. In general, the results obtained here are consistent with what are viewed as current deficiencies in emissions inventories as derived by other top-down techniques, such as tunnel studies and analysis of ambient measurements.

  4. Parametric Adjustments to the Rankine Vortex Wind Model for Gulf of Mexico Hurricanes

    Science.gov (United States)

    2012-11-01

    ...may be used to construct spatially varying wind fields for the GOM region (e.g., Thompson and Cardone [12]), but this requires using a complicated... Storm Damage Reduction, and Dredging Operations and Environmental Research (DOER). The USACE Headquarters granted permission to publish this paper.
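The classical Rankine vortex wind profile that the report's title refers to can be sketched directly: winds grow linearly with radius inside the radius of maximum winds (Rmax) and decay as (Rmax/r)^x outside it. The decay exponent x is the kind of parameter such reports tune for Gulf of Mexico hurricanes; the storm values below are hypothetical.

```python
# Rankine vortex tangential wind profile.
# v_max: maximum wind speed, r_max: radius of maximum winds,
# x: outer decay exponent (x = 0.5 is only an illustrative value).
def rankine_wind(r, v_max, r_max, x=0.5):
    if r <= 0:
        return 0.0
    if r <= r_max:
        return v_max * (r / r_max)        # solid-body rotation core
    return v_max * (r_max / r) ** x       # outer decay region

v_max, r_max = 50.0, 40.0                 # m/s, km (hypothetical storm)
for r in (10, 40, 80, 160):
    print(f"r={r:4d} km  V={rankine_wind(r, v_max, r_max):5.1f} m/s")
```

Adjusting x (and possibly making it azimuthally varying) changes how quickly winds fall off outside the eyewall, which is the lever for fitting observed hurricane wind fields.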

  5. Conceptual Model for Simulating the Adjustments of Bankfull Characteristics in the Lower Yellow River, China

    Directory of Open Access Journals (Sweden)

    Yuanjian Wang

    2014-01-01

    We present a conceptual model for simulating the temporal adjustments in the banks of the Lower Yellow River (LYR). Basic conservation equations for mass, friction, and sediment transport capacity and the Exner equation were adopted to simulate the hydrodynamics underlying fluvial processes. The relationship between the rates of change in bankfull width and depth, derived from quasi-universal hydraulic geometries, was used as a closure for the hydrodynamic equations. On inputting the daily flow discharge and sediment load, the conceptual model successfully simulated the 30-year adjustments in the bankfull geometries of typical reaches of the LYR. The square of the correlation coefficient reached 0.74 for Huayuankou Station in the multiple-thread reach and exceeded 0.90 for Lijin Station in the meandering reach. The proposed model allows multiple dependent variables and the input of daily hydrological data for long-term simulations. It links the hydrodynamic and geomorphic processes in a fluvial river and is potentially applicable to fluvial rivers undergoing significant adjustments.

  6. Risk-adjusted performance evaluation in three academic thoracic surgery units using the Eurolung risk models.

    Science.gov (United States)

    Pompili, Cecilia; Shargall, Yaron; Decaluwe, Herbert; Moons, Johnny; Chari, Madhu; Brunelli, Alessandro

    2018-01-03

    The objective of this study was to evaluate the performance of 3 thoracic surgery centres using the Eurolung risk models for morbidity and mortality. This was a retrospective analysis performed on data collected from 3 academic centres (2014-2016). Seven hundred and twenty-one patients in Centre 1, 857 patients in Centre 2 and 433 patients in Centre 3 who underwent anatomical lung resections were analysed. The Eurolung1 and Eurolung2 models were used to predict risk-adjusted cardiopulmonary morbidity and 30-day mortality rates. Observed and risk-adjusted outcomes were compared within each centre. The observed morbidity of Centre 1 was in line with the predicted morbidity (observed 21.1% vs predicted 22.7%, P = 0.31). Centre 2 performed better than expected (observed morbidity 20.2% vs predicted 26.7%). The Eurolung models were successfully used as risk-adjusting instruments to internally audit the outcomes of 3 different centres, showing their applicability for future quality improvement initiatives. © The Author(s) 2018. Published by Oxford University Press on behalf of the European Association for Cardio-Thoracic Surgery. All rights reserved.

  7. Biologically Inspired Visual Model With Preliminary Cognition and Active Attention Adjustment.

    Science.gov (United States)

    Qiao, Hong; Xi, Xuanyang; Li, Yinlin; Wu, Wei; Li, Fengfu

    2015-11-01

    Recently, many computational models have been proposed to simulate the visual cognition process. For example, the hierarchical Max-Pooling (HMAX) model was proposed according to the hierarchical and bottom-up structure of V1 to V4 in the ventral pathway of the primate visual cortex, which could achieve position- and scale-tolerant recognition. In our previous work, we introduced memory and association into the HMAX model to simulate the visual cognition process. In this paper, we improve our theoretical framework by mimicking a more elaborate structure and function of the primate visual cortex. We mainly focus on the new formation of memory and association in visual processing under different circumstances, as well as preliminary cognition and active adjustment in the inferior temporal cortex, which are absent in the HMAX model. The main contributions of this paper are: 1) in the memory and association part, we apply deep convolutional neural networks to extract various episodic features of the objects, since people use different features for object recognition. Moreover, to achieve fast and robust recognition in the retrieval and association process, different types of features are stored in separate clusters, and the feature binding of the same object is stimulated in a loop discharge manner; and 2) in the preliminary cognition and active adjustment part, we introduce preliminary cognition to classify different types of objects, since distinct neural circuits in the human brain are used for the identification of various types of objects. Furthermore, active cognition adjustment of occlusion and orientation is implemented in the model to mimic the top-down effect in the human cognition process. Finally, our model is evaluated on two face databases, CAS-PEAL-R1 and AR. The results demonstrate the efficiency of our model in the visual recognition process, with a much lower memory storage requirement and better performance compared with the traditional purely computational

  8. [The motive force of evolution based on the principle of organismal adjustment evolution.].

    Science.gov (United States)

    Cao, Jia-Shu

    2010-08-01

    From an analysis of the existing problems of the prevalent theories of evolution, this paper discusses the motive force of evolution based on the principle of organismal adjustment evolution, to reach a new understanding of the evolution mechanism. Guided by Schrödinger's theory that "life feeds on negative entropy", the author proposes that "negative entropy flow" actually includes material flow, energy flow and information flow, and that the "negative entropy flow" is the motive force for living and development. By modifying his own theory of the principle of organismal adjustment evolution (not adaptation evolution), a new theory of a "regulation system of organismal adjustment evolution involving DNA, RNA and protein interacting with the environment" is proposed. According to the view that phylogenetic development is the "integral" of individual development, the difference in negative entropy flow between organisms and the environment is considered to be a motive force for evolution, which is a new understanding of the mechanism of evolution. Based on this understanding, evolution is regarded as "a changing process in which one subsystem passes all or part of its genetic information to the next generation in a larger system, and during the adaptation process produces some new elements, stops some old ones, and thereby lasts in the larger system". Some other controversial questions related to evolution are also discussed.

  9. Dynamic probability control limits for risk-adjusted CUSUM charts based on multiresponses.

    Science.gov (United States)

    Zhang, Xiang; Loda, Justin B; Woodall, William H

    2017-07-20

    For a patient who has survived a surgery, there could be several levels of recovery. Thus, it is reasonable to consider more than two outcomes when monitoring surgical outcome quality. The risk-adjusted cumulative sum (CUSUM) chart based on multiresponses has been developed for monitoring a surgical process with three or more outcomes. However, there is a significant effect of varying risk distributions on the in-control performance of the chart when constant control limits are applied. To overcome this disadvantage, we apply the dynamic probability control limits to the risk-adjusted CUSUM charts for multiresponses. The simulation results demonstrate that the in-control performance of the charts with dynamic probability control limits can be controlled for different patient populations because these limits are determined for each specific sequence of patients. Thus, the use of dynamic probability control limits for risk-adjusted CUSUM charts based on multiresponses allows each chart to be designed for the corresponding patient sequence of a surgeon or a hospital and therefore does not require estimating or monitoring the patients' risk distribution. Copyright © 2017 John Wiley & Sons, Ltd.
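The underlying chart can be sketched for the simpler binary-outcome case (the paper extends this to three or more outcomes and replaces the constant control limit with dynamic probability control limits, which are not reproduced here). Each patient contributes a log-likelihood-ratio weight based on their pre-operative risk p; r1 is the odds ratio of the deterioration to detect. All numeric values are illustrative.

```python
import math

# Risk-adjusted CUSUM weight for one patient: log-likelihood ratio of
# the shifted odds (r1) versus the in-control odds, given risk p.
def cusum_weight(p, outcome, r1=2.0):
    denom = 1.0 - p + r1 * p
    return math.log(r1 / denom) if outcome else math.log(1.0 / denom)

# Run the chart: the statistic accumulates weights, floored at zero,
# and would signal when it crosses a control limit.
def run_cusum(patients, r1=2.0):
    s, path = 0.0, []
    for p, outcome in patients:
        s = max(0.0, s + cusum_weight(p, outcome, r1))
        path.append(s)
    return path

# (pre-operative risk of adverse outcome, observed outcome) per patient
patients = [(0.10, 0), (0.30, 1), (0.05, 0), (0.20, 1), (0.15, 0)]
path = run_cusum(patients)
print([round(v, 3) for v in path])
```

Because each weight depends on the individual patient's risk, the chart is risk-adjusted: an adverse outcome in a high-risk patient moves the statistic up less than the same outcome in a low-risk patient.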

  10. Electromagnetic structure of pion in the framework of adjusted VMD model with elastic cut

    International Nuclear Information System (INIS)

    Dubnicka, S.; Furdik, I.; Meshcheryakov, V.A.

    1987-01-01

    The vector dominance model (VMD) parametrization of the pion form factor is transformed into the pion c.m. momentum variable. The corresponding VMD poles are then shifted, by means of the nonzero widths of the vector mesons, from the real axis into the complex region of the second sheet of the Riemann surface generated by the square-root two-pion-threshold branch point. A realistic description of all existing data is achieved in the framework of this adjusted VMD model, and the presence of the ρ'(1250) and ρ''(1600) mesons in e+e- → π+π- is confirmed by determining their parameters directly from a fit of the data.

  11. Reduction of peak energy demand based on smart appliances energy consumption adjustment

    Science.gov (United States)

    Powroźnik, P.; Szulim, R.

    2017-08-01

    In the paper the concept of an elastic model of energy management for smart grids and micro smart grids is presented. For the proposed model, a method for reducing peak demand in a micro smart grid has been defined. The idea of peak demand reduction in the elastic model of energy management is to introduce a balance between the demand and supply of power for the given micro smart grid at the given moment. Results of simulation studies, carried out on real household data available in the UCI Machine Learning Repository, are presented. The results may have practical application in smart grid networks where there is a need for smart appliance energy consumption adjustment. The article also proposes implementing the elastic model of energy management as a cloud computing solution. This approach to peak demand reduction might find application particularly in large smart grids.
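
    The balancing step, curtailing adjustable appliance loads until demand matches the supply available in the given moment, can be sketched as a simple proportional rule. This is a toy stand-in for the paper's elastic model; the function name and figures are made up.

```python
def balance(demands, supply):
    """Proportionally curtail adjustable appliance demands (kW) so their
    total does not exceed the currently available supply."""
    total = sum(demands)
    if total <= supply:
        return list(demands)          # no peak: nothing to curtail
    scale = supply / total            # uniform curtailment factor
    return [d * scale for d in demands]

# three smart appliances asking for 4 kW in total against 3 kW of supply
adjusted = balance([2.0, 1.5, 0.5], supply=3.0)
```

A real elastic model would weight appliances by priority and comfort constraints instead of curtailing uniformly.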

  12. Characterizing and Addressing the Need for Statistical Adjustment of Global Climate Model Data

    Science.gov (United States)

    White, K. D.; Baker, B.; Mueller, C.; Villarini, G.; Foley, P.; Friedman, D.

    2017-12-01

    As part of its mission to research and measure the effects of the changing climate, the U. S. Army Corps of Engineers (USACE) regularly uses the World Climate Research Programme's Coupled Model Intercomparison Project Phase 5 (CMIP5) multi-model dataset. However, these data are generated at a global level and are not fine-tuned for specific watersheds, which often causes CMIP5 output to vary from locally observed climate patterns. Several downscaling methods have been developed to increase the resolution of the CMIP5 data and decrease systemic differences to support decision-makers as they evaluate results at the watershed scale. Preliminary comparisons of observed and projected flow frequency curves over the US suggested a simple framework by which water resources decision makers can plan and design water resources management measures under changing conditions using standard tools. Using this framework as a basis, USACE has begun to explore the use of statistical adjustment to alter global climate model data to better match locally observed patterns while preserving the general structure and behavior of the model data. When paired with careful measurement and hypothesis testing, statistical adjustment can be particularly effective at navigating the compromise between locally observed patterns and global climate model structures for decision makers.

  13. Alternative Payment Models Should Risk-Adjust for Conversion Total Hip Arthroplasty: A Propensity Score-Matched Study.

    Science.gov (United States)

    McLawhorn, Alexander S; Schairer, William W; Schwarzkopf, Ran; Halsey, David A; Iorio, Richard; Padgett, Douglas E

    2017-12-06

    For Medicare beneficiaries, hospital reimbursement for nonrevision hip arthroplasty is anchored to either diagnosis-related group code 469 or 470. Under alternative payment models, reimbursement for care episodes is not further risk-adjusted. This study's purpose was to compare outcomes of primary total hip arthroplasty (THA) vs conversion THA to explore the rationale for risk adjustment for conversion procedures. All primary and conversion THAs from 2007 to 2014, excluding acute hip fractures and cancer patients, were identified in the National Surgical Quality Improvement Program database. Conversion and primary THA patients were matched 1:1 using propensity scores, based on preoperative covariates. Multivariable logistic regressions evaluated associations between conversion THA and 30-day outcomes. A total of 2018 conversions were matched to 2018 primaries. There were no differences in preoperative covariates. Conversions had longer operative times (148 vs 95 minutes). As reimbursement models shift toward bundled payment paradigms, conversion THA appears to be a procedure for which risk adjustment is appropriate. Copyright © 2017 Elsevier Inc. All rights reserved.

  14. Differentiation of Self and Dyadic Adjustment in Couple Relationships: A Dyadic Analysis Using the Actor-Partner Interdependence Model.

    Science.gov (United States)

    Lampis, Jessica; Cataudella, Stefania; Agus, Mirian; Busonera, Alessandra; Skowron, Elizabeth A

    2018-06-10

    Bowen's multigenerational theory provides an account of how the internalization of experiences within the family of origin promotes development of the ability to maintain a distinct self whilst also making intimate connections with others. Differentiated people can maintain their I-position in intimate relationships. They can remain calm in conflictual relationships, resolve relational problems effectively, and reach compromises. Fusion with others, emotional cut-off, and emotional reactivity instead are common reactions to relational stress in undifferentiated people. Emotional reactivity is the tendency to react to stressors with irrational and intense emotional arousal. Fusion with others is an excessive emotional involvement in significant relationships, whilst emotional cut-off is the tendency to manage relationship anxiety through physical and emotional distance. This study is based on Bowen's theory, starting from the assumption that dyadic adjustment can be affected both by a member's differentiation of self (actor effect) and by his or her partner's differentiation of self (partner effect). We used the Actor-Partner Interdependence Model to study the relationship between differentiation of self and dyadic adjustment in a convenience sample of 137 heterosexual Italian couples (nonindependent, dyadic data). The couples completed the Differentiation of Self Inventory and the Dyadic Adjustment Scale. Men's dyadic adjustment depended only on their personal I-position, whereas women's dyadic adjustment was affected by their personal I-position and emotional cut-off as well as by their partner's I-position and emotional cut-off. The empirical and clinical implications of the results are discussed. © 2018 Family Process Institute.

  15. Utilizing Visual Effects Software for Efficient and Flexible Isostatic Adjustment Modelling

    Science.gov (United States)

    Meldgaard, A.; Nielsen, L.; Iaffaldano, G.

    2017-12-01

    The isostatic adjustment signal generated by transient ice sheet loading is an important indicator of past ice sheet extent and the rheological constitution of the interior of the Earth. Finite element modelling has proved to be a very useful tool in these studies. We present a simple numerical model for 3D viscoelastic Earth deformation and a new approach to the design of such models utilizing visual effects software designed for the film and game industry. The software package Houdini offers an assortment of optimized tools and libraries which greatly facilitate the creation of efficient numerical algorithms. In particular, we make use of Houdini's procedural work flow, the SIMD programming language VEX, Houdini's sparse matrix creation and inversion libraries, an inbuilt tetrahedralizer for grid creation, and the user interface, which facilitates effortless manipulation of 3D geometry. We mitigate many of the time-consuming steps associated with authoring efficient algorithms from scratch while still keeping the flexibility that may be lost with the use of commercial dedicated finite element programs. We test the efficiency of the algorithm by comparing simulation times with off-the-shelf solutions from the Abaqus software package. The algorithm is tailored for the study of local isostatic adjustment patterns in close vicinity to present ice sheet margins. In particular, we wish to examine possible causes for the considerable spatial differences in uplift magnitude which are apparent from field observations in these areas. Such features, with spatial scales of tens of kilometres, are not resolvable with current global isostatic adjustment models, and may require the inclusion of local topographic features. We use the presented algorithm to study a near-field area where field observations are abundant, namely Disko Bay in West Greenland, with the intention of constraining Earth parameters and ice thickness. In addition, we assess how local topography influences the modelled adjustment pattern.

  16. Performance Evaluation of Electronic Inductor-Based Adjustable Speed Drives with Respect to Line Current Interharmonics

    DEFF Research Database (Denmark)

    Soltani, Hamid; Davari, Pooya; Blaabjerg, Frede

    2017-01-01

    Electronic Inductor (EI)-based front-end rectifiers have a large potential to become the prominent next generation of Active Front End (AFE) topology used in many applications, including Adjustable Speed Drives (ASDs) for systems having unidirectional power flow. The EI-based ASD is mostly attractive due to its improved harmonic performance compared to a conventional ASD. In this digest, the input currents of the EI-based ASD are investigated and compared with those of conventional ASDs with respect to interharmonics, which is an emerging power quality topic. First, the main causes of interharmonic distortion in ASD applications are analyzed under balanced and unbalanced load conditions. Thereafter, the key role of the EI at the DC stage is investigated in terms of high impedance and current harmonics transfer. Experiments and simulations obtained for both EI-based and conventional ASDs support the analysis.

  17. Simply Adjustable Sinusoidal Oscillator Based on Negative Three-Port Current Conveyors

    Directory of Open Access Journals (Sweden)

    R. Sotner

    2010-09-01

    The paper deals with a sinusoidal oscillator employing two controlled second-generation negative current conveyors and two capacitors. The proposed oscillator has a simple circuit configuration. Electronic (voltage-controlled) adjustment of the oscillation frequency and of the oscillation condition is possible. The presented circuit is verified in PSpice utilizing macromodels of commercially available negative current conveyors. The circuit is also verified by experimental measurements. Important characteristics and drawbacks of the proposed circuit, and the influences of real active elements on the designed circuit, are discussed in detail.

  18. Optimal Portfolio Analysis of Sharia Stocks Using the Liquidity Adjusted Capital Asset Pricing Model (LCAPM)

    Directory of Open Access Journals (Sweden)

    Nila Cahyati

    2015-04-01

    Investment involves a trade-off between return and risk. Optimal portfolio construction is used to maximize return and minimize risk. The Liquidity Adjusted Capital Asset Pricing Model (LCAPM) is a recent extension of the CAPM that incorporates liquidity. Combining a liquidity indicator with the CAPM method can help maximize return and minimize risk. The aims of this study are to compare expected stock returns and risks, and to determine the proportions in the optimal portfolio. The sample consists of JII (Jakarta Islamic Index) stocks for the period January 2013 – November 2014. The results show an expected return of the LCAPM portfolio of 0.0956 with a risk of 0.0043, with the optimal portfolio formed by AALI (55.19%) and PGAS (44.81%) shares.

  19. Asymmetric adjustment

    NARCIS (Netherlands)

    2010-01-01

    A method of adjusting a signal processing parameter for a first hearing aid and a second hearing aid forming parts of a binaural hearing aid system to be worn by a user is provided. The binaural hearing aid system comprises a user specific model representing a desired asymmetry between a first ear and a second ear of the user.

  20. Time synchronization algorithm of distributed system based on server time-revise and workstation self-adjust

    International Nuclear Information System (INIS)

    Zhou Shumin; Sun Yamin; Tang Bin

    2007-01-01

    In order to enhance the time synchronization quality of a distributed system, a time synchronization algorithm based on server time-revise and workstation self-adjust is proposed. The time-revise cycle and the self-adjust process are described in the paper. The algorithm reduces network traffic effectively and enhances the quality of clock synchronization. (authors)
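
    The two mechanisms, a periodic server time-revise (phase step) and a workstation self-adjust (frequency trim), can be illustrated with a toy clock simulation. The drift, revision interval, and initial error below are made-up values, not the paper's.

```python
def simulate(drift=2e-4, interval=10.0, cycles=5):
    """Workstation clock with a 0.3 s initial phase error and a frequency
    drift. Each cycle the server revises the phase; from the second cycle
    on, the workstation trims its own rate from the observed error."""
    offset, rate = 0.3, drift
    errors = []
    first = True
    for _ in range(cycles):
        error = offset + rate * interval   # error accumulated this cycle
        errors.append(abs(error))
        offset = 0.0                       # server time-revise: phase step
        if not first:
            rate -= error / interval       # workstation self-adjust
        first = False
    return errors

errs = simulate()
```

After the first revision removes the initial phase error, the self-adjust step learns the residual drift, so the per-cycle error collapses toward zero, which is why the combination needs fewer revision messages than phase stepping alone.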

  1. Detection of superior genotype of fatty acid synthase in Korean native cattle by an environment-adjusted statistical model

    Directory of Open Access Journals (Sweden)

    Jea-Young Lee

    2017-06-01

    Objective: This study examines the genetic factors influencing four economic-trait phenotypes of Hanwoo (Korean native cattle): oleic acid (C18:1), monounsaturated fatty acids, carcass weight, and marbling score. Methods: To enhance the accuracy of the genetic analysis, the study proposes a new statistical model that excludes environmental factors. A statistically adjusted analysis of covariance model of environmental and genetic factors was developed, and estimated environmental effects (covariate effects of age and effects of calving farms) were excluded from the model. Results: The accuracy was compared before and after adjustment. The accuracy of the best single nucleotide polymorphism (SNP) in C18:1 increased from 60.16% to 74.26%, and that of the two-factor interaction increased from 58.69% to 87.19%. Superior SNPs and SNP interactions were also identified using the multifactor dimensionality reduction method (Tables 1 to 4). Finally, high- and low-risk genotypes were compared based on their mean scores for each trait. Conclusion: The proposed method significantly improved the analysis accuracy and identified superior gene-gene interactions and genotypes for each of the four economic traits of Hanwoo.
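
    The adjustment idea, removing estimated environmental effects (an age covariate and calving-farm effects) from the phenotype before genetic analysis, can be sketched as a two-step residual computation. The data and the sequential approach are illustrative, not the paper's exact ANCOVA model.

```python
def adjust_for_environment(y, age, farm):
    """Return phenotypes with estimated environmental effects removed:
    first subtract each farm's mean, then regress out the age covariate."""
    n = len(y)
    fmean = {f: sum(yi for yi, fi in zip(y, farm) if fi == f) / farm.count(f)
             for f in set(farm)}
    r = [yi - fmean[fi] for yi, fi in zip(y, farm)]   # farm-effect residuals
    ma = sum(age) / n
    mr = sum(r) / n
    b = (sum((a - ma) * (ri - mr) for a, ri in zip(age, r))
         / sum((a - ma) ** 2 for a in age))           # slope on age
    return [ri - b * (a - ma) for ri, a in zip(r, age)]

# hypothetical carcass-weight-like phenotypes, ages (months), and farms
pheno = [52.0, 50.0, 55.0, 53.0, 49.0, 51.0]
ages = [24, 26, 30, 28, 25, 27]
farms = ["A", "A", "B", "B", "A", "B"]
adj = adjust_for_environment(pheno, ages, farms)
```

The adjusted values carry only the variation left after the environmental terms, which is what the SNP association step would then analyze.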

  2. Adaptive Kalman Filter Based on Adjustable Sampling Interval in Burst Detection for Water Distribution System

    Directory of Open Access Journals (Sweden)

    Doo Yong Choi

    2016-04-01

    Rapid detection of bursts and leaks in water distribution systems (WDSs) can reduce the social and economic costs incurred through direct loss of water into the ground, additional energy demand for water supply, and service interruptions. Many real-time burst detection models have been developed in accordance with the use of supervisory control and data acquisition (SCADA) systems and the establishment of district meter areas (DMAs). Nonetheless, no consideration has been given to how frequently a flow meter should measure and transmit data for predicting breaks and leaks in pipes. This paper analyzes the effect of the sampling interval when an adaptive Kalman filter is used for detecting bursts in a WDS. A new sampling algorithm is presented that adjusts the sampling interval depending on the normalized residuals of flow after filtering. The proposed algorithm is applied to a virtual sinusoidal flow curve and real DMA flow data obtained from Jeongeup city in South Korea. The simulation results prove that the self-adjusting algorithm for determining the sampling interval is efficient and maintains reasonable accuracy in burst detection. The proposed sampling method has significant potential for water utilities to build and operate real-time DMA monitoring systems combined with smart customer metering systems.
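
    The interval-adjustment idea can be sketched with a one-dimensional Kalman filter whose normalized innovation drives the sampling interval. The noise levels, threshold, and doubling rule below are illustrative assumptions, not the paper's tuned algorithm.

```python
def kalman_burst_monitor(flows, q=0.01, r=0.25, dt_min=1, dt_max=8, thresh=2.0):
    """Scalar Kalman filter over flow readings. The sampling interval
    shrinks when the normalized residual is large (possible burst) and
    relaxes when the signal is quiet."""
    x, p = flows[0], 1.0
    dt, t, alarms = dt_min, 0, []
    while t < len(flows):
        z = flows[t]
        p_pred = p + q * dt                 # uncertainty grows with the gap
        s = p_pred + r
        resid = (z - x) / s ** 0.5          # normalized innovation
        k = p_pred / s
        x, p = x + k * (z - x), (1 - k) * p_pred
        if abs(resid) > thresh:
            alarms.append(t)
            dt = dt_min                     # sample fast while suspicious
        else:
            dt = min(dt * 2, dt_max)        # relax when residuals are small
        t += dt
    return alarms

flow = [50.0] * 40 + [80.0] * 10            # synthetic burst from t = 40
alarms = kalman_burst_monitor(flow)
```

During quiet periods the monitor strides through the data at the maximum interval, and after the step change it falls back to the minimum interval, which is the trade-off between metering cost and detection delay the abstract describes.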

  3. Feasibility of CBCT-based dose calculation: Comparative analysis of HU adjustment techniques

    International Nuclear Information System (INIS)

    Fotina, Irina; Hopfgartner, Johannes; Stock, Markus; Steininger, Thomas; Lütgendorf-Caucig, Carola; Georg, Dietmar

    2012-01-01

    Background and purpose: The aim of this work was to compare the accuracy of different HU adjustments for CBCT-based dose calculation. Methods and materials: Dose calculation was performed on CBCT images of 30 patients. In the first two approaches phantom-based (Pha-CC) and population-based (Pop-CC) conversion curves were used. The third method (WAB) represents override of the structures with standard densities for water, air and bone. In ROI mapping approach all structures were overridden with average HUs from planning CT. All techniques were benchmarked to the Pop-CC and CT-based plans by DVH comparison and γ-index analysis. Results: For prostate plans, WAB and ROI mapping compared to Pop-CC showed differences in PTV D median below 2%. The WAB and Pha-CC methods underestimated the bladder dose in IMRT plans. In lung cases PTV coverage was underestimated by Pha-CC method by 2.3% and slightly overestimated by the WAB and ROI techniques. The use of the Pha-CC method for head–neck IMRT plans resulted in difference in PTV coverage up to 5%. Dose calculation with WAB and ROI techniques showed better agreement with pCT than conversion curve-based approaches. Conclusions: Density override techniques provide an accurate alternative to the conversion curve-based methods for dose calculation on CBCT images.

  4. Adjusted Wald Confidence Interval for a Difference of Binomial Proportions Based on Paired Data

    Science.gov (United States)

    Bonett, Douglas G.; Price, Robert M.

    2012-01-01

    Adjusted Wald intervals for binomial proportions in one-sample and two-sample designs have been shown to perform about as well as the best available methods. The adjusted Wald intervals are easy to compute and have been incorporated into introductory statistics courses. An adjusted Wald interval for paired binomial proportions is proposed here and…
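
    A Wald-type interval for the paired difference with a small-sample cell adjustment can be sketched as follows. The +0.5 constant added to each cell is the Agresti–Min choice, used here for illustration; Bonett and Price's adjustment differs in detail.

```python
import math

def adjusted_wald_paired(n11, n12, n21, n22, z=1.96, a=0.5):
    """CI for the difference of paired proportions from a 2x2 table of
    paired outcomes, after adding a small constant a to each cell."""
    n12a, n21a = n12 + a, n21 + a
    n = n11 + n12 + n21 + n22 + 4 * a       # adjusted number of pairs
    d = (n12a - n21a) / n                   # adjusted difference estimate
    var = (n12a + n21a - (n12a - n21a) ** 2 / n) / n ** 2
    half = z * math.sqrt(var)
    return d - half, d + half

# hypothetical paired counts: both yes, yes/no, no/yes, both no
lo, hi = adjusted_wald_paired(40, 12, 5, 43)
```

The adjustment pulls the estimate slightly toward zero and keeps the interval from degenerating when discordant counts are small, which is what makes the method suitable for introductory courses.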

  5. A threshold auto-adjustment algorithm of feature points extraction based on grid

    Science.gov (United States)

    Yao, Zili; Li, Jun; Dong, Gaojie

    2018-02-01

    When dealing with high-resolution digital images, detection of feature points is usually the very first step. The validity of feature points depends on the threshold. If the threshold is too low, plenty of feature points will be detected, and they may aggregate in richly textured regions, which not only slows feature description but also burdens subsequent processing; if the threshold is set high, feature points will be lacking in poorly textured areas. To solve these problems, this paper proposes a grid-based threshold auto-adjustment method for feature extraction. The image is divided into a number of grid cells, and a threshold is set in every local cell for extracting feature points. When the number of feature points does not meet the requirement, the threshold is adjusted automatically to change the final number of feature points. The experimental results show that the feature points produced by our method are more uniform and representative, which avoids aggregation of feature points and greatly reduces the complexity of subsequent processing.
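
    The per-cell relaxation loop can be sketched as follows. A simple gradient-magnitude score stands in for a real corner detector, and the grid size, target count, and decay factor are illustrative assumptions.

```python
def grid_features(img, grid=4, want=3, t0=100.0, decay=0.5, t_floor=1.0):
    """Per-cell adaptive thresholding: each grid cell lowers its own
    threshold until enough feature points are found, spreading points
    over the whole image instead of only richly textured regions."""
    h, w = len(img), len(img[0])
    score = [[abs(img[y][x + 1] - img[y][x]) + abs(img[y + 1][x] - img[y][x])
              for x in range(w - 1)] for y in range(h - 1)]
    gh, gw = (h - 1) // grid, (w - 1) // grid
    points = []
    for gy in range(grid):
        for gx in range(grid):
            cell = [(score[y][x], y, x)
                    for y in range(gy * gh, (gy + 1) * gh)
                    for x in range(gx * gw, (gx + 1) * gw)]
            t = t0
            found = [p for p in cell if p[0] >= t]
            while len(found) < want and t > t_floor:
                t *= decay                     # auto-adjust: relax threshold
                found = [p for p in cell if p[0] >= t]
            points.extend(found[:want])        # cap points per cell
    return points

img = [[(x * y) % 7 for x in range(17)] for y in range(17)]  # toy texture
points = grid_features(img)
```

Capping the count per cell is what enforces uniformity: a flat cell relaxes its threshold until it contributes points, while a busy cell cannot flood the result.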

  6. Improving Risk Adjustment for Mortality After Pediatric Cardiac Surgery: The UK PRAiS2 Model.

    Science.gov (United States)

    Rogers, Libby; Brown, Katherine L; Franklin, Rodney C; Ambler, Gareth; Anderson, David; Barron, David J; Crowe, Sonya; English, Kate; Stickley, John; Tibby, Shane; Tsang, Victor; Utley, Martin; Witter, Thomas; Pagel, Christina

    2017-07-01

    Partial Risk Adjustment in Surgery (PRAiS), a risk model for 30-day mortality after children's heart surgery, has been used by the UK National Congenital Heart Disease Audit to report expected risk-adjusted survival since 2013. This study aimed to improve the model by incorporating additional comorbidity and diagnostic information. The model development dataset was all procedures performed between 2009 and 2014 in all UK and Ireland congenital cardiac centers. The outcome measure was death within each 30-day surgical episode. Model development followed an iterative process of clinical discussion and development and assessment of models using logistic regression under 25 × 5 cross-validation. Performance was measured using Akaike information criterion, the area under the receiver-operating characteristic curve (AUC), and calibration. The final model was assessed in an external 2014 to 2015 validation dataset. The development dataset comprised 21,838 30-day surgical episodes, with 539 deaths (mortality, 2.5%). The validation dataset comprised 4,207 episodes, with 97 deaths (mortality, 2.3%). The updated risk model included 15 procedural, 11 diagnostic, and 4 comorbidity groupings, and nonlinear functions of age and weight. Performance under cross-validation was: median AUC of 0.83 (range, 0.82 to 0.83), median calibration slope and intercept of 0.92 (range, 0.64 to 1.25) and -0.23 (range, -1.08 to 0.85) respectively. In the validation dataset, the AUC was 0.86 (95% confidence interval [CI], 0.82 to 0.89), and the calibration slope and intercept were 1.01 (95% CI, 0.83 to 1.18) and 0.11 (95% CI, -0.45 to 0.67), respectively, showing excellent performance. A more sophisticated PRAiS2 risk model for UK use was developed with additional comorbidity and diagnostic information, alongside age and weight as nonlinear variables. Copyright © 2017 The Authors. Published by Elsevier Inc. All rights reserved.

  7. Measurement of the Economic Growth and Add-on of the R.M. Solow Adjusted Model

    Directory of Open Access Journals (Sweden)

    Ion Gh. Rosca

    2007-08-01

    Besides the models of M. Keynes, R.F. Harrod, E. Domar, D. Romer, the Ramsey-Cass-Koopmans model, etc., the R.M. Solow model belongs to the category of models which characterize economic growth. The aim of the paper is the measurement of economic growth and an extension of the adjusted R.M. Solow model.

  8. A Comparative Study of CAPM and Seven Factors Risk Adjusted Return Model

    Directory of Open Access Journals (Sweden)

    Madiha Riaz Bhatti

    2014-12-01

    This study compares and contrasts the predictive powers of two asset pricing models, the CAPM and a seven-factor risk-adjusted return model, in explaining the cross section of stock returns in the financial sector listed at the Karachi Stock Exchange (KSE). To test the models, daily returns from January 2013 to February 2014 were taken, and the excess returns of portfolios were regressed on the explanatory variables. The results indicate that the models are valid and applicable in the financial market of Pakistan during the period under study, as the intercepts are not significantly different from zero. It is consequently established from the findings that the explanatory variables explain the stock returns in the financial sector of the KSE. In addition, the results show that adding explanatory variables to the single-factor CAPM yields reasonably high values of R2. These results provide substantial support to fund managers, investors and financial analysts in making investment decisions.
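
    The single-factor test described above reduces to regressing excess portfolio returns on excess market returns and checking whether the intercept (alpha) differs from zero. A minimal OLS sketch with made-up daily returns:

```python
def capm_ols(r_asset, r_market, r_free=0.0):
    """Single-factor CAPM regression: excess asset returns on excess
    market returns; returns (alpha, beta) by ordinary least squares."""
    y = [ri - r_free for ri in r_asset]
    x = [ri - r_free for ri in r_market]
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    beta = (sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
            / sum((xi - mx) ** 2 for xi in x))
    alpha = my - beta * mx                  # intercept: pricing error
    return alpha, beta

market = [0.01, -0.02, 0.015, 0.005, -0.01]
asset = [0.02, -0.03, 0.025, 0.01, -0.015]  # roughly 1.5x the market moves
alpha, beta = capm_ols(asset, market)
```

In the multi-factor version, the same regression simply gains additional columns for the extra risk factors; the validity check on the intercept is unchanged.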

  9. Adjustment of automatic control systems of production facilities at coal processing plants using multivariant physico- mathematical models

    Science.gov (United States)

    Evtushenko, V. F.; Myshlyaev, L. P.; Makarov, G. V.; Ivushkin, K. A.; Burkova, E. V.

    2016-10-01

    The structure of multi-variant physical and mathematical models of a control system is presented, together with its application to the adjustment of automatic control systems (ACS) of production facilities, using a coal processing plant as an example.

  10. An Analysis of Missile Systems Cost Growth and Implementation of Acquisition Reform Initiatives Using a Hybrid Adjusted Cost Growth Model

    National Research Council Canada - National Science Library

    Abate, Christopher

    2004-01-01

    ...) data with a hybrid adjusted cost growth (ACG) model. In addition, an analysis of acquisition reform initiatives during the treatment period was conducted to determine if reform efforts impacted missile system cost growth. A pre-reform...

  11. The New York Sepsis Severity Score: Development of a Risk-Adjusted Severity Model for Sepsis.

    Science.gov (United States)

    Phillips, Gary S; Osborn, Tiffany M; Terry, Kathleen M; Gesten, Foster; Levy, Mitchell M; Lemeshow, Stanley

    2018-05-01

    In accordance with Rory's Regulations, hospitals across New York State developed and implemented protocols for sepsis recognition and treatment to reduce variations in evidence-informed care and preventable mortality. The New York Department of Health sought to develop a risk assessment model for accurate and standardized hospital mortality comparisons of adult septic patients across institutions using case-mix adjustment. Retrospective evaluation of prospectively collected data. Data from 43,204 severe sepsis and septic shock patients from 179 hospitals across New York State were evaluated. Prospective data were submitted to a database from January 1, 2015, to December 31, 2015. None. Maximum likelihood logistic regression was used to estimate model coefficients used in the New York State risk model. The mortality probability was estimated using a logistic regression model. Variables to be included in the model were determined as part of the model-building process. Interactions between variables were included if they made clinical sense and if their p values were less than 0.05. Model development used a random sample of 90% of available patients and was validated using the remaining 10%. Hosmer-Lemeshow goodness-of-fit p values were considerably greater than 0.05, suggesting good calibration. Areas under the receiver operator curve in the developmental and validation subsets were 0.770 (95% CI, 0.765-0.775) and 0.773 (95% CI, 0.758-0.787), respectively, indicating good discrimination. Development and validation datasets had similar distributions of estimated mortality probabilities. Mortality increased with rising age, comorbidities, and lactate. The New York Sepsis Severity Score accurately estimated the probability of hospital mortality in severe sepsis and septic shock patients. It performed well with respect to calibration and discrimination. This sepsis-specific model provides an accurate, comprehensive method for standardized mortality comparison of adult severe sepsis and septic shock patients.
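
    Discrimination of such a risk model is summarized by the area under the ROC curve, which can be computed directly from predicted probabilities via the rank (Mann–Whitney) identity. The probabilities and outcomes below are made up for illustration.

```python
def auc(scores, labels):
    """Area under the ROC curve: the probability that a randomly chosen
    positive case outranks a randomly chosen negative one, ties half."""
    pos = [s for s, l in zip(scores, labels) if l == 1]
    neg = [s for s, l in zip(scores, labels) if l == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# hypothetical predicted mortality probabilities and observed deaths
probs = [0.9, 0.8, 0.7, 0.6, 0.4, 0.3, 0.2, 0.1]
deaths = [1, 1, 0, 1, 0, 0, 1, 0]
discrimination = auc(probs, deaths)  # 0.75
```

An AUC near 0.77, as reported above, means the model ranks a randomly chosen death above a randomly chosen survivor about 77% of the time; calibration (Hosmer-Lemeshow) is checked separately.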

  12. Localization of an Underwater Control Network Based on Quasi-Stable Adjustment

    Science.gov (United States)

    Chen, Xinhua; Zhang, Hongmei; Feng, Jie

    2018-01-01

    A common problem in the localization of underwater control networks is that the absolute coordinates of known points obtained by marine absolute measurement have poor precision, which seriously affects the precision of the whole network under traditional constrained adjustment. Since the precision of the underwater baselines is good, we use them to carry out a quasi-stable adjustment that amends the known points before the constrained adjustment, so that the points fit the network shape better. In addition, we add an unconstrained adjustment for quality control of the underwater baselines (the observations of the quasi-stable and constrained adjustments) to eliminate unqualified baselines and improve the accuracy of the two adjustments. Finally, the modified method is applied to a practical LBL (Long Baseline) experiment and obtains a mean point-location precision of 0.08 m, an improvement of 38% over the traditional method. PMID:29570627

  13. A statistical adjustment approach for climate projections of snow conditions in mountain regions using energy balance land surface models

    Science.gov (United States)

    Verfaillie, Deborah; Déqué, Michel; Morin, Samuel; Lafaysse, Matthieu

    2017-04-01

    Projections of future climate change have been increasingly called for lately, as the reality of climate change has been gradually accepted and societies and governments have started to plan upcoming mitigation and adaptation policies. In mountain regions such as the Alps or the Pyrenees, where winter tourism and hydropower production are large contributors to the regional revenue, particular attention is brought to current and future snow availability. The question of the vulnerability of mountain ecosystems as well as the occurrence of climate-related hazards such as avalanches and debris-flows is also under consideration. In order to generate projections of snow conditions, however, downscaling global climate models (GCMs) by using regional climate models (RCMs) is not sufficient to capture the fine-scale processes and thresholds at play. In particular, the altitudinal resolution matters, since the phase of precipitation is mainly controlled by the temperature which is altitude-dependent. Simulations from GCMs and RCMs moreover suffer from biases compared to local observations, due to their rather coarse spatial and altitudinal resolution, and often provide outputs at too coarse time resolution to drive impact models. RCM simulations must therefore be adjusted using empirical-statistical downscaling and error correction methods, before they can be used to drive specific models such as energy balance land surface models. In this study, time series of hourly temperature, precipitation, wind speed, humidity, and short- and longwave radiation were generated over the Pyrenees and the French Alps for the period 1950-2100, by using a new approach (named ADAMONT for ADjustment of RCM outputs to MOuNTain regions) based on quantile mapping applied to daily data, followed by time disaggregation accounting for weather patterns selection. 
We first introduce a thorough evaluation of the method using model runs from the ALADIN RCM driven by a global reanalysis over the Pyrenees and the French Alps.
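
    The core of such bias adjustment is quantile mapping: find the quantile of a model value in the historical model distribution and replace it with the same quantile of the observations. A bare-bones empirical version with toy data (ADAMONT itself works on daily values per weather pattern and then disaggregates in time):

```python
import bisect

def quantile_map(model_hist, obs_hist, value):
    """Empirical quantile mapping: map `value` through the quantile it
    occupies in the historical model distribution onto the observed one."""
    m = sorted(model_hist)
    o = sorted(obs_hist)
    q = bisect.bisect_right(m, value) / len(m)   # model quantile of value
    idx = min(int(q * len(o)), len(o) - 1)
    return o[idx]

# toy temperatures: the model runs about 2 degrees warmer than observed
model_t = [10 + 0.1 * i for i in range(100)]
obs_t = [8 + 0.1 * i for i in range(100)]
corrected = quantile_map(model_t, obs_t, 15.0)
```

Because the mapping is built per quantile, it corrects not only the mean bias but also the spread of the distribution, which matters for threshold-dependent quantities such as the rain/snow transition.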

  14. Base Station Performance Model

    OpenAIRE

    Walsh, Barbara; Farrell, Ronan

    2005-01-01

    At present the testing of power amplifiers within base station transmitters is limited to testing at component level as opposed to testing at the system level. While the detection of catastrophic failure is possible, that of performance degradation is not. This paper proposes a base station model with respect to transmitter output power, with the aim of introducing system-level monitoring of the power amplifier behaviour within the base station. Our model reflects the experimentally observed behaviour of the power amplifier.

  15. A novel micro-accelerometer with adjustable sensitivity based on resonant tunnelling diodes

    International Nuclear Information System (INIS)

    Ji-Jun, Xiong; Wen-Dong, Zhang; Kai-Qun, Wang; Hai-Yang, Mao

    2009-01-01

    Resonant tunnelling diodes (RTDs) exhibit a negative differential resistance effect, and their current-voltage characteristics change as a function of external stress, which is regarded as the meso-piezoresistive effect of RTDs. In this paper, a novel micro-accelerometer based on AlAs/GaAs/In0.1Ga0.9As/GaAs/AlAs RTDs is designed and fabricated as a four-beam-mass structure, and an RTD-Wheatstone bridge measurement system is established to test the basic properties of this novel accelerometer. According to the experimental results, the sensitivity of the RTD-based micro-accelerometer is adjustable within a range of three orders of magnitude as the bias voltage of the sensor changes. The largest sensitivity of this RTD-based micro-accelerometer is 560.2025 mV/g, which is about 10 times larger than that of a silicon-based micro piezoresistive accelerometer, while the smallest is 1.49135 mV/g.

  16. ELECTRICAL CONDUCTIVITY OF SOYBEAN SEED CULTIVARS AND ADJUSTED MODELS OF LEAKAGE CURVES ALONG THE TIME

    Directory of Open Access Journals (Sweden)

    ADRIANA RITA SALINAS

    2010-01-01

    The objective of this work was to study the behavior of ten soybean [Glycine max (L.) Merr.] cultivars using the electrical conductivity (EC) test, by comparing curves of cumulative electrolyte leakage over time, and to establish the statistical model that best fits the curves. Ten soybean cultivars, mechanically harvested in 2004 at the EEA Oliveros, Santa Fe, Argentina, were used. EC measurements were made on 100 individual seeds of each cultivar during 20 hours of immersion, at intervals of 1 hour, using equipment that permits individual seed analysis (Seed Automatic Analyzer SAD 9000S). Two statistical models were proposed to study the EC over time for the 10 cultivars, using the SAS statistical program, in order to select the model that best explains the EC behavior over time. Model 1 allowed comparisons of EC over time between cultivars and the study of the influence of the production environment on the physiological quality of soybean seeds. The time to reach stabilization of the EC must not be less than 19 hours for the different cultivars.

  17. Family caregiver adjustment and stroke survivor impairment: A path analytic model.

    Science.gov (United States)

    Pendergrass, Anna; Hautzinger, Martin; Elliott, Timothy R; Schilling, Oliver; Becker, Clemens; Pfeiffer, Klaus

    2017-05-01

    Depressive symptoms are a common problem among family caregivers of stroke survivors. The purpose of this study was to examine the association between care recipient's impairment and caregiver depression, and determine the possible mediating effects of caregiver negative problem-orientation, mastery, and leisure time satisfaction. The evaluated model was derived from Pearlin's stress process model of caregiver adjustment. We analyzed baseline data from 122 strained family members who were assisting stroke survivors in Germany for a minimum of 6 months and who consented to participate in a randomized clinical trial. Depressive symptoms were measured with the Center for Epidemiological Studies Depression Scale. The cross-sectional data were analyzed using path analysis. The results show an adequate fit of the model to the data, χ2(1, N = 122) = 0.17, p = .68; comparative fit index = 1.00; root mean square error of approximation: p caregiver depressive symptoms. Results indicate that caregivers at risk for depression reported a negative problem orientation, low caregiving mastery, and low leisure time satisfaction. The situation is particularly affected by the frequency of stroke survivor problematic behavior, and by the degree of their impairments in activities of daily living. The findings provide empirical support for Pearlin's stress process model and emphasize how important it is to target these mediators in health promotion interventions for family caregivers of stroke survivors. (PsycINFO Database Record (c) 2017 APA, all rights reserved).

  18. 12 CFR Appendix B to Part 3 - Risk-Based Capital Guidelines; Market Risk Adjustment

    Science.gov (United States)

    2010-01-01

    ...) The bank must have a risk control unit that reports directly to senior management and is independent... management systems at least annually. (c) Market risk factors. The bank's internal model must use risk.... Section 4. Internal Models (a) General. For risk-based capital purposes, a bank subject to this appendix...

  19. Adjusting for Confounding in Early Postlaunch Settings: Going Beyond Logistic Regression Models.

    Science.gov (United States)

    Schmidt, Amand F; Klungel, Olaf H; Groenwold, Rolf H H

    2016-01-01

    Postlaunch data on medical treatments can be analyzed to explore adverse events or relative effectiveness in real-life settings. These analyses are often complicated by the number of potential confounders and the possibility of model misspecification. We conducted a simulation study to compare the performance of logistic regression, propensity score, disease risk score, and stabilized inverse probability weighting methods to adjust for confounding. Model misspecification was induced in the independent derivation dataset. We evaluated performance using relative bias and confidence interval coverage of the true effect, among other metrics. At low events per coefficient (1.0 and 0.5), the logistic regression estimates had a large relative bias (greater than -100%). Bias of the disease risk score estimates was at most 13.48% and 18.83%, respectively; for the propensity score model, it was 8.74% and >100%. At events per coefficient of 1.0 and 0.5, inverse probability weighting frequently failed or reduced to a crude regression, resulting in biases of -8.49% and 24.55%. Coverage of logistic regression estimates fell below the nominal level at events per coefficient ≤5. For the disease risk score, inverse probability weighting, and propensity score, coverage fell below nominal at events per coefficient ≤2.5, ≤1.0, and ≤1.0, respectively. Bias of misspecified disease risk score models was 16.55%. In settings with low events/exposed subjects per coefficient, disease risk score methods can be useful alternatives to logistic regression models, especially when propensity score models cannot be used. Despite the better performance of disease risk score methods than logistic regression and propensity score models in small events-per-coefficient settings, bias and coverage still deviated from nominal.
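The contrast between a crude comparison and a stabilized inverse-probability-weighted one can be sketched with a toy simulation. Everything below (data-generating model, coefficients, sample size) is invented for illustration, and the propensity model is fitted by plain Newton-Raphson rather than any particular package:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5000
x = rng.normal(size=n)                             # a single confounder
t = rng.binomial(1, 1 / (1 + np.exp(-0.8 * x)))    # treatment depends on x
y = 1.0 * t + 2.0 * x + rng.normal(size=n)         # true treatment effect = 1.0

def fit_logistic(X, target, iters=25):
    """Logistic regression via Newton-Raphson (intercept included in X)."""
    beta = np.zeros(X.shape[1])
    for _ in range(iters):
        p = 1 / (1 + np.exp(-X @ beta))
        hess = (X * (p * (1 - p))[:, None]).T @ X
        beta += np.linalg.solve(hess, X.T @ (target - p))
    return beta

X = np.column_stack([np.ones(n), x])
ps = 1 / (1 + np.exp(-X @ fit_logistic(X, t)))     # estimated propensity score

# stabilized inverse probability weights
w = np.where(t == 1, t.mean() / ps, (1 - t.mean()) / (1 - ps))

crude = y[t == 1].mean() - y[t == 0].mean()
ipw = (np.average(y[t == 1], weights=w[t == 1])
       - np.average(y[t == 0], weights=w[t == 0]))
print(f"crude: {crude:.2f}, IPW: {ipw:.2f}, truth: 1.00")
```

The crude contrast absorbs the confounder's effect, while the weighted contrast recovers roughly the true effect; with very few events per coefficient, as the study shows, the estimated weights themselves become unstable.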

  20. A Proportional Hazards Regression Model for the Subdistribution with Covariates-adjusted Censoring Weight for Competing Risks Data

    DEFF Research Database (Denmark)

    He, Peng; Eriksson, Frank; Scheike, Thomas H.

    2016-01-01

    With competing risks data, one often needs to assess the treatment and covariate effects on the cumulative incidence function. Fine and Gray proposed a proportional hazards regression model for the subdistribution of a competing risk under the assumption that the censoring distribution and the covariates are independent. Covariate-dependent censoring sometimes occurs in medical studies. In this paper, we study the proportional hazards regression model for the subdistribution of a competing risk with proper adjustments for covariate-dependent censoring. We consider a covariate-adjusted weight function obtained by fitting the Cox model for the censoring distribution and using the predictive probability for each individual. Our simulation study shows that the covariate-adjusted weight estimator is basically unbiased when the censoring time depends on the covariates...

  1. Voltage adjusting characteristics in terahertz transmission through Fabry-Pérot-based metamaterials

    Directory of Open Access Journals (Sweden)

    Jun Luo

    2015-10-01

    Full Text Available Metallic electric split-ring resonators (SRRs) with featured size on the micrometer scale, connected by thin metal wires, are patterned to form a periodically distributed planar array. The arrayed metallic SRRs are fabricated on an n-doped gallium arsenide (n-GaAs) layer grown directly over a semi-insulating gallium arsenide (SI-GaAs) wafer. The patterned metal microstructures and the n-GaAs layer form a Schottky diode, which can support an external voltage applied to modify the device properties. The developed architectures exhibit typical functional metamaterial characteristics and are thus used to reveal voltage adjusting characteristics in the transmission of terahertz waves at normal incidence. We also demonstrate the terahertz transmission characteristics of the voltage-controlled Fabry-Pérot-based metamaterial device, which is composed of arrayed metallic SRRs. To date, many metamaterials developed in earlier works have been used to regulate the transmission amplitude or phase at specific frequencies in the terahertz wavelength range, mainly dominated by the inductance-capacitance (LC) resonance mechanism. In our work, however, an external voltage-controlled metamaterial device is developed, and extraordinary transmission regulation characteristics based on both the Fabry-Pérot (FP) resonance and a relatively weak surface plasmon polariton (SPP) resonance in the 0.025-1.5 THz range are presented. Our research therefore shows a potential application of the dual-mode-resonance-based metamaterial for improving terahertz transmission regulation.

  2. Monitoring risk-adjusted outcomes in congenital heart surgery: does the appropriateness of a risk model change with time?

    Science.gov (United States)

    Tsang, Victor T; Brown, Katherine L; Synnergren, Mats Johanssen; Kang, Nicholas; de Leval, Marc R; Gallivan, Steve; Utley, Martin

    2009-02-01

    Risk adjustment of outcomes in pediatric congenital heart surgery is challenging due to the great diversity in diagnoses and procedures. We have previously shown that variable life-adjusted display (VLAD) charts provide an effective graphic display of risk-adjusted outcomes in this specialty. A question arises as to whether the risk model used remains appropriate over time. We used a recently developed graphic technique to evaluate the performance of an existing risk model among the patients at a single center during 2000 to 2003 originally used in model development. We then compared the distribution of predicted risk among these patients with that among patients in 2004 to 2006. Finally, we constructed a VLAD chart of risk-adjusted outcomes for the latter period. Among 1083 patients between April 2000 and March 2003, the risk model performed well at predicted risks above 3%, underestimated mortality at 2% to 3% predicted risk, and overestimated mortality below 2% predicted risk. There was little difference in the distribution of predicted risk between these patients and the 903 patients between June 2004 and October 2006. Outcomes for the more recent period were appreciably better than those expected according to the risk model. This finding cannot be explained by any apparent bias in the risk model combined with changes in case-mix. Risk models can, and hopefully do, become out of date. There is scope for complacency in risk-adjusted audit if the risk model used is not regularly recalibrated to reflect changing standards and expectations.
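A VLAD chart of the kind described above is just the running sum of predicted risk minus observed outcome: a survival adds the predicted risk, a death subtracts one minus it, so a sustained upward drift means outcomes better than the model expects. A minimal sketch with hypothetical risks and outcomes (not the center's data):

```python
import numpy as np

def vlad(predicted_risk, died):
    """Variable life-adjusted display: cumulative predicted-minus-observed
    deaths, i.e. the running count of 'statistical lives' gained or lost."""
    predicted_risk = np.asarray(predicted_risk, dtype=float)
    died = np.asarray(died, dtype=float)
    return np.cumsum(predicted_risk - died)

risk = [0.02, 0.05, 0.10, 0.03, 0.20, 0.06]   # model-predicted mortality risks
outcome = [0, 0, 0, 0, 1, 0]                  # one death, on the fifth case
curve = vlad(risk, outcome)
print(curve[-1])   # total expected deaths minus total observed deaths
```

Recalibration matters here because a miscalibrated model shifts the chart's drift even when care is unchanged, which is exactly the complacency risk the authors describe.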

  3. Uncertainty study of the PWR pressure vessel fluence. Adjustment of the nuclear data base

    International Nuclear Information System (INIS)

    Kodeli, I.A.

    1994-01-01

    A code system devoted to the calculation of the sensitivity and uncertainty of the neutron flux and reaction rates calculated by transport codes has been developed. Adjustment of the basic data to experimental results can be performed as well. Various sources of uncertainty can be taken into account, such as those due to uncertainties in the cross-sections, response functions, fission spectrum and space distribution of the neutron source, and geometry and material composition uncertainties. One- as well as two-dimensional analyses can be performed. Linear perturbation theory is applied. The code system is sufficiently general to be used for various analyses in the fields of fission and fusion. The principal objective of our studies concerns the capsule dosimetry study realized in the framework of the 900 MWe PWR pressure vessel surveillance program. The analysis indicates that the present calculations, performed by the code TRIPOLI-2 using the ENDF/B-IV based, non-perturbed neutron cross-section library in 315 energy groups, allow the neutron flux and the reaction rates in the surveillance capsules to be estimated; the adjustment based on the comparison of calculated and measured reaction rates permits the associated uncertainties to be reduced. The results obtained with the adjusted iron cross-sections, response functions and fission spectrum show that the agreement between calculation and experiment was improved to within approximately 10%. The neutron flux deduced from the experiment is then extrapolated from the capsule to the most exposed pressure vessel location using the calculated lead factor. The uncertainty in this factor was estimated to be about 7%. (author). 39 refs., 52 figs., 30 tabs

  4. "Symptom-based insulin adjustment for glucose normalization" (SIGN) algorithm: a pilot study.

    Science.gov (United States)

    Lee, Joyce Yu-Chia; Tsou, Keith; Lim, Jiahui; Koh, Feaizen; Ong, Sooim; Wong, Sabrina

    2012-12-01

    Lack of self-monitoring of blood glucose (SMBG) records in actual practice settings continues to create therapeutic challenges for clinicians, especially in adjusting insulin therapy. In order to overcome this clinical obstacle, a "Symptom-based Insulin adjustment for Glucose Normalization" (SIGN) algorithm was developed to guide clinicians in caring for patients with uncontrolled type 2 diabetes who have few to no SMBG records. This study examined the clinical outcome and safety of the SIGN algorithm. Glycated hemoglobin (HbA1c), insulin usage, and insulin-related adverse effects of a total of 114 patients with uncontrolled type 2 diabetes who refused to use SMBG or performed SMBG once a day for less than three times per week were studied 3 months prior to the implementation of the algorithm and prospectively at every 3-month interval for a total of 6 months after the algorithm implementation. Patients with type 1 diabetes, nonadherence to diabetes medications, or who were not on insulin therapy at any time during the study period were excluded from this study. Mean HbA1c improved by 0.29% at 3 months (P = 0.015) and 0.41% at 6 months (P = 0.006) after algorithm implementation. A slight increase in HbA1c was observed when the algorithm was not implemented. There were no major hypoglycemic episodes. The number of minor hypoglycemic episodes was minimal with the majority of the cases due to irregular meal habits. The SIGN algorithm appeared to offer a viable and safe approach when managing uncontrolled patients with type 2 diabetes who have few to no SMBG records.

  5. Development and Testing of a Decision Making Based Method to Adjust Automatically the Harrowing Intensity

    Science.gov (United States)

    Rueda-Ayala, Victor; Weis, Martin; Keller, Martina; Andújar, Dionisio; Gerhards, Roland

    2013-01-01

    Harrowing is often used to reduce weed competition, generally using a constant intensity across a whole field. The efficacy of weed harrowing in wheat and barley can be optimized if site-specific conditions of soil, weed infestation and crop growth stage are taken into account. This study aimed to develop and test an algorithm to automatically adjust the harrowing intensity by varying the tine angle and number of passes. The field variability of crop leaf cover, weed density and soil density was acquired with geo-referenced sensors to investigate the harrowing selectivity and crop recovery. Crop leaf cover and weed density were assessed using bispectral cameras through differential image analysis. The draught force of the soil opposite to the direction of travel was measured with an electronic load cell sensor connected to a rigid tine mounted in front of the harrow. Optimal harrowing intensity levels were derived in previously implemented experiments, based on weed control efficacy and yield gain. The assessments of crop leaf cover, weed density and soil density were combined via rules with the aforementioned optimal intensities in a linguistic fuzzy inference system (LFIS). The system was evaluated in two field experiments that compared constant intensities with variable intensities inferred by the system. A higher weed density reduction could be achieved when the harrowing intensity was not kept constant along the cultivated plot. Varying the intensity tended to reduce the crop leaf cover, though slightly improving crop yield. A real-time intensity adjustment with this system is achievable if the cameras are attached at the front and at the rear or sides of the harrow. PMID:23669712

  6. Development and Testing of a Decision Making Based Method to Adjust Automatically the Harrowing Intensity

    Directory of Open Access Journals (Sweden)

    Roland Gerhards

    2013-05-01

    Full Text Available Harrowing is often used to reduce weed competition, generally using a constant intensity across a whole field. The efficacy of weed harrowing in wheat and barley can be optimized if site-specific conditions of soil, weed infestation and crop growth stage are taken into account. This study aimed to develop and test an algorithm to automatically adjust the harrowing intensity by varying the tine angle and number of passes. The field variability of crop leaf cover, weed density and soil density was acquired with geo-referenced sensors to investigate the harrowing selectivity and crop recovery. Crop leaf cover and weed density were assessed using bispectral cameras through differential image analysis. The draught force of the soil opposite to the direction of travel was measured with an electronic load cell sensor connected to a rigid tine mounted in front of the harrow. Optimal harrowing intensity levels were derived in previously implemented experiments, based on weed control efficacy and yield gain. The assessments of crop leaf cover, weed density and soil density were combined via rules with the aforementioned optimal intensities in a linguistic fuzzy inference system (LFIS). The system was evaluated in two field experiments that compared constant intensities with variable intensities inferred by the system. A higher weed density reduction could be achieved when the harrowing intensity was not kept constant along the cultivated plot. Varying the intensity tended to reduce the crop leaf cover, though slightly improving crop yield. A real-time intensity adjustment with this system is achievable if the cameras are attached at the front and at the rear or sides of the harrow.
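As a rough illustration of how such a linguistic fuzzy inference system combines sensor assessments into one intensity, the sketch below uses triangular memberships, min as fuzzy AND, and weighted-average defuzzification. The breakpoints, rule set, and the 1-4 intensity scale are invented for illustration and are not the calibrated system from this study:

```python
def tri(x, a, b, c):
    """Triangular membership function rising from a, peaking at b, falling to c."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def harrow_intensity(weed_density, crop_cover):
    """Toy LFIS: higher weed pressure pushes intensity up; dense crop cover
    (a crop that would be damaged less by gentle passes) pulls it down."""
    weed_low  = tri(weed_density, -50, 0, 50)
    weed_high = tri(weed_density, 20, 100, 180)
    crop_low  = tri(crop_cover, -40, 0, 40)
    crop_high = tri(crop_cover, 10, 60, 110)
    # rule firing strengths (min = fuzzy AND) mapped to crisp levels 1..4
    rules = [
        (min(weed_low, crop_high), 1.0),   # few weeds, robust crop: gentle
        (min(weed_low, crop_low), 2.0),
        (min(weed_high, crop_high), 3.0),
        (min(weed_high, crop_low), 4.0),   # heavy weeds, weak crop: aggressive
    ]
    num = sum(strength * level for strength, level in rules)
    den = sum(strength for strength, _ in rules)
    return num / den if den else 2.0       # fall back to a moderate setting

print(harrow_intensity(weed_density=90, crop_cover=15))   # leans aggressive
print(harrow_intensity(weed_density=10, crop_cover=80))   # leans gentle
```

In the field system the crisp output would then drive the tine angle and number of passes for the current plot segment.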

  7. Calculation of Credit Valuation Adjustment Based on Least Square Monte Carlo Methods

    Directory of Open Access Journals (Sweden)

    Qian Liu

    2015-01-01

    Full Text Available Counterparty credit risk has become one of the highest-profile risks facing participants in the financial markets. Despite this, relatively little is known about how counterparty credit risk is actually priced mathematically. We examine this issue using interest rate swaps. This widely traded financial product allows us to identify well the risk profiles of both institutions and their counterparties. Concretely, the Hull-White model for the interest rate and a mean-reverting model for the default intensity have proven to correspond with reality and to be well suited for financial institutions. Besides, we find that the least square Monte Carlo method is quite efficient in the calculation of credit valuation adjustment (CVA for short), as it avoids the redundant step of generating inner scenarios and thereby accelerates the convergence of the CVA estimators. In the second part, we propose a new method to calculate bilateral CVA that avoids the double counting found in the existing literature; several copula functions are adopted to describe the dependence of the two first-to-default times.
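The role of the least-squares regression, replacing nested inner simulations with a cross-sectional polynomial fit of next-period positive exposure, can be sketched on a stylised exposure process. All parameters below are invented, and a full implementation would simulate Hull-White rates and revalue the swap on every path rather than using this toy value process:

```python
import numpy as np

rng = np.random.default_rng(42)
n_paths, n_steps, dt = 20000, 20, 0.25
r, recovery, h = 0.02, 0.4, 0.03           # flat rate, recovery, default intensity

# stylised portfolio value: driftless Brownian exposure amortising to zero
t = np.arange(1, n_steps + 1) * dt
V = np.cumsum(rng.normal(0.0, 0.1 * np.sqrt(dt), (n_paths, n_steps)), axis=1)
V *= (1 - t / t[-1])                       # pull-to-zero as the swap matures

# least-squares Monte Carlo: at each date, regress next-period positive
# exposure on a polynomial in the current value instead of nesting inner
# simulations, then average the fitted conditional exposure
epe = np.empty(n_steps)
for k in range(n_steps):
    if k + 1 < n_steps:
        coefs = np.polyfit(V[:, k], np.maximum(V[:, k + 1], 0.0), deg=2)
        cond = np.polyval(coefs, V[:, k])  # E[V_{k+1}^+ | V_k] per path
    else:
        cond = np.maximum(V[:, k], 0.0)
    epe[k] = cond.mean()

df = np.exp(-r * t)                               # discount factors
pd_incr = np.exp(-h * (t - dt)) - np.exp(-h * t)  # default prob per interval
cva = (1 - recovery) * np.sum(df * epe * pd_incr)
print(f"CVA ≈ {cva:.5f}")
```

By the tower property the regression does not change the plain expected positive exposure here; its value is that the fitted conditional exposure is available per path, which is what an inner simulation would otherwise have to provide.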

  8. Shaft adjuster

    Science.gov (United States)

    Harry, Herbert H.

    1989-01-01

    Apparatus and method for the adjustment and alignment of shafts in high power devices. A plurality of adjacent rotatable angled cylinders are positioned between a base and the shaft to be aligned which when rotated introduce an axial offset. The apparatus is electrically conductive and constructed of a structurally rigid material. The angled cylinders allow the shaft such as the center conductor in a pulse line machine to be offset in any desired alignment position within the range of the apparatus.

  9. Ice loading model for Glacial Isostatic Adjustment in the Barents Sea constrained by GRACE gravity observations

    Science.gov (United States)

    Root, Bart; Tarasov, Lev; van der Wal, Wouter

    2014-05-01

    The global ice budget is still under discussion because the observed 120-130 m of eustatic sea level equivalent since the Last Glacial Maximum (LGM) cannot be explained by current knowledge of land-ice melt after the LGM. One possible location for the missing ice is the Barents Sea region, which was completely covered with ice during the LGM. This is deduced from relative sea level observations on Svalbard, Novaya Zemlya and the north coast of Scandinavia. However, there are no observations in the middle of the Barents Sea that capture the post-glacial uplift. With the increased precision and longer time series of monthly gravity observations from the GRACE satellite mission, it is possible to constrain Glacial Isostatic Adjustment in the center of the Barents Sea. This study investigates the extra constraint provided by GRACE data for modeling the past ice geometry in the Barents Sea. We use CSR release 5 data from February 2003 to July 2013. The GRACE data are corrected for the past 10 years of secular decline of glacier ice on Svalbard, Novaya Zemlya and Franz Josef Land. With numerical GIA models for a radially symmetric Earth, we model the expected gravity changes and compare these with the GRACE observations after smoothing with a 250 km Gaussian filter. The comparisons show that, for the viscosity profile VM5a, ICE-5G has too strong a gravity signal compared to GRACE. The regionally calibrated ice sheet model (GLAC) of Tarasov appears to fit the amplitude of the GRACE signal. However, the GRACE data are very sensitive to the ice-melt correction, especially for Novaya Zemlya. Furthermore, the ice mass should be concentrated more toward the middle of the Barents Sea. Alternative viscosity models confirm these conclusions.

  10. Location memory for dots in polygons versus cities in regions: evaluating the category adjustment model.

    Science.gov (United States)

    Friedman, Alinda; Montello, Daniel R; Burte, Heather

    2012-09-01

    We conducted 3 experiments to examine the category adjustment model (Huttenlocher, Hedges, & Duncan, 1991) in circumstances in which the category boundaries were irregular schematized polygons made from outlines of maps. For the first time, accuracy was tested when only perceptual and/or existing long-term memory information about identical locations was cued. Participants from Alberta, Canada, and California received 1 of 3 conditions: dots-only, in which a dot appeared within the polygon, and after a 4-s dynamic mask the empty polygon appeared and the participant indicated where the dot had been; dots-and-names, in which participants were told that the first polygon represented Alberta/California and that each dot was in the correct location for the city whose name appeared outside the polygon; and names-only, in which there was no first polygon, and participants clicked on the city locations from extant memory alone. Location recall in the dots-only and dots-and-names conditions did not differ from each other and had small but significant directional errors that pointed away from the centroids of the polygons. In contrast, the names-only condition had large and significant directional errors that pointed toward the centroids. Experiments 2 and 3 eliminated the distribution of stimuli and overall screen position as causal factors. The data suggest that in the "classic" category adjustment paradigm, it is difficult to determine a priori when Bayesian cue combination is applicable, making Bayesian analysis less useful as a theoretical approach to location estimation. (PsycINFO Database Record (c) 2012 APA, all rights reserved).

  11. Development of the adjusted nuclear cross-section library based on JENDL-3.2 for large FBR

    International Nuclear Information System (INIS)

    Yokoyama, Kenji; Ishikawa, Makoto; Numata, Kazuyuki

    1999-04-01

    JNC (and PNC) had developed an adjusted nuclear cross-section library in which the results of the JUPITER experiments were reflected. Using this adjusted library, a distinct improvement in the accuracy of FBR core nuclear design had been achieved. As recent research, JNC is developing a database of other integral data in addition to the JUPITER experiments, aiming at further improvements in accuracy and reliability. In 1991, the adjusted library based on JENDL-2, JFS-3-J2 (ADJ91R), was developed, and it has been used in design research for FBRs. As an evaluated nuclear data library, however, JENDL-3.2 is now in use. Therefore, the authors developed an adjusted library based on JENDL-3.2, called JFS-3-J3.2(ADJ98). It is known that the adjusted library based on JENDL-2 overestimated the sodium void reactivity worth by 10-20%; the adjusted library based on JENDL-3.2 is expected to solve this problem. JFS-3-J3.2(ADJ98) was produced with the same method as JFS-3-J2(ADJ91R) and used more integral parameters from the JUPITER experiments than JFS-3-J2(ADJ91R). This report also describes the design accuracy estimation for a 600 MWe class FBR with the adjusted library JFS-3-J3.2(ADJ98). Its main nuclear design parameters (multiplication factor, burn-up reactivity loss, breeding ratio, etc.) calculated with JFS-3-J3.2(ADJ98), except the sodium void reactivity worth, are almost the same as those predicted with JFS-3-J2(ADJ91R). As for the sodium void reactivity, JFS-3-J3.2(ADJ98) estimates a value about 4% smaller than JFS-3-J2(ADJ91R) because of the change of the basic nuclear library from JENDL-2 to JENDL-3.2. (author)

  12. PV Array Driven Adjustable Speed Drive for a Lunar Base Heat Pump

    Science.gov (United States)

    Domijan, Alexander, Jr.; Buchh, Tariq Aslam

    1995-01-01

    A study of various aspects of Adjustable Speed Drives (ASD) is presented, including a summary of the relative merits of the different ASD systems presently in vogue. The advantages of using microcomputer-based ASDs are now widely understood and accepted. Of the three most popular drive systems, namely the Induction Motor Drive, the Switched Reluctance Motor Drive and the Brushless DC Motor Drive, any one may be chosen; the choice would depend on the nature of the application and its requirements. The suitability of the above-mentioned drive systems for a photovoltaic-array-driven ASD for an aerospace application is discussed, based on the experience of the authors, various researchers and industry. In chapter 2 a PV array power supply scheme is proposed; this scheme offers enhanced reliability in addition to the other known advantages of the case where a stand-alone PV array feeds the heat pump. Chapter 3 presents the results of a computer simulation of the PV-array-driven induction motor drive system, along with a discussion of these preliminary simulation results. Chapter 4 includes a brief discussion of various control techniques for three-phase induction motors, and Chapter 5 discusses different power devices and their performance characteristics.

  13. Do diagnosis-related group-based payments incentivise hospitals to adjust output mix?

    Science.gov (United States)

    Liang, Li-Lin

    2015-04-01

    This study investigates whether the diagnosis-related group (DRG)-based payment method motivates hospitals to adjust output mix in order to maximise profits. The hypothesis is that when there is an increase in the profitability of a DRG, hospitals will increase the proportion of that DRG (own-price effects) and decrease those of other DRGs (cross-price effects), except in cases where there are scope economies in producing two different DRGs. This conjecture is tested in the context of the case payment scheme (CPS) under Taiwan's National Health Insurance programme over the period July 1999 to December 2004. To tackle the endogeneity of DRG profitability and treatment policy, a fixed-effects three-stage least squares method is applied. The results support the hypothesised own-price and cross-price effects, showing that DRGs which share similar resources appear to be complements rather than substitutes. For-profit hospitals do not appear to be more responsive to DRG profitability, possibly because of their institutional characteristics and bonds with local communities. The key conclusion is that DRG-based payments will encourage a type of 'product-range' specialisation, which may improve hospital efficiency in the long run. However, further research is needed on how changes in output mix impact patient access and pay-outs of health insurance. Copyright © 2014 John Wiley & Sons, Ltd.

  14. Web Based VRML Modelling

    NARCIS (Netherlands)

    Kiss, S.; Sarfraz, M.

    2004-01-01

    Presents a method to connect VRML (Virtual Reality Modeling Language) and Java components in a Web page using EAI (External Authoring Interface), which makes it possible to interactively generate and edit VRML meshes. The meshes used are based on regular grids, to provide an interaction and modeling

  15. Risk adjustment of health-care performance measures in a multinational register-based study: A pragmatic approach to a complicated topic

    Directory of Open Access Journals (Sweden)

    Tron Anders Moger

    2014-03-01

    Full Text Available Objectives: Health-care performance comparisons across countries are gaining popularity. In such comparisons, the risk adjustment methodology plays a key role in obtaining meaningful results. However, comparisons may be complicated by the fact that not all participating countries are allowed to share their data across borders, meaning that only simple methods are easily used for the risk adjustment. In this study, we develop a pragmatic approach using patient-level register data from Finland, Hungary, Italy, Norway, and Sweden. Methods: Data on acute myocardial infarction patients were gathered from health-care registers in several countries. In addition to unadjusted estimates, we studied the effects of adjusting for age, gender, and a number of comorbidities. The stability of estimates for 90-day mortality and length of stay of the first hospital episode following diagnosis of acute myocardial infarction is studied graphically, using different choices of reference data. Logistic regression models are used for mortality, and negative binomial models are used for length of stay. Results: The sensitivity analysis shows that the various risk adjustment models give similar results for the countries, with some exceptions for Hungary and Italy. Based on the results, 90-day mortality after acute myocardial infarction is higher in Finland and Hungary than in Italy, Norway, and Sweden. Conclusion: Health-care registers offer encouraging possibilities for performance measurement and enable the comparison of entire patient populations between countries. Risk adjustment methodology is affected by the availability of data, and thus the building of risk adjustment methodology must be transparent, especially in multinational comparative research, in which case even basic methods of risk adjustment may still be valuable.
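The core of such a pragmatic risk adjustment, fitting a model on case-mix variables and then comparing observed with model-expected outcomes per country, can be sketched as follows. The data, the single age covariate, and all coefficients are invented (the study also adjusted for gender and comorbidities), and for simplicity the sketch fits the risk model on pooled synthetic data, which the real data-sharing restrictions may not allow:

```python
import numpy as np

rng = np.random.default_rng(1)

def fit_logistic(X, y, iters=30):
    """Logistic regression via Newton-Raphson (intercept included in X)."""
    beta = np.zeros(X.shape[1])
    for _ in range(iters):
        p = 1 / (1 + np.exp(-X @ beta))
        beta += np.linalg.solve((X * (p * (1 - p))[:, None]).T @ X,
                                X.T @ (y - p))
    return beta

# synthetic AMI registry: country B treats older patients than country A,
# but the true age-specific mortality risk is identical in both
n = 4000
country = rng.integers(0, 2, n)                  # 0 = A, 1 = B
age = rng.normal(68 + 6 * country, 8, n)
died = rng.binomial(1, 1 / (1 + np.exp(-(-9.0 + 0.11 * age))))

z = (age - age.mean()) / age.std()               # standardised for stability
X = np.column_stack([np.ones(n), z])
expected = 1 / (1 + np.exp(-X @ fit_logistic(X, died)))

results = {}
for code, name in [(0, "A"), (1, "B")]:
    m = country == code
    crude = died[m].mean()
    oe = died[m].sum() / expected[m].sum()       # observed / expected deaths
    results[name] = (crude, oe)
    print(f"country {name}: crude 90-day mortality {crude:.3f}, O/E {oe:.2f}")
```

Crude mortality differs because of case-mix, while the observed/expected ratios sit near 1 for both countries once expected deaths come from the shared risk model.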

  16. Positive Adjustment Among American Repatriated Prisoners of the Vietnam War: Modeling the Long-Term Effects of Captivity.

    Science.gov (United States)

    King, Daniel W; King, Lynda A; Park, Crystal L; Lee, Lewina O; Kaiser, Anica Pless; Spiro, Avron; Moore, Jeffrey L; Kaloupek, Danny G; Keane, Terence M

    2015-11-01

    A longitudinal lifespan model of factors contributing to later-life positive adjustment was tested on 567 American repatriated prisoners from the Vietnam War. This model encompassed demographics at time of capture and attributes assessed after return to the U.S. (reports of torture and mental distress) and approximately 3 decades later (later-life stressors, perceived social support, positive appraisal of military experiences, and positive adjustment). Age and education at time of capture and physical torture were associated with repatriation mental distress, which directly predicted poorer adjustment 30 years later. Physical torture also had a salutary effect, enhancing later-life positive appraisals of military experiences. Later-life events were directly and indirectly (through concerns about retirement) associated with positive adjustment. Results suggest that the personal resources of older age and more education and early-life adverse experiences can have cascading effects over the lifespan to impact well-being in both positive and negative ways.

  17. [Construction and validation of a multidimensional model of students' adjustment to college context].

    Science.gov (United States)

    Soares, Ana Paula; Guisande, M Adelina; Diniz, António M; Almeida, Leandro S

    2006-05-01

    This article presents a model of interaction of personal and contextual variables in the prediction of academic performance and psychosocial development of Portuguese college students. The sample consists of 560 first-year college students of the University of Minho. The path analysis results suggest that initial expectations of the students' involvement in academic life constituted an effective predictor of their involvement during their first year; as well as the social climate of the classroom influenced their involvement, well-being and levels of satisfaction obtained. However, these relationships were not strong enough to influence the criterion variables integrated in the model (academic performance and psychosocial development). Academic performance was predicted by the high school grades and college entrance examination scores, and the level of psychosocial development was determined by the level of development showed at the time they entered college. Though more research is needed, these results point to the importance of students' pre-college characteristics when we are considering the quality of their college adjustment process.

  18. Adjusting multistate capture-recapture models for misclassification bias: manatee breeding proportions

    Science.gov (United States)

    Kendall, W.L.; Hines, J.E.; Nichols, J.D.

    2003-01-01

    Matrix population models are important tools for research and management of populations. Estimating the parameters of these models is an important step in applying them to real populations. Multistate capture-recapture methods have provided a useful means for estimating survival and parameters of transition between locations or life history states but have mostly relied on the assumption that the state occupied by each detected animal is known with certainty. Nevertheless, in some cases animals can be misclassified. Using multiple capture sessions within each period of interest, we developed a method that adjusts estimates of transition probabilities for bias due to misclassification. We applied this method to 10 years of sighting data for a population of Florida manatees (Trichechus manatus latirostris) in order to estimate the annual probability of transition from nonbreeding to breeding status. Some sighted females were unequivocally classified as breeders because they were clearly accompanied by a first-year calf. The remainder were classified, sometimes erroneously, as nonbreeders because an attendant first-year calf was not observed or was classified as more than one year old. We estimated a conditional breeding probability of 0.31 ± 0.04 (estimate ± 1 SE) when we ignored misclassification bias, and 0.61 ± 0.09 when we accounted for misclassification.
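    The core of the correction can be sketched in a few lines. This is a hypothetical simplification, not the authors' multistate likelihood: assume a true breeder is recorded as a breeder only with probability delta (a calf may be missed), while nonbreeders are never misrecorded, so the naive proportion underestimates the truth by the factor delta.

```python
# One-way misclassification: E[p_obs] = p_true * delta, where delta is
# the probability a true breeder is correctly classified (illustrative
# assumption, not the paper's full multistate model).

def adjust_breeding_proportion(p_obs: float, delta: float) -> float:
    """Bias-corrected breeding proportion under one-way misclassification."""
    if not 0 < delta <= 1:
        raise ValueError("classification probability must be in (0, 1]")
    return p_obs / delta

# With the naive estimate of 0.31, a classification probability near 0.5
# would roughly reproduce the corrected estimate of 0.61:
print(adjust_breeding_proportion(0.31, 0.51))
```

    In the actual study, delta itself is estimated from the repeated within-period sightings rather than assumed.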

  19. Risk adjustment methods for Home Care Quality Indicators (HCQIs) based on the minimum data set for home care

    Directory of Open Access Journals (Sweden)

    Hirdes John P

    2005-01-01

    Background There has been increasing interest in enhancing accountability in health care. As such, several methods have been developed to compare the quality of home care services. These comparisons can be problematic if client populations vary across providers and no adjustment is made to account for these differences. The current paper explores the effects of risk adjustment for a set of home care quality indicators (HCQIs) based on the Minimum Data Set for Home Care (MDS-HC). Methods A total of 22 home care providers in Ontario and the Winnipeg Regional Health Authority (WRHA) in Manitoba, Canada, gathered data on their clients using the MDS-HC. These assessment data were used to generate HCQIs for each agency and for the two regions. Three types of risk adjustment methods were contrasted: a) client covariates only; b) client covariates plus an "Agency Intake Profile" (AIP) to adjust for ascertainment and selection bias by the agency; and c) client covariates plus the intake Case Mix Index (CMI). Results The mean age and gender distribution in the two populations was very similar. Across the 19 risk-adjusted HCQIs, Ontario CCACs had a significantly higher AIP adjustment value for eight HCQIs, indicating a greater propensity to trigger on these quality issues on admission. On average, Ontario had unadjusted rates that were 0.3% higher than the WRHA. Following risk adjustment with the AIP covariate, Ontario rates were, on average, 1.5% lower than the WRHA. In the WRHA, individual agencies were likely to experience a decline in their standing, whereby they were more likely to be ranked among the worst performers following risk adjustment. The opposite was true for sites in Ontario. Conclusions Risk adjustment is essential when comparing quality of care across providers when home care agencies provide services to populations with different characteristics. While such adjustment had a relatively small effect for the two regions, it did

  20. Risk adjustment methods for Home Care Quality Indicators (HCQIs) based on the minimum data set for home care

    Science.gov (United States)

    Dalby, Dawn M; Hirdes, John P; Fries, Brant E

    2005-01-01

    Background There has been increasing interest in enhancing accountability in health care. As such, several methods have been developed to compare the quality of home care services. These comparisons can be problematic if client populations vary across providers and no adjustment is made to account for these differences. The current paper explores the effects of risk adjustment for a set of home care quality indicators (HCQIs) based on the Minimum Data Set for Home Care (MDS-HC). Methods A total of 22 home care providers in Ontario and the Winnipeg Regional Health Authority (WRHA) in Manitoba, Canada, gathered data on their clients using the MDS-HC. These assessment data were used to generate HCQIs for each agency and for the two regions. Three types of risk adjustment methods were contrasted: a) client covariates only; b) client covariates plus an "Agency Intake Profile" (AIP) to adjust for ascertainment and selection bias by the agency; and c) client covariates plus the intake Case Mix Index (CMI). Results The mean age and gender distribution in the two populations was very similar. Across the 19 risk-adjusted HCQIs, Ontario CCACs had a significantly higher AIP adjustment value for eight HCQIs, indicating a greater propensity to trigger on these quality issues on admission. On average, Ontario had unadjusted rates that were 0.3% higher than the WRHA. Following risk adjustment with the AIP covariate, Ontario rates were, on average, 1.5% lower than the WRHA. In the WRHA, individual agencies were likely to experience a decline in their standing, whereby they were more likely to be ranked among the worst performers following risk adjustment. The opposite was true for sites in Ontario. Conclusions Risk adjustment is essential when comparing quality of care across providers when home care agencies provide services to populations with different characteristics. While such adjustment had a relatively small effect for the two regions, it did substantially affect the
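    A common shape for this kind of covariate-based risk adjustment is indirect standardization: fit a client-level model of the probability of triggering an indicator, then rescale each agency's observed rate by its observed-to-expected ratio. The sketch below is a hedged illustration of that general technique, not the paper's exact HCQI methodology, and all numbers are invented.

```python
import numpy as np

# Indirect standardization: adjusted rate = (observed / expected) * overall rate,
# where "expected" sums the model-predicted trigger probabilities of the
# agency's own clients (i.e., what this client mix would trigger on average).

def risk_adjusted_rate(observed_count, expected_probs, overall_rate):
    """O/E ratio times the overall rate across all agencies."""
    expected = np.sum(expected_probs)
    return observed_count / expected * overall_rate

# An agency with 12 triggered clients where the client-mix model expected
# 20 (100 clients at predicted risk 0.20), against an overall rate of 30%:
# O/E = 0.6, so the adjusted rate is 0.6 * 0.30 = 0.18.
print(risk_adjusted_rate(12, np.full(100, 0.20), 0.30))
```

    Agencies with harder client mixes thus get credit for triggering less often than their mix predicts, which is exactly why rankings can reverse after adjustment, as the abstract reports for the WRHA and Ontario sites.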

  1. Is Weight-Based Adjustment of Automatic Exposure Control Necessary for the Reduction of Chest CT Radiation Dose?

    Science.gov (United States)

    Prakash, Priyanka; Gilman, Matthew D.; Shepard, Jo-Anne O.; Digumarthy, Subba R.

    2010-01-01

    Objective To assess the effects of radiation dose reduction in chest CT using a weight-based adjustment of the automatic exposure control (AEC) technique. Materials and Methods With Institutional Review Board approval, 60 patients (mean age, 59.1 years; M:F = 35:25) and 57 weight-matched patients (mean age, 52.3 years; M:F = 25:32) were scanned using a weight-adjusted AEC and non-weight-adjusted AEC, respectively, on a 64-slice multidetector CT with a 0.984:1 pitch, 0.5-second rotation time, 40 mm table feed/rotation, and 2.5 mm section thickness. Patients were categorized into 3 weight categories; 90 kg (n = 48). Patient weights, scanning parameters, CT dose index volumes (CTDIvol) and dose-length products (DLP) were recorded, and effective dose (ED) was estimated. Image noise was measured in the descending thoracic aorta. Data were analyzed using a standard statistical package (SAS/STAT, Version 9.1, SAS Institute Inc, Cary, NC). Results Compared to the non-weight-adjusted AEC, the weight-adjusted AEC technique resulted in an average decrease of 29% in CTDIvol and a 27% effective dose reduction (p 91 kg weight groups, respectively, compared to 20.3, 27.9 and 32.8 mGy with non-weight-adjusted AEC. No significant difference was observed in objective image noise between chest CT acquired with the non-weight-adjusted (15.0 ± 3.1) and weight-adjusted (16.1 ± 5.6) AEC techniques (p > 0.05). Conclusion The results of this study suggest that AEC should be tailored according to patient weight. Without weight-based adjustment of AEC, patients are exposed to a 17-43% higher radiation dose from chest CT. PMID:20046494
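    The effective-dose estimate mentioned in the abstract is conventionally derived from DLP with a region-specific conversion coefficient. A hedged sketch: the chest coefficient k ≈ 0.014 mSv/(mGy·cm) is a widely used literature value, not a number taken from this study, and the DLP figures below are invented for illustration.

```python
# ED (mSv) ~= DLP (mGy*cm) * k, with k depending on the body region scanned.
CHEST_K = 0.014  # mSv per mGy*cm -- assumed standard chest coefficient

def effective_dose(dlp_mgy_cm: float, k: float = CHEST_K) -> float:
    """DLP-based effective dose estimate."""
    return dlp_mgy_cm * k

def percent_reduction(dose_standard: float, dose_reduced: float) -> float:
    """Relative dose saving of the reduced protocol, in percent."""
    return 100.0 * (dose_standard - dose_reduced) / dose_standard

# Hypothetical DLPs without and with weight-adjusted AEC:
ed_fixed = effective_dose(450.0)
ed_weight = effective_dose(330.0)
print(round(percent_reduction(ed_fixed, ed_weight), 1))
```

    Because ED is linear in DLP, the percentage reduction in ED equals the percentage reduction in DLP regardless of the value of k.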

  2. Models of traumatic experiences and children's psychological adjustment: the roles of perceived parenting and the children's own resources and activity.

    Science.gov (United States)

    Punamäki, R L; Qouta, S; el Sarraj, E

    1997-08-01

    The relations between traumatic events, perceived parenting styles, children's resources, political activity, and psychological adjustment were examined among 108 Palestinian boys and girls of 11-12 years of age. The results showed that exposure to traumatic events increased psychological adjustment problems both directly and via two mediating paths. First, the more traumatic events children had experienced, the more negative the parenting they perceived; and the poorer the perceived parenting, the more they suffered from high neuroticism and low self-esteem. Second, the more traumatic events children had experienced, the more political activity they showed; and the more active they were, the more they suffered from psychological adjustment problems. Good perceived parenting protected children's psychological adjustment by making them less vulnerable in two ways. First, traumatic events decreased children's intellectual, creative, and cognitive resources, and a lack of resources predicted many psychological adjustment problems in a model excluding perceived parenting. Second, political activity increased psychological adjustment problems in the same model, but not in the model including good parenting.

  3. Spatial Analysis of Land Adjustment as a Rehabilitation Base of Mangrove in Indramayu Regency

    Science.gov (United States)

    Sodikin; Sitorus, S. R. P.; Prasetyo, L. B.; Kusmana, C.

    2018-02-01

    Indramayu Regency has the largest mangrove area in West Java, and according to the Ministry of Environment and Forestry the district is targeted to become Indonesia's central mangrove area. Since the 1990s the regency's mangroves have declined significantly, driven by the conversion of mangrove land into ponds and settlements. Halting this ongoing decline requires mangrove rehabilitation, which should take place in areas suitable for mangrove growth, with vegetation appropriate to each site; the purpose of this research is therefore to analyze land suitability for mangrove in Indramayu Regency. The research uses a geographic information system with an overlay technique. The data comprise maps of tides, salinity, soil pH, soil texture, sea-level rise, land use, community participation level, and soil organic matter, which are overlaid and matched against a matrix of environmental parameters for mangrove growth. Based on the results of the analysis, five mangrove types are suitable in Indramayu Regency: Bruguiera (6,260 ha), Sonneratia (2,958 ha), Nypa (1,756 ha), Rhizophora (936 ha), and Avicennia (433 ha).
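    The overlay step can be made concrete with a tiny raster sketch. This is a hedged illustration of the general GIS technique, not the study's data: each thematic map is rasterized to a suitability class per cell, and the per-cell minimum (the most limiting factor) gives the combined suitability.

```python
import numpy as np

# Illustrative 2x2 class grids for three of the study's layer types
# (3 = suitable, 2 = marginal, 1 = unsuitable); values are made up.
tide     = np.array([[3, 2], [1, 3]])
salinity = np.array([[3, 3], [2, 1]])
soil_ph  = np.array([[2, 3], [3, 3]])

# Overlay by taking the most limiting factor in each cell.
suitability = np.minimum.reduce([tide, salinity, soil_ph])
print(suitability)
```

    A real workflow would add the remaining layers (texture, sea-level rise, land use, participation, organic matter) and then map each combined class to the mangrove genera suited to it.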

  4. Study on Electricity Business Expansion and Electricity Sales Based on Seasonal Adjustment

    Science.gov (United States)

    Zhang, Yumin; Han, Xueshan; Wang, Yong; Zhang, Li; Yang, Guangsen; Sun, Donglei; Wang, Bolun

    2017-05-01

    [1] proposed a novel analysis and forecast method for electricity business expansion based on seasonal adjustment; we extend this work to include effects from the micro and macro perspectives, respectively. From the micro perspective, we introduce the concept of load factor to forecast the stable electricity consumption of a single new consumer after the installation of new high-voltage transformer capacity. From the macro perspective, because growth in business expansion is also stimulated by growth in electricity sales, it is necessary to analyse the antecedent relationship between business expansion and electricity sales. First, we forecast the electricity consumption of the customer group and the release rules of expanding capacity, respectively. Second, we contrast goodness of fit and prediction accuracy to identify the antecedence relationship and analyse its cause; this also serves as a comparison for observing how customer groups of different ranges affect prediction precision. Finally, simulation results indicate that the proposed method accurately helps determine the value of expanding capacity and electricity consumption.
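    The micro-level load-factor idea reduces to simple arithmetic: installed capacity bounds demand, and the load factor says what fraction of that bound is drawn on average. A hedged sketch with illustrative names and numbers, not the paper's model:

```python
# Stable monthly energy of a new customer, estimated from the newly
# installed transformer capacity (all parameter values are hypothetical).

def stable_consumption_kwh(capacity_kva, power_factor, load_factor, hours):
    """Energy = apparent capacity * power factor * load factor * hours."""
    return capacity_kva * power_factor * load_factor * hours

# A 500 kVA customer at power factor 0.9 with a 0.35 load factor,
# over a 30-day month (720 hours):
print(stable_consumption_kwh(500, 0.9, 0.35, 30 * 24))
```

    In the paper's setting, the load factor itself would be estimated from comparable customers' historical consumption rather than assumed.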

  5. Energy efficiency of China's industry sector: An adjusted network DEA (data envelopment analysis)-based decomposition analysis

    International Nuclear Information System (INIS)

    Liu, Yingnan; Wang, Ke

    2015-01-01

    The process of energy conservation and emission reduction in China requires the specific and accurate evaluation of the energy efficiency of the industry sector because this sector accounts for 70 percent of China's total energy consumption. Previous studies have used a “black box” DEA (data envelopment analysis) model to obtain the energy efficiency without considering the inner structure of the industry sector. However, differences in the properties of energy utilization (final consumption or intermediate conversion) in different industry departments may lead to bias in energy efficiency measures under such “black box” evaluation structures. Using the network DEA model and efficiency decomposition technique, this study proposes an adjusted energy efficiency evaluation model that can characterize the inner structure and associated energy utilization properties of the industry sector so as to avoid evaluation bias. By separating the energy-producing department and energy-consuming department, this adjusted evaluation model was then applied to evaluate the energy efficiency of China's provincial industry sector. - Highlights: • An adjusted network DEA (data envelopment analysis) model for energy efficiency evaluation is proposed. • The inner structure of industry sector is taken into account for energy efficiency evaluation. • Energy final consumption and energy intermediate conversion processes are separately modeled. • China's provincial industry energy efficiency is measured through the adjusted model.
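    The "black box" baseline the paper adjusts is the classical DEA efficiency score. A minimal input-oriented CCR sketch, under illustrative single-input/single-output data (province names and numbers are invented; the paper's network model additionally splits energy-producing and energy-consuming departments):

```python
import numpy as np
from scipy.optimize import linprog

def ccr_efficiency(X, Y, o):
    """Input-oriented CCR efficiency of DMU o.

    X: (m inputs x n DMUs), Y: (s outputs x n DMUs). Finds the smallest
    theta such that some nonnegative peer combination uses <= theta * x_o
    while producing >= y_o.
    """
    m, n = X.shape
    s = Y.shape[0]
    c = np.r_[1.0, np.zeros(n)]            # minimize theta; vars = [theta, lam]
    A_in = np.c_[-X[:, [o]], X]            # X @ lam - theta * x_o <= 0
    A_out = np.c_[np.zeros((s, 1)), -Y]    # Y @ lam >= y_o
    A_ub = np.vstack([A_in, A_out])
    b_ub = np.r_[np.zeros(m), -Y[:, o]]
    res = linprog(c, A_ub=A_ub, b_ub=b_ub,
                  bounds=[(None, None)] + [(0, None)] * n)
    return res.x[0]

X = np.array([[2.0, 4.0, 8.0]])   # energy input of three hypothetical provinces
Y = np.array([[1.0, 2.0, 3.0]])   # industrial output
print([round(ccr_efficiency(X, Y, o), 3) for o in range(3)])
```

    The third unit produces only 3 units of output from 8 of input while peers achieve an output/input ratio of 0.5, so its score is 0.75; the first two are on the frontier.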

  6. Linear identification and model adjustment of a PEM fuel cell stack

    Energy Technology Data Exchange (ETDEWEB)

    Kunusch, C; Puleston, P F; More, J J [LEICI, Departamento de Electrotecnia, Universidad Nacional de La Plata, calle 1 esq. 47 s/n, 1900 La Plata (Argentina); Consejo de Investigaciones Cientificas y Tecnicas (CONICET) (Argentina); Husar, A [Institut de Robotica i Informatica Industrial (CSIC-UPC), c/ Llorens i Artigas 4-6, 08028 Barcelona (Spain); Mayosky, M A [LEICI, Departamento de Electrotecnia, Universidad Nacional de La Plata, calle 1 esq. 47 s/n, 1900 La Plata (Argentina); Comision de Investigaciones Cientificas (CIC), Provincia de Buenos Aires (Argentina)

    2008-07-15

    In the context of fuel cell stack control, a major challenge is modeling the interdependence of various complex subsystem dynamics. In many cases, the interaction of states is modeled through several look-up tables, decision blocks and piecewise continuous functions. Many internal variables are inaccessible for measurement and cannot be used in control algorithms. To make significant contributions in this area, it is necessary to develop reliable models for control and design purposes. In this paper, a linear model based on experimental identification of a 7-cell stack was developed. The procedure followed to obtain a linear model of the system consisted of performing spectroscopy tests on four different single-input single-output subsystems. The inputs considered for the tests were the stack current and the cathode oxygen flow rate, while the measured outputs were the stack voltage and the cathode total pressure. The resulting model can be used either for model-based control design or for on-line analysis and error detection. (author)

  7. Comparison of Two Foreign Body Retrieval Devices with Adjustable Loops in a Swine Model

    International Nuclear Information System (INIS)

    Konya, Andras

    2006-01-01

    The purpose of the study was to compare two similar foreign body retrieval devices, the Texan™ (TX) and the Texan LONGhorn™ (TX-LG), in a swine model. Both devices feature a ≤30-mm adjustable loop. Capture times and total procedure times for retrieving foreign bodies from the infrarenal aorta, inferior vena cava, and stomach were compared. All attempts with both devices (TX, n = 15; TX-LG, n = 14) were successful. Foreign bodies in the vasculature were captured quickly using both devices (mean ± SD, 88 ± 106 sec for TX vs 67 ± 42 sec for TX-LG), with no significant difference between them. The TX-LG, however, allowed significantly better capture times than the TX in the stomach (p = 0.022). Overall, capture times for the TX-LG were significantly better than for the TX (p = 0.029). There was no significant difference between the total procedure times in any anatomic region. The TX-LG performed significantly better than the TX in the stomach and therefore overall; its better torque control and maneuverability resulted in better performance in large anatomic spaces.

  8. Modeling and Dynamic Simulation of the Adjust and Control System Mechanism for Reactor CAREM-25

    International Nuclear Information System (INIS)

    Larreteguy, A.E; Mazufri, C.M

    2000-01-01

    The adjust and control system mechanism, MSAC, is an advanced, and in some senses unique, hydromechanical device. The modeling effort aims to: gain a deep understanding of the physical phenomena involved; identify the set of parameters relevant to the dynamics of the system; allow numerical simulation of the system; predict the behavior of the mechanism in conditions beyond the range of operation of the experimental setup (CEM); and help define the design of the CAPEM (a loop for testing the mechanism under high-pressure/high-temperature conditions). Thanks to the close interaction between the mechanics, the experimenters, and the modelists who make up the MSAC task force, it has been possible to suggest improvements not only in the design of the mechanism, but also in the design and operation of the pulse generator (GDP) and the rest of the CEM. This effort has led to a design mature enough to be tested in a high-pressure loop.

  9. Homoclinic connections and subcritical Neimark bifurcation in a duopoly model with adaptively adjusted productions

    International Nuclear Information System (INIS)

    Agliari, Anna

    2006-01-01

    In this paper we study some global bifurcations arising in the Puu's oligopoly model when we assume that the producers do not adjust to the best reply but use an adaptive process to obtain at each step the new production. Such bifurcations cause the appearance of a pair of closed invariant curves, one attracting and one repelling, this latter being involved in the subcritical Neimark bifurcation of the Cournot equilibrium point. The aim of the paper is to highlight the relationship between the global bifurcations causing the appearance/disappearance of two invariant closed curves and the homoclinic connections of some saddle cycle, already conjectured in [Agliari A, Gardini L, Puu T. Some global bifurcations related to the appearance of closed invariant curves. Comput Math Simul 2005;68:201-19]. We refine the results obtained in such a paper, showing that the appearance/disappearance of closed invariant curves is not necessarily related to the existence of an attracting cycle. The characterization of the periodicity tongues (i.e. a region of the parameter space in which an attracting cycle exists) associated with a subcritical Neimark bifurcation is also discussed
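    The adaptive adjustment process studied here can be sketched numerically. This is a hedged illustration using the standard ingredients of Puu's duopoly (isoelastic demand p = 1/(x+y) and constant marginal costs, giving firm 1 the best reply sqrt(y/c1) − y); each firm moves only a fraction alpha toward its best reply rather than jumping to it. Parameter values are illustrative, chosen in the stable regime, and do not reproduce the paper's bifurcation scenarios.

```python
import math

def best_reply(rival_q, own_cost):
    """Puu-style best reply under p = 1/(x+y), clipped at zero output."""
    return max(math.sqrt(rival_q / own_cost) - rival_q, 0.0)

def simulate(c1, c2, alpha, x0, y0, steps):
    """Adaptive adjustment: move a fraction alpha toward the best reply."""
    x, y = x0, y0
    for _ in range(steps):
        x = (1 - alpha) * x + alpha * best_reply(y, c1)
        y = (1 - alpha) * y + alpha * best_reply(x, c2)
    return x, y

# With near-equal costs the process settles on the Cournot point
# x* = c2/(c1+c2)^2, y* = c1/(c1+c2)^2; larger cost ratios are where
# cycles and closed invariant curves appear.
x, y = simulate(c1=1.0, c2=1.2, alpha=0.5, x0=0.2, y0=0.2, steps=500)
print(round(x, 4), round(y, 4))
```

    Sweeping the cost ratio or alpha in such a simulation is how the attracting closed curve of a Neimark bifurcation is typically visualized.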

  10. Computer-aided system of evaluation for population-based all-in-one service screening (CASE-PASS): from study design to outcome analysis with bias adjustment.

    Science.gov (United States)

    Chen, Li-Sheng; Yen, Amy Ming-Fang; Duffy, Stephen W; Tabar, Laszlo; Lin, Wen-Chou; Chen, Hsiu-Hsi

    2010-10-01

    Population-based routine service screening has gained popularity following an era of randomized controlled trials. The evaluation of these service screening programs is subject to study design, data availability, and the precise data analysis for adjusting bias. We developed a computer-aided system that allows the evaluation of population-based service screening to unify these aspects and facilitate and guide the program assessor to efficiently perform an evaluation. This system underpins two experimental designs: the posttest-only non-equivalent design and the one-group pretest-posttest design and demonstrates the type of data required at both the population and individual levels. Three major analyses were developed that included a cumulative mortality analysis, survival analysis with lead-time adjustment, and self-selection bias adjustment. We used SAS AF software to develop a graphic interface system with a pull-down menu style. We demonstrate the application of this system with data obtained from a Swedish population-based service screen and a population-based randomized controlled trial for the screening of breast, colorectal, and prostate cancer, and one service screening program for cervical cancer with Pap smears. The system provided automated descriptive results based on the various sources of available data and cumulative mortality curves corresponding to the study designs. The comparison of cumulative survival between clinically and screen-detected cases without a lead-time adjustment are also demonstrated. The intention-to-treat and noncompliance analysis with self-selection bias adjustments are also shown to assess the effectiveness of the population-based service screening program. Model validation was composed of a comparison between our adjusted self-selection bias estimates and the empirical results on effectiveness reported in the literature. We demonstrate a computer-aided system allowing the evaluation of population-based service screening

  11. ADJUST: An automatic EEG artifact detector based on the joint use of spatial and temporal features.

    Science.gov (United States)

    Mognon, Andrea; Jovicich, Jorge; Bruzzone, Lorenzo; Buiatti, Marco

    2011-02-01

    A successful method for removing artifacts from electroencephalogram (EEG) recordings is Independent Component Analysis (ICA), but its implementation remains largely user-dependent. Here, we propose a completely automatic algorithm (ADJUST) that identifies artifacted independent components by combining stereotyped artifact-specific spatial and temporal features. Features were optimized to capture blinks, eye movements, and generic discontinuities on a feature selection dataset. Validation on a totally different EEG dataset shows that (1) ADJUST's classification of independent components largely matches a manual one by experts (agreement on 95.2% of the data variance), and (2) Removal of the artifacted components detected by ADJUST leads to neat reconstruction of visual and auditory event-related potentials from heavily artifacted data. These results demonstrate that ADJUST provides a fast, efficient, and automatic way to use ICA for artifact removal. Copyright © 2010 Society for Psychophysiological Research.

  12. Adjustments of the TaD electron density reconstruction model with GNSS-TEC parameters for operational application purposes

    Directory of Open Access Journals (Sweden)

    Belehaki Anna

    2012-12-01

    Validation results for the latest version of the TaD model (TaDv2) show realistic reconstruction of electron density profiles (EDPs) with an average error of 3 TECU, similar to the error obtained from GNSS-TEC calculated parameters. The work presented here aims to further improve the accuracy of the TaD topside reconstruction by adjusting the TEC parameter calculated from the TaD model with the TEC parameter calculated from GNSS RINEX files provided by receivers co-located with the Digisondes. The performance of the new version is tested during a storm period, demonstrating further improvements with respect to the previous version. Statistical comparison of modeled and observed TEC confirms the validity of the proposed adjustment. A significant benefit of the proposed upgrade is that it facilitates the real-time implementation of TaD. The model needs a reliable measure of the scale height at the peak height, which is supposed to be provided by Digisondes. Often, the automatic scaling software fails to correctly calculate the scale height at the peak, Hm, due to interference in the received signal; consequently, the model-estimated topside scale height is wrongly calculated, leading to unrealistic results for the modeled EDP. The proposed TEC adjustment forces the model to correctly reproduce the topside scale height despite inaccurate values of Hm. This adjustment is very important for the application of TaD in an operational environment.
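    The essence of a TEC-driven adjustment is to constrain the modeled profile so that its vertical integral matches the GNSS-derived TEC. The sketch below is a minimal illustration of that constraint (a uniform rescaling of a toy profile), not the actual TaD algorithm, which adjusts the topside scale height rather than scaling the whole profile.

```python
import numpy as np

def integrated_tec(ne_per_m3, heights_km):
    """Trapezoidal vertical TEC in TECU (1 TECU = 1e16 el/m^2)."""
    h_m = heights_km * 1e3
    return float(np.sum(0.5 * (ne_per_m3[1:] + ne_per_m3[:-1]) * np.diff(h_m)) / 1e16)

def adjust_profile(ne_per_m3, heights_km, tec_gnss_tecu):
    """Scale the modeled profile so it integrates to the GNSS TEC."""
    return ne_per_m3 * (tec_gnss_tecu / integrated_tec(ne_per_m3, heights_km))

heights = np.linspace(100, 1000, 181)                  # km
ne = 1e12 * np.exp(-((heights - 300) / 150) ** 2)      # toy layer, el/m^3
adjusted = adjust_profile(ne, heights, tec_gnss_tecu=25.0)
print(round(integrated_tec(adjusted, heights), 3))     # 25.0 by construction
```

    The attraction of such a constraint is robustness: even when the Digisonde-derived scale height Hm is wrong, the GNSS TEC anchors the integrated content of the reconstructed profile.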

  13. [Adjustment of the Andersen's model to the Mexican context: access to prenatal care].

    Science.gov (United States)

    Tamez-González, Silvia; Valle-Arcos, Rosa Irene; Eibenschutz-Hartman, Catalina; Méndez-Ramírez, Ignacio

    2006-01-01

    The aim of this work was to propose an adjustment of Andersen's model that better reflects the social inequality of the population of Mexico City and allows evaluation of the effect of socioeconomic factors on access to prenatal care in a sample stratified by degree of marginalization. The data come from a study of 663 women, randomly selected from a sampling frame of 21,421 homes in Mexico City. The study collected information about factors that affect the utilization of health services: predisposing factors (age and socioeconomic level), enabling factors (education, social support, entitlement, out-of-pocket payment and opinion of health services), and need factors. The sample was ranked according to exclusion variables into three strata. The data were analyzed through the technique of path analysis. The results indicate that socioeconomic level acts as a predisposing variable for utilization of prenatal care services in all three strata, while education and social support were the most important enabling variables for utilization of prenatal care services in the same three groups. In the low stratum, the most important enabling variables were education and entitlement; in the high stratum, the principal enabling variables were out-of-pocket payment and social support. The medium stratum showed atypical behavior that was difficult to explain and understand. The need variable played no mediating role in any of the three models, indicating an absence of equity in all strata. However, the larger number of correlations in the high stratum may indicate less inequitable conditions relative to the other strata.

  14. A risk-adjusted financial model to estimate the cost of a video-assisted thoracoscopic surgery lobectomy programme.

    Science.gov (United States)

    Brunelli, Alessandro; Tentzeris, Vasileios; Sandri, Alberto; McKenna, Alexandra; Liew, Shan Liung; Milton, Richard; Chaudhuri, Nilanjan; Kefaloyannis, Emmanuel; Papagiannopoulos, Kostas

    2016-05-01

    To develop a clinically risk-adjusted financial model to estimate the cost associated with a video-assisted thoracoscopic surgery (VATS) lobectomy programme. Prospectively collected data of 236 VATS lobectomy patients (August 2012-December 2013) were analysed retrospectively. Fixed and variable intraoperative and postoperative costs were retrieved from the Hospital Accounting Department. Baseline and surgical variables were tested for a possible association with total cost using a multivariable linear regression and bootstrap analyses. Costs were calculated in GBP and expressed in Euros (EUR:GBP exchange rate 1.4). The average total cost of a VATS lobectomy was €11 368 (range €6992-€62 535). Average intraoperative (including surgical and anaesthetic time, overhead, disposable materials) and postoperative costs [including ward stay, high dependency unit (HDU) or intensive care unit (ICU) and variable costs associated with management of complications] were €8226 (range €5656-€13 296) and €3029 (range €529-€51 970), respectively. The following variables remained reliably associated with total costs after linear regression analysis and bootstrap: carbon monoxide lung diffusion capacity (DLCO) 0.05) in 86% of the samples. A hypothetical patient with COPD and DLCO less than 60% would cost €4270 more than a patient without COPD and with higher DLCO values (€14 793 vs €10 523). Risk-adjusting financial data can help estimate the total cost associated with VATS lobectomy based on clinical factors. This model can be used to audit the internal financial performance of a VATS lobectomy programme for budgeting, planning and for appropriate bundled payment reimbursements. © The Author 2015. Published by Oxford University Press on behalf of the European Association for Cardio-Thoracic Surgery. All rights reserved.
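    The paper's approach amounts to regressing total cost on baseline clinical variables and quoting risk-adjusted expected costs from the fitted coefficients. A hedged sketch on simulated data: the two binary predictors (COPD, DLCO below 60%) only mimic the abstract's worked example, and the coefficients are invented; the real model used additional bootstrap-validated terms.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 236  # matches the study's cohort size; the data themselves are simulated

# Simulated binary risk factors and a cost with hypothetical effects (EUR).
copd = rng.integers(0, 2, n)
low_dlco = rng.integers(0, 2, n)
cost = 10500 + 1800 * copd + 2400 * low_dlco + rng.normal(0, 1500, n)

# Ordinary least squares via the normal equations (lstsq).
X = np.column_stack([np.ones(n), copd, low_dlco])
beta, *_ = np.linalg.lstsq(X, cost, rcond=None)

baseline = beta @ [1, 0, 0]    # expected cost, no risk factors
high_risk = beta @ [1, 1, 1]   # expected cost, COPD + low DLCO
print(round(high_risk - baseline))
```

    The fitted difference recovers roughly the simulated 4,200 EUR premium, which is the same kind of gap the abstract quotes between its hypothetical high-risk (€14,793) and low-risk (€10,523) patients.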

  15. Model Based Temporal Reasoning

    Science.gov (United States)

    Rabin, Marla J.; Spinrad, Paul R.; Fall, Thomas C.

    1988-03-01

    Systems that assess the real world must cope with evidence that is uncertain, ambiguous, and spread over time. Typically, the most important function of an assessment system is to identify when activities are occurring that are unusual or unanticipated. Model based temporal reasoning addresses both of these requirements. The differences among temporal reasoning schemes lie in the methods used to avoid computational intractability. If we had n pieces of data and we wanted to examine how they were related, the worst case would be where we had to examine every subset of these points to see if that subset satisfied the relations. This would be 2^n subsets, which is intractable. Models compress this: if several data points are all compatible with a model, then that model represents all those data points. Data points are then considered related if they lie within the same model or if they lie in models that are related. Models thus address the intractability problem. They also address the problem of determining unusual activities: if the data do not agree with models that are indicated by earlier data, then something out of the norm is taking place. The models can summarize what we know up to that time, so when they are not predicting correctly, either something unusual is happening or we need to revise our models. The model based reasoner developed at Advanced Decision Systems is thus both intuitive and powerful. It is currently being used on one operational system and several prototype systems. It has enough power to be used in domains spanning the spectrum from manufacturing engineering and project management to low-intensity conflict and strategic assessment.
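    The compression argument can be made concrete with a toy example: rather than testing all 2^n subsets of readings, greedily assign each reading to the first model it is compatible with. This is a hypothetical illustration of the principle (here a "model" is just a tolerance band around an anchor value), not the Advanced Decision Systems reasoner.

```python
def group_by_models(readings, tolerance):
    """Greedy compression: each model is an anchor value plus its members."""
    models = []
    for t, value in readings:
        for model in models:
            if abs(value - model["anchor"]) <= tolerance:
                model["members"].append((t, value))
                break
        else:  # no existing model fits -> start a new one
            models.append({"anchor": value, "members": [(t, value)]})
    return models

readings = [(0, 1.0), (1, 1.1), (2, 5.0), (3, 0.9), (4, 5.2), (5, 9.7)]
models = group_by_models(readings, tolerance=0.5)
# 6 readings would mean 2^6 = 64 subsets to test; three models cover them.
print(len(readings), 2 ** len(readings), len(models))
```

    A singleton model like the 9.7 reading above is exactly the "does not agree with earlier models" signal the abstract describes: either an unusual activity or a cue to revise the model set.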

  16. An internet-based intervention for adjustment disorder (TAO): study protocol for a randomized controlled trial.

    Science.gov (United States)

    Rachyla, Iryna; Pérez-Ara, Marian; Molés, Mar; Campos, Daniel; Mira, Adriana; Botella, Cristina; Quero, Soledad

    2018-05-31

    Adjustment Disorder (AjD) is a common and disabling mental health problem. The lack of research on this disorder has led to the absence of evidence-based interventions for its treatment. Moreover, because the available data indicate that a high percentage of people with mental illness are not treated, it is necessary to develop new ways to provide psychological assistance. The present study describes a Randomized Controlled Trial (RCT) aimed at assessing the effectiveness and acceptance of a linear internet-delivered cognitive-behavioral therapy (ICBT) intervention for AjD. A two-armed RCT was designed to compare an intervention group to a waiting list control group. Participants from the intervention group will receive TAO, an internet-based program for AjD composed of seven modules. TAO combines CBT and Positive Psychology strategies in order to provide patients with complete support, reducing their clinical symptoms and enhancing their capacity to overcome everyday adversity. Participants will also receive short weekly telephone support. Participants in the control group will be assessed before and after a seven-week waiting period, and then they will be offered the same intervention. Participants will be randomly assigned to one of the 2 groups. Measurements will be taken at five different moments: baseline, post-intervention, and three follow-up periods (3-, 6- and 12-month). BDI-II and BAI will be used as primary outcome measures. Secondary outcomes will be symptoms of AjD, posttraumatic growth, positive and negative affect, and quality of life. The development of ICBT programs like TAO responds to a need for evidence-based interventions that can reach most of the people who need them, reducing the burden and cost of mental disorders. More specifically, TAO targets AjD and will entail a step forward in the treatment of this prevalent but under-researched disorder. Finally, it should be noted that this is the first RCT focusing on an internet-based

  17. Modeling and simulation of M/M/c queuing pharmacy system with adjustable parameters

    Science.gov (United States)

    Rashida, A. R.; Fadzli, Mohammad; Ibrahim, Safwati; Goh, Siti Rohana

    2016-02-01

    This paper presents a discrete event simulation (DES), a computer-based modelling technique that imitates a real system, of a pharmacy unit. M/M/c queuing theory is used to model and analyse the characteristics of the queuing system at the pharmacy unit of Hospital Tuanku Fauziah, Kangar in Perlis, Malaysia. The input of this model is based on statistical data collected over 20 working days in June 2014. Currently, patient waiting time at the pharmacy unit exceeds 15 minutes. The actual operation of the pharmacy unit is a mixed queuing server with an M/M/2 queuing model, where the pharmacists are the servers. The DES approach and ProModel simulation software are used to simulate the queuing model and to propose improvements to the queuing system at this pharmacy. Waiting time for each server is analysed; Counters 3 and 4 have the highest waiting times, 16.98 and 16.73 minutes respectively. Three scenarios, M/M/3, M/M/4 and M/M/5, are simulated, and waiting times for the actual and experimental queuing models are compared. The simulation results show that adding a server (pharmacist) reduces patient waiting time appreciably: average patient waiting time falls by almost 50% when one pharmacist is added to the counter. However, it is not necessary to fully utilize all counters, because even though M/M/4 and M/M/5 reduce patient waiting time further, they are ineffective since Counter 5 is rarely used.
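
    The waiting times reported in the abstract can be sanity-checked analytically: for an M/M/c queue, the Erlang C formula gives the probability that an arrival must wait, from which the mean queue time follows. A minimal sketch (the arrival and service rates below are hypothetical, not the hospital's measured values):

```python
import math

def erlang_c(c, rho):
    """Probability an arriving patient must wait (Erlang C);
    rho = lam/mu is the offered load, c the number of servers (rho < c)."""
    s = sum(rho**k / math.factorial(k) for k in range(c))
    last = rho**c / (math.factorial(c) * (1 - rho / c))
    return last / (s + last)

def mean_wait(lam, mu, c):
    """Mean time in queue Wq for an M/M/c system."""
    rho = lam / mu
    return erlang_c(c, rho) / (c * mu - lam)

# Hypothetical rates: 30 patients/hour arriving, each counter serving 12/hour.
wq3 = mean_wait(30, 12, 3)  # three pharmacists
wq4 = mean_wait(30, 12, 4)  # adding a fourth cuts the mean queue time
```

Adding a server shrinks both the probability of waiting and the drain rate term (c·mu − lam), which is why one extra pharmacist can roughly halve the average wait.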

  18. Opportunities for Improving Army Modeling and Simulation Development: Making Fundamental Adjustments and Borrowing Commercial Business Practices

    National Research Council Canada - National Science Library

    Lee, John

    2000-01-01

    ...; requirements which span the conflict spectrum. The Army's current staff training simulation development process could better support all possible scenarios by making some fundamental adjustments and borrowing commercial business practices...

  19. Using Multilevel Modeling to Assess Case-Mix Adjusters in Consumer Experience Surveys in Health Care

    NARCIS (Netherlands)

    Damman, Olga C.; Stubbe, Janine H.; Hendriks, Michelle; Arah, Onyebuchi A.; Spreeuwenberg, Peter; Delnoij, Diana M. J.; Groenewegen, Peter P.

    2009-01-01

    Background: Ratings on the quality of healthcare from the consumer's perspective need to be adjusted for consumer characteristics to ensure fair and accurate comparisons between healthcare providers or health plans. Although multilevel analysis is already considered an appropriate method for

  20. A joint logistic regression and covariate-adjusted continuous-time Markov chain model.

    Science.gov (United States)

    Rubin, Maria Laura; Chan, Wenyaw; Yamal, Jose-Miguel; Robertson, Claudia Sue

    2017-12-10

    The use of longitudinal measurements to predict a categorical outcome is an increasingly common goal in research studies. Joint models are commonly used to describe two or more models simultaneously by considering the correlated nature of their outcomes and the random error present in the longitudinal measurements. However, there is limited research on joint models with longitudinal predictors and categorical cross-sectional outcomes. Perhaps the most challenging task is how to model the longitudinal predictor process such that it represents the true biological mechanism that dictates the association with the categorical response. We propose a joint logistic regression and Markov chain model to describe a binary cross-sectional response, where the unobserved transition rates of a two-state continuous-time Markov chain are included as covariates. We use the method of maximum likelihood to estimate the parameters of our model. In a simulation study, coverage probabilities of about 95%, standard deviations close to standard errors, and low biases for the parameter values show that our estimation method is adequate. We apply the proposed joint model to a dataset of patients with traumatic brain injury to describe and predict a 6-month outcome based on physiological data collected post-injury and admission characteristics. Our analysis indicates that the information provided by physiological changes over time may help improve prediction of long-term functional status of these severely ill subjects. Copyright © 2017 John Wiley & Sons, Ltd.
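
    For a two-state continuous-time Markov chain like the covariate process described here, the transition probability matrix P(t) = exp(Qt) has a simple closed form. A small illustration (the rates are invented; the paper estimates them by maximum likelihood):

```python
import math

def two_state_probs(lam01, lam10, t):
    """Closed-form P(t) = exp(Qt) for a two-state CTMC with
    transition rates lam01 (state 0 -> 1) and lam10 (state 1 -> 0)."""
    s = lam01 + lam10
    e = math.exp(-s * t)
    p00 = (lam10 + lam01 * e) / s   # stay in state 0
    p11 = (lam01 + lam10 * e) / s   # stay in state 1
    return [[p00, 1 - p00], [1 - p11, p11]]

P = two_state_probs(0.3, 0.5, 2.0)
```

As t grows, the rows converge to the stationary distribution (lam10/s, lam01/s), which is what makes the unobserved rates informative covariates over a follow-up window.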

  1. A phased transition to a market adjustment of the pseudo model of Russian economy

    Directory of Open Access Journals (Sweden)

    N. Komkov

    2015-01-01

    Full Text Available We consider a phased reform of the economic model of Russia. In less than one century, Russia found itself at both extremes of the model economy: developed socialism (1917) and perfect capitalism (1991). Within each there was instability of socio-economic development: economic recovery alternated with recession, and the huge reserves of natural resources and land were not always developed and used effectively. At each extreme, the choice was based largely on current political aims and on attitudes formed by various social groups. The economic situation Russia has reached and its prevailing socio-economic model have been subjected to much fair criticism. To improve it, a phased approach to reform is proposed, in which the main focus is on "how" to move to a new state. The approach is based on a scenario treatment of reform of the basic components of the economic model, involving the formation of better scenarios and evaluation by the expert community of how closely the planned versions of the model approach the country's national development objectives.

  2. Adjustment of cast metal post/cores modeled with different acrylic resins

    OpenAIRE

    Gusmão, João Milton Rocha; Pereira, Renato Piai; Alves, Guilhermino Oliveira; Pithon, Matheus Melo; Moreira, David Costa

    2016-01-01

    Aim: Evaluate the performance of four commercially available chemically-activated acrylic resins (CAARs) by measuring the level of displacement of the cores following casting. Materials and Methods: Two devices were constructed to model the cores based on a natural tooth. Forty post/cores were modeled, 10 in each of the following CAARs: Duralay (Reliance Dental, Illinois, USA), Pattern Resin (GC, Tokyo, Japan), Dencrilay (Dencril, Sao Paulo, Brazil), and Jet (Clássico, Sao Paulo, Brazil). Two...

  3. Q-learning-based adjustable fixed-phase quantum Grover search algorithm

    International Nuclear Information System (INIS)

    Guo Ying; Shi Wensha; Wang Yijun; Hu, Jiankun

    2017-01-01

    We demonstrate that the rotation phase can be suitably chosen to increase the efficiency of the phase-based quantum search algorithm, leading to a dynamic balance between iterations and success probabilities of the fixed-phase quantum Grover search algorithm with Q-learning for a given number of solutions. In this search algorithm, the proposed Q-learning algorithm, a model-free reinforcement learning strategy in essence, is used to perform a matching between the fraction of marked items λ and the rotation phase α. After establishing the policy function α = π(λ), we complete the fixed-phase Grover algorithm, where the phase parameter is selected via the learned policy. Simulation results show that the Q-learning-based Grover search algorithm (QLGA) requires fewer iterations and yields higher success probabilities. Compared with conventional Grover algorithms, it avoids local optima, enabling success probabilities to approach one. (author)
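
    For intuition on the iteration/success-probability trade-off the abstract describes, the standard Grover algorithm (rotation phase π) already exhibits it: after k iterations the success probability is sin²((2k+1)θ) with θ = arcsin(√λ). A sketch of that baseline only; the learned fixed-phase policy α = π(λ) itself is not reproduced here:

```python
import math

def grover_success(lmbda, k):
    """Success probability after k standard Grover iterations,
    where lmbda is the fraction of marked items."""
    theta = math.asin(math.sqrt(lmbda))
    return math.sin((2 * k + 1) * theta) ** 2

def best_iterations(lmbda):
    """Iteration count maximizing success probability for the standard algorithm."""
    theta = math.asin(math.sqrt(lmbda))
    return round(math.pi / (4 * theta) - 0.5)

k = best_iterations(1 / 1024)   # ~pi/4 * sqrt(N/M) iterations
p = grover_success(1 / 1024, k)
```

The residual failure probability at the optimal k is what phase-adjusted variants such as the one in this paper aim to push toward zero.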

  4. Transmission History Based Distributed Adaptive Contention Window Adjustment Algorithm Cooperating with Automatic Rate Fallback for Wireless LANs

    Science.gov (United States)

    Ogawa, Masakatsu; Hiraguri, Takefumi; Nishimori, Kentaro; Takaya, Kazuhiro; Murakawa, Kazuo

    This paper proposes and investigates a distributed adaptive contention window adjustment algorithm based on the transmission history for wireless LANs called the transmission-history-based distributed adaptive contention window adjustment (THAW) algorithm. The objective of this paper is to reduce the transmission delay and improve the channel throughput compared to conventional algorithms. The feature of THAW is that it adaptively adjusts the initial contention window (CWinit) size in the binary exponential backoff (BEB) algorithm used in the IEEE 802.11 standard according to the transmission history and the automatic rate fallback (ARF) algorithm, which is the most basic algorithm in automatic rate controls. The effect is to keep CWinit at a high value in a congested state. Simulation results show that the THAW algorithm outperforms the conventional algorithms in terms of the channel throughput and delay, even if the timer in the ARF is changed.
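
    The adaptation idea can be sketched as a rule that raises or lowers CWinit based on the recent transmission history. The threshold rule below is hypothetical and only illustrates the mechanism, not the published THAW algorithm:

```python
import random

CW_MIN, CW_MAX = 16, 1024

def backoff_slots(cw):
    """Uniform random backoff in [0, cw - 1], as in the 802.11 DCF."""
    return random.randrange(cw)

def update_cw_init(cw_init, history, window=16):
    """Illustrative adaptation: double CWinit when most recent transmissions
    failed (congestion), halve it when all succeeded (hypothetical rule)."""
    recent = history[-window:]           # history: list of success booleans
    failures = recent.count(False)
    if failures > len(recent) // 2:
        return min(cw_init * 2, CW_MAX)  # congested: keep CWinit high
    if failures == 0:
        return max(cw_init // 2, CW_MIN)
    return cw_init
```

Keeping CWinit elevated during congestion avoids the burst of collisions that plain BEB suffers when every station resets its window to CWmin after a success.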

  5. School-Based Racial and Gender Discrimination among African American Adolescents: Exploring Gender Variation in Frequency and Implications for Adjustment

    Science.gov (United States)

    Chavous, Tabbye M.; Griffin, Tiffany M.

    2012-01-01

    The present study examined school-based racial and gender discrimination experiences among African American adolescents in Grade 8 (n = 204 girls; n = 209 boys). A primary goal was exploring gender variation in frequency of both types of discrimination and associations of discrimination with academic and psychological functioning among girls and boys. Girls and boys did not vary in reported racial discrimination frequency, but boys reported more gender discrimination experiences. Multiple regression analyses within gender groups indicated that among girls and boys, racial discrimination and gender discrimination predicted higher depressive symptoms and school importance and racial discrimination predicted self-esteem. Racial and gender discrimination were also negatively associated with grade point average among boys but were not significantly associated in girls’ analyses. Significant gender discrimination X racial discrimination interactions resulted in the girls’ models predicting psychological outcomes and in boys’ models predicting academic achievement. Taken together, findings suggest the importance of considering gender- and race-related experiences in understanding academic and psychological adjustment among African American adolescents. PMID:22837794

  6. School-Based Racial and Gender Discrimination among African American Adolescents: Exploring Gender Variation in Frequency and Implications for Adjustment.

    Science.gov (United States)

    Cogburn, Courtney D; Chavous, Tabbye M; Griffin, Tiffany M

    2011-01-03

    The present study examined school-based racial and gender discrimination experiences among African American adolescents in Grade 8 (n = 204 girls; n = 209 boys). A primary goal was exploring gender variation in frequency of both types of discrimination and associations of discrimination with academic and psychological functioning among girls and boys. Girls and boys did not vary in reported racial discrimination frequency, but boys reported more gender discrimination experiences. Multiple regression analyses within gender groups indicated that among girls and boys, racial discrimination and gender discrimination predicted higher depressive symptoms and school importance and racial discrimination predicted self-esteem. Racial and gender discrimination were also negatively associated with grade point average among boys but were not significantly associated in girls' analyses. Significant gender discrimination X racial discrimination interactions resulted in the girls' models predicting psychological outcomes and in boys' models predicting academic achievement. Taken together, findings suggest the importance of considering gender- and race-related experiences in understanding academic and psychological adjustment among African American adolescents.

  7. Is Weight-Based Adjustment of Automatic Exposure Control Necessary for the Reduction of Chest CT Radiation Dose?

    Energy Technology Data Exchange (ETDEWEB)

    Prakash, Priyanka; Kalra, Mannudeep K.; Gilman, Matthew D.; Shepard, Jo Anne O.; Digumarthy, Subba R. [Massachusetts General Hospital and Harvard Medical School, Boston (United States)

    2010-02-15

    To assess the effects of radiation dose reduction in chest CT using a weight-based adjustment of the automatic exposure control (AEC) technique. With Institutional Review Board approval, 60 patients (mean age, 59.1 years; M:F = 35:25) and 57 weight-matched patients (mean age, 52.3 years; M:F = 25:32) were scanned using weight-adjusted AEC and non-weight-adjusted AEC, respectively, on a 64-slice multidetector CT with a 0.984:1 pitch, 0.5 second rotation time, 40 mm table feed/rotation, and 2.5 mm section thickness. Patients were categorized into three weight categories: < 60 kg (n = 17), 60-90 kg (n = 52), and > 90 kg (n = 48). Patient weights, scanning parameters, CT dose index volumes (CTDIvol) and dose length products (DLP) were recorded, and effective dose (ED) was estimated. Image noise was measured in the descending thoracic aorta. Data were analyzed using a standard statistical package (SAS/STAT, Version 9.1, SAS Institute Inc., Cary, NC). Compared to non-weight-adjusted AEC, the weight-adjusted AEC technique resulted in an average decrease of 29% in CTDIvol and a 27% effective dose reduction (p < 0.0001). With weight-adjusted AEC, the CTDIvol decreased to 15.8, 15.9, and 27.3 mGy for the < 60, 60-90 and > 90 kg weight groups, respectively, compared to 20.3, 27.9 and 32.8 mGy with non-weight-adjusted AEC. No significant difference was observed in objective image noise between chest CT acquired with the non-weight-adjusted (15.0 ± 3.1) and weight-adjusted (16.1 ± 5.6) AEC techniques (p > 0.05). The results of this study suggest that AEC should be tailored according to patient weight. Without weight-based adjustment of AEC, patients are exposed to a 17-43% higher radiation dose from chest CT.
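
    Two of the quantities in the abstract are easy to reproduce: the fractional CTDIvol reduction between protocols, and the common DLP-based effective dose estimate E = k·DLP. A sketch; the conversion coefficient k ≈ 0.014 mSv/(mGy·cm) for adult chest is a standard literature value, not taken from this study:

```python
def effective_dose_msv(dlp_mgy_cm, k=0.014):
    """Effective dose estimate E = k * DLP; k is the region-specific
    conversion coefficient (literature value for adult chest assumed here)."""
    return k * dlp_mgy_cm

def ctdi_reduction(ctdi_adjusted, ctdi_plain):
    """Fractional CTDIvol reduction between the two AEC protocols."""
    return 1.0 - ctdi_adjusted / ctdi_plain

# < 60 kg group from the abstract: 15.8 mGy vs 20.3 mGy, ~22% reduction.
r = ctdi_reduction(15.8, 20.3)
```

Running the same calculation for the three weight groups shows why the benefit concentrates in lighter patients, where non-weight-adjusted AEC overexposes the most.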

  8. Melt-processable hydrophobic acrylonitrile-based copolymer systems with adjustable elastic properties designed for biomedical applications.

    Science.gov (United States)

    Cui, J; Trescher, K; Kratz, K; Jung, F; Hiebl, B; Lendlein, A

    2010-01-01

    Acrylonitrile-based polymer systems (PAN) are being comprehensively explored as versatile biomaterials with various potential biomedical applications, such as membranes for extracorporeal devices or matrices for guided skin reconstruction. The surface properties (e.g. hydrophilicity or charge) of such materials can be tailored over a wide range by variation of molecular parameters such as different co-monomers or their sequence structure. Some of these materials show interesting biofunctionalities such as a capability for selective cell cultivation. So far, the majority of AN-based copolymers investigated in physiological environments have been processed from solution (e.g. membranes), as these materials are thermo-sensitive and might degrade when heated. In this work we aimed at the synthesis of hydrophobic, melt-processable AN-based copolymers with adjustable elastic properties for the preparation of model scaffolds with controlled pore geometry and size. For this purpose, a series of copolymers of acrylonitrile and n-butyl acrylate (nBA) was synthesized via a free radical copolymerisation technique. The content of nBA in the copolymers varied from 45 wt% to 70 wt%, which was confirmed by 1H-NMR spectroscopy. The glass transition temperatures (Tg) of the P(AN-co-nBA) copolymers determined by differential scanning calorimetry (DSC) decreased from 58 °C to 20 °C with increasing nBA content, in excellent agreement with the prediction of the Gordon-Taylor equation based on the Tgs of the homopolymers. The Young's modulus obtained in tensile tests decreased significantly with rising nBA content, from 1062 MPa to 1.2 MPa. All copolymers could be successfully processed from the melt at processing temperatures ranging from 50 °C to 170 °C, whereby thermally induced decomposition was only observed at temperatures higher than 320 °C in thermal gravimetric analysis (TGA). Finally, the melt processed P
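
    The Gordon-Taylor prediction the authors compare against is a one-line formula, Tg = (w1·Tg1 + k·w2·Tg2)/(w1 + k·w2). A sketch with illustrative homopolymer Tg values and k = 1; none of these numbers are taken from the paper:

```python
def gordon_taylor(w2, tg1, tg2, k):
    """Gordon-Taylor estimate of copolymer Tg (in kelvin).
    w2: weight fraction of component 2; k: fitting constant."""
    w1 = 1.0 - w2
    return (w1 * tg1 + k * w2 * tg2) / (w1 + k * w2)

# Illustrative values only: Tg(PAN) ~ 368 K, Tg(PnBA) ~ 224 K, k = 1.
tg_45 = gordon_taylor(0.45, 368.0, 224.0, 1.0)  # 45 wt% nBA
tg_70 = gordon_taylor(0.70, 368.0, 224.0, 1.0)  # 70 wt% nBA, lower Tg
```

With k = 1 the formula reduces to a linear weight-fraction blend; fitting k to DSC data is what gives the curvature matched in the paper.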

  9. DaMoScope and its internet graphics for the visual control of adjusting mathematical models describing experimental data

    International Nuclear Information System (INIS)

    Belousov, V. I.; Ezhela, V. V.; Kuyanov, Yu. V.; Tkachenko, N. P.

    2015-01-01

    The experience of using the dynamic atlas of experimental data and of the mathematical models describing them, in problems of adjusting parametric models of observables as functions of kinematic variables, is presented. The capability to display large numbers of experimental data sets together with the models describing them is illustrated with data and models for observables determined by the amplitudes of elastic hadron scattering. The Internet implementation of the interactive tool DaMoScope and its interface with the experimental data and with the codes of the adjusted parametric models, carrying the parameters of the best description of the data, are shown schematically. The DaMoScope codes are freely available.

  10. DaMoScope and its internet graphics for the visual control of adjusting mathematical models describing experimental data

    Science.gov (United States)

    Belousov, V. I.; Ezhela, V. V.; Kuyanov, Yu. V.; Tkachenko, N. P.

    2015-12-01

    The experience of using the dynamic atlas of experimental data and of the mathematical models describing them, in problems of adjusting parametric models of observables as functions of kinematic variables, is presented. The capability to display large numbers of experimental data sets together with the models describing them is illustrated with data and models for observables determined by the amplitudes of elastic hadron scattering. The Internet implementation of the interactive tool DaMoScope and its interface with the experimental data and with the codes of the adjusted parametric models, carrying the parameters of the best description of the data, are shown schematically. The DaMoScope codes are freely available.

  11. DaMoScope and its internet graphics for the visual control of adjusting mathematical models describing experimental data

    Energy Technology Data Exchange (ETDEWEB)

    Belousov, V. I.; Ezhela, V. V.; Kuyanov, Yu. V., E-mail: Yu.Kuyanov@gmail.com; Tkachenko, N. P. [Institute for High Energy Physics, National Research Center Kurchatov Institute, COMPAS Group (Russian Federation)

    2015-12-15

    The experience of using the dynamic atlas of experimental data and of the mathematical models describing them, in problems of adjusting parametric models of observables as functions of kinematic variables, is presented. The capability to display large numbers of experimental data sets together with the models describing them is illustrated with data and models for observables determined by the amplitudes of elastic hadron scattering. The Internet implementation of the interactive tool DaMoScope and its interface with the experimental data and with the codes of the adjusted parametric models, carrying the parameters of the best description of the data, are shown schematically. The DaMoScope codes are freely available.

  12. Data Assimilation and Adjusted Spherical Harmonic Model of VTEC Map over Thailand

    Science.gov (United States)

    Klinngam, Somjai; Maruyama, Takashi; Tsugawa, Takuya; Ishii, Mamoru; Supnithi, Pornchai; Chiablaem, Athiwat

    2016-07-01

    The global navigation satellite system (GNSS) and high frequency (HF) communication are vulnerable to ionospheric irregularities, especially when the signal travels through the low-latitude region around the magnetic equator known as the equatorial ionization anomaly (EIA) region. In order to study the ionospheric effects on communications performance in this region, a regional map of the observed total electron content (TEC) can show the characteristics and irregularities of the ionosphere. In this work, we develop a two-dimensional (2D) map of vertical TEC (VTEC) over Thailand using the adjusted spherical harmonic model (ASHM) and a data assimilation technique. We calculate the VTEC from the receiver independent exchange (RINEX) files recorded by dual-frequency global positioning system (GPS) receivers on July 8th, 2012 (a quiet day) at 12 stations around Thailand: 0°N to 25°N and 95°E to 110°E. These stations are managed by the Department of Public Works and Town & Country Planning (DPT), Thailand, the South East Asia Low-latitude Ionospheric Network (SEALION) project operated by the National Institute of Information and Communications Technology (NICT), Japan, and King Mongkut's Institute of Technology Ladkrabang (KMITL). We compute the median observed VTEC (OBS-VTEC) in grids with a spatial resolution of 2.5° × 5° in latitude and longitude and a time resolution of 2 hours. We assimilate the OBS-VTEC with the VTEC estimated from the International Reference Ionosphere model (IRI-VTEC) as well as the ionosphere map exchange (IONEX) files provided by the International GNSS Service (IGS-VTEC). The results show that the estimation of the 15-degree ASHM can be improved when both IRI-VTEC and IGS-VTEC are weighted by latitude-dependent factors before assimilation with the OBS-VTEC. However, IRI-VTEC assimilation improves the ASHM estimation more than IGS-VTEC assimilation. Acknowledgment: This work is partially funded by the
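
    At its simplest, the per-grid-cell assimilation step can be viewed as a weighted blend of observed and model VTEC, falling back to the model value where no observation exists. A toy sketch; the weights here are hypothetical, whereas the paper applies latitude-dependent factors and then fits the ASHM:

```python
def assimilate(obs, model, w_obs=0.7):
    """Blend observed VTEC with model VTEC cell by cell.
    obs: list of observed values (None where no GPS coverage);
    model: background values (e.g. IRI or IGS); w_obs: hypothetical weight."""
    out = []
    for o, m in zip(obs, model):
        out.append(m if o is None else w_obs * o + (1 - w_obs) * m)
    return out
```

The blended grid then serves as the input to the spherical-harmonic least-squares fit, so down-weighting a biased background model directly improves the fitted map.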

  13. Adjustment of Measurements with Multiplicative Errors: Error Analysis, Estimates of the Variance of Unit Weight, and Effect on Volume Estimation from LiDAR-Type Digital Elevation Models

    Directory of Open Access Journals (Sweden)

    Yun Shi

    2014-01-01

    Full Text Available Modern observation technology has verified that measurement errors can be proportional to the true values of the measurements, as with GPS, VLBI baselines and LiDAR. Observational models of this type are called multiplicative error models. This paper extends the work of Xu and Shimada, published in 2000, on multiplicative error models to the analytical error analysis of quantities of practical interest and to estimates of the variance of unit weight. We analytically derive the variance-covariance matrices of the three least squares (LS) adjustments, the adjusted measurements, and the corrections of measurements in multiplicative error models. For quality evaluation, we construct five estimators for the variance of unit weight in association with the three LS adjustment methods. Although LiDAR measurements are contaminated with multiplicative random errors, LiDAR-based digital elevation models (DEMs) have been constructed as if the errors were additive. We simulate a model landslide, assumed to be surveyed with LiDAR, and investigate the effect of LiDAR-type multiplicative error measurements on DEM construction and on the estimate of landslide mass volume from the constructed DEM.
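
    The defining feature of a multiplicative error model is that the error standard deviation scales with the true value, so a least-squares fit should weight observations by 1/ŷ². An iteratively reweighted sketch for a straight-line fit; this is a generic illustration, not the three LS adjustments derived in the paper:

```python
def fit_multiplicative(x, y, iters=5):
    """Weighted LS for y_i = (a + b*x_i)(1 + eps_i): the error std is
    proportional to the true value, so weights are 1/yhat^2.
    Starts from ordinary LS (all weights 1) and reweights iteratively."""
    w = [1.0] * len(x)
    a = b = 0.0
    for _ in range(iters):
        sw = sum(w)
        mx = sum(wi * xi for wi, xi in zip(w, x)) / sw
        my = sum(wi * yi for wi, yi in zip(w, y)) / sw
        num = sum(wi * (xi - mx) * (yi - my) for wi, xi, yi in zip(w, x, y))
        den = sum(wi * (xi - mx) ** 2 for wi, xi in zip(w, x))
        b = num / den
        a = my - b * mx
        # reweight by the fitted values (guard against nonpositive yhat)
        w = [1.0 / max(a + b * xi, 1e-9) ** 2 for xi in x]
    return a, b
```

Treating such data as if the errors were additive, as the abstract notes for LiDAR DEMs, amounts to using constant weights and over-trusts the large (noisy) measurements.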

  14. Skull base tumor model.

    Science.gov (United States)

    Gragnaniello, Cristian; Nader, Remi; van Doormaal, Tristan; Kamel, Mahmoud; Voormolen, Eduard H J; Lasio, Giovanni; Aboud, Emad; Regli, Luca; Tulleken, Cornelius A F; Al-Mefty, Ossama

    2010-11-01

    Resident duty-hours restrictions have now been instituted in many countries worldwide. Shortened training times and increased public scrutiny of surgical competency have led to a move away from the traditional apprenticeship model of training. The development of educational models for brain anatomy is a fascinating innovation allowing neurosurgeons to train without the need to practice on real patients, and it may be a solution to achieve competency within a shortened training period. The authors describe the use of Stratathane resin ST-504 polymer (SRSP), which is inserted at different intracranial locations to closely mimic meningiomas and other pathological entities of the skull base, in a cadaveric model, for use in neurosurgical training. Silicone-injected and pressurized cadaveric heads were used for studying the SRSP model. The SRSP presents unique intrinsic metamorphic characteristics: liquid at first, it expands and foams when injected into the desired area of the brain, forming a solid tumorlike structure. The authors injected SRSP via different passages that did not influence the routes used for the surgical approach for resection of the simulated lesion. For example, SRSP injection routes included endonasal transsphenoidal or transoral approaches if lesions were to be removed through a standard skull base approach, or, alternatively, SRSP was injected via a cranial approach if the removal was planned to be via the transsphenoidal or transoral route. The model was set in place in 3 countries (US, Italy, and The Netherlands), and a pool of 13 physicians from 4 different institutions (all surgeons and surgeons in training) participated in evaluating it and provided feedback. All 13 evaluating physicians had overall positive impressions of the model. The overall score on the 9 components evaluated, including comparison between the tumor model and real tumor cases, perioperative requirements, general impression, and applicability, was 88% (100% being the best possible

  15. Adolescent Sibling Relationship Quality and Adjustment: Sibling Trustworthiness and Modeling, as Factors Directly and Indirectly Influencing These Associations

    Science.gov (United States)

    Gamble, Wendy C.; Yu, Jeong Jin; Kuehn, Emily D.

    2011-01-01

    The main goal of this study was to examine the direct and moderating effects of trustworthiness and modeling on adolescent siblings' adjustment. Data were collected from 438 families including a mother, a younger sibling in fifth, sixth, or seventh grade (M = 11.6 years), and an older sibling (M = 14.3 years). Respondents completed Web-based…

  16. Node Depth Adjustment Based Target Tracking in UWSNs Using Improved Harmony Search

    Directory of Open Access Journals (Sweden)

    Meiqin Liu

    2017-12-01

    Full Text Available Underwater wireless sensor networks (UWSNs) can provide a promising solution to underwater target tracking. Due to limited computation and bandwidth resources, only a small number of nodes are selected to track the target at each interval. How to improve tracking accuracy with a small number of nodes is a key problem. In recent years, node depth adjustment systems have been developed and applied to network deployment and routing protocols. As far as we know, all existing tracking schemes keep underwater nodes static or let them move with the water flow, and node depth adjustment has not yet been utilized for underwater target tracking. This paper studies a node depth adjustment method for target tracking in UWSNs. Firstly, since the Fisher Information Matrix (FIM) quantifies estimation accuracy, its relation to node depth is derived as a metric. Secondly, we formulate node depth adjustment as an optimization problem that determines the moving depth of each activated node: under a moving-range constraint, the value of the FIM is used as the objective function to be minimized over the moving distances of the nodes. Thirdly, to solve the optimization problem efficiently, an improved Harmony Search (HS) algorithm is proposed, in which the generating probability is modified to improve search speed and accuracy. Finally, simulation results are presented to verify the performance of our scheme.
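
    A baseline Harmony Search, against which an improved generating probability would be compared, can be written compactly. The sketch below minimizes a generic objective over a box; the parameter values are conventional defaults, not the paper's:

```python
import random

def harmony_search(objective, bounds, hms=10, hmcr=0.9, par=0.3,
                   iters=2000, seed=1):
    """Minimal Harmony Search minimizing `objective` over box `bounds`.
    hms: harmony memory size; hmcr: memory considering rate;
    par: pitch adjusting rate. Illustrative baseline only."""
    rng = random.Random(seed)
    memory = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(hms)]
    scores = [objective(h) for h in memory]
    for _ in range(iters):
        new = []
        for d, (lo, hi) in enumerate(bounds):
            if rng.random() < hmcr:                  # draw from harmony memory
                v = memory[rng.randrange(hms)][d]
                if rng.random() < par:               # pitch adjustment
                    v += rng.uniform(-1, 1) * 0.05 * (hi - lo)
            else:                                    # random consideration
                v = rng.uniform(lo, hi)
            new.append(min(max(v, lo), hi))
        s = objective(new)
        worst = max(range(hms), key=lambda i: scores[i])
        if s < scores[worst]:                        # replace worst harmony
            memory[worst], scores[worst] = new, s
    best = min(range(hms), key=lambda i: scores[i])
    return memory[best], scores[best]
```

In the paper's setting, `objective` would be the FIM-derived metric and `bounds` the per-node moving-range constraints; the improvement adapts the generating probability during the search.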

  17. Seat Adjustment Design of an Intelligent Robotic Wheelchair Based on the Stewart Platform

    Directory of Open Access Journals (Sweden)

    Po Er Hsu

    2013-03-01

    Full Text Available A wheelchair user makes direct contact with the wheelchair seat, which serves as the interface between the user and the wheelchair, for much of any given day. Seat adjustment design is of crucial importance in providing proper seating posture and comfort. This paper presents a multiple-DOF (degrees of freedom) seat adjustment mechanism, which is intended to increase the independence of the wheelchair user while maintaining a concise structure, light weight, and an intuitive control interface. The four-axis Stewart platform is capable of heaving, pitching, and swaying to provide seat elevation, tilt-in-space, and sideways movement functions. The geometry and the types of joints of this mechanism are carefully arranged so that only one actuator needs to be controlled, enabling the wheelchair user to adjust the seat by simply pressing a button. The seat is also equipped with soft pressure-sensing pads that provide pressure management by adjusting the seat mechanism once continuous and concentrated pressure is detected. Finally, compared with a manual wheelchair, the proposed mechanism demonstrated easier and more convenient operation, requiring less effort for transfer assistance.

  18. Psychological adjustment to amputation: variations on the bases of sex, age and cause of limb loss

    International Nuclear Information System (INIS)

    Ali, S.; Haider, S.K.F.

    2017-01-01

    Amputation is the removal of a limb or part of a limb by a surgical procedure in order to save a person's life. The underlying reasons for this tragic event vary; however, irrespective of its cause, limb loss is associated with a wide range of life challenges. The study investigated the psychological sequelae experienced by individuals after losing a limb and the level of strain and pressure they experience after this traumatic event. It also examined the moderating role of demographic traits such as age, sex and cause of limb loss in psychosocial adjustment to amputation. Methods: The study included 100 adult amputees of both genders, and the data were collected from major government and private hospitals of Peshawar district. A demographic data sheet was constructed to record the amputees' demographic traits, and the standardized Psychological Adjustment Scale developed by Sabir (1999) was used to measure the level of psychological adjustment after limb loss. Results: Nearly all the amputees exhibited signs of psychological maladjustment to varying degrees. Males showed much greater signs of maladjustment than females, and young adults were the most psychologically disturbed by limb loss. Amputation for planned medical reasons led to fewer adjustment problems than unplanned accidental amputation, for which patients were not mentally prepared to accept the loss. Conclusion: The psychological aspect of amputation is an important aspect of limb loss which needs to be addressed properly in order to rehabilitate these patients and help them adjust successfully to their limb loss. (author)

  19. Subthreshold-swing-adjustable tunneling-field-effect-transistor-based random-access memory for nonvolatile operation

    Science.gov (United States)

    Huh, In; Cheon, Woo Young; Choi, Woo Young

    2016-04-01

    A subthreshold-swing-adjustable tunneling-field-effect-transistor-based random-access memory (SAT RAM) has been proposed and fabricated for low-power nonvolatile memory applications. The proposed SAT RAM cell demonstrates an adjustable subthreshold swing (SS) depending on the stored information: small SS in the erase state ("1" state) and large SS in the program state ("0" state). Thus, SAT RAM cells can achieve a low read voltage (Vread) with a large memory window, in addition to effective suppression of ambipolar behavior. These unique features of the SAT RAM originate from the locally stored charge, which modulates the tunneling barrier width (Wtun) of the source-to-channel tunneling junction.

  20. Sensitivity analysis for missing dichotomous outcome data in multi-visit randomized clinical trial with randomization-based covariance adjustment.

    Science.gov (United States)

    Li, Siying; Koch, Gary G; Preisser, John S; Lam, Diana; Sanchez-Kam, Matilde

    2017-01-01

    Dichotomous endpoints in clinical trials have only two possible outcomes, observed either directly or via categorization of an ordinal or continuous observation. It is common to have missing data for one or more visits during a multi-visit study. This paper presents a closed-form method for sensitivity analysis of a randomized multi-visit clinical trial that possibly has missing not at random (MNAR) dichotomous data. Counts of missing data are redistributed mathematically to the favorable and unfavorable outcomes to address possibly informative missing data. Adjusted proportion estimates and their closed-form covariance matrix estimates are provided. Treatment comparisons over time are addressed with Mantel-Haenszel adjustment for a stratification factor and/or randomization-based adjustment for baseline covariables. The application of such sensitivity analyses is illustrated with an example. An appendix outlines an extension of the methodology to ordinal endpoints.
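    The redistribution idea can be illustrated with a minimal sketch. The function name, counts, and the single-visit, single-arm setting below are simplifications invented for illustration; the paper itself works with multi-visit data and provides closed-form covariance estimates.

    ```python
    def adjusted_proportion(favorable, unfavorable, missing, p_favorable):
        """Redistribute missing counts for an MNAR sensitivity scenario:
        a fraction p_favorable of the missing observations is counted as
        favorable and the remainder as unfavorable."""
        n = favorable + unfavorable + missing
        adj_favorable = favorable + p_favorable * missing
        return adj_favorable / n

    # Sweep the redistribution fraction to see how sensitive the estimate is
    for p in (0.0, 0.5, 1.0):
        print(round(adjusted_proportion(60, 30, 10, p), 3))
    ```

    Sweeping `p_favorable` from 0 to 1 brackets the estimate between the worst-case and best-case treatment of the missing data.
    
    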

  1. Angle-adjustable density field formulation for the modeling of crystalline microstructure

    Science.gov (United States)

    Wang, Zi-Le; Liu, Zhirong; Huang, Zhi-Feng

    2018-05-01

    A continuum density field formulation with particle-scale resolution is constructed to simultaneously incorporate the orientation dependence of interparticle interactions and the rotational invariance of the system, a fundamental but challenging issue in modeling the structure and dynamics of a broad range of material systems across variable scales. This generalized phase field crystal-type approach is based upon the complete expansion of particle direct correlation functions and the concept of isotropic tensors. Through applications to the modeling of various two- and three-dimensional crystalline structures, our study demonstrates the capability of bond-angle control in this continuum field theory and its effects on the emergence of ordered phases, and provides a systematic way of performing tunable angle analyses for crystalline microstructures.

  2. Convexity Adjustments

    DEFF Research Database (Denmark)

    M. Gaspar, Raquel; Murgoci, Agatha

    2010-01-01

    A convexity adjustment (or convexity correction) in fixed income markets arises when one uses prices of standard (plain vanilla) products plus an adjustment to price nonstandard products. We explain the basic and appealing idea behind the use of convexity adjustments and focus on the situations...
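    A concrete instance, not taken from this paper but standard in the fixed-income literature, is the futures-forward convexity adjustment under the Ho-Lee short-rate model; the numbers below are purely illustrative.

    ```python
    def ho_lee_convexity_adjustment(sigma, t1, t2):
        """Under the Ho-Lee model with continuous compounding,
        futures rate ~= forward rate + 0.5 * sigma^2 * t1 * t2,
        where sigma is the short-rate volatility, t1 the futures expiry
        and t2 the maturity of the underlying rate."""
        return 0.5 * sigma ** 2 * t1 * t2

    forward_rate = 0.05                                    # illustrative forward rate
    adjustment = ho_lee_convexity_adjustment(0.012, 8.0, 8.25)
    futures_rate = forward_rate + adjustment
    print(round(adjustment * 1e4, 1))                      # adjustment in basis points
    ```

    The adjustment grows with both volatility and horizon, which is why it matters mainly for long-dated contracts.
    
    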

  3. A Key Challenge in Global HRM: Adding New Insights to Existing Expatriate Spouse Adjustment Models

    Science.gov (United States)

    Gupta, Ritu; Banerjee, Pratyush; Gaur, Jighyasu

    2012-01-01

    This study is an attempt to strengthen the existing knowledge about factors affecting the adjustment process of the trailing expatriate spouse and the subsequent impact of any maladjustment or expatriate failure. We conducted a qualitative enquiry using grounded theory methodology with 26 Indian spouses who had to deal with their partner's…

  4. Using multilevel modelling to assess case-mix adjusters in consumers experience surveys in health care

    NARCIS (Netherlands)

    Damman, O.C.; Stubbe, J.H.; Hendriks, M.; Arah, O.A.; Spreeuwenberg, P.; Delnoij, D.M.J.; Groenewegen, P.P.

    2009-01-01

    Background: Ratings on the quality of healthcare from the consumer’s perspective need to be adjusted for consumer characteristics to ensure fair and accurate comparisons between healthcare providers or health plans. Although multilevel analysis is already considered an appropriate method for

  6. Methodology of mixed load customized bus lines and adjustment based on time windows

    Science.gov (United States)

    Song, Rui

    2018-01-01

    Custom bus routes need to be optimized to meet the needs of customized bus services for the personalized trips of different passengers. This paper introduces a customized bus routing problem in which the trips for each depot are given and each bus stop has a fixed time window within which trips must be completed. Treating a trip as a virtual stop was the first step, following the approach used for the school bus routing problem (SBRP). Then, the mixed load custom bus routing model was established with time-window constraints, and the model was solved with CPLEX software. Finally, a simple network with three depots, four pickup stops, and five delivery stops was constructed to verify the correctness of the model. In the actual example, the buses ran a total of 124.42 kilometers, 10.35 kilometers less than before. The paths and departure times of the different buses provided by the model met the given conditions, thus providing valuable information for practical work. PMID:29320505
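    The time-window constraint at the heart of such models can be sketched in a few lines. The stop data, times, and function name below are invented for illustration; the actual model is a mixed-integer program solved with CPLEX.

    ```python
    def route_feasible(stops, start_time=0.0):
        """stops: list of (travel_time_from_previous, earliest, latest).
        The bus waits when it arrives before a window opens; the route is
        infeasible if it arrives after a window closes."""
        t = start_time
        for travel, earliest, latest in stops:
            t += travel
            if t > latest:
                return False
            t = max(t, earliest)  # wait for the window to open
        return True

    # Feasible: arrive at 10 within [0, 15]; arrive at 15, wait until 20
    print(route_feasible([(10, 0, 15), (5, 20, 30)]))   # True
    # Infeasible: arrival at the second stop at 30 is after its deadline 25
    print(route_feasible([(10, 0, 15), (20, 0, 25)]))   # False
    ```

    A solver effectively searches over stop orderings and departure times so that every route passes a check like this while minimizing total distance.
    
    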

  7. Constructing Quality Adjusted Price Indexes: a Comparison of Hedonic and Discrete Choice Models

    OpenAIRE

    N. Jonker

    2001-01-01

    The Boskin report (1996) concluded that the US consumer price index (CPI) overestimated inflation by 1.1 percentage points. This was due to several measurement errors in the CPI, one of which is called quality change bias. In this paper two methods which can be used to eliminate quality change bias are compared, namely the hedonic method and a method based on the use of discrete choice models. The underlying micro-economic foundations of the two methods are compared as well as their empiric...
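    The hedonic method can be sketched with a time-dummy regression: regress log price on quality characteristics plus a period dummy, so the dummy coefficient captures the pure, quality-adjusted price change. The characteristic, coefficients, and data below are all invented for illustration.

    ```python
    import numpy as np

    # Simulate prices driven by one quality characteristic plus a 5% pure
    # price increase between two periods (all values are synthetic).
    rng = np.random.default_rng(0)
    n = 200
    speed = rng.uniform(1.0, 3.0, n)           # hypothetical quality characteristic
    period = rng.integers(0, 2, n)             # 0 = base period, 1 = next period
    log_price = 1.0 + 0.8 * np.log(speed) + 0.05 * period + rng.normal(0, 0.02, n)

    # Hedonic time-dummy regression: constant, log characteristic, period dummy
    X = np.column_stack([np.ones(n), np.log(speed), period])
    beta, *_ = np.linalg.lstsq(X, log_price, rcond=None)

    # exp(period coefficient) is the quality-adjusted price relative (~1.05)
    quality_adjusted_index = np.exp(beta[2])
    print(round(float(quality_adjusted_index), 2))
    ```

    Because the characteristic is held fixed in the regression, quality improvements do not contaminate the measured price change, which is precisely the bias the paper discusses.
    
    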

  8. Profiles of dyadic adjustment for advanced prostate cancer to inform couple-based intervention.

    Science.gov (United States)

    Elliott, Kate-Ellen J; Scott, Jennifer L; Monsour, Michael; Nuwayhid, Fadi

    2015-01-01

    The purpose of the study is to describe, from a relational perspective, partners' psychological adjustment, coping, and support needs for advanced prostate cancer. A mixed methods design was adopted, employing triangulation of qualitative and quantitative data, to produce dyadic profiles of adjustment for six couples recruited from the urology clinics of local hospitals in Tasmania, Australia. Dyads completed a video-taped communication task, a semi-structured interview and standardised self-report questionnaires. The themes identified were associated with the dyadic challenges of the disease experience (e.g. relationship intimacy, disease progression and carer burden). Couples with poor psychological adjustment profiles had both clinical and global distress, treatment side-effects, carer burden and poor general health. Resilient couples demonstrated relationship closeness and adaptive cognitive and behavioural coping strategies. The themes informed the adaptation of an effective program for couples coping with women's cancers (CanCOPE) to create a program for couples facing advanced prostate cancer (ProCOPE-Adv). The mixed method results inform the development of psychological therapy components for couples coping with advanced prostate cancer. The concomitance of co-morbid health problems may have implications for access and engagement of older adult populations in face-to-face intervention.

  9. Excel-Based Tool for Pharmacokinetically Guided Dose Adjustment of Paclitaxel.

    Science.gov (United States)

    Kraff, Stefanie; Lindauer, Andreas; Joerger, Markus; Salamone, Salvatore J; Jaehde, Ulrich

    2015-12-01

    Neutropenia is a frequent and severe adverse event in patients receiving paclitaxel chemotherapy. The time above a paclitaxel threshold concentration of 0.05 μmol/L (Tc > 0.05 μmol/L) is a strong predictor of paclitaxel-associated neutropenia and has been proposed as a target pharmacokinetic (PK) parameter for paclitaxel therapeutic drug monitoring and dose adaptation. Up to now, individual Tc > 0.05 μmol/L values have been estimated based on a published PK model of paclitaxel by using the software NONMEM. Because many clinicians are not familiar with the use of NONMEM, an Excel-based dosing tool was developed to allow calculation of paclitaxel Tc > 0.05 μmol/L and give clinicians an easy-to-use tool. Population PK parameters of paclitaxel were taken from a published PK model. An Alglib VBA code was implemented in Excel 2007 to compute the differential equations of the paclitaxel PK model. Maximum a posteriori Bayesian estimates of the PK parameters were determined with the Excel Solver using individual drug concentrations. Concentrations were simulated for 250 patients receiving one cycle of paclitaxel chemotherapy. Predictions of paclitaxel Tc > 0.05 μmol/L as calculated by the Excel tool were compared with NONMEM, whereby maximum a posteriori Bayesian estimates were obtained using the POSTHOC function. There was good concordance and comparable predictive performance between Excel and NONMEM regarding predicted paclitaxel plasma concentrations and Tc > 0.05 μmol/L values. Tc > 0.05 μmol/L had a maximum bias of 3%, and the deviation in predicted Tc > 0.05 μmol/L values between both programs was 1%. The Excel-based tool can estimate the time above a paclitaxel threshold concentration of 0.05 μmol/L with acceptable accuracy and precision. The presented Excel tool allows reliable calculation of paclitaxel Tc > 0.05 μmol/L and thus allows target concentration intervention to improve the benefit-risk ratio of the drug. The easy use facilitates therapeutic drug monitoring in
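    The Tc > 0.05 μmol/L metric itself is easy to illustrate. The published paclitaxel model is a nonlinear three-compartment model, so the sketch below instead uses a simple one-compartment infusion model with invented parameters, purely to show how "time above threshold" is read off a simulated concentration-time profile.

    ```python
    import numpy as np

    # One-compartment constant-rate infusion (all parameter values assumed)
    v = 100.0                 # volume of distribution, L
    k = 0.25                  # elimination rate constant, 1/h
    t_inf = 3.0               # infusion duration, h
    rate = 350.0 / t_inf      # infusion rate, umol/h (hypothetical dose)

    t = np.linspace(0.0, 72.0, 7201)
    during = rate / (v * k) * (1 - np.exp(-k * t))                 # during infusion
    c_end = rate / (v * k) * (1 - np.exp(-k * t_inf))              # end-of-infusion level
    after = c_end * np.exp(-k * (t - t_inf))                       # post-infusion decay
    conc = np.where(t <= t_inf, during, after)

    # Time above the 0.05 umol/L neutropenia threshold, in hours
    dt = t[1] - t[0]
    tc_above = float((conc > 0.05).sum() * dt)
    print(round(tc_above, 1))
    ```

    In the real tool this profile comes from Bayesian (MAP) fits of the population model to each patient's measured concentrations rather than from fixed parameters.
    
    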

  10. Risk-adjustment models for heart failure patients' 30-day mortality and readmission rates: the incremental value of clinical data abstracted from medical charts beyond hospital discharge record.

    Science.gov (United States)

    Lenzi, Jacopo; Avaldi, Vera Maria; Hernandez-Boussard, Tina; Descovich, Carlo; Castaldini, Ilaria; Urbinati, Stefano; Di Pasquale, Giuseppe; Rucci, Paola; Fantini, Maria Pia

    2016-09-06

    Hospital discharge records (HDRs) are routinely used to assess outcomes of care and to compare hospital performance for heart failure. The advantages of using clinical data from medical charts to improve risk-adjustment models remain controversial. The aim of the present study was to evaluate the additional contribution of clinical variables to HDR-based 30-day mortality and readmission models in patients with heart failure. This retrospective observational study included all patients residing in the Local Healthcare Authority of Bologna (about 1 million inhabitants) who were discharged in 2012 from one of three hospitals in the area with a diagnosis of heart failure. For each study outcome, we compared the discrimination of the two risk-adjustment models (i.e., HDR-only model and HDR-clinical model) through the area under the ROC curve (AUC). A total of 1145 and 1025 patients were included in the mortality and readmission analyses, respectively. Adding clinical data significantly improved the discrimination of the mortality model (AUC = 0.84 vs. 0.73, p < 0.001), but not the discrimination of the readmission model (AUC = 0.65 vs. 0.63, p = 0.08). We identified clinical variables that significantly improved the discrimination of the HDR-only model for 30-day mortality following heart failure. By contrast, clinical variables made little contribution to the discrimination of the HDR-only model for 30-day readmission.
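    The AUC comparison in the study can be reproduced in miniature. The labels and scores below are toy values invented for illustration; the point is only the mechanics of computing discrimination via the rank-sum (Mann-Whitney) identity.

    ```python
    def auc(labels, scores):
        """Area under the ROC curve via the Mann-Whitney identity:
        the fraction of (positive, negative) pairs ranked correctly,
        counting score ties as one half."""
        pos = [s for y, s in zip(labels, scores) if y == 1]
        neg = [s for y, s in zip(labels, scores) if y == 0]
        wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
        return wins / (len(pos) * len(neg))

    # Toy illustration: the model with richer covariates separates better
    labels  = [0, 0, 1, 0, 1, 1, 0, 1]
    scores1 = [0.2, 0.4, 0.3, 0.5, 0.6, 0.7, 0.1, 0.4]   # administrative-data-style model
    scores2 = [0.1, 0.2, 0.6, 0.3, 0.7, 0.9, 0.2, 0.8]   # with added clinical variables
    print(auc(labels, scores1), auc(labels, scores2))
    ```

    The study's comparison (AUC 0.73 vs. 0.84 for mortality) is the same calculation at scale, with a significance test on the difference.
    
    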

  11. A Generic Model for Relative Adjustment Between Optical Sensors Using Rigorous Orbit Mechanics

    Directory of Open Access Journals (Sweden)

    B. Islam

    2008-06-01

    Full Text Available Classical calibration, or space resection, is the fundamental task in photogrammetry. Insufficient knowledge of the interior and exterior orientation parameters leads to unreliable results in the photogrammetric process. One of the earliest approaches used in photogrammetry was the plumb line calibration method. This method is suitable for recovering the radial and decentering lens distortion coefficients, while the remaining interior (focal length and principal point coordinates) and exterior orientation parameters have to be determined by a complementary method. As the lens distortion remains very small, it is not considered among the interior orientation parameters in the present rigorous sensor model. There are several other available methods based on the photogrammetric collinearity equations, which consider the determination of the exterior orientation parameters, with no mention of the simultaneous determination of the interior orientation parameters. Normal space resection methods solve the problem using control points whose coordinates are known in both the image and object reference systems. The non-linearity of the model, the difficulty of point location in digital images, and the difficulty of identifying the maximum number of GPS-measured control points are the main drawbacks of the classical approaches. This paper addresses a mathematical model based on the fundamental assumption of collinearity of three points of two along-track stereo imagery sensors and an independent object point. Assuming this condition, it is possible to extract the exterior orientation (EO) parameters for a long strip and a single image together, with and without using control points. Moreover, after extracting the EO parameters, the accuracy of the satellite data products is compared with single and with no control points.
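    For reference, the collinearity condition that such methods build on has a standard form; in the usual notation (focal length $f$, principal point $(x_0, y_0)$, rotation matrix elements $r_{ij}$, perspective centre $(X_S, Y_S, Z_S)$, object point $(X, Y, Z)$):

    ```latex
    x - x_0 = -f \,
      \frac{r_{11}(X - X_S) + r_{12}(Y - Y_S) + r_{13}(Z - Z_S)}
           {r_{31}(X - X_S) + r_{32}(Y - Y_S) + r_{33}(Z - Z_S)}, \qquad
    y - y_0 = -f \,
      \frac{r_{21}(X - X_S) + r_{22}(Y - Y_S) + r_{23}(Z - Z_S)}
           {r_{31}(X - X_S) + r_{32}(Y - Y_S) + r_{33}(Z - Z_S)}
    ```

    Space resection linearizes these equations around approximate orientation values and solves for the unknowns by least-squares adjustment, which is the non-linear step the abstract refers to.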

  12. A typology of interpartner conflict and maternal parenting practices in high-risk families: examining spillover and compensatory models and implications for child adjustment.

    Science.gov (United States)

    Sturge-Apple, Melissa L; Davies, Patrick T; Cicchetti, Dante; Fittoria, Michael G

    2014-11-01

    The present study incorporates a person-based approach to identify spillover and compartmentalization patterns of interpartner conflict and maternal parenting practices in an ethnically diverse sample of 192 2-year-old children and their mothers who had experienced higher levels of socioeconomic risk. In addition, we tested whether sociocontextual variables were differentially predictive of these profiles and examined how interpartner-parenting profiles were associated with children's physiological and psychological adjustment over time. As expected, latent class analyses extracted three primary profiles of functioning: adequate functioning, spillover, and compartmentalizing families. Furthermore, interpartner-parenting profiles were differentially associated with both sociocontextual predictors and children's adjustment trajectories. The findings highlight the developmental utility of incorporating person-based approaches into models of interpartner conflict and maternal parenting practices.

  13. Personal resilience, cognitive appraisals, and coping: an integrative model of adjustment to abortion.

    Science.gov (United States)

    Major, B; Richards, C; Cooper, M L; Cozzarelli, C; Zubek, J

    1998-03-01

    We hypothesized that the effects of personality (self-esteem, control, and optimism) on postabortion adaptation (distress, well-being, and decision satisfaction) would be fully mediated by preabortion cognitive appraisals (stress appraisals and self-efficacy appraisals) and postabortion coping. We further proposed that the effects of preabortion appraisals on adaptation would be fully mediated by postabortion coping. Results of a longitudinal study of 527 women who had first-trimester abortions supported our hypotheses. Women with more resilient personalities appraised their abortion as less stressful and had higher self-efficacy for coping with the abortion. More positive appraisals predicted greater acceptance/reframing coping and lesser avoidance/denial, venting, support seeking, and religious coping. Acceptance-reframing predicted better adjustment on all measures, whereas avoidance-denial and venting related to poorer adjustment on all measures. Greater support seeking was associated with reduced distress, and greater religious coping was associated with less decision satisfaction.

  14. Adjustment of serum HE4 to reduced glomerular filtration and its use in biomarker-based prediction of deep myometrial invasion in endometrial cancer

    DEFF Research Database (Denmark)

    Chovanec, Josef; Selingerova, Iveta; Greplova, Kristina

    2017-01-01

    based on single-institution data from 120 EC patients and validated against multicentric data from 379 EC patients. Results: In non-cancer individuals, serum HE4 levels increase log-linearly with reduced glomerular filtration of eGFR = 90 ml/min/1.73 m2. HE4ren, adjusting HE4 serum levels to decreased e...... levels to reduced eGFR that enables quantification of time-dependent changes in HE4 production and elimination irrespective of age and renal function in women. Utilizing HE4ren improves performance of biomarker-based models for prediction of dMI in endometrial cancer patients....

  15. A comparative evaluation of risk-adjustment models for benchmarking amputation-free survival after lower extremity bypass.

    Science.gov (United States)

    Simons, Jessica P; Goodney, Philip P; Flahive, Julie; Hoel, Andrew W; Hallett, John W; Kraiss, Larry W; Schanzer, Andres

    2016-04-01

    Providing patients and payers with publicly reported risk-adjusted quality metrics for the purpose of benchmarking physicians and institutions has become a national priority. Several prediction models have been developed to estimate outcomes after lower extremity revascularization for critical limb ischemia, but the optimal model to use in contemporary practice has not been defined. We sought to identify the highest-performing risk-adjustment model for amputation-free survival (AFS) at 1 year after lower extremity bypass (LEB). We used the national Society for Vascular Surgery Vascular Quality Initiative (VQI) database (2003-2012) to assess the performance of three previously validated risk-adjustment models for AFS. The Bypass versus Angioplasty in Severe Ischaemia of the Leg (BASIL), Finland National Vascular (FINNVASC) registry, and the modified Project of Ex-vivo vein graft Engineering via Transfection III (PREVENT III [mPIII]) risk scores were applied to the VQI cohort. A novel model for 1-year AFS was also derived using the VQI data set and externally validated using the PIII data set. The relative discrimination (Harrell c-index) and calibration (Hosmer-May goodness-of-fit test) of each model were compared. Among 7754 patients in the VQI who underwent LEB for critical limb ischemia, the AFS was 74% at 1 year. Each of the previously published models for AFS demonstrated similar discriminative performance: c-indices for BASIL, FINNVASC, mPIII were 0.66, 0.60, and 0.64, respectively. The novel VQI-derived model had improved discriminative ability with a c-index of 0.71 and appropriate generalizability on external validation with a c-index of 0.68. The model was well calibrated in both the VQI and PIII data sets (goodness of fit P = not significant). Currently available prediction models for AFS after LEB perform modestly when applied to national contemporary VQI data. Moreover, the performance of each model was inferior to that of the novel VQI-derived model
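    Harrell's c-index used to rank the models can be computed directly from pairwise comparisons. The toy survival data below are invented; only the counting rule matches the standard definition.

    ```python
    def c_index(times, events, risk_scores):
        """Harrell's concordance index: among comparable pairs (the shorter
        time is an observed event), count pairs where the higher risk score
        goes with the shorter survival; score ties count one half."""
        concordant, comparable = 0.0, 0
        n = len(times)
        for i in range(n):
            for j in range(n):
                if times[i] < times[j] and events[i] == 1:
                    comparable += 1
                    if risk_scores[i] > risk_scores[j]:
                        concordant += 1
                    elif risk_scores[i] == risk_scores[j]:
                        concordant += 0.5
        return concordant / comparable

    times  = [2, 4, 5, 7, 9]        # months to event or censoring (toy data)
    events = [1, 1, 0, 1, 0]        # 1 = event observed, 0 = censored
    risk   = [0.9, 0.6, 0.6, 0.7, 0.2]
    print(c_index(times, events, risk))
    ```

    A value of 0.5 means no discrimination and 1.0 perfect ranking, which is the scale on which the BASIL, FINNVASC, mPIII, and VQI-derived scores are compared above.
    
    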

  16. An ice flow modeling perspective on bedrock adjustment patterns of the Greenland ice sheet

    Directory of Open Access Journals (Sweden)

    M. Olaizola

    2012-11-01

    Full Text Available Since the launch in 2002 of the Gravity Recovery and Climate Experiment (GRACE satellites, several estimates of the mass balance of the Greenland ice sheet (GrIS have been produced. To obtain ice mass changes, the GRACE data need to be corrected for the effect of deformation changes of the Earth's crust. Recently, a new method has been proposed where ice mass changes and bedrock changes are simultaneously solved. Results show bedrock subsidence over almost the entirety of Greenland in combination with ice mass loss which is only half of the currently standing estimates. This subsidence can be an elastic response, but it may however also be a delayed response to past changes. In this study we test whether these subsidence patterns are consistent with ice dynamical modeling results. We use a 3-D ice sheet–bedrock model with a surface mass balance forcing based on a mass balance gradient approach to study the pattern and magnitude of bedrock changes in Greenland. Different mass balance forcings are used. Simulations since the Last Glacial Maximum yield a bedrock delay with respect to the mass balance forcing of nearly 3000 yr and an average uplift at present of 0.3 mm yr−1. The spatial pattern of bedrock changes shows a small central subsidence as well as more intense uplift in the south. These results are not compatible with the gravity based reconstructions showing a subsidence with a maximum in central Greenland, thereby questioning whether the claim of halving of the ice mass change is justified.

  17. Mistral project: identification and parameter adjustment. Theoretical part; Projet Mistral: identification et recalage des modeles. Etude theorique

    Energy Technology Data Exchange (ETDEWEB)

    Faille, D.; Codrons, B.; Gevers, M.

    1996-03-01

    This document belongs to the methodological part of the MISTRAL project, which builds a library of power plant models. The model equations are generally obtained from first principles. However, the parameters are not always easily (or accurately) calculable from the dimension data. We are therefore investigating the possibility of automatically adjusting the values of those parameters from experimental data. To do that, we must master the optimization algorithms and the techniques for analyzing model structure, such as identifiability theory. (authors). 7 refs., 1 fig., 1 append.

  18. Resolution of a Rank-Deficient Adjustment Model Via an Isomorphic Geometrical Setup with Tensor Structure.

    Science.gov (United States)

    1987-03-01

    holds an economical advantage. We now formulate it from (65) in conjunction with (57) ... Given the sizes of the matrices to be inverted, an economical edge of the analytical formulation becomes apparent as well. Chapter 3 contains one such matrix of ... the relations that can be obtained via the tensor version of adjustment quantities. Although the computational merits of the Cholesky ...

  19. The adjustment of global and partial dry biomass models for Pinus pinaster in the North-East of Portugal

    OpenAIRE

    Lopes, Domingos; Almeida, L.R.; Castro, João Paulo; Aranha, José

    2005-01-01

    The net primary production of ecosystems can be quantified by means of allometric equations. Carbon sequestration studies also involve the quantification of dry biomass growth, given the carbon percentage of dry biomass. The complexity of the fieldwork required to collect this kind of data often limits the development of such mathematical models. Allometric equations were adjusted to estimate the dry biomass of individual Pinus pinaster trees, using data from 30 trees. Statistics from the final equatio...
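    Adjusting an allometric equation of the usual power-law form, biomass = a * dbh^b, is typically done by ordinary least squares on the log-log scale. The diameter and biomass values below are made up for illustration; only the fitting procedure is the point.

    ```python
    import numpy as np

    # Synthetic tree data: diameter at breast height (cm) and dry biomass (kg)
    dbh = np.array([10.0, 15.0, 20.0, 25.0, 30.0, 35.0])
    biomass = np.array([14.0, 38.0, 80.0, 140.0, 225.0, 330.0])

    # Fit log(biomass) = log(a) + b * log(dbh) by least squares
    b, log_a = np.polyfit(np.log(dbh), np.log(biomass), 1)
    a = np.exp(log_a)

    predicted = a * dbh ** b
    print(round(float(b), 2))   # exponent b is typically around 2-3 for trees
    ```

    Note that back-transforming from the log scale introduces a small bias that biomass studies often correct with a factor derived from the regression's residual variance.
    
    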

  20. Bicultural identity, bilingualism, and psychological adjustment in multicultural societies: immigration-based and globalization-based acculturation.

    Science.gov (United States)

    Chen, Sylvia Xiaohua; Benet-Martínez, Verónica; Harris Bond, Michael

    2008-07-01

    The present investigation examined the impact of bicultural identity, bilingualism, and social context on the psychological adjustment of multicultural individuals. Our studies targeted three distinct types of biculturals: Mainland Chinese immigrants in Hong Kong, Filipino domestic workers (i.e., sojourners) in Hong Kong, and Hong Kong and Mainland Chinese college students. Individual differences in Bicultural Identity Integration (BII; Benet-Martínez, Leu, Lee, & Morris, 2002) positively predicted psychological adjustment for all the samples except sojourners even after controlling for the personality traits of neuroticism and self-efficacy. Cultural identification and language abilities also predicted adjustment, although these associations varied across the samples in meaningful ways. We concluded that, in the process of managing multiple cultural environments and group loyalties, bilingual competence, and perceiving one's two cultural identities as integrated are important antecedents of beneficial psychological outcomes.

  1. Siblings' Perceptions of Differential Treatment, Fairness, and Jealousy and Adolescent Adjustment: A Moderated Indirect Effects Model.

    Science.gov (United States)

    Loeser, Meghan K; Whiteman, Shawn D; McHale, Susan M

    2016-08-01

    Youth's perception of parents' differential treatment (PDT) are associated with maladjustment during adolescence. Although the direct relations between PDT and youth's maladjustment have been well established, the mechanisms underlying these associations remain unclear. We addressed this gap by examining whether sibling jealousy accounted for the links between PDT and youth's depressive symptoms, self-worth, and risky behaviors. Additionally, we examined whether youth's perceptions of fairness regarding their treatment as well as the gender constellation of the dyad moderated these indirect relations (i.e., moderated-indirect effects). Participants were first- and second-born adolescent siblings (M = 15.96, SD = 0.72 years for older siblings, M = 13.48, SD = 1.02 years for younger siblings) and their parents from 197 working and middle class European American families. Data were collected via home interviews. A series of Conditional Process Analyses revealed significant indirect effects of PDT through sibling jealousy to all three adjustment outcomes. Furthermore, perceptions of fairness moderated the relations between PDT and jealousy, such that the indirect effects were only significant at low (−1 SD) and average levels of fairness. At high levels of fairness (+1 SD) there was no association between PDT, jealousy, and youth adjustment. Taken together, results indicate that youth and parents would benefit from engaging in clear communication regarding the reasoning for the occurrence of differential treatment, likely maximizing youth and parent perceptions of that treatment as being fair, and in turn mitigating sibling jealousy and maladjustment.

  3. Analysis of separation test for automatic brake adjuster based on linear radon transformation

    Science.gov (United States)

    Luo, Zai; Jiang, Wensong; Guo, Bin; Fan, Weijun; Lu, Yi

    2015-01-01

    The linear Radon transformation is applied to extract inflection points for an online test system under noisy conditions. The linear Radon transformation has a strong ability of anti-noise and anti-interference by fitting the online test curve in several parts, which makes it easy to handle consecutive inflection points. We applied the linear Radon transformation to the separation test system to solve for the separating clearance of the automatic brake adjuster. The experimental results show that the feature point extraction error of the gradient maximum optimal method is approximately ±0.100, while the feature point extraction error of the linear Radon transformation method can reach ±0.010, an order of magnitude lower. In addition, the linear Radon transformation is robust.
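    The Radon-based fitting itself is beyond a short sketch, but the underlying feature-extraction task, locating the point where a test curve changes slope by fitting straight segments, can be illustrated. The data and the two-segment search below are simplifications invented for illustration, not the paper's method.

    ```python
    import numpy as np

    def find_breakpoint(x, y):
        """Fit the curve as two straight segments and return the index of the
        breakpoint that minimizes the total least-squares residual."""
        best_k, best_err = None, np.inf
        for k in range(2, len(x) - 2):
            err = 0.0
            for xs, ys in ((x[:k], y[:k]), (x[k:], y[k:])):
                coef = np.polyfit(xs, ys, 1)
                err += float(np.sum((np.polyval(coef, xs) - ys) ** 2))
            if err < best_err:
                best_k, best_err = k, err
        return best_k

    # Synthetic test curve: one line up to x = 12, then a different line
    x = np.arange(20, dtype=float)
    y = np.where(x < 12, 0.5 * x, 10.0 + 2.0 * (x - 12))
    print(find_breakpoint(x, y))   # 12
    ```

    Fitting whole segments rather than differentiating point by point is what gives such methods their noise robustness.
    
    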

  4. Adjustment and Prediction of Machine Factors Based on Neural Artificial Intelligence

    International Nuclear Information System (INIS)

    Hussein, A.Z.; Amin, E.S.; Ibrahim, M.S.

    2009-01-01

    Since the discovery of X-rays, their use in examination has become an integral part of medical diagnostic radiology. The use of X-rays is harmful to human beings, but recent technological advances and regulatory constraints have made the medical X-ray much safer than it was at the beginning of the 20th century. However, the potential benefits of the engineered safety features cannot be fully realized unless the operators are aware of these safety features. The aim of this work is to adjust and predict X-ray machine factors (current and voltage) using an artificial neural network in order to obtain an effective dose within the range of the dose limitation system and assure radiological safety.

  5. Adjustment and Prediction of X-Ray Machine Factors Based on Neural Artificial Intelligence

    International Nuclear Information System (INIS)

    Hussein, A.Z.; Amin, E.S.; Ibrahim, M.S.

    2009-01-01

    Since the discovery of X-rays, their use in examination has become an integral part of medical diagnostic radiology. The use of X-rays is harmful to human beings, but recent technological advances and regulatory constraints have made medical X-rays much safer than they were at the beginning of the 20th century. However, the potential benefits of the engineered safety features cannot be fully realized unless the operators are aware of these safety features. The aim of this work is to adjust and predict X-ray machine factors (current and voltage) using an artificial neural network in order to obtain an effective dose within the range of the dose limitation system and assure radiological safety.
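    The idea of mapping a target dose to machine factors with a neural network can be sketched in plain NumPy. The architecture, the training data, and the dose-to-settings relationship below are all invented for illustration; the records above train on real machine calibration data.

    ```python
    import numpy as np

    # Synthetic training data: one input (target dose) to two outputs
    # standing in for machine factors (current, voltage), all normalized.
    rng = np.random.default_rng(1)
    dose = rng.uniform(0.0, 1.0, (200, 1))
    settings = np.hstack([0.5 * dose + 0.2,           # toy "current" target
                          0.8 * dose ** 2 + 0.1])     # toy "voltage" target

    # One hidden layer of 8 tanh units, trained by full-batch gradient descent
    W1 = rng.normal(0.0, 0.5, (1, 8)); b1 = np.zeros(8)
    W2 = rng.normal(0.0, 0.5, (8, 2)); b2 = np.zeros(2)
    lr = 0.5
    for _ in range(2000):
        h = np.tanh(dose @ W1 + b1)
        out = h @ W2 + b2
        g_out = 2.0 * (out - settings) / len(dose)     # d(MSE)/d(out)
        g_h = g_out @ W2.T * (1.0 - h ** 2)            # backprop through tanh
        W2 -= lr * (h.T @ g_out); b2 -= lr * g_out.sum(axis=0)
        W1 -= lr * (dose.T @ g_h); b1 -= lr * g_h.sum(axis=0)

    h = np.tanh(dose @ W1 + b1)
    mse = float(((h @ W2 + b2 - settings) ** 2).mean())
    print(mse < 1e-2)
    ```

    In practice such a network would be trained on measured dose-versus-settings calibration points and validated against the dose limitation system before use.
    
    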

  6. Risk adjustment model of credit life insurance using a genetic algorithm

    Science.gov (United States)

    Saputra, A.; Sukono; Rusyaman, E.

    2018-03-01

    In managing the risk of credit life insurance, an insurance company should understand the character of its risks in order to predict future losses. Risk characteristics can be learned from a claim distribution model. There are two standard approaches to modelling the distribution of claims over the insurance period: the collective risk model and the individual risk model. In the collective risk model, the claim that arises when a risk occurs is called an individual claim, and the accumulation of individual claims during a period of insurance is called an aggregate claim. The aggregate claim model may be formed from the number of individual claims and their sizes. The questions addressed are how insurance risk is measured with the premium model approach and whether this approach is appropriate for estimating potential future losses. To solve this problem, a genetic algorithm with roulette wheel selection is used.
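
The optimization machinery named above can be sketched with a generic elitist genetic algorithm using roulette wheel selection; everything below (fitness function, one-point crossover, Gaussian mutation, parameter values) is an illustrative assumption, not the authors' implementation.

```python
import random

def roulette_select(population, fitnesses):
    """Roulette wheel selection: pick an individual with probability
    proportional to its (positive) fitness."""
    pick = random.uniform(0.0, sum(fitnesses))
    acc = 0.0
    for ind, fit in zip(population, fitnesses):
        acc += fit
        if acc >= pick:
            return ind
    return population[-1]

def evolve(population, fitness_fn, generations=60, mutation_rate=0.3):
    """Simple elitist GA: roulette selection, one-point crossover,
    Gaussian point mutation. Individuals are lists of >= 2 floats."""
    best = max(population, key=fitness_fn)
    for _ in range(generations):
        fitnesses = [fitness_fn(ind) for ind in population]
        next_gen = [best[:]]                    # elitism: keep best so far
        while len(next_gen) < len(population):
            a = roulette_select(population, fitnesses)
            b = roulette_select(population, fitnesses)
            cut = random.randrange(1, len(a))   # one-point crossover
            child = a[:cut] + b[cut:]
            if random.random() < mutation_rate: # Gaussian point mutation
                i = random.randrange(len(child))
                child[i] += random.gauss(0.0, 0.2)
            next_gen.append(child)
        population = next_gen
        cand = max(population, key=fitness_fn)
        if fitness_fn(cand) > fitness_fn(best):
            best = cand
    return best
```

In a claims setting, the fitness function would score candidate distribution parameters against observed claim data; the elitism step guarantees the returned solution is never worse than the best initial guess.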

  7. Real time adjustment of slow changing flow components in distributed urban runoff models

    DEFF Research Database (Denmark)

    Borup, Morten; Grum, M.; Mikkelsen, Peter Steen

    2011-01-01

    In many urban runoff systems infiltrating water contributes with a substantial part of the total inflow and therefore most urban runoff modelling packages include hydrological models for simulating the infiltrating inflow. This paper presents a method for deterministic updating of the hydrological...... improvements for regular simulations as well as up to 10 hour forecasts. The updating method reduces the impact of non-representative precipitation estimates as well as model structural errors and leads to better overall modelling results....

  8. A non-Gaussian generalisation of the Airline model for robust Seasonal Adjustment

    NARCIS (Netherlands)

    Aston, J.; Koopman, S.J.

    2006-01-01

    In their seminal book Time Series Analysis: Forecasting and Control, Box and Jenkins (1976) introduce the Airline model, which is still routinely used for the modelling of economic seasonal time series. The Airline model is for a differenced time series (in levels and seasons) and constitutes a

  9. THE INSTANTANEOUS SPEED OF ADJUSTMENT ASSUMPTION AND STABILITY OF ECONOMIC-MODELS

    NARCIS (Netherlands)

    SCHOONBEEK, L

    In order to simplify stability analysis of an economic model one can assume that one of the model variables moves infinitely fast towards equilibrium, given the values of the other slower variables. We present conditions such that stability of the simplified model implies, or is implied by,

  10. Risk-adjusted econometric model to estimate postoperative costs: an additional instrument for monitoring performance after major lung resection.

    Science.gov (United States)

    Brunelli, Alessandro; Salati, Michele; Refai, Majed; Xiumé, Francesco; Rocco, Gaetano; Sabbatini, Armando

    2007-09-01

    The objectives of this study were to develop a risk-adjusted model to estimate individual postoperative costs after major lung resection and to use it for internal economic audit. Variable and fixed hospital costs were collected for 679 consecutive patients who underwent major lung resection from January 2000 through October 2006 at our unit. Several preoperative variables were used to develop a risk-adjusted econometric model from all patients operated on during the period 2000 through 2003 by stepwise multiple regression analysis (validated by bootstrap). The model was then used to estimate the postoperative costs of the patients operated on during the 3 subsequent periods (years 2004, 2005, and 2006). Observed and predicted costs were then compared within each period by the Wilcoxon signed rank test. Multiple regression and bootstrap analysis yielded the following model predicting postoperative cost: 11,078 + 1340.3 × (age > 70 years) + 1927.8 × (cardiac comorbidity) − 95 × ppoFEV1%. No differences between predicted and observed costs were noted in the first 2 periods analyzed (year 2004, $6188.40 vs $6241.40, P = .3; year 2005, $6308.60 vs $6483.60, P = .4), whereas in the most recent period (2006) observed costs were significantly lower than the predicted ones ($3457.30 vs $6162.70). The model may be used as a methodologic template for economic audit in our specialty and complement more traditional outcome measures in the assessment of performance.
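
The published regression can be evaluated directly; the helper below simply encodes the coefficients quoted in the abstract (the function and argument names are ours, and the two risk factors enter as 0/1 indicators).

```python
def predicted_postop_cost(age, cardiac_comorbidity, ppo_fev1_pct):
    """Risk-adjusted postoperative cost estimate from the abstract's model:
    11,078 + 1340.3*(age > 70 years) + 1927.8*(cardiac comorbidity)
    - 95*ppoFEV1%  (ppoFEV1% = predicted postoperative FEV1, in percent)."""
    return (11078.0
            + 1340.3 * (1 if age > 70 else 0)
            + 1927.8 * (1 if cardiac_comorbidity else 0)
            - 95.0 * ppo_fev1_pct)
```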

  11. The self-adjusting file (SAF) system: An evidence-based update

    Science.gov (United States)

    Metzger, Zvi

    2014-01-01

    Current rotary file systems are effective tools. Nevertheless, they have two main shortcomings: they are unable to effectively clean and shape oval canals and depend too much on the irrigant to do the cleaning, which is an unrealistic illusion; and they may jeopardize the long-term survival of the tooth via unnecessary, excessive removal of sound dentin and creation of micro-cracks in the remaining root dentin. The new Self-adjusting File (SAF) technology uses a hollow, compressible NiTi file, with no central metal core, through which a continuous flow of irrigant is provided throughout the procedure. The SAF technology allows for effective cleaning of all root canals including oval canals, thus allowing for the effective disinfection and obturation of all canal morphologies. This technology uses a new concept of cleaning and shaping in which a uniform layer of dentin is removed from around the entire perimeter of the root canal, thus avoiding unnecessary excessive removal of sound dentin. Furthermore, the mode of action used by this file system does not machine all root canals to a circular bore, as do all other rotary file systems, and does not cause micro-cracks in the remaining root dentin. The new SAF technology allows for a new concept in cleaning and shaping root canals: Minimally Invasive 3D Endodontics. PMID:25298639

  12. Control Strategy for Vehicle Inductive Wireless Charging Based on Load Adaptive and Frequency Adjustment

    Directory of Open Access Journals (Sweden)

    Shichun Yang

    2018-05-01

    Full Text Available Wireless charging for electric vehicles is a hot research issue in the world today. Existing research on wireless charging has mostly been forward-looking and aimed at low-power appliances such as household devices, while electric vehicles need a high-power, high-efficiency, strongly coupled wireless charging system. In this paper, we designed a 6.6 kW wireless charging system for electric vehicles and propose a control strategy suited to electric vehicles according to their power charging characteristics and the existing common wired charging protocol. Firstly, the influence of the equivalent load and frequency bifurcation on a wireless charging system is analyzed. Secondly, an adaptive load control strategy matching the characteristics of the battery and the charging pile is put forward to meet the constant-current and constant-voltage charging requirements and improve system efficiency. In addition, a frequency adjustment control strategy is designed to realize real-time dynamic optimization of the entire system. It utilizes improved methods of rapid judgment, variable step-length matching and frequency-splitting recognition, which were not adopted in earlier related research. Finally, the results of a 6.6 kW test show that the control strategy works well: the system response time can be reduced to less than 1 s, and the overall efficiency of the wireless charging system and the grid power supply module can reach up to 91%.
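
A minimal sketch of the constant-current/constant-voltage mode switching mentioned above: hold the current setpoint until the pack voltage reaches the CV threshold, then regulate voltage. The setpoint values and function name are assumptions, not the paper's parameters.

```python
def cc_cv_command(v_batt, i_meas, i_cc=16.5, v_cv=400.0):
    """Charging-mode selection: constant-current phase until the pack
    voltage reaches v_cv, then constant-voltage phase. Returns the active
    mode and the loop error handed to the corresponding controller."""
    if v_batt < v_cv:
        return "CC", i_cc - i_meas     # current-loop error
    return "CV", v_cv - v_batt         # voltage-loop error
```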

  13. The self-adjusting file (SAF) system: An evidence-based update.

    Science.gov (United States)

    Metzger, Zvi

    2014-09-01

    Current rotary file systems are effective tools. Nevertheless, they have two main shortcomings: they are unable to effectively clean and shape oval canals and depend too much on the irrigant to do the cleaning, which is an unrealistic illusion; and they may jeopardize the long-term survival of the tooth via unnecessary, excessive removal of sound dentin and creation of micro-cracks in the remaining root dentin. The new Self-adjusting File (SAF) technology uses a hollow, compressible NiTi file, with no central metal core, through which a continuous flow of irrigant is provided throughout the procedure. The SAF technology allows for effective cleaning of all root canals including oval canals, thus allowing for the effective disinfection and obturation of all canal morphologies. This technology uses a new concept of cleaning and shaping in which a uniform layer of dentin is removed from around the entire perimeter of the root canal, thus avoiding unnecessary excessive removal of sound dentin. Furthermore, the mode of action used by this file system does not machine all root canals to a circular bore, as do all other rotary file systems, and does not cause micro-cracks in the remaining root dentin. The new SAF technology allows for a new concept in cleaning and shaping root canals: Minimally Invasive 3D Endodontics.

  14. [Motion control of moving mirror based on fixed-mirror adjustment in FTIR spectrometer].

    Science.gov (United States)

    Li, Zhong-bing; Xu, Xian-ze; Le, Yi; Xu, Feng-qiu; Li, Jun-wei

    2012-08-01

    The performance of the uniform motion of the moving mirror, which is the only constantly moving part in an FTIR spectrometer, and the alignment of the fixed mirror play a key role in the instrument: they affect the interference effect and the quality of the spectrogram, and may directly restrict the precision and resolution of the instrument. The present article focuses on the uniform motion of the moving mirror and the alignment of the fixed mirror. In order to improve the FTIR spectrometer, a maglev support system was designed for the moving mirror, and phase detection technology was adopted to adjust the tilt angle between the moving mirror and the fixed mirror. This paper also introduces an improved fuzzy PID control algorithm to obtain an accurate speed of the moving mirror, realizing the control strategy in both hardware design and algorithm. The results show that the moving mirror motion control system achieves sufficient accuracy and real-time performance, ensuring the uniform motion of the moving mirror and the alignment of the fixed mirror.
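
A toy version of a fuzzy-adjusted PID loop, in which a coarse fuzzy-style rule on the error magnitude rescales the three gains; this is an illustrative sketch under assumed gain bands, not the paper's algorithm.

```python
class FuzzyPID:
    """PID controller whose gains are rescaled by a coarse fuzzy-style rule
    on the error magnitude (illustrative, assumed bands)."""
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_error = 0.0   # note: causes a derivative kick on step 1

    def _gain_scale(self, error):
        # Large error: boost P, suppress I (anti-windup); small error: soften P.
        e = abs(error)
        if e > 1.0:
            return 1.5, 0.2, 1.0
        if e > 0.1:
            return 1.0, 1.0, 1.0
        return 0.8, 1.2, 1.0

    def update(self, setpoint, measured):
        error = setpoint - measured
        sp, si, sd = self._gain_scale(error)
        self.integral += error * self.dt
        derivative = (error - self.prev_error) / self.dt
        self.prev_error = error
        return (sp * self.kp * error + si * self.ki * self.integral
                + sd * self.kd * derivative)
```

Driving a simple integrator plant (speed responds to the control effort) shows the loop settling on the commanded mirror speed.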

  15. Adjustable, short focal length permanent-magnet quadrupole based electron beam final focus system

    Directory of Open Access Journals (Sweden)

    J. K. Lim

    2005-07-01

    Full Text Available Advanced high-brightness beam applications such as inverse-Compton scattering (ICS) depend on achieving ultrasmall spot sizes in high-current beams. Modern injectors and compressors enable the production of high-brightness beams having the needed short bunch lengths and small emittances. Along with these beam properties comes the need to produce tighter foci, using stronger, shorter focal length optics. An approach to creating such strong focusing systems using high-field, small-bore permanent-magnet quadrupoles (PMQs) is reported here. A final-focus system employing three PMQs, each composed of 16 neodymium iron boride sectors in a Halbach geometry, has been installed in the PLEIADES ICS experiment. The field gradient in these PMQs is 560 T/m, the highest ever reported in a magnetic optics system. As the magnets are of a fixed field strength, the focusing system is tuned by adjusting the position of the three magnets along the beam line axis, in analogy to familiar camera optics. This paper discusses the details of the focusing system, simulation, design, fabrication, and experimental procedure in creating ultrasmall beams at PLEIADES.
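
Since the system is tuned by repositioning fixed-strength magnets, the key single-magnet quantity is the thin-lens relation between gradient and focal length. A sketch using the standard magnetic-rigidity formula; the beam momentum and magnet length below are assumed values for illustration, not the experiment's.

```python
def quad_focal_length(gradient_T_per_m, length_m, p_GeV):
    """Thin-lens focal length of a quadrupole: f = 1/(k*L), with focusing
    strength k = g / (B*rho) and magnetic rigidity
    B*rho = 3.3356 * p  [T*m per GeV/c]."""
    b_rho = 3.3356 * p_GeV            # T*m
    k = gradient_T_per_m / b_rho      # 1/m^2
    return 1.0 / (k * length_m)       # metres
```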

  16. Estimating the potential intensification of global grazing systems based on climate adjusted yield gap analysis

    Science.gov (United States)

    Sheehan, J. J.

    2016-12-01

    We report here a first-of-its-kind analysis of the potential for intensification of global grazing systems. Intensification is calculated using the statistical yield gap methodology developed previously by others (Mueller et al 2012 and Licker et al 2010) for global crop systems. Yield gaps are estimated by binning global pasture land area into 100 equal-area bins of similar climate (defined by ranges of rainfall and growing degree days). Within each bin, grid cells of pastureland are ranked from lowest to highest productivity. The global intensification potential is defined as the sum of global production across all bins at a given percentile ranking (e.g. performance at the 90th percentile) divided by the total current global production. The previous yield gap studies focused on crop systems because productivity data on these systems is readily available. Nevertheless, global crop land represents only one-third of total global agricultural land, while pasture systems account for the remaining two-thirds. Thus, it is critical to conduct the same kind of analysis on the largest human use of land on the planet: pasture systems. In 2013, Herrero et al announced the completion of a geospatial data set that augmented the animal census data with data and modeling about production systems and overall food productivity (Herrero et al, PNAS 2013). With this data set, it is now possible to apply yield gap analysis to global pasture systems. We used the Herrero et al data set to evaluate yield gaps for meat and milk production from pasture-based systems for cattle, sheep and goats. The figure included with this abstract shows the intensification potential for kcal per hectare per year of meat and milk from global cattle, sheep and goats as a function of increasing levels of performance. Performance is measured as the productivity achieved at a given ranked percentile within each bin. We find that if all pasture land were raised to their 90th percentile of
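
The binning-and-percentile calculation can be sketched as follows. This is a toy version with quantile-based climate bins rather than the paper's 100 equal-area bins, and all names are illustrative.

```python
import numpy as np

def intensification_potential(rainfall, gdd, yields, n_bins=5, pct=90):
    """Bin grid cells by climate (rainfall x growing degree days), then
    compare current total production with production if every cell reached
    at least the pct-th percentile yield of its climate bin."""
    r_edges = np.quantile(rainfall, np.linspace(0, 1, n_bins + 1))
    g_edges = np.quantile(gdd, np.linspace(0, 1, n_bins + 1))
    r_bin = np.clip(np.searchsorted(r_edges, rainfall, side="right") - 1, 0, n_bins - 1)
    g_bin = np.clip(np.searchsorted(g_edges, gdd, side="right") - 1, 0, n_bins - 1)
    bin_id = r_bin * n_bins + g_bin
    potential = np.empty_like(yields)
    for b in np.unique(bin_id):
        mask = bin_id == b
        ceiling = np.percentile(yields[mask], pct)
        # cells already above the percentile keep their current yield
        potential[mask] = np.maximum(yields[mask], ceiling)
    return potential.sum() / yields.sum()
```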

  17. Adjustment of Serum HE4 to reduced Glomerular filtration and its use in Biomarker-based prediction of deep Myometrial invasion in endometrial cancer

    DEFF Research Database (Denmark)

    Chovanec, Josef; Selingerova, Iveta; Greplova, Kristina

    2017-01-01

    Background: We investigated the efficacy of circulating biomarkers together with histological grade and age to predict deep myometrial invasion (dMI) in endometrial cancer patients. Methods: HE4ren was developed adjusting HE4 serum levels towards decreased glomerular filtration rate as quantified...... levels to reduced eGFR that enables quantification of time-dependent changes in HE4 production and elimination irrespective of age and renal function in women. Utilizing HE4ren improves performance of biomarker-based models for prediction of dMI in endometrial cancer patients....

  18. Adjustment of the residuals of an ARIMA model by principal component analysis of the residuals of the explanatory variables

    International Nuclear Information System (INIS)

    Bernal Suarez, Nestor Ricardo; Montealegre Bocanegra, Jose Edgar

    2000-01-01

    Based on prior knowledge and understanding of the causality relationships between the sea surface temperature fields of the Pacific and the North and South tropical Atlantic oceans and rainfall behaviour in Colombia, we aim to model those relations with a (statistical) transfer model. This work is aimed at improving the fit of the model for the monthly mean rainfall registered in Funza (near the capital, Bogotá). The residuals of ARIMA models with six explanatory variables may contribute some percentage to the explanation of the total variability of rainfall as a consequence of their interrelationship. Such an effect can be represented as a summary of the six variables, which can be achieved with principal components, taking into account that they are not mutually dependent, since they are white-noise time series
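
Summarizing several residual series by principal components can be sketched via an SVD of the centered residual matrix (an illustrative helper, not the authors' code).

```python
import numpy as np

def principal_components(residuals, k=2):
    """Summarize several residual series by their first k principal
    components, computed via SVD of the centered matrix.
    residuals: (n_obs, n_series) array, e.g. six SST-index model residuals.
    Returns the k component time series and their variance shares."""
    X = residuals - residuals.mean(axis=0)
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    scores = U[:, :k] * s[:k]              # component time series
    explained = (s ** 2) / (s ** 2).sum()  # variance share per component
    return scores, explained[:k]
```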

  19. Opportunities for Improving Army Modeling and Simulation Development: Making Fundamental Adjustments and Borrowing Commercial Business Practices

    National Research Council Canada - National Science Library

    Lee, John

    2000-01-01

    .... This paper briefly explores project management principles, leadership theory, and commercial business practices, suggesting improvements to the Army's modeling and simulation development process...

  20. Dynamic Modeling of Adjustable-Speed Pumped Storage Hydropower Plant: Preprint

    Energy Technology Data Exchange (ETDEWEB)

    Muljadi, E.; Singh, M.; Gevorgian, V.; Mohanpurkar, M.; Havsapian, R.; Koritarov, V.

    2015-04-06

    Hydropower is the largest producer of renewable energy in the U.S. More than 60% of the total renewable generation comes from hydropower. There is also approximately 22 GW of pumped storage hydropower (PSH). Conventional PSH uses a synchronous generator, and thus the rotational speed is constant at synchronous speed. This work details a hydrodynamic model and generator/power converter dynamic model. The optimization of the hydrodynamic model is executed by the hydro-turbine controller, and the electrical output real/reactive power is controlled by the power converter. All essential controllers to perform grid-interface functions and provide ancillary services are included in the model.

  1. LC Filter Design for Wide Band Gap Device Based Adjustable Speed Drives

    DEFF Research Database (Denmark)

    Vadstrup, Casper; Wang, Xiongfei; Blaabjerg, Frede

    2014-01-01

    the LC filter with a higher cut off frequency and without damping resistors. The selection of inductance and capacitance is chosen based on capacitor voltage ripple and current ripple. The filter adds a base load to the inverter, which increases the inverter losses. It is shown how the modulation index...
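
A minimal sketch of ripple-driven LC selection. The textbook buck-type ripple formulas below are an assumption about how the ripple budgets map to L and C (the abstract does not give its design equations), with the cut-off frequency reported for reference.

```python
import math

def design_lc(v_dc, f_sw, di_max, dv_max):
    """Pick L from a peak inductor current-ripple budget and C from a
    capacitor voltage-ripple budget (textbook buck-type formulas, assumed),
    then report the resulting LC cut-off frequency f_c = 1/(2*pi*sqrt(L*C))."""
    L = v_dc / (8.0 * f_sw * di_max)    # worst-case inductor current ripple
    C = di_max / (8.0 * f_sw * dv_max)  # capacitor voltage ripple
    f_c = 1.0 / (2.0 * math.pi * math.sqrt(L * C))
    return L, C, f_c
```

A higher switching frequency, as enabled by wide band gap devices, directly shrinks both L and C for the same ripple budgets.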

  2. Demography-adjusted tests of neutrality based on genome-wide SNP data

    KAUST Repository

    Rafajlović, Marina; Klassmann, Alexander; Eriksson, Anders; Wiehe, Thomas H E; Mehlig, Bernhard

    2014-01-01

    Tests of the neutral evolution hypothesis are usually built on the standard model which assumes that mutations are neutral and the population size remains constant over time. However, it is unclear how such tests are affected if the last assumption

  3. The relationship between effectiveness and costs measured by a risk-adjusted case-mix system: multicentre study of Catalonian population data bases

    Directory of Open Access Journals (Sweden)

    Flor-Serra Ferran

    2009-06-01

    Full Text Available Abstract Background: The main objective of this study is to measure the relationship between morbidity, direct health care costs and the degree of clinical effectiveness (resolution) of health centres and health professionals by the retrospective application of Adjusted Clinical Groups in a Spanish population setting. The secondary objectives are to determine the factors causing inadequate correlations and the opinion of health professionals on these instruments. Methods/Design: We will carry out a multi-centre, retrospective study using patient records from 15 primary health care centres and population data bases. The main measurements will be: general variables (age and sex, centre, service [family medicine, paediatrics], and medical unit), dependent variables (mean number of visits, episodes and direct costs), co-morbidity (Johns Hopkins University Adjusted Clinical Groups Case-Mix System) and effectiveness. The totality of centres/patients will be considered as the standard for comparison. The efficiency index for visits, tests (laboratory, radiology, others), referrals, pharmaceutical prescriptions and total will be calculated as the ratio: observed variables/variables expected by indirect standardization. The model of cost/patient/year will differentiate the fixed/semi-fixed (visits) costs of the variables for each patient attended/year (N = 350,000 inhabitants). The mean relative weights of the cost of care will be obtained. The effectiveness will be measured using a set of 50 indicators of process, efficiency and/or health results, and an adjusted synthetic index will be constructed (method: percentile 50). The correlation between the efficiency (relative weights) and synthetic (by centre and physician) indices will be established using the coefficient of determination. The opinion/degree of acceptance of physicians (N = 1,000) will be measured using a structured questionnaire including various dimensions. Statistical analysis: multiple regression

  4. Models of quality-adjusted life years when health varies over time

    DEFF Research Database (Denmark)

    Hansen, Kristian Schultz; Østerdal, Lars Peter Raahave

    2006-01-01

    Quality-adjusted life year (QALY) models are widely used for economic evaluation in the health care sector. In the first part of the paper, we establish an overview of QALY models where health varies over time and provide a theoretical analysis of model identification and parameter estimation from...... time tradeoff (TTO) and standard gamble (SG) scores. We investigate deterministic and probabilistic models and consider five different families of discounting functions in all. The second part of the paper discusses four issues recurrently debated in the literature. This discussion includes questioning...... of these two can be used to disentangle risk aversion from discounting. We find that caution must be taken when drawing conclusions from models with chronic health states to situations where health varies over time. One notable difference is that in the former case, risk aversion may be indistinguishable from...
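
For the simplest deterministic case, total QALYs are a discounted sum of per-period quality weights; exponential discounting, one of the families such papers compare, can be sketched as follows (helper name and rate are illustrative).

```python
def discounted_qaly(qualities, r=0.03):
    """Total QALYs for a health profile: per-period quality weights
    q_t in [0, 1], discounted exponentially at annual rate r,
    i.e. sum over t of q_t / (1 + r)**t."""
    return sum(q / (1.0 + r) ** t for t, q in enumerate(qualities))
```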

  5. TH-CD-209-06: LET-Based Adjustment of IMPT Plans Using Prioritized Optimization

    Energy Technology Data Exchange (ETDEWEB)

    Unkelbach, J; Giantsoudi, D; Paganetti, H [Massachusetts General Hospital, Boston, MA (United States); Botas, P [Massachusetts General Hospital, Boston, MA (United States); Heidelberg University, Heidelberg, DE (Germany); Qin, N; Jia, X [The University of Texas Southwestern Medical Ctr, Dallas, TX (United States)

    2016-06-15

    Purpose: In-vitro experiments suggest an increase in proton relative biological effectiveness (RBE) towards the end of range. However, proton treatment planning and dose reporting for clinical outcome assessment has been based on physical dose and constant RBE. Therefore, treatment planning for intensity-modulated proton therapy (IMPT) is unlikely to transition radically to pure RBE-based planning. We suggest a hybrid approach where treatment plans are initially created based on physical dose constraints and prescriptions, and are subsequently altered to avoid high linear energy transfer (LET) in critical structures while limiting the degradation of the physical dose distribution. Methods: To allow fast optimization based on dose and LET we extended a GPU-based Monte-Carlo code towards providing dose-averaged LET in addition to dose for all pencil beams. After optimizing an initial IMPT plan based on physical dose, a prioritized optimization scheme is used to modify the LET distribution while constraining the physical dose objectives to values close to the initial plan. The LET optimization step is performed based on objective functions evaluated for the product of physical dose and LET (LETxD). To first approximation, LETxD represents a measure of the additional biological dose that is caused by high LET. Regarding optimization techniques, LETxD has the advantage of being a linear function of the pencil beam intensities. Results: The method is applicable to treatments where serial critical structures with maximum dose constraint are located in or near the target. We studied intra-cranial tumors (high-grade meningiomas, base-of-skull chordomas) where the target (CTV) overlaps with the brainstem and optic structures. Often, high LETxD in critical structures can be avoided while minimally compromising physical dose planning objectives. 
Conclusion: LET-based re-optimization of IMPT plans represents a pragmatic approach to bridge the gap between purely physical dose-based

  6. Evaluation of CMIP5 continental precipitation simulations relative to satellite-based gauge-adjusted observations

    Science.gov (United States)

    Mehran, A.; AghaKouchak, A.; Phillips, T. J.

    2014-02-01

    The objective of this study is to cross-validate 34 Coupled Model Intercomparison Project Phase 5 (CMIP5) historical simulations of precipitation against the Global Precipitation Climatology Project (GPCP) data, quantifying model pattern discrepancies, and biases for both entire distributions and their upper tails. The results of the volumetric hit index (VHI) analysis of the total monthly precipitation amounts show that most CMIP5 simulations are in good agreement with GPCP patterns in many areas but that their replication of observed precipitation over arid regions and certain subcontinental regions (e.g., northern Eurasia, eastern Russia, and central Australia) is problematical. Overall, the VHI of the multimodel ensemble mean and median also are superior to that of the individual CMIP5 models. However, at high quantiles of reference data (75th and 90th percentiles), all climate models display low skill in simulating precipitation, except over North America, the Amazon, and Central Africa. Analyses of total bias (B) in CMIP5 simulations reveal that most models overestimate precipitation over regions of complex topography (e.g., western North and South America and southern Africa and Asia), while underestimating it over arid regions. Also, while most climate model simulations show low biases over Europe, intermodel variations in bias over Australia and Amazonia are considerable. The quantile bias analyses indicate that CMIP5 simulations are even more biased at high quantiles of precipitation. It is found that a simple mean field bias removal improves the overall B and VHI values but does not make a significant improvement at high quantiles of precipitation.
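
The total bias and its upper-tail (quantile) variant can be sketched as follows. Reading B as a simulated-to-observed volume ratio is our assumption about the index definition; names are illustrative.

```python
import numpy as np

def total_bias(sim, obs):
    """Total bias B: ratio of simulated to observed precipitation volume
    (B = 1 means unbiased, B > 1 overestimation)."""
    return sim.sum() / obs.sum()

def quantile_bias(sim, obs, q=75):
    """Bias computed only over the upper tail: grid cells/months at or
    above the q-th percentile of the reference (observed) data."""
    mask = obs >= np.percentile(obs, q)
    return sim[mask].sum() / obs[mask].sum()
```

A mean-field bias removal (dividing sim by B) fixes the total but, as the abstract notes, need not fix the upper-tail ratio.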

  7. CENTRAL WAVELENGTH ADJUSTMENT OF LIGHT EMITTING SOURCE IN INTERFEROMETRIC SENSORS BASED ON FIBER-OPTIC BRAGG GRATINGS

    Directory of Open Access Journals (Sweden)

    A. S. Aleynik

    2015-09-01

    Full Text Available The paper is focused on the investigation of a fiber-optic interferometric sensor based on an array of fiber Bragg gratings. The mechanism by which the reflection spectra of the fiber Bragg gratings are displaced under external temperature effects and static pressure is described. Experiment has shown that displacement of the Bragg grating reflection spectra reduces the visibility of the interference pattern. A method of central wavelength adjustment is proposed for the optical radiation source in accordance with the current Bragg grating reflection spectra, based on pulse-relative modulation of the control signal for the Peltier element controller. A semiconductor vertical-cavity surface-emitting laser controlled by a pump driver is used as the light source. The method is implemented by the Peltier element controller regulating and stabilizing the light source temperature, and a programmable logic integrated circuit monitoring the Peltier element controller. The experiment has proved that the proposed method makes it possible to regulate the light source temperature at a pitch of 0.05 K and adjust the optical radiation source central wavelength at a pitch of 0.05 nm. Experimental results have revealed that adjusting the central wavelength of the radiation at a pitch of 0.005 nm enables operation of an array of four fiber-optic sensors based on fiber Bragg gratings formed in one optical fiber, under Bragg grating temperature changes from 0 °C to 300 °C and mechanical stretching of the fiber by forces up to 2 N.
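
The temperature-induced shift of a Bragg reflection peak that the source must track follows the standard relation Δλ = λ₀(α + ξ)ΔT. The coefficient values below are typical fused-silica numbers, assumed for illustration rather than taken from the paper.

```python
def bragg_shift_nm(lam0_nm, d_temp_K, alpha=0.55e-6, xi=8.6e-6):
    """Temperature-induced Bragg wavelength shift:
    d_lambda = lambda0 * (alpha + xi) * dT, where alpha is the thermal
    expansion coefficient and xi the thermo-optic coefficient of silica
    (both per kelvin; typical textbook values, assumed)."""
    return lam0_nm * (alpha + xi) * d_temp_K
```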

  8. The combined geodetic network adjusted on the reference ellipsoid – a comparison of three functional models for GNSS observations

    Directory of Open Access Journals (Sweden)

    Kadaj Roman

    2016-12-01

    Full Text Available The adjustment problem of the so-called combined (hybrid, integrated) network created with GNSS vectors and terrestrial observations has been the subject of many theoretical and applied works. The network adjustment has been considered in various mathematical spaces: in the Cartesian geocentric system, on a reference ellipsoid and on a mapping plane. For practical reasons, a geodetic coordinate system associated with the reference ellipsoid is often adopted. In this case, the Cartesian GNSS vectors are converted, for example, into geodesic parameters (azimuth and length) on the ellipsoid, but the simplest form of converted pseudo-observations is the direct differences of the geodetic coordinates. Unfortunately, such an approach may be essentially distorted by a systematic error resulting from the position error of the GNSS vector before its projection onto the ellipsoid surface. In this paper, an analysis of the impact of this error on the determined measures of geometric ellipsoid elements, including the differences of geodetic coordinates or geodesic parameters, is presented. The analysis of the adjustment of a combined network on the ellipsoid shows that the optimal functional approach in relation to the satellite observations is to create the observational equations directly for the original GNSS Cartesian vector components, writing them directly as functions of the geodetic coordinates (in numerical applications, we use the linearized forms of the observational equations with explicitly specified coefficients). While retaining the original character of the Cartesian vector, one avoids any systematic errors that may occur in the conversion of the original GNSS vectors to ellipsoid elements, for example the vector of the geodesic parameters. The problem is theoretically developed and numerically tested. An example of the adjustment of a subnet loaded from the database of reference stations of the ASG-EUPOS system was considered for the preferred functional

  9. Elasto-plastic hardening models adjustment to ferritic, austenitic and austenoferritic Rebar

    International Nuclear Information System (INIS)

    Hortigóna, B.; Gallardo, J.M.; Nieto-García, E.J.; López, J.A.

    2017-01-01

    The elastoplastic behaviour of steel used for structural members has received attention to facilitate mechanically resistant design. New Zealand and South African standards have adopted various theoretical approaches to describe such behaviour in stainless steels. With respect to the building industry, describing the tensile behaviour of steel rebar used in reinforced concrete structures is of interest. Differences compared with the homogeneous material described in the above-mentioned standards and related literature are discussed in this paper. Specifically, the presence of ribs and the TEMPCORE® technology used to produce carbon steel rebar may alter the elastoplastic model. Carbon steel rebar is shown to fit a Hollomon model, giving hardening exponent values on the order of 0.17. Austenitic stainless steel rebar behaviour is better described using a modified Rasmussen model with a free-fitted exponent of 6. Duplex stainless steel shows a poor fit to any previous model.
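
Fitting the Hollomon hardening law σ = K·εⁿ reduces to linear regression in log-log space; the sketch below recovers K and the hardening exponent n from a stress-strain curve (helper name is ours).

```python
import numpy as np

def hollomon_fit(strain, stress):
    """Fit the Hollomon law sigma = K * eps**n by linear regression in
    log-log space: log(sigma) = log(K) + n*log(eps). Returns (K, n).
    Use only the uniform plastic portion of the curve."""
    n, logK = np.polyfit(np.log(strain), np.log(stress), 1)
    return float(np.exp(logK)), float(n)
```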

  10. Average Nuclear properties based on statistical model

    International Nuclear Information System (INIS)

    El-Jaick, L.J.

    1974-01-01

    The rough properties of nuclei were investigated with a statistical model, in systems with the same and with different numbers of protons and neutrons separately, considering the Coulomb energy in the latter system. Some average nuclear properties were calculated based on the energy density of nuclear matter, from the Weizsäcker-Bethe semiempirical mass formula, generalized for compressible nuclei. In the study of the surface energy coefficient, the great influence exercised by the Coulomb energy and nuclear compressibility was verified. For a good adjustment of the beta stability lines and mass excess, the surface symmetry energy was established. (M.C.K.)
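
The Weizsäcker-Bethe semiempirical mass formula underlying this kind of analysis can be sketched with a standard textbook coefficient set (illustrative values, not the paper's fit).

```python
def binding_energy(A, Z):
    """Weizsaecker-Bethe semiempirical binding energy in MeV:
    volume, surface, Coulomb, asymmetry and pairing terms, with a
    standard textbook coefficient set (assumed, not the paper's)."""
    aV, aS, aC, aA, aP = 15.75, 17.8, 0.711, 23.7, 11.18
    N = A - Z
    B = (aV * A
         - aS * A ** (2 / 3)
         - aC * Z * (Z - 1) / A ** (1 / 3)
         - aA * (N - Z) ** 2 / A)
    if Z % 2 == 0 and N % 2 == 0:        # even-even: bound more tightly
        B += aP / A ** 0.5
    elif Z % 2 == 1 and N % 2 == 1:      # odd-odd: bound less tightly
        B -= aP / A ** 0.5
    return B
```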

  11. Users Guide to SAMINT: A Code for Nuclear Data Adjustment with SAMMY Based on Integral Experiments

    Energy Technology Data Exchange (ETDEWEB)

    Sobes, Vladimir [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Leal, Luiz C. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Arbanas, Goran [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States)

    2014-10-01

    The purpose of this project is to couple differential and integral data evaluation in a continuous-energy framework. More specifically, the goal is to use the Generalized Linear Least Squares methodology employed in TSURFER to update the parameters of a resolved resonance region evaluation directly. Recognizing that the GLLS methodology in TSURFER is identical to the mathematical description of the simple Bayesian updating carried out in SAMMY, the computer code SAMINT was created to help use the mathematical machinery of SAMMY to update resolved resonance parameters based on integral data. Minimal modifications of SAMMY are required when used with SAMINT to make resonance parameter updates based on integral experimental data.
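The GLLS/Bayesian update that this record says is shared by TSURFER and SAMMY can be sketched with the standard linear update equations. The matrices and numbers below are a hypothetical toy problem, not SAMINT's actual interface:

```python
import numpy as np

def glls_update(p, M, S, d, Vd, f_p):
    """One generalized linear least squares (Bayesian) parameter update.

    p   : prior parameters, with prior covariance M
    S   : sensitivity matrix, d(responses)/d(parameters)
    d   : measured integral responses, with covariance Vd
    f_p : responses computed from the prior parameters
    Returns the updated parameters and their posterior covariance.
    """
    K = M @ S.T @ np.linalg.inv(S @ M @ S.T + Vd)  # gain matrix
    p_new = p + K @ (d - f_p)
    M_new = M - K @ S @ M
    return p_new, M_new

# Toy example: two resonance-like parameters, one integral response r = p0 + 2*p1
p = np.array([1.0, 1.0])
M = np.diag([0.04, 0.04])      # prior parameter covariance
S = np.array([[1.0, 2.0]])     # sensitivity of the response to the parameters
d = np.array([3.5])            # measured integral response
Vd = np.array([[0.01]])        # measurement covariance
f_p = S @ p                    # linear model: computed response from the prior
p_new, M_new = glls_update(p, M, S, d, Vd, f_p)
print(p_new)  # both parameters pulled toward the measurement d = 3.5
```

Because the measurement is much more precise than the prior, the updated response S @ p_new lands close to the measured value 3.5.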

  12. Adjusting particle-size distributions to account for aggregation in tephra-deposit model forecasts

    Science.gov (United States)

    Mastin, Larry G.; Van Eaton, Alexa; Durant, A.J.

    2016-01-01

Volcanic ash transport and dispersion (VATD) models are used to forecast tephra deposition during volcanic eruptions. Model accuracy is limited by the fact that fine ash aggregates (clumps into clusters), thus altering patterns of deposition. In most models this is accounted for by ad hoc changes to model input, representing fine ash as aggregates with density ρagg and a log-normal size distribution with median μagg and standard deviation σagg. Optimal values may vary between eruptions. To test the variance, we used the Ash3d tephra model to simulate four deposits: 18 May 1980 Mount St. Helens; 16–17 September 1992 Crater Peak (Mount Spurr); 17 June 1996 Ruapehu; and 23 March 2009 Mount Redoubt. In 192 simulations, we systematically varied μagg and σagg, holding ρagg constant at 600 kg m−3. We evaluated the fit using three indices that compare modeled versus measured (1) mass load at sample locations; (2) mass load versus distance along the dispersal axis; and (3) isomass area. For all deposits, under these inputs, the best-fit value of μagg ranged narrowly between ∼ 2.3 and 2.7φ (0.20–0.15 mm), despite large variations in erupted mass (0.25–50 Tg), plume height (8.5–25 km), and mass fraction of fine ash, suggesting a discrete process that is insensitive to eruptive style or magnitude. This result offers the potential for a simple, computationally efficient parameterization scheme for use in operational model forecasts. Further research may indicate whether this narrow range also reflects physical constraints on processes in the evolving cloud.
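The aggregate size distribution described here is log-normal in diameter, i.e. normal in φ units (φ = −log₂ of the diameter in mm), with median μagg and standard deviation σagg. A short sketch of discretizing such a distribution into φ bins; the bin edges and σ value are illustrative, and only the median ∼2.4φ echoes the abstract's best-fit range:

```python
import math

def phi_distribution(mu_phi, sigma_phi, bin_edges):
    """Mass fractions in phi-unit bins for a distribution that is normal in phi
    (log-normal in diameter), with median mu_phi and std dev sigma_phi."""
    def cdf(x):
        return 0.5 * (1.0 + math.erf((x - mu_phi) / (sigma_phi * math.sqrt(2.0))))
    return [cdf(hi) - cdf(lo) for lo, hi in zip(bin_edges[:-1], bin_edges[1:])]

# Median 2.4 phi (~0.19 mm, since d = 2**-phi mm); sigma chosen for illustration
edges = [0.0, 1.0, 2.0, 3.0, 4.0, 5.0]
fracs = phi_distribution(2.4, 0.5, edges)
print([round(f, 3) for f in fracs])  # mass concentrates in the 2-3 phi bin holding the median
```

A VATD model input file would assign each of these mass fractions to a grain-size class with the aggregate density ρagg.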

  13. Model-Based Reasoning

    Science.gov (United States)

    Ifenthaler, Dirk; Seel, Norbert M.

    2013-01-01

    In this paper, there will be a particular focus on mental models and their application to inductive reasoning within the realm of instruction. A basic assumption of this study is the observation that the construction of mental models and related reasoning is a slowly developing capability of cognitive systems that emerges effectively with proper…

  14. Evaluating the Investment Benefit of Multinational Enterprises' International Projects Based on Risk Adjustment: Evidence from China

    Science.gov (United States)

    Chen, Chong

    2016-01-01

    This study examines the international risks faced by multinational enterprises to understand their impact on the evaluation of investment projects. Moreover, it establishes a 'three-dimensional' theoretical framework of risk identification to analyse the composition of international risk indicators of multinational enterprises based on the theory…

  15. Adaptive Kalman Filter Based on Adjustable Sampling Interval in Burst Detection for Water Distribution System

    OpenAIRE

    Doo Yong Choi; Seong-Won Kim; Min-Ah Choi; Zong Woo Geem

    2016-01-01

    Rapid detection of bursts and leaks in water distribution systems (WDSs) can reduce the social and economic costs incurred through direct loss of water into the ground, additional energy demand for water supply, and service interruptions. Many real-time burst detection models have been developed in accordance with the use of supervisory control and data acquisition (SCADA) systems and the establishment of district meter areas (DMAs). Nonetheless, no consideration has been given to how frequen...
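The burst-detection idea behind this record can be illustrated with a minimal one-dimensional Kalman filter: track the flow signal with a random-walk model and flag a burst when the innovation exceeds a gate on the innovation standard deviation. All numbers and the gating rule below are illustrative assumptions, not the paper's algorithm:

```python
class ScalarKalman:
    """Minimal 1-D Kalman filter tracking a slowly varying flow signal.

    A burst is flagged when the innovation (measurement minus prediction)
    exceeds `gate` standard deviations of the innovation covariance.
    """
    def __init__(self, x0, p0, q, r, gate=3.0):
        self.x, self.p, self.q, self.r, self.gate = x0, p0, q, r, gate

    def step(self, z):
        p_pred = self.p + self.q           # predict (random-walk model)
        innov = z - self.x
        s = p_pred + self.r                # innovation covariance
        is_burst = abs(innov) > self.gate * s ** 0.5
        k = p_pred / s                     # Kalman gain
        self.x = self.x + k * innov        # update state estimate
        self.p = (1.0 - k) * p_pred
        return self.x, is_burst

kf = ScalarKalman(x0=100.0, p0=1.0, q=0.01, r=1.0)
flags = [kf.step(z)[1] for z in [100.2, 99.8, 100.1, 130.0]]  # sudden jump at the end
print(flags)  # [False, False, False, True]
```

In a deployment the sampling interval (and hence q) would be adapted as the abstract describes, sampling faster when the innovation grows.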

  16. Adjustment and Characterization of an Original Model of Chronic Ischemic Heart Failure in Pig

    Directory of Open Access Journals (Sweden)

    Laurent Barandon

    2010-01-01

We present and characterize an original experimental model of chronic ischemic heart failure in the pig. Two ameroid constrictors were placed around the LAD and the circumflex artery. Two months after surgery, the pigs presented poor LV function associated with severe mitral valve insufficiency. Echocardiographic analysis showed substantial anomalies in radial and circumferential deformations, on both the anterior and lateral surfaces of the heart. These anomalies in function were coupled with anomalies of perfusion observed in echocardiography after injection of contrast medium. Histological analysis showed no evidence of myocardial infarction. Our findings suggest that we were able to create and stabilize a chronic ischemic heart failure model in the pig. This model represents a useful tool for the development of new medical or surgical treatments in this field.

  17. Psychosocial adjustment of children with chronic illness: an evaluation of three models.

    Science.gov (United States)

    Gartstein, M A; Short, A D; Vannatta, K; Noll, R B

    1999-06-01

    This study was designed to assess social, emotional, and behavioral functioning of children with chronic illness and to evaluate three models addressing the impact of chronic illness on psychosocial functioning: discrete disease, noncategorical, and mixed. Families of children with cancer, sickle cell disease, hemophilia, and juvenile rheumatoid arthritis participated, along with families of classroom comparison peers without a chronic illness who had the closest date of birth and were of the same race and gender (COMPs). Mothers, fathers, and children provided information regarding current functioning of the child with chronic illness or the COMP child. Child Behavior Checklist and Children's Depression Inventory scores were examined. Results provided support for the noncategorical model. Thus, the mixed model evaluated in this study requires modifications before its effectiveness as a classification system can be demonstrated.

  18. The Linear Quadratic Adjustment Cost Model and the Demand for Labour

    DEFF Research Database (Denmark)

    Engsted, Tom; Haldrup, Niels

    1994-01-01

A new method is developed for estimating and testing the linear quadratic adjustment cost model when the underlying time series are non-stationary, and the method is applied to modelling labour demand in Danish industrial sectors.

  19. Repatriation Adjustment: Literature Review

    Directory of Open Access Journals (Sweden)

    Gamze Arman

    2009-12-01

Expatriation is a widely studied area of research in work and organizational psychology. After expatriates accomplish their missions in host countries, they return to their home countries; this process is called repatriation. Adjustment constitutes a crucial part of repatriation research. In the present literature review, research on repatriation adjustment was reviewed with the aim of depicting the whole picture of this phenomenon. The research was classified on the basis of a theoretical model of repatriation adjustment, whose basic frame consists of antecedents, adjustment, and outcomes as main variables, with personal characteristics/coping strategies and organizational strategies as moderating variables.

  20. Case mix adjustment of health outcomes, resource use and process indicators in childbirth care: a register-based study.

    Science.gov (United States)

    Mesterton, Johan; Lindgren, Peter; Ekenberg Abreu, Anna; Ladfors, Lars; Lilja, Monica; Saltvedt, Sissel; Amer-Wåhlin, Isis

    2016-05-31

    Unwarranted variation in care practice and outcomes has gained attention and inter-hospital comparisons are increasingly being used to highlight and understand differences between hospitals. Adjustment for case mix is a prerequisite for meaningful comparisons between hospitals with different patient populations. The objective of this study was to identify and quantify maternal characteristics that impact a set of important indicators of health outcomes, resource use and care process and which could be used for case mix adjustment of comparisons between hospitals. In this register-based study, 139 756 deliveries in 2011 and 2012 were identified in regional administrative systems from seven Swedish regions, which together cover 67 % of all deliveries in Sweden. Data were linked to the Medical birth register and Statistics Sweden's population data. A number of important indicators in childbirth care were studied: Caesarean section (CS), induction of labour, length of stay, perineal tears, haemorrhage > 1000 ml and post-partum infections. Sociodemographic and clinical characteristics deemed relevant for case mix adjustment of outcomes and resource use were identified based on previous literature and based on clinical expertise. Adjustment using logistic and ordinary least squares regression analysis was performed to quantify the impact of these characteristics on the studied indicators. Almost all case mix factors analysed had an impact on CS rate, induction rate and length of stay and the effect was highly statistically significant for most factors. Maternal age, parity, fetal presentation and multiple birth were strong predictors of all these indicators but a number of additional factors such as born outside the EU, body mass index (BMI) and several complications during pregnancy were also important risk factors. A number of maternal characteristics had a noticeable impact on risk of perineal tears, while the impact of case mix factors was less pronounced for

  1. A Unified Model Exploring Parenting Practices as Mediators of Marital Conflict and Children's Adjustment

    Science.gov (United States)

    Coln, Kristen L.; Jordan, Sara S.; Mercer, Sterett H.

    2013-01-01

    We examined positive and negative parenting practices and psychological control as mediators of the relations between constructive and destructive marital conflict and children's internalizing and externalizing problems in a unified model. Married mothers of 121 children between the ages of 6 and 12 completed questionnaires measuring marital…

  2. FEM-based Printhead Intelligent Adjusting Method for Printing Conduct Material

    Directory of Open Access Journals (Sweden)

    Liang Xiaodan

    2017-01-01

Ink-jet printing of circuit boards has advantages such as non-contact manufacture, high manufacturing accuracy, and low pollution. To improve printing precision, finite element technology is adopted to model the piezoelectric print heads, and a new bacteria foraging algorithm with a lifecycle strategy is proposed to optimize the parameters of the driving waveforms so as to obtain the desired droplet characteristics. Numerical simulation shows that the algorithm performs well. Additionally, the droplet jetting simulation results and measured results confirm that the method precisely achieves the desired droplet characteristics.

  3. Self-Tuning Insulin Adjustment Algorithm for Type 1 Diabetic Patients based on Multi-Doses Regime

    Directory of Open Access Journals (Sweden)

    D. U. Campos-Delgado

    2005-01-01

A self-tuning algorithm is presented for on-line insulin dosage adjustment in type 1 diabetic patients (chronic stage). The suggested algorithm needs no information about the patient's insulin–glucose dynamics (it is model-free). Three doses are programmed daily, where a combination of two types of insulin, rapid/short-acting and intermediate/long-acting, is injected into the patient through a subcutaneous route. The dose adaptation is performed by reducing the error of the blood glucose level from the euglycemic range. In this way, a total of five doses are tuned per day, three rapid/short and two intermediate/long, with a large penalty to avoid hypoglycemic scenarios. Closed-loop simulation results are illustrated using a detailed nonlinear model of the subcutaneous insulin–glucose dynamics in a type 1 diabetic patient with meal intake.
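The model-free dose adaptation described here can be sketched as an error-driven update with an asymmetric penalty against hypoglycemia. The gains, thresholds, and units below are illustrative assumptions, not the paper's tuning rule, and are of course not clinical guidance:

```python
def adjust_dose(dose, glucose, target=100.0, gain=0.02,
                hypo_threshold=70.0, hypo_gain=0.10,
                dose_min=0.0, dose_max=40.0):
    """Update one insulin dose (units) from a blood glucose reading (mg/dL).

    Above target, the dose is nudged up in proportion to the error; a
    hypoglycemic reading triggers a much larger reduction, reflecting the
    large penalty on hypoglycemic scenarios.
    """
    if glucose < hypo_threshold:
        dose -= hypo_gain * (hypo_threshold - glucose)
    else:
        dose += gain * (glucose - target)
    return min(max(dose, dose_min), dose_max)

print(adjust_dose(10.0, 180.0))  # hyperglycemia -> dose increases
print(adjust_dose(10.0, 60.0))   # hypoglycemia  -> dose cut sharply
```

Run once per scheduled injection, such a rule slowly tunes each of the five daily doses toward euglycemia without any patient model.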

  4. Model-based Software Engineering

    DEFF Research Database (Denmark)

    Kindler, Ekkart

    2010-01-01

    The vision of model-based software engineering is to make models the main focus of software development and to automatically generate software from these models. Part of that idea works already today. But, there are still difficulties when it comes to behaviour. Actually, there is no lack in models...

  5. Enhanced TDMA Based Anti-Collision Algorithm with a Dynamic Frame Size Adjustment Strategy for Mobile RFID Readers

    Directory of Open Access Journals (Sweden)

    Kwang Cheol Shin

    2009-02-01

In the fields of production, manufacturing and supply chain management, Radio Frequency Identification (RFID) is regarded as one of the most important technologies. Nowadays, Mobile RFID, which is often installed in carts or forklift trucks, is increasingly being applied to the search for and checkout of items in warehouses, supermarkets, libraries and other industrial fields. In using Mobile RFID, since the readers are continuously moving, they can interfere with each other when they attempt to read the tags. In this study, we suggest a Time Division Multiple Access (TDMA) based anti-collision algorithm for Mobile RFID readers. Our algorithm automatically adjusts the frame size of each reader without using manual parameters by adopting the dynamic frame size adjustment strategy when collisions occur at a reader. Through experiments on a simulated environment for Mobile RFID readers, we show that the proposed method improves the number of successful transmissions by about 228% on average, compared with Colorwave, a representative TDMA based anti-collision algorithm.

  6. Enhanced TDMA Based Anti-Collision Algorithm with a Dynamic Frame Size Adjustment Strategy for Mobile RFID Readers.

    Science.gov (United States)

    Shin, Kwang Cheol; Park, Seung Bo; Jo, Geun Sik

    2009-01-01

    In the fields of production, manufacturing and supply chain management, Radio Frequency Identification (RFID) is regarded as one of the most important technologies. Nowadays, Mobile RFID, which is often installed in carts or forklift trucks, is increasingly being applied to the search for and checkout of items in warehouses, supermarkets, libraries and other industrial fields. In using Mobile RFID, since the readers are continuously moving, they can interfere with each other when they attempt to read the tags. In this study, we suggest a Time Division Multiple Access (TDMA) based anti-collision algorithm for Mobile RFID readers. Our algorithm automatically adjusts the frame size of each reader without using manual parameters by adopting the dynamic frame size adjustment strategy when collisions occur at a reader. Through experiments on a simulated environment for Mobile RFID readers, we show that the proposed method improves the number of successful transmissions by about 228% on average, compared with Colorwave, a representative TDMA based anti-collision algorithm.
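A dynamic frame-size adjustment of the kind both versions of this record describe can be sketched as a multiplicative rule: grow the frame when the observed collision rate is high, shrink it when collisions are rare. The thresholds and bounds below are illustrative assumptions, not the paper's (or Colorwave's) exact parameters:

```python
def adjust_frame_size(frame_size, collisions, successes,
                      up_threshold=0.2, down_threshold=0.05,
                      f_min=4, f_max=64):
    """Adjust a reader's TDMA frame size after one frame of slots.

    A high collision rate enlarges the frame to spread transmissions over
    more slots; a very low rate shrinks it to cut idle slots. The result is
    clamped to [f_min, f_max].
    """
    attempts = collisions + successes
    rate = collisions / attempts if attempts else 0.0
    if rate > up_threshold:
        frame_size *= 2
    elif rate < down_threshold:
        frame_size //= 2
    return min(max(frame_size, f_min), f_max)

print(adjust_frame_size(16, collisions=8, successes=12))  # 40% collisions -> 32
print(adjust_frame_size(16, collisions=0, successes=20))  # no collisions  -> 8
```

Because each reader runs the rule locally on its own collision counts, no manual per-reader parameters are needed, which is the point the abstract emphasizes.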

  7. Aggregate Demand–Inflation Adjustment Model Applied to Southeast European Economies

    Directory of Open Access Journals (Sweden)

    Apostolov Mico

    2016-01-01

Applying the IS-MP-IA model and the Taylor rule to selected Southeast European economies (Albania, Bosnia and Herzegovina, Macedonia, and Serbia), we find that a change in the effective exchange rate positively affects output, while a change in the world interest rate affects output negatively or not at all, and that additional world output would help increase the output of the selected economies.

  8. Adjustable Two-Tier Cache for IPTV Based on Segmented Streaming

    Directory of Open Access Journals (Sweden)

    Kai-Chun Liang

    2012-01-01

Internet protocol TV (IPTV) is a promising Internet killer application, which integrates video, voice, and data onto a single IP network, and offers viewers an innovative set of choices and control over their TV content. To provide high-quality IPTV services, an effective strategy is based on caching. This work proposes a segment-based two-tier caching approach, which divides each video into multiple segments to be cached. This approach also partitions the cache space into two layers, where the first layer mainly caches to-be-played segments and the second layer saves possibly played segments. As segment access becomes frequent, the proposed approach enlarges the first layer and reduces the second layer, and vice versa. Because requested segments may not be accessed frequently, this work further designs an admission control mechanism to determine whether an incoming segment should be cached or not. The cache architecture takes forward/stop playback into account and may replace unused segments under interrupted playback. Finally, we conduct comprehensive simulation experiments to evaluate the performance of the proposed approach. The results show that our approach can yield a higher hit ratio than previous work under various environmental parameters.
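The two-tier layout plus admission control that this record describes can be sketched with two LRU layers and a request-count gate. The tier sizes, the fixed (rather than adaptive) layer split, and the "admit after N requests" rule are all simplifying assumptions for illustration:

```python
from collections import OrderedDict, defaultdict

class TwoTierSegmentCache:
    """Illustrative two-layer segment cache.

    Tier 1 holds to-be-played (hot) segments; tier 2 holds segments demoted
    from tier 1. A segment is admitted only after `admit_after` requests,
    a simple stand-in for the paper's admission control mechanism.
    """
    def __init__(self, tier1_size=2, tier2_size=4, admit_after=2):
        self.t1 = OrderedDict()
        self.t2 = OrderedDict()
        self.t1_size, self.t2_size = tier1_size, tier2_size
        self.admit_after = admit_after
        self.counts = defaultdict(int)

    def request(self, seg):
        self.counts[seg] += 1
        if seg in self.t1:
            self.t1.move_to_end(seg)          # hit in tier 1, refresh recency
            return True
        if seg in self.t2:                    # hit in tier 2, promote to tier 1
            del self.t2[seg]
            self._insert_t1(seg)
            return True
        if self.counts[seg] >= self.admit_after:  # admission control gate
            self._insert_t1(seg)
        return False

    def _insert_t1(self, seg):
        self.t1[seg] = True
        if len(self.t1) > self.t1_size:       # demote the LRU segment to tier 2
            old, _ = self.t1.popitem(last=False)
            self.t2[old] = True
            if len(self.t2) > self.t2_size:
                self.t2.popitem(last=False)   # evict tier 2's LRU segment

cache = TwoTierSegmentCache()
hits = [cache.request(s) for s in ["a", "a", "b", "b", "a"]]
print(hits)  # first touches miss; repeats hit once a segment is admitted
```

In the paper's design the boundary between the two layers additionally shifts with segment-access frequency; here it is held fixed to keep the sketch short.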

  9. Principles of models based engineering

    Energy Technology Data Exchange (ETDEWEB)

    Dolin, R.M.; Hefele, J.

    1996-11-01

This report describes a Models Based Engineering (MBE) philosophy and implementation strategy that has been developed at Los Alamos National Laboratory's Center for Advanced Engineering Technology. A major theme in this discussion is that models based engineering is an information management technology enabling the development of information driven engineering. Unlike other information management technologies, models based engineering encompasses the breadth of engineering information, from design intent through product definition to consumer application.

  10. Risk based modelling

    International Nuclear Information System (INIS)

    Chapman, O.J.V.; Baker, A.E.

    1993-01-01

Risk based analysis is a tool becoming available to both engineers and managers to aid decision making concerning plant matters such as In-Service Inspection (ISI). In order to develop a risk based method, some form of Structural Reliability Risk Assessment (SRRA) needs to be performed to provide a probability-of-failure ranking for all sites around the plant. A Probabilistic Risk Assessment (PRA) can then be carried out to combine these possible events with the capability of plant safety systems and procedures, to establish the consequences of failure for the sites. In this way the probabilities of failure are converted into a risk based ranking which can be used to assist the process of deciding which sites should be included in an ISI programme. This paper reviews the technique and typical results of a risk based ranking assessment carried out for nuclear power plant pipework. (author)
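The ranking step this record describes boils down to combining each site's failure probability (from the SRRA) with its failure consequence (from the PRA) and sorting by the product. A minimal sketch; the site names and numbers are entirely hypothetical:

```python
def risk_ranking(sites):
    """Rank sites by risk = probability of failure x consequence.

    `sites` maps a site name to (annual failure probability, consequence score).
    Returns site names sorted from highest to lowest risk.
    """
    risks = {name: p * c for name, (p, c) in sites.items()}
    return sorted(risks, key=risks.get, reverse=True)

sites = {
    "weld-A": (1e-4, 50.0),   # moderate probability, moderate consequence
    "weld-B": (1e-6, 900.0),  # rare failure but severe consequence
    "weld-C": (1e-3, 1.0),    # frequent but benign failure
}
print(risk_ranking(sites))  # ['weld-A', 'weld-C', 'weld-B']
```

An ISI programme would then inspect sites from the top of this list downward until the budgeted inspection effort is exhausted.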

  11. The Effect of Overconfidence and Experience on Belief Adjustment Model in Investment Judgement

    Directory of Open Access Journals (Sweden)

    Luciana Spica Almilia

    2016-04-01

This study examines the effects of overconfidence and experience on increasing or reducing the information order effect in investment decision making. The subject criteria in this research are professional investors (having knowledge and experience in the field of investment and the stock market) and nonprofessional investors (having knowledge in the field of investment and the stock market); based on these criteria, the subjects include accounting students, capital market professionals and investors. The research uses a 2 × 2 between-subjects experimental method, conducted via the web. The individual characteristic (high confidence versus low confidence) is measured by a calibration test. The independent variables are two active (manipulated) variables: (1) pattern of information presentation (step by step or end of sequence); and (2) presentation order (good news – bad news or bad news – good news). The dependent variable is the revision of the investment decision made by the research subjects. Participants were 78 nonprofessional investors and 48 professional investors. The results are consistent with the prediction that individuals with a high level of confidence tend to ignore the available information, with the consequence that individuals with a high level of confidence are spared from the effects of the information order.

  12. Modifying the baricity of local anesthetics for spinal anesthesia by temperature adjustment: model calculations.

    Science.gov (United States)

    Heller, Axel R; Zimmermann, Katrin; Seele, Kristin; Rössel, Thomas; Koch, Thea; Litz, Rainer J

    2006-08-01

Although local anesthetics (LAs) are hyperbaric at room temperature, density drops within minutes after administration into the subarachnoid space. LAs become hypobaric and therefore may cranially ascend during spinal anesthesia in an uncontrolled manner. The authors hypothesized that temperature and density of LA solutions have a nonlinear relation that may be described by a polynomial equation, and that conversion of this equation may provide the temperature at which individual LAs are isobaric. Density of cerebrospinal fluid was measured using a vibrating-tube densitometer. Temperature-dependent density data were obtained for all LAs commonly used for spinal anesthesia, at least in triplicate at 5°, 20°, 30°, and 37°C. The hypothesis was tested by fitting the obtained data into polynomial mathematical models allowing calculation of substance-specific isobaric temperatures. Cerebrospinal fluid at 37°C had a density of 1.000646 ± 0.000086 g/ml. Three groups of local anesthetics with similar temperature (T, °C)-dependent density (ρ) characteristics were identified: articaine and mepivacaine, ρ1(T) = 1.008 − 5.36 × 10⁻⁶ T² (heavy LAs, isobaric at body temperature); L-bupivacaine, ρ2(T) = 1.007 − 5.46 × 10⁻⁶ T² (intermediate LA, less hypobaric than saline); bupivacaine, ropivacaine, prilocaine, and lidocaine, ρ3(T) = 1.0063 − 5.0 × 10⁻⁶ T² (light LAs, more hypobaric than saline). Isobaric temperatures (°C) were as follows: 5 mg/ml bupivacaine, 35.1; 5 mg/ml L-bupivacaine, 37.0; 5 mg/ml ropivacaine, 35.1; 20 mg/ml articaine, 39.4. Sophisticated measurements and mathematical models now allow calculation of the ideal injection temperature of LAs and, thus, even better control of LA distribution within the cerebrospinal fluid. The given formulae allow adaptation to subpopulations with varying cerebrospinal fluid density.
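Inverting the quadratic density model ρ(T) = a − b·T² at the measured CSF density gives the isobaric temperature directly: T = √((a − ρ_CSF)/b). A short sketch using the group coefficients quoted in the abstract; note these coefficients are rounded, so the inversion reproduces the substance-specific isobaric temperatures only approximately:

```python
import math

def isobaric_temperature(a, b, rho_csf=1.000646):
    """Temperature (deg C) at which rho(T) = a - b*T**2 equals CSF density."""
    return math.sqrt((a - rho_csf) / b)

# Coefficients quoted for the "heavy" group (articaine, mepivacaine)
t_heavy = isobaric_temperature(1.008, 5.36e-6)
print(round(t_heavy, 1))  # ~37 deg C: isobaric at body temperature, as reported
```

The same inversion with a per-patient ρ_CSF value is what allows the adaptation to subpopulations with differing cerebrospinal fluid density mentioned at the end of the abstract.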

  13. Model-based consensus

    NARCIS (Netherlands)

    Boumans, M.; Martini, C.; Boumans, M.

    2014-01-01

    The aim of the rational-consensus method is to produce "rational consensus", that is, "mathematical aggregation", by weighing the performance of each expert on the basis of his or her knowledge and ability to judge relevant uncertainties. The measurement of the performance of the experts is based on

  14. Model-based consensus

    NARCIS (Netherlands)

    Boumans, Marcel

    2014-01-01

    The aim of the rational-consensus method is to produce “rational consensus”, that is, “mathematical aggregation”, by weighing the performance of each expert on the basis of his or her knowledge and ability to judge relevant uncertainties. The measurement of the performance of the experts is based on

  15. Fast Cloud Adjustment to Increasing CO2 in a Superparameterized Climate Model

    Directory of Open Access Journals (Sweden)

    Marat Khairoutdinov

    2012-05-01

Two-year simulation experiments with a superparameterized climate model, SP-CAM, are performed to understand the fast tropical (30S–30N) cloud response to an instantaneous quadrupling of CO2 concentration with SST held fixed at present-day values. The greenhouse effect of the CO2 perturbation quickly warms the tropical land surfaces by an average of 0.5 K. This shifts rising motion, surface precipitation, and cloud cover at all levels from the ocean to the land, with only small net tropical-mean cloud changes. There is a widespread average reduction of about 80 m in the depth of the trade inversion capping the marine boundary layer (MBL) over the cooler subtropical oceans. One apparent contributing factor is CO2-enhanced downwelling longwave radiation, which reduces boundary-layer radiative cooling, a primary driver of turbulent entrainment through the trade inversion. A second contributor is a slight CO2-induced heating of the free troposphere above the MBL, which strengthens the trade inversion and also inhibits entrainment. There is a corresponding downward displacement of MBL clouds with a very slight decrease in mean cloud cover and albedo. Two-dimensional cloud-resolving model (CRM) simulations of this MBL response are run to steady state using composite SP-CAM simulated thermodynamic and wind profiles from a representative cool subtropical ocean regime, for the control and 4xCO2 cases. Simulations with a CRM grid resolution equal to that of SP-CAM are compared with much finer resolution simulations. The coarse-resolution simulations maintain a cloud fraction and albedo comparable to SP-CAM, but the fine-resolution simulations have a much smaller cloud fraction. Nevertheless, both CRM configurations simulate a reduction in inversion height comparable to SP-CAM. The changes in low cloud cover and albedo in the CRM simulations are small, but both simulations predict a slight reduction in low cloud albedo as in SP-CAM.

  16. Characterization of the CALIBAN Critical Assembly Neutron Spectra using Several Adjustment Methods Based on Activation Foils Measurement

    Science.gov (United States)

    Casoli, Pierre; Grégoire, Gilles; Rousseau, Guillaume; Jacquet, Xavier; Authier, Nicolas

    2016-02-01

CALIBAN is a metallic critical assembly managed by the Criticality, Neutron Science and Measurement Department at the French CEA center of Valduc. The reactor is extensively used for benchmark experiments dedicated to the evaluation of nuclear data, for electronic hardening, and to study the effect of neutrons on various materials. Therefore the CALIBAN irradiation characteristics, and especially its central cavity neutron spectrum, have to be very accurately evaluated. In order to strengthen our knowledge of this spectrum, several adjustment methods based on activation foil measurements have been studied for a few years in the laboratory. Firstly, two codes included in the UMG package were tested and compared: MAXED and GRAVEL. More recently, the CALIBAN cavity spectrum has been studied using CALMAR, a new adjustment tool currently under development at the CEA center of Cadarache. The article will discuss and compare the results and the quality of spectrum rebuilding obtained with the UMG codes and with the CALMAR software, from a set of activation measurements carried out in the CALIBAN irradiation cavity.

  17. CO-REGISTRATION AIRBORNE LIDAR POINT CLOUD DATA AND SYNCHRONOUS DIGITAL IMAGE REGISTRATION BASED ON COMBINED ADJUSTMENT

    Directory of Open Access Journals (Sweden)

    Z. H. Yang

    2016-06-01

To address the problem of co-registering airborne laser point cloud data with synchronous digital images, this paper proposes a registration method based on combined adjustment. By integrating tie points and point cloud data with elevation-constraint pseudo-observations, and using the principle of least-squares adjustment to solve for the corrections to the exterior orientation elements of each image, high-precision registration results can be obtained. In order to ensure the reliability of the tie points and the effectiveness of the pseudo-observations, this paper proposes a point-cloud-constrained SIFT matching and optimization method, which ensures that the tie points are located on flat terrain. In experiments with airborne laser point cloud data and synchronous digital images, there is an error of about 43 pixels in image space when the original POS data are used. If only the boresight of the POS system is considered, an error of 1.3 pixels remains in image space. The proposed method regards the corrections of the exterior orientation elements of each image as unknowns and reduces the errors to 0.15 pixels.
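Combined adjustment with pseudo-observations, as used in this record, amounts to stacking the constraint equations under the real observation equations with their own weights and solving one weighted least-squares system. A toy sketch with a single unknown; the matrices and weights are entirely hypothetical stand-ins for the image/point-cloud observation equations:

```python
import numpy as np

def combined_adjustment(A, l, P, C, w, Pc):
    """Weighted least squares with pseudo-observations.

    A, l, P  : design matrix, observation vector, observation weights
    C, w, Pc : pseudo-observation (constraint) rows, values, and weights
    Stacking the pseudo-observations under the real ones and solving the
    normal equations yields a solution that honours both.
    """
    A_all = np.vstack([A, C])
    l_all = np.concatenate([l, w])
    P_all = np.diag(np.concatenate([P, Pc]))
    N = A_all.T @ P_all @ A_all          # normal matrix
    return np.linalg.solve(N, A_all.T @ P_all @ l_all)

# Toy: estimate x from two noisy direct observations, plus a strongly weighted
# pseudo-observation (think: an elevation constraint) pulling it toward 10.0
A = np.array([[1.0], [1.0]])
l = np.array([9.0, 9.2])
P = np.array([1.0, 1.0])
C = np.array([[1.0]])
w = np.array([10.0])
Pc = np.array([100.0])
x = combined_adjustment(A, l, P, C, w, Pc)
print(round(float(x[0]), 3))  # close to 10.0, dominated by the constraint weight
```

In the paper's setting the unknowns are the exterior orientation corrections of each image, and the pseudo-observations tie image measurements to the lidar elevations.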

  18. Characterization of the CALIBAN Critical Assembly Neutron Spectra using Several Adjustment Methods Based on Activation Foils Measurement

    Directory of Open Access Journals (Sweden)

    Casoli Pierre

    2016-01-01

CALIBAN is a metallic critical assembly managed by the Criticality, Neutron Science and Measurement Department at the French CEA center of Valduc. The reactor is extensively used for benchmark experiments dedicated to the evaluation of nuclear data, for electronic hardening, and to study the effect of neutrons on various materials. Therefore the CALIBAN irradiation characteristics, and especially its central cavity neutron spectrum, have to be very accurately evaluated. In order to strengthen our knowledge of this spectrum, several adjustment methods based on activation foil measurements have been studied for a few years in the laboratory. Firstly, two codes included in the UMG package were tested and compared: MAXED and GRAVEL. More recently, the CALIBAN cavity spectrum has been studied using CALMAR, a new adjustment tool currently under development at the CEA center of Cadarache. The article will discuss and compare the results and the quality of spectrum rebuilding obtained with the UMG codes and with the CALMAR software, from a set of activation measurements carried out in the CALIBAN irradiation cavity.

  19. Salary adjustments

    CERN Multimedia

    HR Department

    2008-01-01

In accordance with decisions taken by the Finance Committee and Council in December 2007, salaries are adjusted with effect from 1 January 2008. Scale of basic salaries and scale of stipends paid to fellows (Annex R A 5 and R A 6 respectively): increased by 0.71% with effect from 1 January 2008. As a result of the stability of the Geneva consumer price index, the following elements do not increase: a) Family Allowance, Child Allowance and Infant Allowance (Annex R A 3); b) Reimbursement of education fees: maximum amounts of reimbursement (Annex R A 4.01) for the academic year 2007/2008. Related adjustments will be implemented, wherever applicable, to Paid Associates and Students. As in the past, the actual percentage increase of each salary position may vary, due to the application of a constant step value and the rounding effects. Human Resources Department Tel. 73566

  1. Activity-based DEVS modeling

    DEFF Research Database (Denmark)

    Alshareef, Abdurrahman; Sarjoughian, Hessam S.; Zarrin, Bahram

    2018-01-01

Use of model-driven approaches has been increasing to significantly benefit the process of building complex systems. Recently, an approach for specifying model behavior using UML activities has been devised to support the creation of DEVS models in a disciplined manner based on the model-driven architecture and the UML concepts. In this paper, we further this work by grounding Activity-based DEVS modeling and developing a fully-fledged modeling engine to demonstrate applicability. We also detail the relevant aspects of the created metamodel in terms of modeling and simulation. A significant number of the artifacts of the UML 2.5 activities and actions, from the vantage point of DEVS behavioral modeling, is covered in detail. Their semantics are discussed to the extent of time-accurate requirements for simulation. We characterize them in correspondence with the specification of the atomic model behavior.
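The atomic model behavior referred to above can be made concrete with a minimal sketch. The Processor below is a standard textbook DEVS example, not the record's modeling engine; the state names and service time are illustrative.

```python
# Minimal DEVS atomic model sketch: a single-server Processor with a
# state (phase), a time-advance function, and external/internal transitions.

class Processor:
    def __init__(self, service_time=2.0):
        self.service_time = service_time
        self.phase = "idle"          # discrete state
        self.sigma = float("inf")    # time until the next internal event
        self.job = None

    def time_advance(self):
        return self.sigma

    def ext_transition(self, elapsed, job):
        """External event: a job arrives from outside."""
        if self.phase == "idle":
            self.phase, self.job, self.sigma = "busy", job, self.service_time
        else:
            self.sigma -= elapsed    # still busy; the internal clock keeps running

    def int_transition(self):
        """Internal event: service completes."""
        self.phase, self.job, self.sigma = "idle", None, float("inf")

    def output(self):
        """Output function, invoked just before the internal transition."""
        return self.job


p = Processor(service_time=2.0)
p.ext_transition(0.0, "job-1")
print(p.phase, p.time_advance())   # busy 2.0
done = p.output()
p.int_transition()
print(done, p.phase)               # job-1 idle
```

A coupled simulator would schedule each atomic model by its `time_advance()` and route `output()` values between components; that layer is omitted here.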

  2. Adjustable collimator

    International Nuclear Information System (INIS)

    Carlson, R.W.; Covic, J.; Leininger, G.

    1981-01-01

    In a rotating fan beam tomographic scanner there is included an adjustable collimator and shutter assembly. The assembly includes a fan angle collimation cylinder having a plurality of different length slots through which the beam may pass for adjusting the fan angle of the beam. It also includes a beam thickness cylinder having a plurality of slots of different widths for adjusting the thickness of the beam. Further, some of the slots have filter materials mounted therein so that the operator may select from a plurality of filters. Also disclosed is a servo motor system which allows the operator to select the desired fan angle, beam thickness and filter from a remote location. An additional feature is a failsafe shutter assembly which includes a spring biased shutter cylinder mounted in the collimation cylinders. The servo motor control circuit checks several system conditions before the shutter is rendered openable. Further, the circuit cuts off the radiation if the shutter fails to open or close properly. A still further feature is a reference radiation intensity monitor which includes a tuning-fork shaped light conducting element having a scintillation crystal mounted on each tine. The monitor is placed adjacent the collimator between it and the source with the pair of crystals to either side of the fan beam

  3. Estimates of over-diagnosis of breast cancer due to population-based mammography screening in South Australia after adjustment for lead time effects.

    Science.gov (United States)

    Beckmann, Kerri; Duffy, Stephen W; Lynch, John; Hiller, Janet; Farshid, Gelareh; Roder, David

    2015-09-01

This study estimates over-diagnosis due to population-based mammography screening using a lead time adjustment approach, with lead time measures based on symptomatic cancers only, among women aged 40-84 in 1989-2009 in South Australia who were eligible for mammography screening. Numbers of observed and expected breast cancer cases were compared, after adjustment for lead time. Lead time effects were modelled using age-specific estimates of lead time (derived from interval cancer rates and predicted background incidence, using maximum likelihood methods) and screening sensitivity, projected background breast cancer incidence rates (in the absence of screening), and proportions screened, by age and calendar year. Lead time estimates were 12, 26, 43 and 53 months, for women aged 40-49, 50-59, 60-69 and 70-79 respectively. Background incidence rates were estimated to have increased by 0.9% and 1.2% per year for invasive and all breast cancer. Over-diagnosis among women aged 40-84 was estimated at 7.9% (0.1-12.0%) for invasive cases and 12.0% (5.7-15.4%) when including ductal carcinoma in-situ (DCIS). We estimated 8% over-diagnosis for invasive breast cancer and 12% inclusive of DCIS cancers due to mammography screening among women aged 40-84. These estimates may overstate the extent of over-diagnosis if the increasing prevalence of breast cancer risk factors has led to higher background incidence than projected. © The Author(s) 2015.
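The core of a lead-time-adjusted over-diagnosis estimate is a comparison of observed cases against expected cases once diagnoses merely brought forward by screening are accounted for. The sketch below uses hypothetical numbers, not the study's data, to show the arithmetic.

```python
# Illustrative lead-time adjustment. Screening advances diagnosis by the
# lead time, so part of the observed excess is borrowed from future years
# rather than over-diagnosed; only the remainder counts as over-diagnosis.

def overdiagnosis_pct(observed, background, lead_time_excess):
    """Excess cases beyond the lead-time effect, as % of expected cases."""
    expected = background + lead_time_excess
    return 100.0 * (observed - expected) / expected

# Toy example: 1120 observed cases, 1000 expected from projected background
# incidence, 40 cases explained by diagnoses advanced by the lead time.
print(round(overdiagnosis_pct(1120, 1000, 40), 1))  # -> 7.7
```

The study's actual model works per age group and calendar year, with lead times and sensitivity estimated by maximum likelihood; this collapses that to a single cell for clarity.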

  4. A structural model for stress, coping, and psychosocial adjustment: A multi-group analysis by stages of survivorship in Korean women with breast cancer.

    Science.gov (United States)

    Jang, Miyoung; Kim, Jiyoung

    2018-04-01

Prospective studies have examined factors directly affecting psychosocial adjustment during breast cancer treatment. Survivorship stage may moderate a direct effect of stress on psychosocial adjustment. This study aimed to examine relationships between stress, social support, self-efficacy, coping, and psychosocial adjustment, to construct a model of the effect pathways between those factors, and to determine whether survivorship stage moderates those effects. Six hundred people with breast cancer completed questionnaires, spanning stages of survivorship after treatment from acute to lasting (5 years or more). Stress (Perceived Stress Scale), social support (Multidimensional Scale of Perceived Social Support), self-efficacy (New General Self Efficacy Scale), coping (Ways of Coping Checklist), and psychosocial adjustment (Psychosocial Adjustment to Illness Scale-Self-Report-Korean Version) were measured. Self-efficacy significantly correlated with psychosocial adjustment in the acute survival stage (γ = -0.37), the direct effect of stress on psychosocial adjustment was greater in the acute stage (γ = -0.42), and the effect of coping on psychosocial adjustment was stronger in the lasting survival stage (β = 0.42). Survivorship stage thus moderates the factors shaping psychosocial adjustment of female breast cancer patients. Copyright © 2018 Elsevier Ltd. All rights reserved.

  5. Hardiness scales in Iranian managers: evidence of incremental validity in relationships with the five factor model and with organizational and psychological adjustment.

    Science.gov (United States)

    Ghorbani, Nima; Watson, P J

    2005-06-01

This study examined the incremental validity of Hardiness scales in a sample of Iranian managers. Along with measures of the Five Factor Model and of Organizational and Psychological Adjustment, Hardiness scales were administered to 159 male managers (M age = 39.9, SD = 7.5) who had worked in their organizations for an average of 7.9 yr. (SD = 5.4). Hardiness predicted greater Job Satisfaction, higher Organization-based Self-esteem, and perceptions of the work environment as being less stressful and constraining. Hardiness also correlated positively with Assertiveness, Emotional Stability, Extraversion, Openness to Experience, Agreeableness, and Conscientiousness and negatively with Depression, Anxiety, Perceived Stress, Chance External Control, and Powerful Others External Control. Evidence of incremental validity was obtained when the Hardiness scales supplemented the Five Factor Model in predicting organizational and psychological adjustment. These data documented the incremental validity of the Hardiness scales in a non-Western sample and thus confirmed once again that Hardiness has a relevance that extends beyond the culture in which it was developed.

  6. Scalability of a Methodology for Generating Technical Trading Rules with GAPs Based on Risk-Return Adjustment and Incremental Training

    Science.gov (United States)

    de La Cal, E. A.; Fernández, E. M.; Quiroga, R.; Villar, J. R.; Sedano, J.

In previous works a methodology was defined, based on the design of a genetic algorithm GAP and an incremental training technique adapted to learning series of stock market values. The GAP technique is a fusion of GP and GA. The GAP algorithm implements an automatic search for crisp trading rules, taking as training objectives both the optimization of the return obtained and the minimization of the assumed risk. Applying the proposed methodology, rules have been obtained for a period of eight years of the S&P500 index. The achieved adjustment of the return-risk relation has generated rules whose returns in the testing period are far superior to those obtained with the usual methodologies, and even clearly superior to Buy&Hold. This work shows that the proposed methodology is valid for different assets in a different market than in previous work.
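The dual objective of maximizing return while minimizing risk is typically folded into a single fitness score for the evolutionary search. The sketch below shows one plausible form of such a risk-adjusted fitness for a toy crisp rule; the exact objective combination, rule grammar, and parameters of the paper are not reproduced here.

```python
# Hypothetical risk-return adjusted fitness for scoring a crisp trading rule.
import statistics

def rule_returns(prices, threshold):
    """Toy crisp rule: be long the next day when 1-day momentum > threshold."""
    rets = []
    for i in range(1, len(prices) - 1):
        momentum = (prices[i] - prices[i - 1]) / prices[i - 1]
        if momentum > threshold:                 # rule fires: earn next day's return
            rets.append((prices[i + 1] - prices[i]) / prices[i])
    return rets

def fitness(prices, threshold, risk_aversion=0.5):
    rets = rule_returns(prices, threshold)
    if not rets:
        return 0.0                               # rule never trades
    mean_ret = statistics.fmean(rets)
    risk = statistics.pstdev(rets)               # assumed risk proxy: volatility
    return mean_ret - risk_aversion * risk       # reward return, penalize risk

prices = [100, 102, 101, 104, 107, 109, 112]
print(f"fitness at threshold 0.0: {fitness(prices, 0.0):.4f}")
```

A GA/GP layer would then evolve the rule parameters (here just `threshold`) to maximize this score over a training window, retraining incrementally as new data arrives.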

  7. Poly(N-isopropylacrylamide) hydrogel-based shape-adjustable polyimide films triggered by near-human-body temperature.

    Science.gov (United States)

    Huanqing Cui; Xuemin Du; Juan Wang; Tianhong Tang; Tianzhun Wu

    2016-08-01

    Hydrogel-based shape-adjustable films were successfully fabricated via grafting poly(N-isopropylacrylamide) (PNIPAM) onto one side of polyimide (PI) films. The prepared PI-g-PNIPAM films exhibited rapid, reversible, and repeatable bending/unbending property by heating to near-human-body temperature (37 °C) or cooling to 25 °C. The excellent property of PI-g-PNIPAM films resulted from a lower critical solution temperature (LCST) of PNIPAM at about 32 °C. Varying the thickness of PNIPAM hydrogel layer regulated the thermo-responsive shape bending degree and response speed of PI-g-PNIPAM films. The thermo-induced shrinkage of hydrogel layers can tune the curvature of PI films, which have potential applications in the field of wearable and implantable devices.

  8. Construction project investment control model based on instant information

    Institute of Scientific and Technical Information of China (English)

    WANG Xue-tong

    2006-01-01

Changes in construction conditions often affect project investment by causing the loss of construction work time and extending the duration. To address the difficulty of dynamic control in construction work plans, this article presents a concept of instant optimization that adjusts the operation time of each working procedure so as to minimize the change in investment. Based on this concept, a mathematical model is established and a strict mathematical justification is performed. The instant optimization model takes advantage of instant information from the construction process to complete adjustments of the construction plan in due time, thereby maximizing the cost efficiency of the project investment.

  9. Biodegradable and adjustable sol-gel glass based hybrid scaffolds from multi-armed oligomeric building blocks.

    Science.gov (United States)

    Kascholke, Christian; Hendrikx, Stephan; Flath, Tobias; Kuzmenka, Dzmitry; Dörfler, Hans-Martin; Schumann, Dirk; Gressenbuch, Mathias; Schulze, F Peter; Schulz-Siegmund, Michaela; Hacker, Michael C

    2017-11-01

Biodegradability is a crucial characteristic to improve the clinical potential of sol-gel-derived glass materials. To this end, a set of degradable organic/inorganic class II hybrids from a tetraethoxysilane (TEOS)-derived silica sol and oligovalent cross-linker oligomers containing oligo(d,l-lactide) domains was developed and characterized. A series of 18 oligomers (Mn: 1100-3200 Da) with different degrees of ethoxylation and varying length of oligoester units was established and chemical composition was determined. Applicability of an established indirect rapid prototyping method enabled fabrication of a total of 85 different hybrid scaffold formulations from 3-isocyanatopropyltriethoxysilane-functionalized macromers. In vitro degradation was analyzed over 12 months and a continuous linear weight loss (0.2-0.5 wt%/d) combined with only moderate material swelling was detected, which was controlled by oligo(lactide) content and matrix hydrophilicity. Compressive strength (2-30 MPa) and compressive modulus (44-716 MPa) were determined, and total content, oligo(ethylene oxide) content, oligo(lactide) content and molecular weight of the oligomeric cross-linkers as well as material porosity were identified as the main factors determining hybrid mechanics. Cytocompatibility was assessed by cell culture experiments with human adipose tissue-derived stem cells (hASC). Cell migration into the entire scaffold pore network was indicated and continuous proliferation over 14 days was found. ALP activity linearly increased over 2 weeks indicating osteogenic differentiation. The presented glass-based hybrid concept with precisely adjustable material properties holds promise for regenerative purposes. Adaption of degradation kinetics toward physiological relevance is still an unmet challenge of (bio-)glass engineering. We therefore present a glass-derived hybrid material with adjustable degradation. A flexible design concept based on degradable multi-armed oligomers was combined with an

  10. Capillary gas chromatographic separation of organic bases using a pH-adjusted basic water stationary phase.

    Science.gov (United States)

    Darko, Ernest; Thurbide, Kevin B

    2016-09-23

    The use of a pH-adjusted water stationary phase for analyzing organic bases in capillary gas chromatography (GC) is demonstrated. Through modifying the phase to typical values near pH 11.5, it is found that various organic bases are readily eluted and separated. Conversely, at the normal pH 7 operating level, they are not. Sodium hydroxide is found to be a much more stable base than ammonium hydroxide for altering the pH due to the higher volatility and evaporation of the latter. In the basic condition, such analytes are not ionized and are observed to produce good peak shapes even for injected masses down to about 20ng. By comparison, analyses on a conventional non-polar capillary GC column yield more peak tailing and only analyte masses of 1μg or higher are normally observed. Through carefully altering the pH, it is also found that the selectivity between analytes can be potentially further enhanced if their respective pKa values differ sufficiently. The analysis of different pharmaceutical and petroleum samples containing organic bases is demonstrated. Results indicate that this approach can potentially offer unique and beneficial selectivity in such analyses. Copyright © 2016 Elsevier B.V. All rights reserved.
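The reason a basic water phase elutes amine analytes cleanly follows from the Henderson-Hasselbalch relation: well above its pKa, a base is almost entirely un-ionized. A quick check (the pKa of 9.5 is an assumed, illustrative value for a typical organic base, not taken from the record):

```python
# Fraction of a base B that is neutral (un-ionized) at a given pH,
# from Henderson-Hasselbalch: neutral fraction = 1 / (1 + 10**(pKa - pH)).

def fraction_unionized_base(pH, pKa):
    return 1.0 / (1.0 + 10 ** (pKa - pH))

for pH in (7.0, 11.5):
    pct = 100 * fraction_unionized_base(pH, pKa=9.5)
    print(f"pH {pH}: {pct:.1f}% un-ionized")
```

At pH 7 the analyte is almost fully protonated (ionized), consistent with the poor elution reported at neutral conditions, while at pH 11.5 it is ~99% neutral.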

  11. SU-E-T-247: Multi-Leaf Collimator Model Adjustments Improve Small Field Dosimetry in VMAT Plans

    Energy Technology Data Exchange (ETDEWEB)

    Young, L; Yang, F [University of Washington, Seattle, WA (United States)

    2014-06-01

    Purpose: The Elekta beam modulator linac employs a 4-mm micro multileaf collimator (MLC) backed by a fixed jaw. Out-of-field dose discrepancies between treatment planning system (TPS) calculations and output water phantom measurements are caused by the 1-mm leaf gap required for all moving MLCs in a VMAT arc. In this study, MLC parameters are optimized to improve TPS out-of-field dose approximations. Methods: Static 2.4 cm square fields were created with a 1-mm leaf gap for MLCs that would normally park behind the jaw. Doses in the open field and leaf gap were measured with an A16 micro ion chamber and EDR2 film for comparison with corresponding point doses in the Pinnacle TPS. The MLC offset table and tip radius were adjusted until TPS point doses agreed with photon measurements. Improvements to the beam models were tested using static arcs consisting of square fields ranging from 1.6 to 14.0 cm, with 45° collimator rotation, and 1-mm leaf gap to replicate VMAT conditions. Gamma values for the 3-mm distance, 3% dose difference criteria were evaluated using standard QA procedures with a cylindrical detector array. Results: The best agreement in point doses within the leaf gap and open field was achieved by offsetting the default rounded leaf end table by 0.1 cm and adjusting the leaf tip radius to 13 cm. Improvements in TPS models for 6 and 10 MV photon beams were more significant for smaller field sizes 3.6 cm or less where the initial gamma factors progressively increased as field size decreased, i.e. for a 1.6cm field size, the Gamma increased from 56.1% to 98.8%. Conclusion: The MLC optimization techniques developed will achieve greater dosimetric accuracy in small field VMAT treatment plans for fixed jaw linear accelerators. Accurate predictions of dose to organs at risk may reduce adverse effects of radiotherapy.
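The 3-mm distance / 3% dose-difference gamma evaluation used above can be sketched in one dimension as follows. This is a simplified illustration (global normalization, no interpolation between points, invented dose values), not the clinical QA software's implementation.

```python
# 1-D gamma index sketch for the 3%/3 mm criterion: for each reference point,
# take the minimum combined distance-and-dose discrepancy over all evaluated
# points; the point passes if gamma <= 1.
import math

def gamma_index(ref_pos, ref_dose, eval_pos, eval_dose, dta=3.0, dd=0.03):
    dmax = max(eval_dose)  # global dose normalization
    return min(
        math.sqrt(((p - ref_pos) / dta) ** 2
                  + ((d - ref_dose) / (dd * dmax)) ** 2)
        for p, d in zip(eval_pos, eval_dose)
    )

positions = [0.0, 1.0, 2.0, 3.0]        # mm (illustrative profile)
measured  = [100.0, 98.0, 60.0, 20.0]   # reference doses (e.g. detector array)
planned   = [100.0, 97.0, 62.0, 21.0]   # TPS-calculated doses

passing = [gamma_index(p, d, positions, planned) <= 1.0
           for p, d in zip(positions, measured)]
print(f"{100 * sum(passing) / len(passing):.0f}% of points pass")
```

Real QA uses 2-D/3-D dose grids and interpolated search, but the pass-rate figure of merit (e.g. the 56.1% vs. 98.8% quoted above) is this same statistic.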

  12. Evaluating a novel tiered scarcity adjusted water budget and pricing structure using a holistic systems modelling approach.

    Science.gov (United States)

    Sahin, Oz; Bertone, Edoardo; Beal, Cara; Stewart, Rodney A

    2018-06-01

    Population growth, coupled with declining water availability and changes in climatic conditions underline the need for sustainable and responsive water management instruments. Supply augmentation and demand management are the two main strategies used by water utilities. Water demand management has long been acknowledged as a least-cost strategy to maintain water security. This can be achieved in a variety of ways, including: i) educating consumers to limit their water use; ii) imposing restrictions/penalties; iii) using smart and/or efficient technologies; and iv) pricing mechanisms. Changing water consumption behaviours through pricing or restrictions is challenging as it introduces more social and political issues into the already complex water resources management process. This paper employs a participatory systems modelling approach for: (1) evaluating various forms of a proposed tiered scarcity adjusted water budget and pricing structure, and (2) comparing scenario outcomes against the traditional restriction policy regime. System dynamics modelling was applied since it can explicitly account for the feedbacks, interdependencies, and non-linear relations that inherently characterise the water tariff (price)-demand-revenue system. A combination of empirical water use data, billing data and customer feedback on future projected water bills facilitated the assessment of the suitability and likelihood of the adoption of scarcity-driven tariff options for a medium-sized city within Queensland, Australia. Results showed that the tiered scarcity adjusted water budget and pricing structure presented was preferable to restrictions since it could maintain water security more equitably with the lowest overall long-run marginal cost. Copyright © 2018 Elsevier Ltd. All rights reserved.
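The tariff-demand-revenue feedback that motivates the system dynamics approach can be sketched as a small stock-and-flow loop. All parameters below are illustrative, not the paper's calibrated model: scarcity (low storage) raises the tariff, the tariff suppresses demand through a price elasticity, and demand feeds back into the storage stock.

```python
# Minimal system-dynamics-style loop: storage stock -> scarcity tariff ->
# elastic demand response -> revenue, with the stock updated each step.

def simulate(years=10, base_demand=100.0, base_price=2.0, elasticity=-0.3,
             storage=0.9, inflow=95.0):
    results = []
    for _ in range(years):
        scarcity_multiplier = 1.0 + max(0.0, 1.0 - storage)  # low storage -> higher tariff
        price = base_price * scarcity_multiplier
        demand = base_demand * (price / base_price) ** elasticity  # constant elasticity
        revenue = price * demand
        storage = min(1.0, storage + (inflow - demand) / 1000.0)   # stock update
        results.append((round(price, 2), round(demand, 1), round(revenue, 1)))
    return results

for price, demand, revenue in simulate()[:3]:
    print(price, demand, revenue)
```

Software such as Vensim or Stella expresses the same structure graphically; the point of the loop is that price, demand, and revenue cannot be analyzed independently.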

  13. Event-Based Conceptual Modeling

    DEFF Research Database (Denmark)

    Bækgaard, Lars

The paper demonstrates that a wide variety of event-based modeling approaches are based on special cases of the same general event concept, and that the general event concept can be used to unify the otherwise unrelated fields of information modeling and process modeling. A set of event-based modeling approaches are analyzed and the results are used to formulate a general event concept that can be used for unifying the seemingly unrelated event concepts. Events are characterized as short-duration processes that have participants, consequences, and properties, and that may be modeled in terms of information structures. The general event concept can be used to guide systems analysis and design and to improve modeling approaches.

  14. Adjustment of regional climate model output for modeling the climatic mass balance of all glaciers on Svalbard.

    NARCIS (Netherlands)

    Möller, M.; Obleitner, F.; Reijmer, C.H.; Pohjola, V.A.; Glowacki, P.; Kohler, J.

    2016-01-01

    Large-scale modeling of glacier mass balance relies often on the output from regional climate models (RCMs). However, the limited accuracy and spatial resolution of RCM output pose limitations on mass balance simulations at subregional or local scales. Moreover, RCM output is still rarely available

  15. HMM-based Trust Model

    DEFF Research Database (Denmark)

    ElSalamouny, Ehab; Nielsen, Mogens; Sassone, Vladimiro

    2010-01-01

Probabilistic trust has been adopted as an approach to taking security sensitive decisions in modern global computing environments. Existing probabilistic trust frameworks either assume fixed behaviour for the principals or incorporate the notion of ‘decay' as an ad hoc approach to cope with their dynamic behaviour. Using Hidden Markov Models (HMMs) for both modelling and approximating the behaviours of principals, we introduce the HMM-based trust model as a new approach to evaluating trust in systems exhibiting dynamic behaviour. This model avoids the fixed behaviour assumption which is considered the major limitation of the existing Beta trust model. We show the consistency of the HMM-based trust model and contrast it against the well known Beta trust model with the decay principle in terms of estimation precision.
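The idea can be illustrated with a tiny two-state HMM filter: hidden behaviour states with transition dynamics replace the fixed-behaviour assumption of Beta-style models. All probabilities below are invented for illustration, not the paper's estimates.

```python
# HMM-based trust sketch: filter the hidden behaviour state of a principal
# from an interaction history, then predict the next interaction's outcome.

def forward(obs, trans, emit, init):
    """Forward algorithm: normalized filtered distribution over 2 hidden states."""
    alpha = [init[s] * emit[s][obs[0]] for s in range(2)]
    for o in obs[1:]:
        alpha = [sum(alpha[s] * trans[s][t] for s in range(2)) * emit[t][o]
                 for t in range(2)]
    total = sum(alpha)
    return [a / total for a in alpha]

# states: 0 = cooperative, 1 = defective; observations: 1 = satisfactory
trans = [[0.9, 0.1], [0.2, 0.8]]               # behaviour may drift over time
emit  = [{1: 0.9, 0: 0.1}, {1: 0.2, 0: 0.8}]
state = forward([1, 1, 0, 0, 0], trans, emit, [0.5, 0.5])

# predicted probability that the next interaction is satisfactory
p_next = sum(state[s] * sum(trans[s][t] * emit[t][1] for t in range(2))
             for s in range(2))
print(round(p_next, 3))
```

After a run of failures the filter shifts mass onto the defective state, so the trust estimate drops without needing an ad hoc decay factor.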

  16. A Pre-Detection Based Anti-Collision Algorithm with Adjustable Slot Size Scheme for Tag Identification

    Directory of Open Access Journals (Sweden)

    Chiu-Kuo LIANG

    2015-06-01

One research area in RFID systems is tag anti-collision protocols: how to reduce identification time for a given number of tags in the field of an RFID reader. There are two types of tag anti-collision protocols for RFID systems: tree based algorithms and slotted aloha based algorithms. Many anti-collision algorithms have been proposed in recent years, especially tree based protocols. However, there are still challenges in enhancing system throughput and stability, because the underlying technologies face limitations in system performance when network density is high. In particular, tree based protocols suffer from long identification delays. Recently, a Hybrid Hyper Query Tree (H2QT) protocol, a tree based approach, was proposed with the aim of speeding up tag identification in large scale RFID systems. The main idea of H2QT is to track the tag response and try to predict the distribution of tag IDs in order to reduce collisions. In this paper, we propose a pre-detection tree based algorithm, called the Adaptive Pre-Detection Broadcasting Query Tree algorithm (APDBQT), to avoid such unnecessary queries. The proposed APDBQT protocol reduces not only collisions but idle cycles as well, by using a pre-detection scheme and an adjustable slot size mechanism. Simulation results show that the proposed technique provides superior performance in high density environments. APDBQT is effective in terms of increasing system throughput and minimizing identification delay.
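The collision and idle cycles that pre-detection schemes try to avoid are easy to see in a plain binary query tree, the baseline these protocols improve on. The sketch below simulates a reader querying 4-bit tag IDs (invented IDs; APDBQT's pre-detection and slot sizing are not modeled).

```python
# Plain binary query-tree reader: the reader broadcasts a prefix; tags whose
# ID starts with it reply. Count success, collision, and idle query cycles.

def query_tree(tags):
    success = collision = idle = 0
    stack = [""]                        # pending query prefixes
    while stack:
        prefix = stack.pop()
        matching = [t for t in tags if t.startswith(prefix)]
        if len(matching) == 0:
            idle += 1                   # idle cycle: no tag answers
        elif len(matching) == 1:
            success += 1                # exactly one reply: tag identified
        else:
            collision += 1              # collision: split the prefix and retry
            stack += [prefix + "0", prefix + "1"]
    return success, collision, idle

tags = ["0010", "0111", "1000", "1011"]
s, c, i = query_tree(tags)
print(f"identified={s} collisions={c} idle={i}")  # identified=4 collisions=4 idle=1
```

Identifying 4 tags here costs 9 query cycles; pre-detection schemes aim to skip the idle and some collision cycles by probing the ID distribution first.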

  17. A system dynamics model of China's electric power structure adjustment with constraints of PM10 emission reduction.

    Science.gov (United States)

    Guo, Xiaopeng; Ren, Dongfang; Guo, Xiaodan

    2018-06-01

Recently, the Chinese state environmental protection administration has brought out several PM10 reduction policies to control coal consumption strictly and promote the adjustment of the power structure. Under this new policy environment, a suitable analysis method is required to simulate the upcoming major shift in China's electric power structure. First, a complete system dynamics model is built to simulate China's evolution path of power structure under PM10 reduction constraints, considering both technical and economic factors. Second, scenario analyses are conducted under different clean-power capacity growth rates to seek applicable policy guidance for PM10 reduction. The results suggest the following conclusions. (1) The proportion of thermal power installed capacity will decrease to 67% in 2018 at a falling speed, with an accelerated decline in 2023-2032. (2) The system dynamics model can effectively simulate the implementation of the policy; for example, the proportion of coal consumption in the forecast model is 63.3% (an accuracy rate of 95.2%), below the policy target of 65% for 2017. (3) China should promote clean power generation such as nuclear power to meet the PM10 reduction target.

  18. Modeling Guru: Knowledge Base for NASA Modelers

    Science.gov (United States)

    Seablom, M. S.; Wojcik, G. S.; van Aartsen, B. H.

    2009-05-01

Modeling Guru is an on-line knowledge-sharing resource for anyone involved with or interested in NASA's scientific models or High End Computing (HEC) systems. Developed and maintained by NASA's Software Integration and Visualization Office (SIVO) and the NASA Center for Computational Sciences (NCCS), Modeling Guru's combined forums and knowledge base for research and collaboration is becoming a repository for the accumulated expertise of NASA's scientific modeling and HEC communities. All NASA modelers and associates are encouraged to participate and provide knowledge about the models and systems so that other users may benefit from their experience. Modeling Guru is divided into a hierarchy of communities, each with its own set of forums and knowledge base documents. Current modeling communities include those for space science, land and atmospheric dynamics, atmospheric chemistry, and oceanography. In addition, there are communities focused on NCCS systems, HEC tools and libraries, and programming and scripting languages. Anyone may view most of the content on Modeling Guru (available at http://modelingguru.nasa.gov/), but you must log in to post messages and subscribe to community postings. The site offers a full range of "Web 2.0" features, including discussion forums, "wiki" document generation, document uploading, RSS feeds, search tools, blogs, email notification, and "breadcrumb" links. A discussion (a.k.a. forum "thread") is used to post comments, solicit feedback, or ask questions. If marked as a question, SIVO will monitor the thread, and normally respond within a day. Discussions can include embedded images, tables, and formatting through the use of the Rich Text Editor. Also, the user can add "Tags" to their thread to facilitate later searches. The "knowledge base" comprises documents that are used to capture and share expertise with others. The default "wiki" document lets users edit within the browser so others can easily collaborate on the

  19. Modelling the potential impact of a sugar-sweetened beverage tax on stroke mortality, costs and health-adjusted life years in South Africa

    Directory of Open Access Journals (Sweden)

    Mercy Manyema

    2016-05-01

Background: Stroke poses a growing human and economic burden in South Africa. Excess sugar consumption, especially from sugar-sweetened beverages (SSBs), has been associated with increased obesity and stroke risk. Research shows that price increases for SSBs can influence consumption, and modelling evidence suggests that taxing SSBs has the potential to reduce obesity and related diseases. This study estimates the potential impact of an SSB tax on stroke-related mortality, costs and health-adjusted life years in South Africa. Methods: A proportional multi-state life table-based model was constructed in Microsoft Excel (2010). We used consumption data from the 2012 South African National Health and Nutrition Examination Survey, previously published own- and cross-price elasticities of SSBs, and energy balance equations to estimate changes in daily energy intake and BMI arising from increased SSB prices. Stroke relative risk and prevalent years lived with disability estimates from the Global Burden of Disease Study, together with modelled disease epidemiology estimates from a previous study, were used to estimate the effect of the BMI changes on the burden of stroke. Results: Our model predicts that an SSB tax may avert approximately 72 000 deaths, 550 000 stroke-related health-adjusted life years and over ZAR 5 billion (USD 400 million; range USD 296-576 million) in health care costs over 20 years. Over 20 years, the number of incident stroke cases may be reduced by approximately 85 000 and prevalent cases by about 13 000. Conclusions: Fiscal policy has the potential, as part of a multi-faceted approach, to mitigate the growing burden of stroke in South Africa and contribute to the achievement of the target set by the Department of Health to reduce relative premature mortality (under 60 years) from non-communicable diseases by the year 2020.
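The first step of such a model — translating a price rise into a consumption and energy-intake change via a price elasticity — can be sketched as follows. The elasticity, baseline intake, and energy density below are assumed illustrative values, not the study's estimates.

```python
# Back-of-envelope price-elasticity step of an SSB tax model.

def new_consumption(daily_ml, price_rise, own_elasticity=-1.2):
    """Constant-elasticity response of SSB intake to a proportional price rise."""
    return daily_ml * (1 + price_rise) ** own_elasticity

KJ_PER_ML = 1.8          # assumed energy density of a typical SSB
before = 300.0           # ml/day baseline consumption (assumed)
after = new_consumption(before, price_rise=0.20)   # a 20% tax fully passed through

energy_cut = (before - after) * KJ_PER_ML
print(f"{after:.0f} ml/day, ~{energy_cut:.0f} kJ/day less")
```

The full model then pushes the energy change through energy-balance equations to BMI, and BMI through relative risks into the multi-state life table; those downstream steps are omitted here.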

  20. Structure-Based Turbulence Model

    National Research Council Canada - National Science Library

    Reynolds, W

    2000-01-01

... Maire carried out this work as part of his PhD research. During the award period we began to explore ways to simplify the structure-based modeling so that it could be used in repetitive engineering calculations...

  1. The impact of highway base-saturation flow rate adjustment on Kuwait's transport and environmental parameters estimation.

    Science.gov (United States)

    AlRukaibi, Fahad; AlKheder, Sharaf; Al-Rukaibi, Duaij; Al-Burait, Abdul-Aziz

    2018-03-23

Traditional transportation system management and operation has mainly focused on improving traffic mobility and safety without imposing any environmental concerns. Transportation and environmental issues are interrelated and affected by the same parameters, especially at signalized intersections. Additionally, traffic congestion at signalized intersections is a major contributor to environmental problems related to vehicle emissions, fuel consumption, and delay. Therefore, signalized intersection design and operation is an important means of minimizing the impact on the environment. The design and operation of signalized intersections are highly dependent on the base saturation flow rate (BSFR). The Highway Capacity Manual (HCM) uses a base saturation flow rate of 1900 passenger cars/h/lane for areas with a population of 250,000 or more and a value of 1750 passenger cars/h/lane for less populated areas. The base saturation flow rate value in the HCM is derived from field data collected in developed countries. The value adopted in Kuwait is 1800 passenger cars/h/lane, which is used in this analysis as the basis for comparison. Because driver behavior in Kuwait differs from that in developed countries, an adjustment was made to the base saturation flow rate to represent Kuwait's traffic and environmental conditions. The reduction in fuel consumption and vehicle emissions after modifying the base saturation flow rate (BSFR increased by 12.45%) was about 34% on average. Direct field measurements of the saturation flow rate were used, while the air quality mobile lab was used to calculate emission rates. Copyright © 2018 Elsevier B.V. All rights reserved.
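The BSFR matters because it feeds directly into lane capacity. A simplified HCM-style calculation shows the effect of the 12.45% adjustment quoted above; a single lumped adjustment factor and an assumed green ratio stand in for the several factors the HCM actually applies.

```python
# HCM-style lane capacity sketch:
# capacity (veh/h/lane) = adjusted saturation flow x effective green ratio.

def lane_capacity(bsfr, green_ratio, f_adj=1.0):
    return bsfr * f_adj * green_ratio

base = lane_capacity(1800, green_ratio=0.45)               # Kuwait's adopted BSFR
adjusted = lane_capacity(1800 * 1.1245, green_ratio=0.45)  # +12.45% field-measured
print(f"{base:.0f} -> {adjusted:.0f} veh/h/lane")          # 810 -> 911 veh/h/lane
```

A higher measured saturation flow means signal timing plans based on the default BSFR overestimate congestion, and with it delay, fuel use, and emissions.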

  2. Event-Based Conceptual Modeling

    DEFF Research Database (Denmark)

    Bækgaard, Lars

    2009-01-01

The purpose of the paper is to obtain insight into and provide practical advice for event-based conceptual modeling. We analyze a set of event concepts and use the results to formulate a conceptual event model that is used to identify guidelines for creation of dynamic process models and static information models. We characterize events as short-duration processes that have participants, consequences, and properties, and that may be modeled in terms of information structures. The conceptual event model is used to characterize a variety of event concepts and it is used to illustrate how events can be used to integrate dynamic modeling of processes and static modeling of information structures. The results are unique in the sense that no other general event concept has been used to unify a similar broad variety of seemingly incompatible event concepts. The general event concept can be used...

  3. Computer Based Modelling and Simulation

    Indian Academy of Sciences (India)

    Resonance – Journal of Science Education; Volume 6, Issue 3. Computer Based Modelling and Simulation – Modelling Deterministic Systems. N K Srinivasan. General Article, March 2001, pp. 46-54.

  4. Introducing risk adjustment and free health plan choice in employer-based health insurance: Evidence from Germany.

    Science.gov (United States)

    Pilny, Adam; Wübker, Ansgar; Ziebarth, Nicolas R

    2017-12-01

    To equalize differences in health plan premiums due to differences in risk pools, the German legislature introduced a simple Risk Adjustment Scheme (RAS) based on age, gender and disability status in 1994. In addition, effective 1996, consumers gained the freedom to choose among hundreds of existing health plans, across employers and state borders. This paper (a) estimates RAS pass-through rates on premiums, financial reserves, and expenditures and assesses the overall RAS impact on market price dispersion. Moreover, it (b) characterizes health plan switchers and investigates their annual and cumulative switching rates over time. Our main findings are based on representative enrollee panel data linked to administrative RAS and health plan data. We show that sickness funds with bad risk pools and high pre-RAS premiums lowered their total premiums by 42 cents per additional euro allocated by the RAS. Consequently, post-RAS, health plan prices converged, but not fully. Because switchers are more likely to be white collar, young and healthy, the new consumer choice resulted in more risk segregation, and the amount of money redistributed by the RAS increased over time. Copyright © 2017 Elsevier B.V. All rights reserved.
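The headline pass-through figure above is, in essence, a regression slope: premium changes regressed on RAS allocation changes. A minimal sketch with invented, noise-free data chosen to reproduce a slope of about -0.42 (the paper's actual estimation is far richer):

```python
# Schematic of a pass-through estimate: regress the change in a fund's premium
# on the change in its per-enrollee RAS allocation; the slope is the
# pass-through rate. All data points below are made up for illustration.

def ols_slope(x, y):
    """Slope of a simple least-squares regression of y on x."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    var = sum((xi - mx) ** 2 for xi in x)
    return cov / var

ras_allocation_change = [0.0, 1.0, 2.0, 3.0, 4.0]       # euros per enrollee (hypothetical)
premium_change = [0.0, -0.42, -0.84, -1.26, -1.68]      # euros (hypothetical, noise-free)
print(round(ols_slope(ras_allocation_change, premium_change), 2))   # → -0.42
```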

  5. Validation of CLIC Re-Adjustment System Based on Eccentric Cam Movers One Degree of Freedom Mock-Up

    CERN Document Server

    Kemppinen, J; Lackner, F

    2011-01-01

    Compact Linear Collider (CLIC) is a 48 km long linear accelerator currently studied at CERN. It is a high luminosity electron-positron collider with an energy range of 0.5-3 TeV. CLIC is based on a two-beam technology in which a high current drive beam transfers RF power to the main beam accelerating structures. The main beam is steered with quadrupole magnets. To reach CLIC target luminosity, the main beam quadrupoles have to be actively pre-aligned within 17 µm in 5 degrees of freedom and actively stabilised at 1 nm in vertical above 1 Hz. To reach the pre-alignment requirement as well as the rigidity required by nano-stabilisation, a system based on eccentric cam movers is proposed for the re-adjustment of the main beam quadrupoles. Validation of the technique to the stringent CLIC requirements was started with tests in one degree of freedom on an eccentric cam mover. This paper describes the dedicated mock-up as well as the tests and measurements carried out with it. Finally, the test results are present...

  6. Spectrally adjustable quasi-monochromatic radiance source based on LEDs and its application for measuring spectral responsivity of a luminance meter

    International Nuclear Information System (INIS)

    Hirvonen, Juha-Matti; Poikonen, Tuomas; Vaskuri, Anna; Kärhä, Petri; Ikonen, Erkki

    2013-01-01

    A spectrally adjustable radiance source based on light-emitting diodes (LEDs) has been constructed for spectral responsivity measurements of radiance and luminance meters. A 300 mm integrating sphere source with adjustable output port is illuminated using 30 thermally stabilized narrow-band LEDs covering the visible wavelength range of 380–780 nm. The functionality of the measurement setup is demonstrated by measuring the relative spectral responsivities of a luminance meter and a photometer head with cosine-corrected input optics. (paper)

  7. Identifying the contents of a type 1 diabetes outpatient care program based on the self-adjustment of insulin using the Delphi method.

    Science.gov (United States)

    Kubota, Mutsuko; Shindo, Yukari; Kawaharada, Mariko

    2014-10-01

    The objective of this study was to identify the items necessary for an outpatient care program based on the self-adjustment of insulin for type 1 diabetes patients. Two surveys based on the Delphi method were conducted; the participants were 41 certified diabetes nurses in Japan. An outpatient care program based on the self-adjustment of insulin was developed based on pertinent published work and expert opinions. The questionnaire developed from this care program contained a total of 87 items, covering matters such as the establishment of prerequisites and a cooperative relationship, the basics of blood glucose pattern management, learning and practice sessions for the self-adjustment of insulin, the implementation of the self-adjustment of insulin, and feedback. Consensus was defined as approval of an item by at least 70% of participants. Participants agreed on all of the items in the first survey. Four new items were added, for a total of 91 items in the second survey, and participants agreed on the inclusion of 84 of them. The items necessary for a type 1 diabetes outpatient care program based on the self-adjustment of insulin were thus selected. The care program received fairly strong approval from certified diabetes nurses; however, it will need to be further evaluated in conjunction with intervention studies in the future. © 2014 The Authors. Japan Journal of Nursing Science © 2014 Japan Academy of Nursing Science.

  8. Model-based machine learning.

    Science.gov (United States)

    Bishop, Christopher M

    2013-02-13

    Several decades of research in the field of machine learning have resulted in a multitude of different algorithms for solving a broad range of problems. To tackle a new application, a researcher typically tries to map their problem onto one of these existing methods, often influenced by their familiarity with specific algorithms and by the availability of corresponding software implementations. In this study, we describe an alternative methodology for applying machine learning, in which a bespoke solution is formulated for each new application. The solution is expressed through a compact modelling language, and the corresponding custom machine learning code is then generated automatically. This model-based approach offers several major advantages, including the opportunity to create highly tailored models for specific scenarios, as well as rapid prototyping and comparison of a range of alternative models. Furthermore, newcomers to the field of machine learning do not have to learn about the huge range of traditional methods, but instead can focus their attention on understanding a single modelling environment. In this study, we show how probabilistic graphical models, coupled with efficient inference algorithms, provide a very flexible foundation for model-based machine learning, and we outline a large-scale commercial application of this framework involving tens of millions of users. We also describe the concept of probabilistic programming as a powerful software environment for model-based machine learning, and we discuss a specific probabilistic programming language called Infer.NET, which has been widely used in practical applications.

  9. External Validation of a Case-Mix Adjustment Model for the Standardized Reporting of 30-Day Stroke Mortality Rates in China.

    Directory of Open Access Journals (Sweden)

    Ping Yu

    Full Text Available A case-mix adjustment model has been developed and externally validated, demonstrating promise. However, the model has not been thoroughly tested among populations in China. In our study, we evaluated the performance of the model in Chinese patients with acute stroke. The case-mix adjustment model A includes items on age, presence of atrial fibrillation on admission, National Institutes of Health Stroke Severity Scale (NIHSS) score on admission, and stroke type. Model B is similar to model A but includes only the consciousness component of the NIHSS score. Both models A and B were evaluated to predict 30-day mortality rates in 13,948 patients with acute stroke from the China National Stroke Registry. The discrimination of the models was quantified by the c-statistic. Calibration was assessed using Pearson's correlation coefficient. The c-statistic of model A in our external validation cohort was 0.80 (95% confidence interval, 0.79-0.82), and the c-statistic of model B was 0.82 (95% confidence interval, 0.81-0.84). Excellent calibration was reported in the two models with Pearson's correlation coefficient (0.892 for model A, p<0.001; 0.927 for model B, p = 0.008). The case-mix adjustment model could be used to effectively predict 30-day mortality rates in Chinese patients with acute stroke.
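A c-statistic like the 0.80 and 0.82 reported above is the probability that a randomly chosen patient who died was assigned a higher predicted risk than a randomly chosen survivor. A minimal pairwise implementation (ties count one half), with made-up predictions:

```python
# Pairwise c-statistic (area under the ROC curve) for a risk-adjustment model.
# The risks and outcomes below are invented for illustration only.

def c_statistic(risks, outcomes):
    """AUC via pairwise comparison of event vs non-event predicted risks."""
    events = [r for r, y in zip(risks, outcomes) if y == 1]
    nonevents = [r for r, y in zip(risks, outcomes) if y == 0]
    wins = sum(1.0 if e > n else 0.5 if e == n else 0.0
               for e in events for n in nonevents)
    return wins / (len(events) * len(nonevents))

risks = [0.9, 0.8, 0.7, 0.6, 0.4, 0.3, 0.2, 0.1]   # hypothetical 30-day risks
died  = [1,   1,   0,   1,   0,   0,   1,   0]      # 1 = died within 30 days
print(c_statistic(risks, died))   # → 0.75
```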

  10. A class frequency mixture model that adjusts for site-specific amino acid frequencies and improves inference of protein phylogeny

    Directory of Open Access Journals (Sweden)

    Li Karen

    2008-12-01

    Full Text Available Abstract Background Widely used substitution models for proteins, such as the Jones-Taylor-Thornton (JTT) or Whelan and Goldman (WAG) models, are based on empirical amino acid interchange matrices estimated from databases of protein alignments that incorporate the average amino acid frequencies of the data set under examination (e.g. JTT + F). Variation in the evolutionary process between sites is typically modelled by a rates-across-sites distribution such as the gamma (Γ) distribution. However, sites in proteins also vary in the kinds of amino acid interchanges that are favoured, a feature that is ignored by standard empirical substitution matrices. Here we examine the degree to which the pattern of evolution at sites differs from that expected based on empirical amino acid substitution models and evaluate the impact of these deviations on phylogenetic estimation. Results We analyzed 21 large protein alignments with two statistical tests designed to detect deviation of site-specific amino acid distributions from data simulated under the standard empirical substitution model: JTT + F + Γ. We found that the number of states at a given site is, on average, smaller and the frequencies of these states are less uniform than expected based on a JTT + F + Γ substitution model. With a four-taxon example, we show that phylogenetic estimation under the JTT + F + Γ model is seriously biased by a long-branch attraction artefact if the data are simulated under a model utilizing the observed site-specific amino acid frequencies from an alignment. Principal components analyses indicate the existence of at least four major site-specific frequency classes in these 21 protein alignments. Using a mixture model with these four separate classes of site-specific state frequencies plus a fifth class of global frequencies (the JTT + cF + Γ model), significant improvements in model fit for real data sets can be achieved. This simple mixture model also reduces the long

  11. A Measurement Study of BLE iBeacon and Geometric Adjustment Scheme for Indoor Location-Based Mobile Applications

    Directory of Open Access Journals (Sweden)

    Jeongyeup Paek

    2016-01-01

    Full Text Available Bluetooth Low Energy (BLE) and iBeacons have recently gained large interest for enabling various proximity-based application services. Given the ubiquitously deployed nature of Bluetooth devices, including mobile smartphones, using BLE and iBeacon technologies seemed a promising way forward. This work started off with the belief that this was true: iBeacons could provide the accuracy in proximity and distance estimation needed to enable and simplify the development of many previously difficult applications. However, our empirical studies with three different iBeacon devices from various vendors and two types of smartphone platforms prove that this is not the case. Signal strength readings vary significantly over different iBeacon vendors, mobile platforms, environmental or deployment factors, and usage scenarios. This variability in signal strength naturally complicates the process of extracting an accurate location/proximity estimate in real environments. Our lessons on the limitations of the iBeacon technique led us to design a simple class-attendance-checking application that performs simple geometric adjustments to compensate for the natural variations in beacon signal strength readings. We believe that the negative observations made in this work can provide future researchers with a reference for how much performance to expect from iBeacon devices as they enter their system design phases.
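The signal-strength variability discussed above is usually handled with a log-distance path-loss model, RSSI(d) = RSSI(1 m) - 10·n·log10(d), which can be inverted to estimate distance. The calibration constants below are hypothetical, and the path-loss exponent n itself drifts with environment, which is precisely the difficulty the paper documents:

```python
# Invert the log-distance path-loss model to estimate beacon distance from a
# smoothed RSSI reading. The -59 dBm reference and n = 2 are assumed values;
# real deployments must calibrate both per beacon and per environment.
import math

def estimate_distance(rssi_dbm, rssi_at_1m_dbm=-59.0, path_loss_exponent=2.0):
    """Distance in metres implied by the log-distance model."""
    return 10.0 ** ((rssi_at_1m_dbm - rssi_dbm) / (10.0 * path_loss_exponent))

print(round(estimate_distance(-59.0), 2))   # at the 1 m reference → 1.0
print(round(estimate_distance(-79.0), 2))   # 20 dB weaker → 10.0 with n = 2
```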

  12. External Validation of a Case-Mix Adjustment Model for the Standardized Reporting of 30-Day Stroke Mortality Rates in China.

    Science.gov (United States)

    Yu, Ping; Pan, Yuesong; Wang, Yongjun; Wang, Xianwei; Liu, Liping; Ji, Ruijun; Meng, Xia; Jing, Jing; Tong, Xu; Guo, Li; Wang, Yilong

    2016-01-01

    A case-mix adjustment model has been developed and externally validated, demonstrating promise. However, the model has not been thoroughly tested among populations in China. In our study, we evaluated the performance of the model in Chinese patients with acute stroke. The case-mix adjustment model A includes items on age, presence of atrial fibrillation on admission, National Institutes of Health Stroke Severity Scale (NIHSS) score on admission, and stroke type. Model B is similar to model A but includes only the consciousness component of the NIHSS score. Both models A and B were evaluated to predict 30-day mortality rates in 13,948 patients with acute stroke from the China National Stroke Registry. The discrimination of the models was quantified by the c-statistic. Calibration was assessed using Pearson's correlation coefficient. The c-statistic of model A in our external validation cohort was 0.80 (95% confidence interval, 0.79-0.82), and the c-statistic of model B was 0.82 (95% confidence interval, 0.81-0.84). Excellent calibration was reported in the two models with Pearson's correlation coefficient (0.892 for model A, p<0.001; 0.927 for model B, p = 0.008). The case-mix adjustment model could be used to effectively predict 30-day mortality rates in Chinese patients with acute stroke.

  13. A Software Tool for Estimation of Burden of Infectious Diseases in Europe Using Incidence-Based Disability Adjusted Life Years.

    Science.gov (United States)

    Colzani, Edoardo; Cassini, Alessandro; Lewandowski, Daniel; Mangen, Marie-Josee J; Plass, Dietrich; McDonald, Scott A; van Lier, Alies; Haagsma, Juanita A; Maringhini, Guido; Pini, Alessandro; Kramarz, Piotr; Kretzschmar, Mirjam E

    2017-01-01

    The burden of disease framework facilitates the assessment of the health impact of diseases through the use of summary measures of population health such as Disability-Adjusted Life Years (DALYs). However, calculating, interpreting and communicating the results of studies using this methodology poses a challenge. The aim of the Burden of Communicable Disease in Europe (BCoDE) project is to summarize the impact of communicable disease in the European Union and European Economic Area Member States (EU/EEA MS). To meet this goal, a user-friendly software tool (BCoDE toolkit), was developed. This stand-alone application, written in C++, is open-access and freely available for download from the website of the European Centre for Disease Prevention and Control (ECDC). With the BCoDE toolkit, one can calculate DALYs by simply entering the age group- and sex-specific number of cases for one or more of selected sets of 32 communicable diseases (CDs) and 6 healthcare associated infections (HAIs). Disease progression models (i.e., outcome trees) for these communicable diseases were created following a thorough literature review of their disease progression pathway. The BCoDE toolkit runs Monte Carlo simulations of the input parameters and provides disease-specific results, including 95% uncertainty intervals, and permits comparisons between the different disease models entered. Results can be displayed as mean and median overall DALYs, DALYs per 100,000 population, and DALYs related to mortality vs. disability. Visualization options summarize complex epidemiological data, with the goal of improving communication and knowledge transfer for decision-making.
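The core computation the BCoDE toolkit automates is DALY = YLL + YLD. A minimal undiscounted, unweighted sketch (the toolkit additionally chains outcome-tree health states and runs Monte Carlo uncertainty analysis); all numbers below are invented:

```python
# Incidence-based DALY arithmetic: years of life lost to premature mortality
# plus years lived with disability. Inputs here are illustrative only.

def yll(deaths: float, life_expectancy_at_death: float) -> float:
    """Years of life lost: deaths times residual life expectancy."""
    return deaths * life_expectancy_at_death

def yld(incident_cases: float, disability_weight: float, duration_years: float) -> float:
    """Years lived with disability: cases times weight times duration."""
    return incident_cases * disability_weight * duration_years

total_daly = yll(deaths=12, life_expectancy_at_death=30.0) \
           + yld(incident_cases=1000, disability_weight=0.051, duration_years=0.5)
print(total_daly)   # 360.0 + 25.5 = 385.5
```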

  14. A Software Tool for Estimation of Burden of Infectious Diseases in Europe Using Incidence-Based Disability Adjusted Life Years.

    Directory of Open Access Journals (Sweden)

    Edoardo Colzani

    Full Text Available The burden of disease framework facilitates the assessment of the health impact of diseases through the use of summary measures of population health such as Disability-Adjusted Life Years (DALYs. However, calculating, interpreting and communicating the results of studies using this methodology poses a challenge. The aim of the Burden of Communicable Disease in Europe (BCoDE project is to summarize the impact of communicable disease in the European Union and European Economic Area Member States (EU/EEA MS. To meet this goal, a user-friendly software tool (BCoDE toolkit, was developed. This stand-alone application, written in C++, is open-access and freely available for download from the website of the European Centre for Disease Prevention and Control (ECDC. With the BCoDE toolkit, one can calculate DALYs by simply entering the age group- and sex-specific number of cases for one or more of selected sets of 32 communicable diseases (CDs and 6 healthcare associated infections (HAIs. Disease progression models (i.e., outcome trees for these communicable diseases were created following a thorough literature review of their disease progression pathway. The BCoDE toolkit runs Monte Carlo simulations of the input parameters and provides disease-specific results, including 95% uncertainty intervals, and permits comparisons between the different disease models entered. Results can be displayed as mean and median overall DALYs, DALYs per 100,000 population, and DALYs related to mortality vs. disability. Visualization options summarize complex epidemiological data, with the goal of improving communication and knowledge transfer for decision-making.

  15. Group-based parent-training programmes for improving emotional and behavioural adjustment in children from birth to three years old.

    Science.gov (United States)

    Barlow, Jane; Smailagic, Nadja; Ferriter, Michael; Bennett, Cathy; Jones, Hannah

    2010-03-17

    Emotional and behavioural problems in children are common. Research suggests that parenting has an important role to play in helping children to become well-adjusted, and that the first few months and years are especially important. Parenting programmes may have a role to play in improving the emotional and behavioural adjustment of infants and toddlers. This review is applicable to parents and carers of children up to three years eleven months, although some studies included children up to five years old. Objectives: (a) to establish whether group-based parenting programmes are effective in improving the emotional and behavioural adjustment of children three years of age or less (i.e. maximum mean age of 3 years 11 months); (b) to assess the role of parenting programmes in the primary prevention of emotional and behavioural problems. We searched CENTRAL, MEDLINE, EMBASE, CINAHL, PsycINFO, Sociofile, Social Science Citation Index, ASSIA, National Research Register (NRR) and ERIC. The searches were originally run in 2000 and then updated in 2007/8. We included randomised controlled trials of group-based parenting programmes that had used at least one standardised instrument to measure emotional and behavioural adjustment. The results for each outcome in each study are presented with 95% confidence intervals. Where appropriate, the results have been combined in a meta-analysis using a random-effects model. Eight studies were included in the review. There were sufficient data from six studies to combine the results in a meta-analysis for parent reports, and from three studies to combine the results for independent assessments of children's behaviour post-intervention. There was, in addition, sufficient information from three studies to conduct a meta-analysis of both parent-report and independent follow-up data. Both parent-report (SMD -0.25; CI -0.45 to -0.06) and independent observations (SMD -0.54; CI -0.84 to -0.23) of children's behaviour produce significant results favouring the
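Pooled SMDs like those quoted above come from a random-effects meta-analysis. A minimal DerSimonian-Laird pooling of per-study standardized mean differences and their variances; the inputs here are invented, not the review's data:

```python
# DerSimonian-Laird random-effects pooling: estimate between-study variance
# (tau^2) from Cochran's Q, then re-weight studies by 1/(v_i + tau^2).
# The per-study SMDs and variances below are hypothetical.

def dersimonian_laird(effects, variances):
    """Random-effects pooled estimate via the DerSimonian-Laird tau^2."""
    w = [1.0 / v for v in variances]
    fixed = sum(wi * e for wi, e in zip(w, effects)) / sum(w)
    q = sum(wi * (e - fixed) ** 2 for wi, e in zip(w, effects))
    c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - (len(effects) - 1)) / c)
    w_re = [1.0 / (v + tau2) for v in variances]
    return sum(wi * e for wi, e in zip(w_re, effects)) / sum(w_re)

smds = [-0.60, -0.10, -0.30]   # hypothetical per-study SMDs
vars_ = [0.02, 0.03, 0.05]     # hypothetical variances
print(round(dersimonian_laird(smds, vars_), 3))
```

Because tau² inflates every study's variance equally, the random-effects estimate sits closer to an unweighted mean than the fixed-effect estimate does.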

  16. Applying the Transactional Stress and Coping Model to Sickle Cell Disorder and Insulin-Dependent Diabetes Mellitus: Identifying Psychosocial Variables Related to Adjustment and Intervention

    Science.gov (United States)

    Hocking, Matthew C.; Lochman, John E.

    2005-01-01

    This review paper examines the literature on psychosocial factors associated with adjustment to sickle cell disease and insulin-dependent diabetes mellitus in children through the framework of the transactional stress and coping (TSC) model. The transactional stress and coping model views adaptation to a childhood chronic illness as mediated by…

  17. Cluster Based Text Classification Model

    DEFF Research Database (Denmark)

    Nizamani, Sarwat; Memon, Nasrullah; Wiil, Uffe Kock

    2011-01-01

    We propose a cluster based classification model for suspicious email detection and other text classification tasks. The text classification tasks comprise many training examples that require a complex classification model. Using clusters for classification makes the model simpler and increases...... the accuracy at the same time. The test example is classified using a simpler and smaller model. The training examples in a particular cluster share a common vocabulary. At the time of clustering, we do not take into account the labels of the training examples. After the clusters have been created......, the classifier is trained on each cluster with reduced dimensionality and fewer examples. The experimental results show that the proposed model outperforms the existing classification models for the task of suspicious email detection and topic categorization on the Reuters-21578 and 20 Newsgroups...
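The cluster-then-classify idea can be sketched as follows: group training documents by vocabulary overlap with labels ignored, route a test document to its nearest cluster, and classify it using only that cluster's examples. This toy version hand-assigns clusters and uses Jaccard similarity; it is not the authors' implementation, which would cluster automatically over full feature vectors:

```python
# Toy cluster-based text classification: per-cluster models are smaller
# because each sees only its cluster's vocabulary and examples.

def jaccard(a: set, b: set) -> float:
    """Jaccard similarity of two token sets."""
    return len(a & b) / len(a | b) if a | b else 0.0

# (tokens, label, cluster) -- clusters fixed by hand for brevity
train = [
    ({"transfer", "account", "password"}, "suspicious", 0),
    ({"urgent", "account", "verify"},     "suspicious", 0),
    ({"meeting", "agenda", "monday"},     "normal",     1),
    ({"lunch", "monday", "team"},         "normal",     1),
]

def classify(tokens: set) -> str:
    # 1) pick the cluster whose pooled vocabulary is closest
    vocab = {c: set().union(*(t for t, _, cc in train if cc == c)) for c in {0, 1}}
    cluster = max(vocab, key=lambda c: jaccard(tokens, vocab[c]))
    # 2) classify within that cluster only (1-NN on Jaccard similarity)
    members = [(t, y) for t, y, cc in train if cc == cluster]
    return max(members, key=lambda m: jaccard(tokens, m[0]))[1]

print(classify({"verify", "account", "transfer"}))   # → suspicious
```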

  18. Kriging modeling and SPSA adjusting PID with KPWF compensator control of IPMC gripper for mm-sized objects

    Science.gov (United States)

    Chen, Yang; Hao, Lina; Yang, Hui; Gao, Jinhai

    2017-12-01

    Ionic polymer metal composite (IPMC), a new smart material, has attracted wide attention in the micromanipulation field. In this paper, a novel two-finger gripper that contains an IPMC actuator and an ultrasensitive force sensor is proposed and fabricated. The IPMC, as one finger of the gripper for mm-sized objects, performs the gripping and releasing motion, while the other finger serves both as a support and as a force sensor. Using the feedback signal of the force sensor, this integrated actuating and sensing gripper can grip miniature objects at the millimeter scale. The Kriging model is used to describe the nonlinear characteristics of the IPMC for the first time, and a control scheme in which simultaneous perturbation stochastic approximation (SPSA) adjusts the parameters of a proportional-integral-derivative (PID) controller, augmented with a Kriging-predictor wavelet-filter (KPWF) compensator, is applied to track the gripping force of the gripper. High-precision force tracking in a foam-ball manipulation task is demonstrated on a semi-physical experimental platform, showing that this gripper for mm-sized objects can work well in manipulation applications.
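SPSA, the tuning method named above, estimates a gradient from just two loss evaluations regardless of parameter dimension, which suits plants like an IPMC gripper that return only noisy scalar feedback. A minimal sketch on a toy quadratic loss standing in for force-tracking error; the gain constants and the loss itself are illustrative, not the paper's:

```python
# Simultaneous Perturbation Stochastic Approximation (SPSA): perturb all
# parameters at once with random +/-1 directions, estimate the gradient from
# two loss evaluations, and take a decaying step against it.
import random

def spsa_minimize(loss, theta, iters=200, a=0.1, c=0.1, seed=1):
    random.seed(seed)
    for k in range(1, iters + 1):
        ak, ck = a / k ** 0.602, c / k ** 0.101       # standard gain decay
        delta = [random.choice((-1.0, 1.0)) for _ in theta]
        plus  = [t + ck * d for t, d in zip(theta, delta)]
        minus = [t - ck * d for t, d in zip(theta, delta)]
        g = (loss(plus) - loss(minus)) / (2.0 * ck)
        theta = [t - ak * g / d for t, d in zip(theta, delta)]
    return theta

# toy "PID tuning": pretend the tracking error is minimised at Kp=2, Ki=0.5, Kd=0.1
loss = lambda th: (th[0] - 2) ** 2 + (th[1] - 0.5) ** 2 + (th[2] - 0.1) ** 2
print([round(t, 2) for t in spsa_minimize(loss, [0.0, 0.0, 0.0])])
```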

  19. The national burden of cerebrovascular diseases in Spain: a population-based study using disability-adjusted life years.

    Science.gov (United States)

    Catalá-López, Ferrán; Fernández de Larrea-Baz, Nerea; Morant-Ginestar, Consuelo; Álvarez-Martín, Elena; Díaz-Guzmán, Jaime; Gènova-Maleras, Ricard

    2015-04-20

    The aim of the present study was to determine the national burden of cerebrovascular diseases in the adult population of Spain. Cross-sectional, descriptive population-based study. We calculated the disability-adjusted life years (DALY) metric using country-specific data from national statistics and epidemiological studies to obtain representative outcomes for the Spanish population. DALYs were divided into years of life lost due to premature mortality (YLLs) and years of life lived with disability (YLDs). DALYs were estimated for the year 2008 by applying demographic structure by sex and age-groups, cause-specific mortality, morbidity data and new disability weights proposed in the recent Global Burden of Disease study. In the base case, neither YLLs nor YLDs were discounted or age-weighted. Uncertainty around DALYs was tested using sensitivity analyses. In Spain, cerebrovascular diseases generated 418,052 DALYs, comprising 337,000 (80.6%) YLLs and 81,052 (19.4%) YLDs. This accounts for 1,113 DALYs per 100,000 population (men: 1,197 and women: 1,033) and 3,912 per 100,000 in those over the age of 65 years (men: 4,427 and women: 2,033). Depending on the standard life table and choice of social values used for calculation, total DALYs varied by 15.3% and 59.9% below the main estimate. Estimates provided here represent a comprehensive analysis of the burden of cerebrovascular diseases at a national level. Prevention and control programmes aimed at reducing the disease burden merit further priority in Spain. Copyright © 2013 Elsevier España, S.L.U. All rights reserved.
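The component figures quoted above can be checked arithmetically: the YLL and YLD counts should reproduce the stated total and percentage split.

```python
# Consistency check of the Spanish burden figures quoted in the abstract.
total_dalys = 418_052
ylls, ylds = 337_000, 81_052

assert ylls + ylds == total_dalys
print(round(100 * ylls / total_dalys, 1), round(100 * ylds / total_dalys, 1))
# → 80.6 19.4, matching the abstract's split
```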

  20. Realistic PIC modelling of laser-plasma interaction: a direct implicit method with adjustable damping and high order weight functions

    International Nuclear Information System (INIS)

    Drouin, M.

    2009-11-01

    This research thesis proposes a new formulation of the relativistic direct implicit method, based on the weak formulation of the wave equation, which is solved by means of a Newton algorithm. The first part of this thesis deals with the properties of explicit particle-in-cell (PIC) methods: properties and limitations of an explicit PIC code, linear analysis of a numerical plasma, the numerical heating phenomenon, the interest of higher-order interpolation functions, and presentation of two applications in high-density relativistic laser-plasma interaction. The second and main part of this report deals with adapting the direct implicit method to laser-plasma interaction: presentation of the state of the art, formulation of the direct implicit method, and resolution of the wave equation. The third part concerns various numerical and physical validations of the ELIXIRS code: laser wave propagation in vacuum, demonstration of the adjustable damping that characterizes the proposed algorithm, influence of space-time discretization on energy conservation, expansion of a thermal plasma into vacuum, two cases of plasma-beam instability in the relativistic regime, and a case of overcritical laser-plasma interaction.

  1. Hemorrhage-Adjusted Iron Requirements, Hematinics and Hepcidin Define Hereditary Hemorrhagic Telangiectasia as a Model of Hemorrhagic Iron Deficiency

    Science.gov (United States)

    Finnamore, Helen; Le Couteur, James; Hickson, Mary; Busbridge, Mark; Whelan, Kevin; Shovlin, Claire L.

    2013-01-01

    Background Iron deficiency anemia remains a major global health problem. Higher iron demands provide the potential for a targeted preventative approach before anemia develops. The primary study objective was to develop and validate a metric that stratifies recommended dietary iron intake to compensate for patient-specific non-menstrual hemorrhagic losses. The secondary objective was to examine whether iron deficiency can be attributed to under-replacement of epistaxis (nosebleed) hemorrhagic iron losses in hereditary hemorrhagic telangiectasia (HHT). Methodology/Principal Findings The hemorrhage adjusted iron requirement (HAIR) sums the recommended dietary allowance, and iron required to replace additional quantified hemorrhagic losses, based on the pre-menopausal increment to compensate for menstrual losses (formula provided). In a study population of 50 HHT patients completing concurrent dietary and nosebleed questionnaires, 43/50 (86%) met their recommended dietary allowance, but only 10/50 (20%) met their HAIR. Higher HAIR was a powerful predictor of lower hemoglobin (p = 0.009), lower mean corpuscular hemoglobin content (p<0.001), lower log-transformed serum iron (p = 0.009), and higher log-transformed red cell distribution width (p<0.001). There was no evidence of generalised abnormalities in iron handling: ferritin and ferritin² explained 60% of the hepcidin variance (p<0.001), and the mean hepcidin:ferritin ratio was similar to reported controls. Iron supplement use increased the proportion of individuals meeting their HAIR, and blunted associations between HAIR and hematinic indices. Once adjusted for supplement use, however, reciprocal relationships between HAIR and hemoglobin/serum iron persisted. Of 568 individuals using iron tablets, most reported problems completing the course. For patients with hereditary hemorrhagic telangiectasia, persistent anemia was reported three times more frequently if iron tablets caused diarrhea or needed to be stopped. Conclusions/Significance HAIR values, providing an indication of individuals' iron requirements, may be a useful tool in prevention, assessment and management of iron deficiency. Iron deficiency in HHT can be explained by under-replacement of nosebleed hemorrhagic iron losses. PMID:24146883
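The abstract states that HAIR sums the recommended dietary allowance with the iron needed to replace quantified extra losses, costed via the pre-menopausal menstrual increment, but the exact formula appears only in the paper itself. The sketch below is therefore purely schematic: every constant and the linear scaling are assumptions introduced here, not the authors' published values.

```python
# Schematic HAIR-style calculation. ALL constants below are assumed for
# illustration; the published formula is in the paper, not this abstract.

RDA_MG_PER_DAY = 8.0           # assumed baseline allowance (adult male)
MENSTRUAL_INCREMENT = 10.0     # assumed extra mg/day in the pre-menopausal RDA
TYPICAL_MENSTRUAL_LOSS = 40.0  # assumed ml blood/month that increment offsets

def hair(rda_mg: float, extra_blood_loss_ml_per_month: float) -> float:
    """Scale the menstrual increment by relative non-menstrual blood loss."""
    return rda_mg + MENSTRUAL_INCREMENT * (extra_blood_loss_ml_per_month
                                           / TYPICAL_MENSTRUAL_LOSS)

# nosebleed losses of twice the typical menstrual volume → 8 + 10*2 = 28 mg/day
print(hair(RDA_MG_PER_DAY, 80.0))
```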

  2. Hemorrhage-adjusted iron requirements, hematinics and hepcidin define hereditary hemorrhagic telangiectasia as a model of hemorrhagic iron deficiency.

    Directory of Open Access Journals (Sweden)

    Helen Finnamore

    Full Text Available Iron deficiency anemia remains a major global health problem. Higher iron demands provide the potential for a targeted preventative approach before anemia develops. The primary study objective was to develop and validate a metric that stratifies recommended dietary iron intake to compensate for patient-specific non-menstrual hemorrhagic losses. The secondary objective was to examine whether iron deficiency can be attributed to under-replacement of epistaxis (nosebleed) hemorrhagic iron losses in hereditary hemorrhagic telangiectasia (HHT). The hemorrhage adjusted iron requirement (HAIR) sums the recommended dietary allowance, and iron required to replace additional quantified hemorrhagic losses, based on the pre-menopausal increment to compensate for menstrual losses (formula provided). In a study population of 50 HHT patients completing concurrent dietary and nosebleed questionnaires, 43/50 (86%) met their recommended dietary allowance, but only 10/50 (20%) met their HAIR. Higher HAIR was a powerful predictor of lower hemoglobin (p = 0.009), lower mean corpuscular hemoglobin content (p<0.001), lower log-transformed serum iron (p = 0.009), and higher log-transformed red cell distribution width (p<0.001). There was no evidence of generalised abnormalities in iron handling: ferritin and ferritin² explained 60% of the hepcidin variance (p<0.001), and the mean hepcidin:ferritin ratio was similar to reported controls. Iron supplement use increased the proportion of individuals meeting their HAIR, and blunted associations between HAIR and hematinic indices. Once adjusted for supplement use, however, reciprocal relationships between HAIR and hemoglobin/serum iron persisted. Of 568 individuals using iron tablets, most reported problems completing the course. For patients with hereditary hemorrhagic telangiectasia, persistent anemia was reported three times more frequently if iron tablets caused diarrhea or needed to be stopped. HAIR values, providing an indication of individuals' iron requirements, may be a useful tool in prevention, assessment and management of iron deficiency. Iron deficiency in HHT can be explained by under-replacement of nosebleed hemorrhagic iron losses.

  3. Scenario analysis of carbon emissions' anti-driving effect on Qingdao's energy structure adjustment with an optimization model, Part II: Energy system planning and management.

    Science.gov (United States)

    Wu, C B; Huang, G H; Liu, Z P; Zhen, J L; Yin, J G

    2017-03-01

    In this study, an inexact multistage stochastic mixed-integer programming (IMSMP) method was developed for supporting regional-scale energy system planning (ESP) associated with multiple uncertainties presented as discrete intervals, probability distributions and their combinations. An IMSMP-based energy system planning (IMSMP-ESP) model was formulated for Qingdao to demonstrate its applicability. Solutions that provide optimal patterns of energy resource generation, conversion, transmission and allocation, together with facility capacity expansion schemes, have been obtained. The results can help local decision makers generate cost-effective energy system management schemes and strike a comprehensive tradeoff between economic objectives and environmental requirements. Moreover, taking the CO2 emissions scenarios mentioned in Part I into consideration, the anti-driving effect of carbon emissions on energy structure adjustment was studied based on the developed model and scenario analysis. Several suggestions can be drawn from the results: (a) to ensure the smooth realization of low-carbon and sustainable development, appropriate price controls and fiscal subsidies for high-cost energy resources should be considered by decision makers; (b) compared with coal, natural gas utilization should be strongly encouraged to ensure that Qingdao can reach its peak carbon emissions in 2020; (c) to guarantee Qingdao's future power supply security, the construction of new power plants should be emphasised rather than merely enhancing the transmission capacity of grid infrastructure. Copyright © 2016 Elsevier Ltd. All rights reserved.

  4. The relationship between the C-statistic of a risk-adjustment model and the accuracy of hospital report cards: a Monte Carlo Study.

    Science.gov (United States)

    Austin, Peter C; Reeves, Mathew J

    2013-03-01

    Hospital report cards, in which outcomes following the provision of medical or surgical care are compared across health care providers, are being published with increasing frequency. Essential to the production of these reports is risk-adjustment, which allows investigators to account for differences in the distribution of patient illness severity across different hospitals. Logistic regression models are frequently used for risk adjustment in hospital report cards. Many applied researchers use the c-statistic (equivalent to the area under the receiver operating characteristic curve) of the logistic regression model as a measure of the credibility and accuracy of hospital report cards. To determine the relationship between the c-statistic of a risk-adjustment model and the accuracy of hospital report cards. Monte Carlo simulations were used to examine this issue. We examined the influence of 3 factors on the accuracy of hospital report cards: the c-statistic of the logistic regression model used for risk adjustment, the number of hospitals, and the number of patients treated at each hospital. The parameters used to generate the simulated datasets came from analyses of patients hospitalized with a diagnosis of acute myocardial infarction in Ontario, Canada. The c-statistic of the risk-adjustment model had, at most, a very modest impact on the accuracy of hospital report cards, whereas the number of patients treated at each hospital had a much greater impact. The c-statistic of a risk-adjustment model should not be used to assess the accuracy of a hospital report card.
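
The c-statistic the study evaluates is the probability that a randomly chosen patient who experienced the outcome was assigned a higher predicted risk than a randomly chosen patient who did not. A minimal sketch of that pairwise-concordance computation (an illustration, not the study's own code):

```python
from itertools import product

def c_statistic(outcomes, risks):
    """Concordance (c-statistic, equal to the area under the ROC curve):
    the fraction of event/non-event pairs in which the event case
    received the higher predicted risk; ties count as half-concordant."""
    events = [r for o, r in zip(outcomes, risks) if o == 1]
    non_events = [r for o, r in zip(outcomes, risks) if o == 0]
    concordant = 0.0
    for e, n in product(events, non_events):
        if e > n:
            concordant += 1.0
        elif e == n:
            concordant += 0.5
    return concordant / (len(events) * len(non_events))

# A perfectly discriminating risk model:
print(c_statistic([1, 1, 0, 0], [0.9, 0.8, 0.2, 0.1]))  # 1.0
```

A model that assigns everyone the same risk scores 0.5, the chance level the simulation compares against.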

  5. In search of laterally heterogeneous viscosity models of Glacial Isostatic Adjustment with the ICE-6G_C global ice history model

    Science.gov (United States)

    Li, Tanghua; Wu, Patrick; Steffen, Holger; Wang, Hansheng

    2018-05-01

    Most models of Glacial Isostatic Adjustment (GIA) assume that the Earth is laterally homogeneous. However, seismic and geological observations clearly show that the Earth's mantle is laterally heterogeneous. Previous studies of GIA with lateral heterogeneity mostly focused on its effect on, or the sensitivity of, GIA predictions, and it is not clear to what extent lateral heterogeneity can resolve the misfits between GIA predictions and observations. Our aim is to search for the best 3D viscosity models that can simultaneously fit the global relative sea-level (RSL) data, the peak uplift rates (u-dot from GNSS) and the peak gravity-rate-of-change (g-dot from the GRACE satellite mission) in Laurentia and Fennoscandia. However, the search is dependent on the ice and viscosity model inputs - the latter depends on the background viscosity and the seismic tomography models used. In this paper, the ICE-6G_C ice model is assumed, together with Bunge & Grand's seismic tomography model and background viscosity models close to VM5. A Coupled Laplace-Finite Element Method is used to compute gravitationally self-consistent sea level change with time dependent coastlines and rotational feedback in addition to changes in deformation, gravity and the state of stress. Several laterally heterogeneous models are found to fit the global sea level data better than laterally homogeneous models. Two of these laterally heterogeneous models also simultaneously fit the peak g-dot and u-dot rates observed in Laurentia. However, even with the introduction of lateral heterogeneity, no model has been found that is able to fit the present-day g-dot and uplift rate data in Fennoscandia. Therefore, either the ice history of ICE-6G_C in Fennoscandia and the Barents Sea needs some modifications, or the sub-lithospheric property/non-thermal effect underneath northern Europe must be different from that underneath Laurentia.

  6. Development of methodology for disability-adjusted life years (DALYs) calculation based on real-life data.

    Directory of Open Access Journals (Sweden)

    Ellen A Struijk

    Full Text Available BACKGROUND: Disability-Adjusted Life Years (DALYs have the advantage that effects on total health, instead of on a specific disease incidence or mortality, can be estimated. Our aim was to address several methodological points related to the computation of DALYs at an individual level in a follow-up study. METHODS: DALYs were computed for 33,507 men and women aged 20-70 years when participating in the EPIC-NL study in 1993-1997. DALYs are the sum of the Years Lost due to Disability (YLD and the Years of Life Lost (YLL due to premature mortality. Premature mortality was defined as death before the estimated date of individual Life Expectancy (LE. Different methods to compute LE were compared, as was the effect of different follow-up periods, using a two-part model estimating the effect of smoking status on health as an example. RESULTS: During a mean follow-up of 12.4 years, there were 69,245 DALYs due to years lived with a disease or premature death. Current smokers had lost 1.28 healthy years of their life (1.28 DALYs, 95% CI 1.10-1.46 compared to never-smokers. The outcome varied depending on the method used for estimating LE, the completeness of disease and mortality ascertainment and notably the percentage of extinction (duration of follow-up of the cohort. CONCLUSION: We conclude that the use of DALYs in a cohort study is an appropriate way to assess total disease burden in relation to a determinant. The outcome is sensitive to the LE calculation method and the follow-up duration of the cohort.
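
The decomposition the study relies on, DALY = YLD + YLL with YLL measured against an individually estimated life expectancy, can be sketched as follows; the disability weight in the example is illustrative, not a value from the paper:

```python
def years_of_life_lost(age_at_death, life_expectancy):
    """YLL: years lost when death occurs before the estimated
    individual life expectancy (zero otherwise)."""
    return max(0.0, life_expectancy - age_at_death)

def years_lived_with_disability(years_with_disease, disability_weight):
    """YLD: time lived with a disease, weighted by severity (0..1)."""
    return years_with_disease * disability_weight

def daly(age_at_death, life_expectancy, years_with_disease, disability_weight):
    """Individual-level DALY = YLD + YLL."""
    return (years_lived_with_disability(years_with_disease, disability_weight)
            + years_of_life_lost(age_at_death, life_expectancy))

# Illustrative: death at 70 against an estimated LE of 80, after
# 5 years of a condition with an assumed disability weight of 0.2:
print(daly(70, 80, 5, 0.2))  # 11.0
```

The paper's sensitivity results correspond to varying the `life_expectancy` input (the LE estimation method) and the observation window feeding the other arguments.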

  7. Graph Model Based Indoor Tracking

    DEFF Research Database (Denmark)

    Jensen, Christian Søndergaard; Lu, Hua; Yang, Bin

    2009-01-01

    The tracking of the locations of moving objects in large indoor spaces is important, as it enables a range of applications related to, e.g., security and indoor navigation and guidance. This paper presents a graph model based approach to indoor tracking that offers a uniform data management...

  8. Computer Based Modelling and Simulation

    Indian Academy of Sciences (India)

    GENERAL I ARTICLE. Computer Based ... universities, and later did system analysis, ... personal computers (PC) and low cost software packages and tools. They can serve as useful learning experience through student projects. Models are ... Let us consider a numerical example: to calculate the velocity of a trainer aircraft ...

  9. The effectiveness of group therapy based on quality of life on marital adjustment, marital satisfaction and mood regulation of Bushehr Male abusers

    Directory of Open Access Journals (Sweden)

    yoseph Dehghani

    2016-07-01

    Full Text Available Background: The purpose of this research was to study the effectiveness of group therapy based on quality of life on the marital adjustment, marital satisfaction and mood regulation of Bushehr male abusers. Materials and Methods: In this quasi-experimental pre-test, post-test study with a control group, the sample was selected by cluster sampling from men who were referred to Bushehr addiction treatment clinics; among them, a total of 30 patients were randomly divided into experimental and control groups of 15 individuals each. The instruments included the short version of the Marital Adjustment Questionnaire, the Marital Satisfaction Questionnaire and the Garnefski Emotional Regulation Scale, completed by the participants at the pre-test and post-test stages. The experimental group received group therapy based on quality of life in eight sessions, while the control group did not receive any treatment. Multivariate analysis of covariance was used for statistical analysis of the data. Results: The results revealed that after the intervention there was a significant difference between the two groups in marital adjustment, marital satisfaction and emotional regulation (P<0.001. The rates of marital adjustment, marital satisfaction and emotional regulation were significantly higher in the experimental group than in the control group at post-test. Conclusion: Treatment based on quality of life, formed from a combination of positive psychology and the cognitive-behavioral approach, can increase the marital adjustment, marital satisfaction and mood regulation of abusers.

  10. Population-Based Estimates of Decreases in Quality-Adjusted Life Expectancy Associated with Unhealthy Body Mass Index.

    Science.gov (United States)

    Jia, Haomiao; Zack, Matthew M; Thompson, William W

    2016-01-01

    Being classified as outside the normal range for body mass index (BMI) has been associated with increased risk for chronic health conditions, poor health-related quality of life (HRQOL), and premature death. To assess the impact of BMI on HRQOL and mortality, we compared quality-adjusted life expectancy (QALE) by BMI levels. We obtained HRQOL data from the 1993-2010 Behavioral Risk Factor Surveillance System and life table estimates from the National Center for Health Statistics national mortality files to estimate QALE among U.S. adults by BMI categories: underweight (BMI <18.5 kg/m(2)), normal weight (BMI 18.5-24.9 kg/m(2)), overweight (BMI 25.0-29.9 kg/m(2)), obese (BMI 30.0-34.9 kg/m(2)), and severely obese (BMI ≥35.0 kg/m(2)). In 2010 in the United States, the highest estimated QALE for adults at 18 years of age was 54.1 years for individuals classified as normal weight. The two lowest QALE estimates were for those classified as either underweight (48.9 years) or severely obese (48.2 years). For individuals who were overweight or obese, the QALE estimates fell between those classified as either normal weight (54.1 years) or severely obese (48.2 years). The difference in QALE between adults classified as normal weight and those classified as either overweight or obese was significantly higher among women than among men, irrespective of race/ethnicity. Using population-based data, we found significant differences in QALE loss by BMI category. These findings are valuable for setting national and state targets to reduce health risks associated with severe obesity, and could be used for cost-effectiveness evaluations of weight-reduction interventions.
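
QALE estimates of this kind are typically obtained with a Sullivan-style life-table calculation: person-years lived in each age interval are weighted by the interval's mean HRQOL score. A toy sketch under that assumption (the numbers are invented, not the study's):

```python
def qale(alive_at_start, person_years_by_age, hrqol_by_age):
    """Sullivan-style quality-adjusted life expectancy: person-years
    lived in each age interval, weighted by that interval's mean HRQOL
    score (0 = dead .. 1 = full health), per person alive at the
    starting age."""
    weighted = sum(py * q for py, q in zip(person_years_by_age, hrqol_by_age))
    return weighted / alive_at_start

# Toy three-interval life table for a cohort of 100 people:
# 100, 90 and 50 person-years lived at HRQOL 1.0, 0.8 and 0.5.
print(qale(100, [100.0, 90.0, 50.0], [1.0, 0.8, 0.5]))
```

In the study, separate life tables and HRQOL profiles per BMI category would feed this calculation, yielding the per-category QALE figures quoted above.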

  11. Model-based sensor diagnosis

    International Nuclear Information System (INIS)

    Milgram, J.; Dormoy, J.L.

    1994-09-01

    Running a nuclear power plant involves monitoring data provided by the installation's sensors. Operators and computerized systems then use these data to establish a diagnostic of the plant. However, the instrumentation system is complex, and is not immune to faults and failures. This paper presents a system for detecting sensor failures using a topological description of the installation and a set of component models. This model of the plant implicitly contains relations between sensor data. These relations must always be checked if all the components are functioning correctly. The failure detection task thus consists of checking these constraints. The constraints are extracted in two stages. Firstly, a qualitative model of their existence is built using structural analysis. Secondly, the models are formally handled according to the results of the structural analysis, in order to establish the constraints on the sensor data. This work constitutes an initial step in extending model-based diagnosis, as the information on which it is based is suspect. This work will be followed by surveillance of the detection system. When the instrumentation is assumed to be sound, the unverified constraints indicate errors on the plant model. (authors). 8 refs., 4 figs

  12. Adjustable, physiological ventricular restraint improves left ventricular mechanics and reduces dilatation in an ovine model of chronic heart failure.

    Science.gov (United States)

    Ghanta, Ravi K; Rangaraj, Aravind; Umakanthan, Ramanan; Lee, Lawrence; Laurence, Rita G; Fox, John A; Bolman, R Morton; Cohn, Lawrence H; Chen, Frederick Y

    2007-03-13

    Ventricular restraint is a nontransplantation surgical treatment for heart failure. The effect of varying restraint level on left ventricular (LV) mechanics and remodeling is not known. We hypothesized that restraint level may affect therapy efficacy. We studied the immediate effect of varying restraint levels in an ovine heart failure model. We then studied the long-term effect of restraint applied over a 2-month period. Restraint level was quantified by use of fluid-filled epicardial balloons placed around the ventricles and measurement of balloon luminal pressure at end diastole. At 4 different restraint levels (0, 3, 5, and 8 mm Hg), transmural myocardial pressure (P(tm)) and indices of myocardial oxygen consumption (MVO2) were determined in control (n=5) and ovine heart failure (n=5). Ventricular restraint therapy decreased P(tm) and MVO2, and improved mechanical efficiency. An optimal physiological restraint level of 3 mm Hg was identified to maximize improvement without an adverse effect on systemic hemodynamics. At this optimal level, end-diastolic P(tm) and MVO2 indices decreased by 27% and 20%, respectively. The serial longitudinal effects of optimized ventricular restraint were then evaluated in ovine heart failure with (n=3) and without (n=3) restraint over 2 months. Optimized ventricular restraint prevented and reversed pathological LV dilatation (130+/-22 mL to 91+/-18 mL) and improved LV ejection fraction (27+/-3% to 43+/-5%). The measured restraint level decreased over time as the LV became smaller, and reverse remodeling slowed. Ventricular restraint level affects the degree of decrease in P(tm), the degree of decrease in MVO2, and the rate of LV reverse remodeling. Periodic physiological adjustments of the restraint level may be required for optimal restraint therapy efficacy.

  13. Cost Effectiveness of Childhood Cochlear Implantation and Deaf Education in Nicaragua: A Disability Adjusted Life Year Model.

    Science.gov (United States)

    Saunders, James E; Barrs, David M; Gong, Wenfeng; Wilson, Blake S; Mojica, Karen; Tucci, Debara L

    2015-09-01

    Cochlear implantation (CI) is a common intervention for severe-to-profound hearing loss in high-income countries, but is not commonly available to children in low resource environments. Owing in part to the device costs, CI has been assumed to be less economical than deaf education for low resource countries. The purpose of this study is to compare the cost effectiveness of the two interventions for children with severe-to-profound sensorineural hearing loss (SNHL) in a model using disability adjusted life years (DALYs). Cost estimates were derived from published data, expert opinion, and known costs of services in Nicaragua. Individual costs and lifetime DALY estimates with a 3% discounting rate were applied to both interventions. Sensitivity analysis was implemented to evaluate the effect on the discounted cost of five key components: implant cost, audiology salary, speech therapy salary, number of children implanted per year, and device failure probability. The costs per DALY averted are $5,898 and $5,529 for CI and deaf education, respectively. Using standards set by the WHO, both interventions are cost effective. Sensitivity analysis shows that even when all costs are set to their maximum estimates, CI is still cost effective. Using a conservative DALY analysis, both CI and deaf education are cost-effective treatment alternatives for severe-to-profound SNHL. CI intervention costs are influenced not only by the initial surgery and device costs but also by rehabilitation costs and the lifetime maintenance, device replacement, and battery costs. The major CI cost differences in this low resource setting were increased initial training and infrastructure costs, but lower medical personnel and surgery costs.
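
The study's headline numbers are cost-per-DALY-averted ratios judged against WHO benchmarks. A hedged sketch of that arithmetic (the GDP-per-capita figure below is an illustrative assumption, not a value from the paper):

```python
def cost_per_daly_averted(lifetime_cost, dalys_averted):
    """Core cost-effectiveness ratio: discounted lifetime cost of an
    intervention divided by the DALYs it averts."""
    return lifetime_cost / dalys_averted

def is_cost_effective(ratio, gdp_per_capita, multiple=3.0):
    """WHO rule of thumb: below 3x GDP per capita per DALY averted
    counts as cost effective (below 1x as highly cost effective)."""
    return ratio <= multiple * gdp_per_capita

def discount_factor(year, rate=0.03):
    """3% annual discounting, as applied to future-year DALY and
    cost estimates in the model."""
    return 1.0 / (1.0 + rate) ** year

# The paper's $5,898 per DALY averted for CI, against an assumed
# GDP per capita of $2,000:
print(is_cost_effective(5898.0, 2000.0))  # True
```

The five sensitivity-analysis components enter through `lifetime_cost`, which is why CI remains cost effective even at the maximum cost estimates.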

  14. Enhancing the performance of blue GaN-based light emitting diodes with carrier concentration adjusting layer

    Energy Technology Data Exchange (ETDEWEB)

    Guo, Yao; Huang, Yang; Wang, Junxi; Wang, Guohong [R&D Center for Semiconductor Lighting, Chinese Academy of Sciences, Beijing 100083, P. R. China (China); Liu, Zhiqiang, E-mail: lzq@semi.ac.cn; Yi, Xiaoyan, E-mail: spring@semi.ac.cn; Li, Jinmin [R&D Center for Semiconductor Lighting, Chinese Academy of Sciences, Beijing 100083, P. R. China (China); State Key Laboratory of Solid State Lighting, Beijing 100083 (China); Beijing Engineering Research Center for the 3rd Generation Semiconductor Materials and Application, Beijing 100083 (China)

    2016-03-15

    In this work, a novel carrier concentration adjusting insertion layer for InGaN/GaN multiple quantum well light-emitting diodes was proposed to mitigate the efficiency droop and improve the optical output properties at high current density. The band diagrams and carrier distributions were investigated numerically and experimentally. The results indicate that, owing to the newly formed electron barrier and the adjusted built-in field near the active region, hole injection is improved and better radiative recombination can be achieved. Compared to the conventional LED, the light output power of the new structure with the carrier concentration adjusting layers is enhanced by 127% at 350 mA, while the efficiency droops only to 88.2% of its peak value.

  15. Performance evaluation of inpatient service in Beijing: a horizontal comparison with risk adjustment based on Diagnosis Related Groups.

    Science.gov (United States)

    Jian, Weiyan; Huang, Yinmin; Hu, Mu; Zhang, Xiumei

    2009-04-30

    Medical performance evaluation, which provides a basis for rational decision-making, is an important part of medical service research. Current progress with health services reform in China is far from satisfactory, without sufficient regulation. To achieve better progress, an effective tool for evaluating medical performance needs to be established. In view of this, this study attempted to develop such a tool appropriate for the Chinese context. Data were collected from the front pages of medical records (FPMR) of all large general public hospitals (21 hospitals) in the third and fourth quarters of 2007. Locally developed Diagnosis Related Groups (DRGs) were introduced as a tool for risk adjustment, and performance evaluation indicators were established: Charge Efficiency Index (CEI), Time Efficiency Index (TEI) and inpatient mortality of low-risk group cases (IMLRG), to reflect work efficiency and medical service quality respectively. Using these indicators, the inpatient services' performance was horizontally compared among hospitals. The Case-mix Index (CMI) was used to adjust the efficiency indices and thus produce an adjusted CEI (aCEI) and adjusted TEI (aTEI). Poisson distribution analysis was used to test the statistical significance of the IMLRG differences between different hospitals. Using the aCEI, aTEI and IMLRG scores for the 21 hospitals, Hospitals A and C had relatively good overall performance because their medical charges were lower, length of stay (LOS) shorter and IMLRG smaller. The performance of Hospitals P and Q was the worst due to their relatively high charge level, long LOS and high IMLRG. Various performance problems also existed in the other hospitals. It is possible to develop an accurate and easy to run performance evaluation system using Case-Mix as the tool for risk adjustment, choosing indicators close to consumers and managers, and utilizing routine report forms as the basic information source. To keep such a system running effectively, it is necessary to
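
The abstract does not spell out formulas for the adjusted indices, but a common DRG-based construction divides each raw efficiency index by the hospital's Case-mix Index so that hospitals treating more complex case mixes are not penalised. A sketch under that assumption (names and numbers are illustrative):

```python
def efficiency_index(hospital_mean, all_hospital_mean):
    """CEI or TEI: a hospital's mean charge (or length of stay)
    relative to the all-hospital mean; values above 1 indicate
    higher cost or longer stay than average."""
    return hospital_mean / all_hospital_mean

def cmi_adjusted(index, case_mix_index):
    """aCEI / aTEI: divide out the Case-mix Index so a hospital
    treating a heavier DRG mix is compared fairly."""
    return index / case_mix_index

# A hospital charging 20% above average, but with a CMI of 1.5,
# looks efficient once case mix is accounted for:
raw = efficiency_index(1.2, 1.0)
print(cmi_adjusted(raw, 1.5))
```

Under this construction, ranking hospitals by (aCEI, aTEI, IMLRG) reproduces the kind of comparison described for Hospitals A, C, P and Q.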

  16. Risk adjustment models for interhospital comparison of CS rates using Robson's ten group classification system and other socio-demographic and clinical variables.

    Science.gov (United States)

    Colais, Paola; Fantini, Maria P; Fusco, Danilo; Carretta, Elisa; Stivanello, Elisa; Lenzi, Jacopo; Pieri, Giulia; Perucci, Carlo A

    2012-06-21

    The caesarean section (CS) rate is a quality-of-health-care indicator frequently used at national and international level. The aim of this study was to assess whether adjustment for Robson's Ten Group Classification System (TGCS), and for clinical and socio-demographic variables of the mother and the fetus, is necessary for inter-hospital comparisons of CS rates. The study population includes 64,423 deliveries in Emilia-Romagna between January 1, 2003 and December 31, 2004, classified according to the TGCS. Poisson regression was used to estimate crude and adjusted hospital relative risks of CS compared to a reference category. Analyses were carried out in the overall population and separately according to the Robson groups (groups I, II, III, IV and V-X combined). Adjusted relative risks (RR) of CS were estimated using two risk-adjustment models; the first (M1) included the TGCS group as the only adjustment factor; the second (M2) included in addition demographic and clinical confounders identified using a stepwise selection procedure. Percentage variations between crude and adjusted RRs by hospital were calculated to evaluate the confounding effect of covariates. The percentage variations from crude to adjusted RR proved to be similar in the M1 and M2 models. However, stratified analyses by Robson's classification groups showed that residual confounding for clinical and demographic variables was present in groups I (nulliparous, single, cephalic, ≥37 weeks, spontaneous labour) and III (multiparous, excluding previous CS, single, cephalic, ≥37 weeks, spontaneous labour) and IV (multiparous, excluding previous CS, single, cephalic, ≥37 weeks, induced or CS before labour) and to a minor extent in groups II (nulliparous, single, cephalic, ≥37 weeks, induced or CS before labour) and IV (multiparous, excluding previous CS, single, cephalic, ≥37 weeks, induced or CS before labour). The TGCS classification is useful for inter-hospital comparison of CS rates, but
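
The study compares crude with adjusted relative risks and uses the percentage variation between them to gauge confounding. The crude side of that comparison can be sketched as follows (the adjusted RRs come from Poisson regression models, which are not reproduced here):

```python
def crude_rr(cs_cases, deliveries, ref_cs_cases, ref_deliveries):
    """Crude relative risk of caesarean section in a hospital
    versus the reference category, before any risk adjustment."""
    return (cs_cases / deliveries) / (ref_cs_cases / ref_deliveries)

def pct_variation(crude, adjusted):
    """Percentage variation from crude to adjusted RR -- the
    study's gauge of how strongly covariates confound a hospital's
    apparent CS rate."""
    return 100.0 * (adjusted - crude) / crude

# A hospital with 30 CS in 100 deliveries vs a reference with 20/100;
# suppose adjustment for TGCS group pulls its RR down to 1.2:
rr = crude_rr(30, 100, 20, 100)
print(pct_variation(rr, 1.2))
```

A small percentage variation (as found overall for M1 vs M2) means the TGCS group alone captures most of the confounding; the larger within-group variations motivate the residual-confounding conclusion.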

  17. Large-scale block adjustment without use of ground control points based on the compensation of geometric calibration for ZY-3 images

    Science.gov (United States)

    Yang, Bo; Wang, Mi; Xu, Wen; Li, Deren; Gong, Jianya; Pi, Yingdong

    2017-12-01

    The potential of large-scale block adjustment (BA) without ground control points (GCPs) has long been a concern among photogrammetric researchers, as it would provide effective guidance for global mapping. However, significant problems with the accuracy and efficiency of this method remain to be solved. In this study, we analyzed the effects of geometric errors on BA, and then developed a step-wise BA method to conduct integrated processing of large-scale ZY-3 satellite images without GCPs. We first pre-processed the BA data, adopting a geometric calibration (GC) method based on the viewing-angle model to compensate for systematic errors, such that the BA input images were of good initial geometric quality. The second step was integrated BA without GCPs, in which a series of technical methods were used to solve bottleneck problems and ensure accuracy and efficiency. The BA model, based on virtual control points (VCPs), was constructed to address the rank deficiency problem caused by the lack of absolute constraints. We then developed a parallel matching strategy to improve the efficiency of tie point (TP) matching, and adopted a three-array data structure based on sparsity to relieve the storage and calculation burden of the high-order modified equation. Finally, we used the conjugate gradient method to improve the speed of solving the high-order equations. To evaluate the feasibility of the presented large-scale BA method, we conducted three experiments on real data collected by the ZY-3 satellite. The experimental results indicate that the presented method can effectively improve the geometric accuracy of ZY-3 satellite images. This study demonstrates the feasibility of large-scale mapping without GCPs.
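
The conjugate gradient method used for the high-order equations needs only a matrix-vector product, which is what makes it attractive for the sparse normal equations of a large block adjustment. A generic sketch of the solver (not the authors' implementation):

```python
def conjugate_gradient(matvec, b, tol=1e-10, max_iter=1000):
    """Solve A x = b for a symmetric positive-definite A, given only
    a function computing A @ v -- so the huge normal matrix never has
    to be formed or stored densely."""
    n = len(b)
    x = [0.0] * n
    r = list(b)                       # residual of the zero start: b - A*0
    p = r[:]
    rs = sum(ri * ri for ri in r)
    for _ in range(max_iter):
        Ap = matvec(p)
        alpha = rs / sum(pi * api for pi, api in zip(p, Ap))
        x = [xi + alpha * pi for xi, pi in zip(x, p)]
        r = [ri - alpha * api for ri, api in zip(r, Ap)]
        rs_new = sum(ri * ri for ri in r)
        if rs_new ** 0.5 < tol:
            break
        p = [ri + (rs_new / rs) * pi for ri, pi in zip(r, p)]
        rs = rs_new
    return x

# Small SPD test system: [[4, 1], [1, 3]] x = [1, 2]
A = [[4.0, 1.0], [1.0, 3.0]]
x = conjugate_gradient(
    lambda v: [sum(a * vi for a, vi in zip(row, v)) for row in A],
    [1.0, 2.0])
```

In the paper's setting, `matvec` would be backed by the sparse three-array data structure, so each iteration touches only the nonzero entries.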

  18. A Test of the Family Stress Model on Toddler-Aged Children's Adjustment among Hurricane Katrina Impacted and Nonimpacted Low-Income Families

    Science.gov (United States)

    Scaramella, Laura V.; Sohr-Preston, Sara L.; Callahan, Kristin L.; Mirabile, Scott P.

    2008-01-01

    Hurricane Katrina dramatically altered the level of social and environmental stressors for the residents of the New Orleans area. The Family Stress Model describes a process whereby felt financial strain undermines parents' mental health, the quality of family relationships, and child adjustment. Our study considered the extent to which the Family…

  19. Adjustments in the Almod 3W2 code models for reproducing the net load trip test in Angra I nuclear power plant

    International Nuclear Information System (INIS)

    Camargo, C.T.M.; Madeira, A.A.; Pontedeiro, A.C.; Dominguez, L.

    1986-09-01

    The recorded traces obtained from the net load trip test in the Angra I NPP yielded the opportunity to make fine adjustments in the ALMOD 3W2 code models. The changes are described and the results are compared against real plant data. (Author) [pt

  20. Do Substance Use, Psychosocial Adjustment, and Sexual Experiences Vary for Dating Violence Victims Based on Type of Violent Relationships?

    Science.gov (United States)

    Zweig, Janine M.; Yahner, Jennifer; Dank, Meredith; Lachman, Pamela

    2016-01-01

    Background: We examined whether substance use, psychosocial adjustment, and sexual experiences vary for teen dating violence victims by the type of violence in their relationships. We compared dating youth who reported no victimization in their relationships to those who reported being victims of intimate terrorism (dating violence involving one…

  1. A software tool for estimation of burden of infectious diseases in Europe using incidence-based disability adjusted life years

    NARCIS (Netherlands)

    Colzani, Edoardo; Cassini, Alessandro; Lewandowski, Daniel; Mangen, Marie Josee J.; Plass, Dietrich; McDonald, Scott A; van Lier, Alies; Haagsma, Juanita A.; Maringhini, Guido; Pini, Alessandro; Kramarz, Piotr; Kretzschmar, Mirjam E.

    2017-01-01

    The burden of disease framework facilitates the assessment of the health impact of diseases through the use of summary measures of population health such as Disability- Adjusted Life Years (DALYs). However, calculating, interpreting and communicating the results of studies using this methodology

  2. A software tool for estimation of burden of infectious diseases in Europe using incidence-based disability adjusted life years

    NARCIS (Netherlands)

    Colzani, E. (Edoardo); A. Cassini (Alessandro); D. Lewandowski (Daniel); M.J.J. Mangen; Plass, D. (Dietrich); S.A. McDonald (Scott); R.A.W. Van Lier (Rene A. W.); J.A. Haagsma (Juanita); Maringhini, G. (Guido); Pini, A. (Alessandro); P Kramarz (Piotr); M.E.E. Kretzschmar (Mirjam)

    2017-01-01

    textabstractThe burden of disease framework facilitates the assessment of the health impact of diseases through the use of summary measures of population health such as Disability- Adjusted Life Years (DALYs). However, calculating, interpreting and communicating the results of studies using this

  3. Growth and clinical variables in nitrogen-restricted piglets fed an adjusted essential amino acid mix: Effects using partially intact protein-based diets

    Science.gov (United States)

    Current recommendations for protein levels in infant formula ensure that growth matches or exceeds growth of breast-fed infants, but may provide a surplus of amino acids (AA). Recent studies in infants using AA-based formulas support specific adjustment of the essential AA (EAA) composition allowing...

  4. Model-based security testing

    OpenAIRE

    Schieferdecker, Ina; Großmann, Jürgen; Schneider, Martin

    2012-01-01

    Security testing aims at validating software system requirements related to security properties like confidentiality, integrity, authentication, authorization, availability, and non-repudiation. Although security testing techniques have been available for many years, there have been few approaches that allow for specification of test cases at a higher level of abstraction, for enabling guidance on test identification and specification as well as for automated test generation. Model-based security...

  5. A mathematical model for camera calibration based on straight lines

    Directory of Open Access Journals (Sweden)

    Antonio M. G. Tommaselli

    2005-12-01

    Full Text Available In order to facilitate the automation of the camera calibration process, a mathematical model using straight lines was developed, which is based on the equivalent planes mathematical model. Parameter estimation for the developed model is achieved by the Least Squares Method with Conditions and Observations. The same method of adjustment was used to implement camera calibration with bundles, which is based on points. Experiments using simulated and real data have shown that the developed model based on straight lines gives results comparable to the conventional method with points. Details concerning the mathematical development of the model and experiments with simulated and real data will be presented, and the results of both methods of camera calibration, with straight lines and with points, will be compared.
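
The Least Squares Method the paper relies on reduces, in its simplest unconditioned form, to solving the normal equations (AᵀA)x = Aᵀl. A toy stand-in illustrating that step on a line fit (not the paper's conditioned formulation with straight-line observations):

```python
def least_squares(A, l):
    """Ordinary least squares via the normal equations
    (A^T A) x = A^T l, solved by Gaussian elimination with
    partial pivoting -- fine for small toy design matrices."""
    m, n = len(A), len(A[0])
    N = [[sum(A[k][i] * A[k][j] for k in range(m)) for j in range(n)]
         for i in range(n)]
    t = [sum(A[k][i] * l[k] for k in range(m)) for i in range(n)]
    for i in range(n):                              # forward elimination
        piv = max(range(i, n), key=lambda r: abs(N[r][i]))
        N[i], N[piv] = N[piv], N[i]
        t[i], t[piv] = t[piv], t[i]
        for r in range(i + 1, n):
            f = N[r][i] / N[i][i]
            for c in range(i, n):
                N[r][c] -= f * N[i][c]
            t[r] -= f * t[i]
    x = [0.0] * n                                   # back substitution
    for i in range(n - 1, -1, -1):
        x[i] = (t[i] - sum(N[i][j] * x[j] for j in range(i + 1, n))) / N[i][i]
    return x

# Fit y = a*x + b through points lying exactly on y = 2x + 1:
pts = [(0.0, 1.0), (1.0, 3.0), (2.0, 5.0)]
sol = least_squares([[px, 1.0] for px, _ in pts], [py for _, py in pts])
print(sol)  # approximately [2.0, 1.0]
```

The paper's conditioned variant augments this with constraint equations relating observed line parameters to the unknowns, but the normal-equation core is the same.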

  6. Similar words analysis based on POS-CBOW language model

    Directory of Open Access Journals (Sweden)

    Dongru RUAN

    2015-10-01

    Full Text Available Similar-word analysis is one of the important topics in the field of natural language processing, with significant research and application value in text classification, machine translation and information recommendation. Focusing on the features of Sina Weibo's short texts, this paper presents a language model named POS-CBOW, a continuous bag-of-words language model with a filtering layer and a part-of-speech tagging layer. The proposed approach can adjust the word vectors' similarity according to cosine similarity and the word vectors' part-of-speech metrics. It can also filter the similar-word set on the basis of the statistical analysis model. The experimental results show that the similar-word analysis algorithm based on the proposed POS-CBOW language model outperforms that based on the traditional CBOW language model.
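
The adjustment described, modulating the cosine similarity of word vectors with part-of-speech information, can be sketched generically; the penalty factor below is a made-up placeholder, not the POS-CBOW model's actual weighting:

```python
import math

def cosine(u, v):
    """Cosine similarity between two word vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def pos_adjusted_similarity(u, v, pos_u, pos_v, penalty=0.5):
    """Down-weight similarity when the two words' part-of-speech
    tags disagree; the 0.5 penalty is an illustrative assumption."""
    sim = cosine(u, v)
    return sim if pos_u == pos_v else sim * penalty

# Identical vectors, but a noun/verb tag mismatch halves the score:
print(pos_adjusted_similarity([1.0, 0.0], [1.0, 0.0], "NN", "VB"))  # 0.5
```

A filtering layer of the kind the paper describes would then drop candidates whose adjusted score falls below a statistically chosen threshold.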

  7. Model-Based Security Testing

    Directory of Open Access Journals (Sweden)

    Ina Schieferdecker

    2012-02-01

    Full Text Available Security testing aims at validating software system requirements related to security properties like confidentiality, integrity, authentication, authorization, availability, and non-repudiation. Although security testing techniques have been available for many years, there have been few approaches that allow for specification of test cases at a higher level of abstraction, for enabling guidance on test identification and specification as well as for automated test generation. Model-based security testing (MBST is a relatively new field, especially dedicated to the systematic and efficient specification and documentation of security test objectives, security test cases and test suites, as well as to their automated or semi-automated generation. In particular, the combination of security modelling and test generation approaches is still a challenge in research and of high interest for industrial applications. MBST includes, e.g., security functional testing, model-based fuzzing, risk- and threat-oriented testing, and the usage of security test patterns. This paper provides a survey on MBST techniques and the related models as well as samples of new methods and tools that are under development in the European ITEA2 project DIAMONDS.

  8. Frequency band adjustment match filtering based on variable frequency GPR antennas pairing scheme for shallow subsurface investigations

    Science.gov (United States)

    Shaikh, Shahid Ali; Tian, Gang; Shi, Zhanjie; Zhao, Wenke; Junejo, S. A.

    2018-02-01

    Ground penetrating radar (GPR) is an efficient tool for subsurface geophysical investigations, particularly at shallow depths. Its non-destructiveness, cost efficiency, and data reliability make it an ideal tool for shallow subsurface investigations. In the present study, variations in the central frequency of transmitting and receiving GPR antennas (Tx-Rx) were analyzed, and frequency band adjustment match filters were fabricated and tested accordingly. Normally, the frequencies of both antennas are the same, whereas in this study we experimentally varied the Tx-Rx frequencies and evaluated the response. Instead of the normally adopted three pairs, a total of nine Tx-Rx pairs were formed from 50 MHz, 100 MHz, and 200 MHz antennas. The experimental data were acquired at the designated near-surface geophysics test site of Zhejiang University, Hangzhou, China. Impulse response analysis of the data acquired with conventional as well as varied Tx-Rx pairs revealed different swap effects. The frequency band and exploration depth are influenced by the transmitting frequency rather than the receiving frequency. The receiving frequency instead affects the resolution; more noise was observed when a high-frequency transmitter was combined with a low-frequency receiver. On the basis of these results we fabricated two frequency band adjustment match filters: the constant frequency transmitting (CFT) and the variable frequency transmitting (VFT) filters. By design, the lower- and higher-frequency components were matched and then incorporated with the intermediate one. This study therefore indicates that a Tx-Rx combination of low-frequency transmitting with high-frequency receiving is the better choice.
Moreover, both filters provide a better radargram than the raw one; the result of the VFT frequency band adjustment filter is

  9. Individual fluorouracil dose adjustment in FOLFOX based on pharmacokinetic follow-up compared with conventional body-area-surface dosing: a phase II, proof-of-concept study.

    Science.gov (United States)

    Capitain, Olivier; Asevoaia, Andreaa; Boisdron-Celle, Michele; Poirier, Anne-Lise; Morel, Alain; Gamelin, Erick

    2012-12-01

    To compare the efficacy and safety of pharmacokinetically (PK) guided fluorouracil (5-FU) dose adjustment vs. standard body-surface-area (BSA) dosing in a FOLFOX (folinic acid, fluorouracil, oxaliplatin) regimen in metastatic colorectal cancer (mCRC). A total of 118 patients with mCRC were administered individually determined, PK-adjusted 5-FU in first-line FOLFOX chemotherapy. The comparison arm consisted of 39 patients treated with FOLFOX with 5-FU dosed by BSA. In the PK-adjusted arm, 5-FU was monitored during infusion, and the dose for the next cycle was based on a dose-adjustment chart to achieve a therapeutic area-under-the-curve range (5-FU(ODPM Protocol)). The objective response rate was 69.7% in the PK-adjusted arm, and median overall survival and median progression-free survival were 28 and 16 months, respectively. In the BSA-dosed arm, the objective response rate was 46%, and overall survival and progression-free survival were 22 and 10 months, respectively. Grade 3/4 toxicity was 1.7% for diarrhea, 0.8% for mucositis, and 18% for neutropenia in the dose-monitored group, versus 12%, 15%, and 25%, respectively, in the BSA group. The efficacy and tolerability of PK-adjusted FOLFOX dosing were much higher than with traditional BSA dosing, in agreement with previous reports of PK-adjusted 5-FU monotherapy dosing. These results suggest that PK-guided 5-FU therapy offers added value in combination therapy for mCRC. Copyright © 2012 Elsevier Inc. All rights reserved.
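The published ODPM dose-adjustment chart is not reproduced in the abstract. As a rough sketch of the underlying principle, a proportional correction toward a target AUC window can be written as follows; the window bounds and the per-cycle step cap below are illustrative assumptions, not the protocol's values:

```python
def adjust_dose(current_dose, measured_auc, target_low=20.0, target_high=30.0,
                max_step=0.3):
    """Proportionally adjust the next 5-FU dose (mg) toward a target AUC window.

    `target_low`/`target_high` (mg·h/L) and `max_step` (fractional change cap)
    are illustrative placeholders, not values from the ODPM chart.
    """
    if target_low <= measured_auc <= target_high:
        return current_dose  # within the therapeutic window: no change
    target_mid = (target_low + target_high) / 2.0
    factor = target_mid / measured_auc
    # limit the per-cycle change to +/- max_step (here 30 %)
    factor = max(1.0 - max_step, min(1.0 + max_step, factor))
    return round(current_dose * factor)
```

For example, a cycle with an AUC far below the window is increased by at most the capped step rather than by the full proportional factor.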

  10. Parental Perceptions of Family Adjustment in Childhood Developmental Disabilities

    Science.gov (United States)

    Thompson, Sandra; Hiebert-Murphy, Diane; Trute, Barry

    2013-01-01

    Based on the adjustment phase of the double ABC-X model of family stress (McCubbin and Patterson, 1983), this study examined the impact of parenting stress, positive appraisal of the impact of child disability on the family, and parental self-esteem on parental perceptions of family adjustment in families of children with disabilities. For mothers,…

  11. Pricing Mining Concessions Based on Combined Multinomial Pricing Model

    Directory of Open Access Journals (Sweden)

    Chang Xiao

    2017-01-01

    Full Text Available A combined multinomial pricing model is proposed for pricing mining concessions in which the annualized volatility of the price of mineral products follows a multinomial distribution. First, a combined multinomial pricing model is proposed that consists of binomial pricing models calculated according to different volatility values. Second, a method is provided to calculate the annualized volatility and its distribution. Third, the value of convenience yields is calculated based on the relationship between the futures price and the spot price, and the notion of convenience yields is used to adjust the model. Based on an empirical study of a Chinese copper mine concession, we verify that our model is easy to use and performs better than a constant-volatility model when the annualized volatility of the mineral product's price is changing.
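The weighting scheme described above can be sketched as follows, using a standard Cox-Ross-Rubinstein binomial lattice for each volatility value and averaging the lattice prices over an assumed discrete volatility distribution; the concession-specific payoff and the convenience-yield adjustment are omitted:

```python
import math

def crr_price(S, K, r, sigma, T, steps=200):
    """European call price by the Cox-Ross-Rubinstein binomial lattice."""
    dt = T / steps
    u = math.exp(sigma * math.sqrt(dt))   # up factor
    d = 1.0 / u                           # down factor
    p = (math.exp(r * dt) - d) / (u - d)  # risk-neutral up probability
    disc = math.exp(-r * dt)
    # terminal payoffs
    values = [max(S * u**j * d**(steps - j) - K, 0.0) for j in range(steps + 1)]
    # backward induction through the lattice
    for _ in range(steps):
        values = [disc * (p * values[j + 1] + (1 - p) * values[j])
                  for j in range(len(values) - 1)]
    return values[0]

def combined_multinomial_price(S, K, r, T, vol_dist):
    """Probability-weighted CRR price over a discrete volatility distribution.

    `vol_dist` maps volatility -> probability (probabilities sum to 1).
    """
    return sum(prob * crr_price(S, K, r, sigma, T)
               for sigma, prob in vol_dist.items())
```

The combined price always lies between the single-volatility prices at the lowest and highest volatility values, since it is their convex combination.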

  12. Adjusting the specificity of an engine map based on the sensitivity of an engine control parameter relative to a performance variable

    Science.gov (United States)

    Jiang, Li; Lee, Donghoon; Yilmaz, Hakan; Stefanopoulou, Anna

    2014-10-28

    Methods and systems for engine control optimization are provided. A first and a second operating condition of a vehicle engine are detected. An initial value is identified for a first and a second engine control parameter corresponding to a combination of the detected operating conditions according to a first and a second engine map look-up table. The initial values for the engine control parameters are adjusted based on a detected engine performance variable to cause the engine performance variable to approach a target value. A first and a second sensitivity of the engine performance variable are determined in response to changes in the engine control parameters. The first engine map look-up table is adjusted when the first sensitivity is greater than a threshold, and the second engine map look-up table is adjusted when the second sensitivity is greater than a threshold.
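A minimal sketch of the sensitivity-gated update described above; the function names, the Newton-style correction, and the tuning values are illustrative assumptions rather than the patent's actual method:

```python
def update_maps(map1, map2, idx, perf_error, sens1, sens2,
                threshold=0.1, gain=0.5):
    """Adjust each engine-map cell only if its sensitivity exceeds a threshold.

    `perf_error` is (measured - target) performance variable; `sens1`/`sens2`
    are d(performance)/d(parameter) for the two maps. All tuning values here
    are illustrative.
    """
    i, j = idx
    if abs(sens1) > threshold:
        map1[i][j] -= gain * perf_error / sens1  # Newton-style correction
    if abs(sens2) > threshold:
        map2[i][j] -= gain * perf_error / sens2
    return map1, map2
```

Gating on sensitivity keeps a map untouched when its parameter barely affects the performance variable, so noise does not get amplified into that table.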

  13. Lithosphere and upper-mantle structure of the southern Baltic Sea estimated from modelling relative sea-level data with glacial isostatic adjustment

    Science.gov (United States)

    Steffen, H.; Kaufmann, G.; Lampe, R.

    2014-06-01

    During the Last Glacial Maximum, a large ice sheet covered Scandinavia and depressed the earth's surface by several hundred metres. In northern central Europe, mass redistribution in the upper mantle led to the development of a peripheral bulge, which has been subsiding since the beginning of deglaciation due to the viscoelastic behaviour of the mantle. We analyse relative sea-level (RSL) data of southern Sweden, Denmark, Germany, Poland and Lithuania to determine the lithospheric thickness and radial mantle viscosity structure for distinct regional RSL subsets. We load a 1-D Maxwell-viscoelastic earth model with a global ice-load history model of the last glaciation. We test two commonly used ice histories, RSES from the Australian National University and ICE-5G from the University of Toronto. Our results indicate that the lithospheric thickness varies, depending on the ice model used, between 60 and 160 km. The lowest values are found in the Oslo Graben area and at the western German Baltic Sea coast. In between, thickness increases by at least 30 km, tracing the Ringkøbing-Fyn High. In Poland and Lithuania, lithospheric thickness reaches up to 160 km. However, the latter values are not well constrained, as the confidence regions are large. Upper-mantle viscosity is found to bracket 2-7 × 10^20 Pa s when using ICE-5G. Employing RSES, much higher values of 2 × 10^21 Pa s are obtained for the southern Baltic Sea. Further investigations should evaluate whether this ice-model version and/or the RSL data need revision. We confirm that the lower-mantle viscosity in Fennoscandia can only be poorly resolved. The lithospheric structure inferred from RSES partly supports structural features of regional and global lithosphere models based on thermal or seismological data. While there is agreement in eastern Europe and southwest Sweden, the structure in an area from south of Norway to northern Germany shows large discrepancies for two of the tested lithosphere models. The lithospheric

  14. Human physiologically based pharmacokinetic model for propofol

    Directory of Open Access Journals (Sweden)

    Schnider Thomas W

    2005-04-01

    Full Text Available Abstract Background Propofol is widely used for both short-term anesthesia and long-term sedation. It has unusual pharmacokinetics because of its high lipid solubility. The standard approach to describing the pharmacokinetics is a multi-compartmental model. This paper presents the first detailed human physiologically based pharmacokinetic (PBPK) model for propofol. Methods PKQuest, a freely distributed software routine http://www.pkquest.com, was used for all the calculations. The "standard human" PBPK parameters developed in previous applications are used. It is assumed that blood and tissue binding is determined by simple partition into the tissue lipid, which is characterized by two previously determined sets of parameters: (1) the value of the propofol oil/water partition coefficient; (2) the lipid fraction in the blood and tissues. The model was fit to the individual experimental data of Schnider et al. (Anesthesiology, 1998; 88:1170), in which an initial bolus dose was followed 60 minutes later by a one-hour constant infusion. Results The PBPK model provides a good description of the experimental data over a large range of input dosage, subject age and fat fraction. Only one adjustable parameter (the liver clearance) is required to describe the constant-infusion phase for each individual subject. In order to fit the bolus-injection phase, for 10 of the 24 subjects it was necessary to assume that a fraction of the bolus dose was sequestered and then slowly released from the lungs (characterized by two additional parameters). The average weighted residual error (WRE) of the PBPK model fit to both the bolus and infusion phases was 15%, similar to the WRE for just the constant-infusion phase obtained by Schnider et al. using a 6-parameter NONMEM compartmental model. Conclusion A PBPK model using standard human parameters and a simple description of tissue binding provides a good description of human propofol kinetics.
The major advantage of a
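For contrast with the PBPK approach, the same dosing scheme (bolus at t = 0 followed by a later constant infusion) can be written as a single-compartment model in a few lines; the parameters and units here are illustrative, not fitted values from the paper:

```python
import math

def one_compartment_conc(t, dose_bolus, infusion_rate, t_inf_start, t_inf_end,
                         V, CL):
    """Plasma concentration from a bolus at t=0 plus a later constant infusion.

    Single-compartment sketch (volume V, clearance CL, illustrative units);
    far simpler than the paper's PBPK model, shown only to contrast the two
    approaches.
    """
    k = CL / V  # first-order elimination rate constant
    c = (dose_bolus / V) * math.exp(-k * t)  # bolus contribution
    if t > t_inf_start:
        t1 = min(t, t_inf_end) - t_inf_start  # elapsed infusion time
        # infusion contribution by superposition, decaying after the stop
        c += (infusion_rate / CL) * (1 - math.exp(-k * t1)) * \
             math.exp(-k * max(0.0, t - t_inf_end))
    return c
```

The concentration rises toward `infusion_rate / CL` during the infusion and decays exponentially after it stops; the multi-compartment and PBPK descriptions refine exactly this curve.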

  15. Crowdsourcing Based 3d Modeling

    Science.gov (United States)

    Somogyi, A.; Barsi, A.; Molnar, B.; Lovas, T.

    2016-06-01

    Web-based photo albums that support organizing and viewing the users' images are widely used. These services provide a convenient solution for storing, editing and sharing images. In many cases, the users attach geotags to the images in order to enable using them e.g. in location based applications on social networks. Our paper discusses a procedure that collects open access images from a site frequently visited by tourists. Geotagged pictures showing the image of a sight or tourist attraction are selected and processed in photogrammetric processing software that produces the 3D model of the captured object. For the particular investigation we selected three attractions in Budapest. To assess the geometrical accuracy, we used laser scanner and DSLR as well as smart phone photography to derive reference values to enable verifying the spatial model obtained from the web-album images. The investigation shows how detailed and accurate models could be derived applying photogrammetric processing software, simply by using images of the community, without visiting the site.

  16. CROWDSOURCING BASED 3D MODELING

    Directory of Open Access Journals (Sweden)

    A. Somogyi

    2016-06-01

    Full Text Available Web-based photo albums that support organizing and viewing the users’ images are widely used. These services provide a convenient solution for storing, editing and sharing images. In many cases, the users attach geotags to the images in order to enable using them e.g. in location based applications on social networks. Our paper discusses a procedure that collects open access images from a site frequently visited by tourists. Geotagged pictures showing the image of a sight or tourist attraction are selected and processed in photogrammetric processing software that produces the 3D model of the captured object. For the particular investigation we selected three attractions in Budapest. To assess the geometrical accuracy, we used laser scanner and DSLR as well as smart phone photography to derive reference values to enable verifying the spatial model obtained from the web-album images. The investigation shows how detailed and accurate models could be derived applying photogrammetric processing software, simply by using images of the community, without visiting the site.

  17. A Software Tool for Estimation of Burden of Infectious Diseases in Europe Using Incidence-Based Disability Adjusted Life Years

    OpenAIRE

    Colzani, Edoardo; Cassini, Alessandro; Lewandowski, Daniel; Mangen, Marie-Josee J.; Plass, Dietrich; McDonald, Scott A.; van Lier, Alies; Haagsma, Juanita A.; Maringhini, Guido; Pini, Alessandro; Kramarz, Piotr; Kretzschmar, Mirjam E.

    2017-01-01

    The burden of disease framework facilitates the assessment of the health impact of diseases through the use of summary measures of population health such as Disability-Adjusted Life Years (DALYs). However, calculating, interpreting and communicating the results of studies using this methodology poses a challenge. The aim of the Burden of Communicable Disease in Europe (BCoDE) project is to summarize the impact of communicable disease in the European Union and European Economic Ar...
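The DALY measure combines years of life lost to premature mortality (YLL) and years lived with disability (YLD); in its simplest incidence-based form, ignoring age weighting and discounting, it reduces to:

```python
def daly(deaths, life_expectancy_at_death, incident_cases, disability_weight,
         duration_years):
    """Incidence-based DALY = YLL + YLD (no age weighting or discounting).

    YLL = deaths x remaining life expectancy at the age of death;
    YLD = incident cases x disability weight x average duration (years).
    """
    yll = deaths * life_expectancy_at_death
    yld = incident_cases * disability_weight * duration_years
    return yll + yld
```

For example, 10 deaths at a mean remaining life expectancy of 30 years plus 1000 cases with disability weight 0.2 lasting half a year yield 300 + 100 = 400 DALYs.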

  18. A high PSRR Class-D audio amplifier IC based on a self-adjusting voltage reference

    OpenAIRE

    Huffenus , Alexandre; Pillonnet , Gaël; Abouchi , Nacer; Goutti , Frédéric; Rabary , Vincent; Cittadini , Robert

    2010-01-01

    International audience; In a wide range of applications, audio amplifiers require a large Power Supply Rejection Ratio (PSRR) that the current Class-D architecture cannot reach. This paper proposes a self-adjusting internal voltage reference scheme that sets the bias voltages of the amplifier without sacrificing output dynamics. This solution relaxes the constraints on gain and feedback resistor matching, which were previously the limiting factor for the PSRR. Theory of operation, design and IC ...

  19. Do Substance Use, Psychosocial Adjustment, and Sexual Experiences Vary for Dating Violence Victims Based on Type of Violent Relationships?

    Science.gov (United States)

    Zweig, Janine M; Yahner, Jennifer; Dank, Meredith; Lachman, Pamela

    2016-12-01

    We examined whether substance use, psychosocial adjustment, and sexual experiences vary for teen dating violence victims by the type of violence in their relationships. We compared dating youth who reported no victimization in their relationships to those who reported being victims of intimate terrorism (dating violence involving one physically violent and controlling perpetrator) and those who reported experiencing situational couple violence (physical dating violence absent the dynamics of power and control). This was a cross-sectional survey of 3745 dating youth from 10 middle and high schools in the northeastern United States, one third of whom reported physical dating violence. In general, teens experiencing no dating violence reported less frequent substance use, higher psychosocial adjustment, and less sexual activity than victims of either intimate terrorism or situational couple violence. In addition, victims of intimate terrorism reported higher levels of depression, anxiety, and anger/hostility compared to situational couple violence victims; they also were more likely to report having sex, and earlier sexual initiation. Youth who experienced physical violence in their dating relationships, coupled with controlling behaviors from their partner/perpetrator, reported the most psychosocial adjustment issues and the earliest sexual activity. © 2016, American School Health Association.

  20. Heat transfer simulation and retort program adjustment for thermal processing of wheat based Haleem in semi-rigid aluminum containers.

    Science.gov (United States)

    Vatankhah, Hamed; Zamindar, Nafiseh; Shahedi Baghekhandan, Mohammad

    2015-10-01

    A mixed computational strategy was used to simulate and optimize the thermal processing of Haleem, an ancient eastern food, in semi-rigid aluminum containers. Average temperature values from the experiments showed no significant difference (α = 0.05) from the predicted temperatures at the same positions. According to the model, the slowest-heating zone was located at the geometric center of the container, where F0 was estimated to be 23.8 min. A 19-min decrease in the holding time of the treatment was estimated to optimize the heating operation, since the preferred F0 of some starch- or meat-based fluid foods is about 4.8-7.5 min.
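The F0 value cited above is the cumulative lethality of the process, conventionally the time integral of the lethal rate 10^((T - 121.1)/z) with z = 10 °C. A sketch over sampled time-temperature data, using trapezoidal integration:

```python
def f0(time_temp, z=10.0, t_ref=121.1):
    """Cumulative lethality F0 (min) from (minute, temperature in deg C)
    samples, by trapezoidal integration of the lethal rate 10**((T - Tref)/z).
    """
    total = 0.0
    for (t1, T1), (t2, T2) in zip(time_temp, time_temp[1:]):
        L1 = 10 ** ((T1 - t_ref) / z)
        L2 = 10 ** ((T2 - t_ref) / z)
        total += 0.5 * (L1 + L2) * (t2 - t1)  # trapezoid over [t1, t2]
    return total
```

Holding the cold spot at exactly 121.1 °C for 10 min gives F0 = 10 min, while time spent near 100 °C contributes almost nothing, which is why the cold-spot F0 of 23.8 min leaves room for the 19-min holding-time reduction.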

  1. A Depth-Adjustment Deployment Algorithm Based on Two-Dimensional Convex Hull and Spanning Tree for Underwater Wireless Sensor Networks

    Directory of Open Access Journals (Sweden)

    Peng Jiang

    2016-07-01

    Full Text Available Most existing node depth-adjustment deployment algorithms for underwater wireless sensor networks (UWSNs) consider only how to optimize the network coverage and connectivity rate. However, they do not address full network connectivity, even though optimization of network energy efficiency and network reliability are vital topics for UWSN deployment. Therefore, in this study, a depth-adjustment deployment algorithm based on a two-dimensional (2D) convex hull and spanning tree (NDACS) for UWSNs is proposed. First, the algorithm uses the geometric characteristics of a 2D convex hull and empty circle to find the optimal location of a sleep node and activate it, minimizing the network coverage overlaps in the 2D plane and then increasing the coverage rate until the first-layer coverage threshold is reached. Second, the sink node acts as the root node of all active nodes on the 2D convex hull and gradually forms a small spanning tree. Finally, a depth-adjustment strategy based on time markers is used to achieve the three-dimensional overall network deployment. Compared with existing depth-adjustment deployment algorithms, the simulation results show that the NDACS algorithm maintains full network connectivity with a high network coverage rate, as well as an improved network average node degree, thus increasing network reliability.
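The 2D convex hull at the heart of NDACS can be computed with a standard algorithm such as Andrew's monotone chain; a minimal sketch (the empty-circle test and the UWSN-specific deployment logic are not shown):

```python
def convex_hull(points):
    """2D convex hull by Andrew's monotone chain.

    `points` is a list of (x, y) tuples; returns the hull vertices in
    counter-clockwise order, collinear points excluded.
    """
    pts = sorted(set(points))
    if len(pts) <= 2:
        return pts

    def cross(o, a, b):
        # z-component of (a - o) x (b - o); > 0 means a left turn
        return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

    lower, upper = [], []
    for p in pts:                       # build lower hull left to right
        while len(lower) >= 2 and cross(lower[-2], lower[-1], p) <= 0:
            lower.pop()
        lower.append(p)
    for p in reversed(pts):             # build upper hull right to left
        while len(upper) >= 2 and cross(upper[-2], upper[-1], p) <= 0:
            upper.pop()
        upper.append(p)
    # last point of each half is the first point of the other half
    return lower[:-1] + upper[:-1]
```

In the NDACS setting, sensor nodes whose planar positions fall strictly inside the hull are candidates for sleep, while hull nodes anchor the spanning tree rooted at the sink.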

  2. A Computational Tool for Testing Dose-related Trend Using an Age-adjusted Bootstrap-based Poly-k Test

    Directory of Open Access Journals (Sweden)

    Hojin Moon

    2006-08-01

    Full Text Available A computational tool is presented for testing for a dose-related trend and/or a pairwise difference in the incidence of an occult tumor via an age-adjusted bootstrap-based poly-k test and the original poly-k test. The poly-k test (Bailer and Portier, 1988) is a survival-adjusted Cochran-Armitage test, which achieves robustness to the effects of differential mortality across dose groups. The original poly-k test statistic is asymptotically standard normal under the null hypothesis. However, the asymptotic normality is not valid if there is a deviation from the tumor-onset distribution assumed in this test. Our age-adjusted bootstrap-based poly-k test assesses the significance of the assumed asymptotically normal test and investigates the empirical distribution of the original poly-k test statistic using an age-adjusted bootstrap method. A tumor of interest is an occult tumor for which the time to onset is not directly observable. Since most animal carcinogenicity studies are designed with a single terminal sacrifice, the present tool is applicable to rodent tumorigenicity assays of that design. The tool takes its input from a user screen and reports the testing results back through the user interface. The computational tool is implemented in C/C++ and is applied to a real data set as an example. It enables the FDA and the pharmaceutical industry to implement a statistical analysis of tumorigenicity data from animal bioassays via our age-adjusted bootstrap-based poly-k test and the original poly-k test, which has been adopted by the National Toxicology Program as its standard statistical test.
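The poly-k adjustment can be sketched as follows: an animal that dies without the tumor at time t contributes only a fractional denominator weight (t/t_max)^k, while tumor-bearing animals count fully, and a Cochran-Armitage-style trend statistic is then computed on the adjusted rates. The closed-form variance below is a simplification for illustration; the tool described here assesses significance via an age-adjusted bootstrap instead:

```python
import math

def poly_k_adjusted(groups, k=3.0):
    """Poly-k adjusted tumor counts and denominators.

    `groups` maps dose -> list of (death_time, has_tumor) per animal.
    Tumor-bearing animals get weight 1; others weight (t / t_max)**k.
    Returns dose -> (tumor_count, adjusted_n).
    """
    t_max = max(t for animals in groups.values() for t, _ in animals)
    rates = {}
    for dose, animals in groups.items():
        n_adj = sum(1.0 if tumor else (t / t_max) ** k for t, tumor in animals)
        x = sum(1 for _, tumor in animals if tumor)
        rates[dose] = (x, n_adj)
    return rates

def trend_statistic(rates):
    """Cochran-Armitage-style trend z-score on the adjusted denominators
    (simplified pooled-variance form, for illustration only)."""
    doses = sorted(rates)
    X = sum(rates[d][0] for d in doses)          # total tumors
    N = sum(rates[d][1] for d in doses)          # total adjusted animals
    p = X / N                                    # pooled adjusted rate
    num = sum(d * (rates[d][0] - rates[d][1] * p) for d in doses)
    var = p * (1 - p) * (sum(d * d * rates[d][1] for d in doses)
                         - sum(d * rates[d][1] for d in doses) ** 2 / N)
    return num / math.sqrt(var)
```

Down-weighting early deaths without tumor keeps a high-dose group from looking artificially tumor-free merely because its animals died before tumors could develop.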

  3. The model of children's social adjustment under the gender-roles absence in single-parent families.

    Science.gov (United States)

    Chen, I-Jun; Zhang, Hailun; Wei, Bingsi; Guo, Zeyao

    2018-01-14

    This study aimed to evaluate the effects of single parents' gender-role types and child-rearing gender-role attitudes, as well as their children's gender-role traits and family socio-economic status, on social adjustment. We recruited 458 pairs of single parents and their children aged 8-18 by purposive sampling. The research tools included the Family Socio-economic Status Questionnaire, Sex Role Scales, Parental Child-rearing Gender-role Attitude Scale and Social Adjustment Scale. The results indicated that: (a) single mothers' and their daughters' feminine traits were both higher than their masculine traits, and sons' masculine traits were higher than their feminine traits; the predominant gender-role type of single parents and their children was androgyny; children's gender-role types differed significantly depending on the rearing parent, and the proportion of girls with masculine traits raised by single fathers was significantly higher than of those raised by single mothers; (b) family socio-economic status and single parents' gender-role types positively influenced parental child-rearing gender-role attitudes, which in turn influenced the children's gender traits and further affected children's social adjustment. © 2018 International Union of Psychological Science.

  4. Bundle Adjustment-Based Stability Analysis Method with a Case Study of a Dual Fluoroscopy Imaging System

    Science.gov (United States)

    Al-Durgham, K.; Lichti, D. D.; Detchev, I.; Kuntze, G.; Ronsky, J. L.

    2018-05-01

    A fundamental task in photogrammetry is the temporal stability analysis of a camera's or imaging system's calibration parameters. This is essential to validate the repeatability of the parameter estimation, to detect any behavioural changes in the camera/imaging system and to ensure precise photogrammetric products. Many stability analysis methods exist in the photogrammetric literature; each has different methodological bases, advantages and disadvantages. This paper presents a simple and rigorous stability analysis method that can be straightforwardly implemented for a single camera or an imaging system with multiple cameras. The basic collinearity model is used to capture differences between two calibration datasets and to establish the stability analysis methodology. Geometric simulation is used as a tool to derive image and object space scenarios. Experiments were performed on real calibration datasets from a dual fluoroscopy (DF; X-ray-based) imaging system. The calibration data consisted of hundreds of images and thousands of image observations from six temporal points over a two-day period for a precise evaluation of the DF system stability. The stability of the DF system was found to be within a range of 0.01 to 0.66 mm in terms of 3D-coordinate root-mean-square error (RMSE) for the single-camera analysis, and 0.07 to 0.19 mm for the dual-camera analysis. To the best of the authors' knowledge, this work is the first to address the topic of DF stability analysis.
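The 3D-coordinate RMSE used above as the stability measure can be illustrated by comparing matched object-space coordinates reconstructed from two calibration epochs; this is a toy proxy for the metric, not the paper's full collinearity-based procedure:

```python
import math

def rmse_3d(coords_a, coords_b):
    """Root-mean-square error between matched 3D point sets (same order)
    from two calibration epochs."""
    assert len(coords_a) == len(coords_b) and coords_a
    sq = sum((ax - bx) ** 2 + (ay - by) ** 2 + (az - bz) ** 2
             for (ax, ay, az), (bx, by, bz) in zip(coords_a, coords_b))
    return math.sqrt(sq / len(coords_a))
```

An RMSE near the measurement noise floor between epochs indicates stable calibration parameters; a growing RMSE flags a behavioural change in the imaging system.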

  5. Effectiveness of Forgiveness Therapy Based on Islamic Viewpoint on Marital Adjustment and Tendency to Forgive in the Women Afflicted by Infidelity

    Directory of Open Access Journals (Sweden)

    Fariba Kiani

    2016-12-01

    Full Text Available Background and Objectives: Orienting its approach on the Islamic viewpoint (Quran and Hadith), this study aimed to investigate the effectiveness of forgiveness therapy on the inclination to forgive and marital adjustment in affected women who referred to counseling centers in Tehran. Methods: This was a quasi-experimental study with a pretest-posttest design and a control group. The statistical population comprised women who had suffered infidelity by their husbands and referred to counseling centers of Tehran in the summer of 2015. Thirty participants were selected through purposive and convenience sampling and assigned to experimental and control groups. After the pretest, which applied the Spanier Dyadic Adjustment Scale and the forgiveness scale of Ray et al., members of the experimental group attended nine 90-minute weekly sessions of forgiveness therapy based on the Islamic viewpoint. Finally, the posttest was conducted on both groups with the same tools. Results: The results of multivariate analysis of covariance showed that forgiveness therapy based on the Islamic viewpoint has a significant effect on increasing willingness to forgive and marital adjustment in women. Conclusion: The results showed that forgiveness therapy based on Islamic viewpoints could be applied in the design of therapeutic interventions.

  6. Issues in practical model-based diagnosis

    NARCIS (Netherlands)

    Bakker, R.R.; Bakker, R.R.; van den Bempt, P.C.A.; van den Bempt, P.C.A.; Mars, Nicolaas; Out, D.-J.; Out, D.J.; van Soest, D.C.; van Soes, D.C.

    1993-01-01

    The model-based diagnosis project at the University of Twente has been directed at improving the practical usefulness of model-based diagnosis. In cooperation with industrial partners, the research addressed the modeling problem and the efficiency problem in model-based reasoning. Main results of

  7. Modeling and identification for the adjustable control of generation processes; Modelado e identificacion para el control autoajustable de procesos de generacion

    Energy Technology Data Exchange (ETDEWEB)

    Ricano Castillo, Juan Manuel; Palomares Gonzalez, Daniel [Instituto de Investigaciones Electricas, Cuernavaca (Mexico)

    1990-12-31

    The recursive least-squares technique is employed to obtain a multivariable autoregressive moving-average model, needed for the design of a self-adjusting multivariable controller. This article describes the technique employed and the results obtained, with the characterization of the model structure and the parametric estimation. The curves of the speed of convergence toward the parameters' numerical values are presented.

  8. Modeling and identification for the adjustable control of generation processes; Modelado e identificacion para el control autoajustable de procesos de generacion

    Energy Technology Data Exchange (ETDEWEB)

    Ricano Castillo, Juan Manuel; Palomares Gonzalez, Daniel [Instituto de Investigaciones Electricas, Cuernavaca (Mexico)

    1989-12-31

    The recursive least-squares technique is employed to obtain a multivariable autoregressive moving-average model, needed for the design of a self-adjusting multivariable controller. This article describes the technique employed and the results obtained, with the characterization of the model structure and the parametric estimation. The curves of the speed of convergence toward the parameters' numerical values are presented.

  9. A complete generalized adjustment criterion

    NARCIS (Netherlands)

    Perković, Emilija; Textor, Johannes; Kalisch, Markus; Maathuis, Marloes H.

    2015-01-01

    Covariate adjustment is a widely used approach to estimate total causal effects from observational data. Several graphical criteria have been developed in recent years to identify valid covariates for adjustment from graphical causal models. These criteria can handle multiple causes, latent

  10. MODELING OF THE HEAT PUMP STATION ADJUSTABLE LOOP OF AN INTERMEDIATE HEAT-TRANSFER AGENT (Part I

    Directory of Open Access Journals (Sweden)

    Sit B.

    2009-08-01

    Full Text Available This paper examines the equations of dynamics and statics of the adjustable intermediate loop of a carbon-dioxide heat pump station, which is part of a combined heat supply system. Control of the thermal capacity transferred from the low-potential heat source is realized by changing the circulation speed of the liquid in the loop and by changing the area of the heat-transfer surface, both in the evaporator and in the intermediate heat exchanger, depending on operating parameters such as external air temperature and wind speed.

  11. Suitability of a three-dimensional model to measure empathy and its relationship with social and normative adjustment in Spanish adolescents: a cross-sectional study.

    Science.gov (United States)

    Herrera-López, Mauricio; Gómez-Ortiz, Olga; Ortega-Ruiz, Rosario; Jolliffe, Darrick; Romera, Eva M

    2017-09-25

    (1) To examine the psychometric properties of the Basic Empathy Scale (BES) with Spanish adolescents, comparing a two- and a three-dimensional structure; (2) to analyse the relationship between three-dimensional empathy and social and normative adjustment in school. A cross-sectional, ex post facto retrospective study. Confirmatory factor analysis, multifactorial invariance analysis and structural equation models were used. Participants were 747 students (51.3% girls) from Cordoba, Spain, aged 12-17 years (M=13.8; SD=1.21). The original two-dimensional structure was confirmed (cognitive empathy, affective empathy), but a three-dimensional structure showed better psychometric properties, highlighted by the good fit found in confirmatory factor analysis and adequate internal consistency values, measured with Cronbach's alpha and McDonald's omega. Composite reliability and average variance extracted showed better indices for a three-factor model. The research also showed evidence of measurement invariance across gender. All factors of the final three-dimensional BES model were directly and significantly associated with social and normative adjustment, with cognitive empathy most strongly related. This research supports advances in neuroscience, developmental psychology and psychopathology through a three-dimensional version of the BES, which represents an improvement on the original two-factor model. The organisation of empathy into three factors benefits the understanding of social and normative adjustment in adolescents, in which emotional disengagement favours adjusted peer relationships. Psychoeducational interventions aimed at improving the quality of social life in schools should target these components of empathy. © Article author(s) (or their employer(s) unless otherwise stated in the text of the article) 2017. All rights reserved. No commercial use is permitted unless otherwise expressly granted.

  12. An Evaluation of the Adjusted DeLone and McLean Model of Information Systems Success; the case of financial information system in Ferdowsi University of Mashhad

    OpenAIRE

    Mohammad Lagzian; Shamsoddin Nazemi; Fatemeh Dadmand

    2012-01-01

    Assessing the success of information systems within organizations has been identified as one of the most critical subjects of information system management in both public and private organizations. It is therefore important to measure the success of information systems from the user's perspective. The purpose of the current study was to evaluate the degree of information system success using the adjusted DeLone and McLean model in the field of financial information systems (FIS) in an Iranian Univ...

  13. The Impact of AUC-Based Monitoring on Pharmacist-Directed Vancomycin Dose Adjustments in Complicated Methicillin-Resistant Staphylococcus aureus Infection.

    Science.gov (United States)

    Stoessel, Andrew M; Hale, Cory M; Seabury, Robert W; Miller, Christopher D; Steele, Jeffrey M

    2018-01-01

    This study aimed to assess the impact of area under the curve (AUC)-based vancomycin monitoring on pharmacist-initiated dose adjustments after transitioning from a trough-only to an AUC-based monitoring method at our institution. A retrospective cohort study of patients treated with vancomycin for complicated methicillin-resistant Staphylococcus aureus (MRSA) infection between November 2013 and December 2016 was conducted. The frequency of pharmacist-initiated dose adjustments was assessed for patients monitored via trough-only and AUC-based approaches for two trough ranges: 10 to 14.9 mg/L and 15 to 20 mg/L. Fifty patients were included: 36 in the trough-based monitoring group and 14 in the AUC-based monitoring group. The vancomycin dose was increased in 71.4% of patients with troughs of 10 to 14.9 mg/L when a trough-only approach was used, but in only 25% of patients when using AUC estimation ( P = .048). In the AUC group, the dose was not increased when the AUC/minimum inhibitory concentration ratio (AUC/MIC) was ≥400. AUC-based monitoring did not significantly increase the frequency of dose reductions when trough concentrations were 15 to 20 mg/L (AUC: 33.3% vs trough: 4.6%; P = .107). AUC-based monitoring resulted in fewer patients with dose adjustments when trough levels were 10 to 14.9 mg/L. AUC-based monitoring has the potential to reduce unnecessary vancomycin exposure and warrants further investigation.
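The AUC estimation underlying this kind of monitoring is commonly based on first-order pharmacokinetics. The sketch below illustrates the idea only; the function name, the levels, the timing and the volume of distribution are all hypothetical, not the institution's actual protocol.

```python
import math

def estimate_auc24_per_mic(c1, t1, c2, t2, daily_dose_mg, vd_l, mic_mg_l):
    """One-compartment, first-order sketch of vancomycin AUC24/MIC.

    c1, c2: serum concentrations (mg/L) drawn at hours t1 < t2 post-infusion.
    vd_l: assumed volume of distribution (L); all inputs are illustrative.
    """
    ke = math.log(c1 / c2) / (t2 - t1)   # elimination rate constant (1/h)
    cl = ke * vd_l                       # clearance (L/h)
    auc24 = daily_dose_mg / cl           # steady-state AUC over 24 h (mg*h/L)
    return auc24 / mic_mg_l

ratio = estimate_auc24_per_mic(c1=30.0, t1=2.0, c2=12.0, t2=10.0,
                               daily_dose_mg=2000, vd_l=50.0, mic_mg_l=1.0)
# under AUC-based monitoring, a dose increase would be considered
# only if this ratio fell below the 400 target
```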

  14. Sensor-based interior modeling

    International Nuclear Information System (INIS)

    Herbert, M.; Hoffman, R.; Johnson, A.; Osborn, J.

    1995-01-01

    Robots and remote systems will play crucial roles in future decontamination and decommissioning (D&D) of nuclear facilities. Many of these facilities, such as uranium enrichment plants, weapons assembly plants, research and production reactors, and fuel recycling facilities, are dormant; there is also an increasing number of commercial reactors whose useful lifetime is nearly over. To reduce worker exposure to radiation and to occupational and other hazards associated with D&D tasks, robots will execute much of the work agenda. Traditional teleoperated systems rely on human understanding (based on information gathered by remote viewing cameras) of the work environment to safely control the remote equipment. However, removing the operator from the work site substantially reduces his efficiency and effectiveness. To approach the productivity of a human worker, tasks will be performed telerobotically, in which many aspects of task execution are delegated to robot controllers and other software. This paper describes a system that semi-automatically builds a virtual world for remote D&D operations by constructing 3-D models of a robot's work environment. Planar and quadric surface representations of objects typically found in nuclear facilities are generated from laser rangefinder data with a minimum of human interaction. The surface representations are then incorporated into a task space model that can be viewed and analyzed by the operator, accessed by motion planning and robot safeguarding algorithms, and ultimately used by the operator to instruct the robot at a level much higher than teleoperation.

  15. Comparison between two photovoltaic module models based on transistors

    Science.gov (United States)

    Saint-Eve, Frédéric; Sawicki, Jean-Paul; Petit, Pierre; Maufay, Fabrice; Aillerie, Michel

    2018-05-01

    The main objective of this paper is to verify whether the behavior of an unshaded photovoltaic (PV) module can be simulated by a simple electronic circuit with very few components. In particular, two models based on well-tried elementary structures are analyzed: the Darlington structure in the first model and voltage regulation with a programmable Zener diode in the second. Specifications extracted from the behavior of a real I-V characteristic of a panel are considered and the principal electrical variables are deduced. The two models are expected to match the open circuit voltage, the maximum power point (MPP) and the short circuit current, with realistic current slopes on both sides of the MPP. Robustness under varying irradiance is considered an additional fundamental property. For both models, two simulations are done to identify the influence of some parameters. In the first model, a parameter allowing adjustment of the current slope on the left side of the MPP proves to be also important for the calculation of open circuit voltage. Moreover, this model does not allow a full adjustment of the I-V characteristic, and the MPP moves significantly away from the real value when irradiance increases. By contrast, the second model appears to have only advantages: the open circuit voltage is easy to calculate, the current slopes are realistic, and robustness appears good when irradiance variations are simulated by adjusting the short circuit current of the PV module. We have shown that these two simplified models are expected to make reliable and easier simulations of complex PV architectures integrating many different devices such as PV modules or other renewable energy sources and storage capacities coupled in parallel association.
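Matching the open circuit voltage, short circuit current and MPP means locating the MPP on an I-V characteristic. A minimal numerical sketch, using an invented exponential-knee curve (the values of `isc`, `voc` and the shape factor `n_vt` are hypothetical, not taken from the paper):

```python
import numpy as np

# Hypothetical I-V characteristic with a single-diode-like knee near Voc;
# isc (A), voc (V) and n_vt (V) are invented for illustration only.
isc, voc, n_vt = 8.0, 36.0, 1.8
v = np.linspace(0.0, voc, 2000)
i = isc * (1.0 - np.exp((v - voc) / n_vt))   # current drops sharply near Voc
p = v * i
k = p.argmax()
v_mpp, i_mpp, p_mpp = v[k], i[k], p[k]        # maximum power point
fill_factor = p_mpp / (isc * voc)
```

Any candidate circuit model can be checked the same way: sweep its simulated I-V curve and compare Voc, Isc, the MPP location and the slopes on either side against the real panel.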

  16. Differential Geometry Based Multiscale Models

    Science.gov (United States)

    Wei, Guo-Wei

    2010-01-01

    Large chemical and biological systems such as fuel cells, ion channels, molecular motors, and viruses are of great importance to the scientific community and public health. Typically, these complex systems in conjunction with their aquatic environment pose a fabulous challenge to theoretical description, simulation, and prediction. In this work, we propose a differential geometry based multiscale paradigm to model complex macromolecular systems, and to put macroscopic and microscopic descriptions on an equal footing. In our approach, the differential geometry theory of surfaces and geometric measure theory are employed as a natural means to couple the macroscopic continuum mechanical description of the aquatic environment with the microscopic discrete atomistic description of the macromolecule. Multiscale free energy functionals, or multiscale action functionals are constructed as a unified framework to derive the governing equations for the dynamics of different scales and different descriptions. Two types of aqueous macromolecular complexes, ones that are near equilibrium and others that are far from equilibrium, are considered in our formulations. We show that generalized Navier–Stokes equations for the fluid dynamics, generalized Poisson equations or generalized Poisson–Boltzmann equations for electrostatic interactions, and Newton's equation for the molecular dynamics can be derived by the least action principle. These equations are coupled through the continuum-discrete interface whose dynamics is governed by potential driven geometric flows. Comparison is given to classical descriptions of the fluid and electrostatic interactions without geometric flow based micro-macro interfaces. The detailed balance of forces is emphasized in the present work. We further extend the proposed multiscale paradigm to micro-macro analysis of electrohydrodynamics, electrophoresis, fuel cells, and ion channels. We derive generalized Poisson–Nernst–Planck equations that

  18. Burden of typhoid fever in low-income and middle-income countries: a systematic, literature-based update with risk-factor adjustment.

    Science.gov (United States)

    Mogasale, Vittal; Maskery, Brian; Ochiai, R Leon; Lee, Jung Seok; Mogasale, Vijayalaxmi V; Ramani, Enusa; Kim, Young Eun; Park, Jin Kyung; Wierzba, Thomas F

    2014-10-01

    Lack of access to safe water is an important risk factor for typhoid fever, yet risk-level heterogeneity is unaccounted for in previous global burden estimates. Since WHO has recommended risk-based use of typhoid polysaccharide vaccine, we revisited the burden of typhoid fever in low-income and middle-income countries (LMICs) after adjusting for water-related risk. We estimated the typhoid disease burden from studies done in LMICs based on blood-culture-confirmed incidence rates applied to the 2010 population, after correcting for operational issues related to surveillance, limitations of diagnostic tests, and water-related risk. We derived incidence estimates, correction factors, and mortality estimates from systematic literature reviews. We did scenario analyses for risk factors, diagnostic sensitivity, and case fatality rates, accounting for the uncertainty in these estimates, and compared them with previous disease burden estimates. The estimated number of typhoid fever cases in LMICs in 2010 after adjusting for water-related risk was 11·9 million (95% CI 9·9-14·7) cases with 129 000 (75 000-208 000) deaths. By comparison, the estimated risk-unadjusted burden was 20·6 million (17·5-24·2) cases and 223 000 (131 000-344 000) deaths. Scenario analyses indicated that the risk-factor adjustment and the updated diagnostic test correction factor derived from systematic literature reviews were the drivers of differences between the current estimate and past estimates. The risk-adjusted typhoid fever burden estimate was more conservative than previous estimates. However, by distinguishing the risk differences, it will allow assessment of the effect at the population level and will facilitate cost-effectiveness calculations for risk-based vaccination strategies for a future typhoid conjugate vaccine.
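The headline figures come from applying corrected incidence rates to the at-risk population. A toy version of that arithmetic, in which every input (population, crude incidence, correction factors, at-risk fraction) is invented rather than taken from the study:

```python
def adjusted_cases(population, crude_incidence_per_100k,
                   surveillance_cf, diagnostic_cf, at_risk_fraction):
    """Toy risk-adjusted burden estimate: blood-culture-confirmed incidence
    scaled by surveillance and diagnostic-sensitivity correction factors,
    applied only to the population share with water-related risk."""
    corrected = crude_incidence_per_100k * surveillance_cf * diagnostic_cf
    return population * (corrected / 100_000) * at_risk_fraction

# illustrative inputs only
cases = adjusted_cases(population=5_800_000_000, crude_incidence_per_100k=100,
                       surveillance_cf=1.5, diagnostic_cf=1.6,
                       at_risk_fraction=0.85)
```

Dropping `at_risk_fraction` (setting it to 1.0) reproduces the risk-unadjusted style of estimate, which is why the adjusted total is the more conservative of the two.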

  19. Positioning and number of nutritional levels in dose-response trials to estimate the optimal-level and the adjustment of the models

    Directory of Open Access Journals (Sweden)

    Fernando Augusto de Souza

    2014-07-01

    The aim of this research was to evaluate the influence of the number and position of the nutrient levels used in dose-response trials on the estimation of the optimal level (OL) and on the goodness of fit of the models: quadratic polynomial (QP), exponential (EXP), linear response plateau (LRP) and quadratic response plateau (QRP). Data from dose-response trials conducted at FCAV-Unesp Jaboticabal were used, considering homogeneity of variances and normal distribution. The fit of the models was evaluated using the following statistics: adjusted coefficient of determination (R²adj), coefficient of variation (CV) and the sum of squared deviations (SSD). For the QP and EXP models, small changes in the placement and distribution of the levels caused great changes in the estimation of the OL. The LRP model was deeply influenced by the absence or presence of a level between the response and stabilization phases (the transition from the straight line to the plateau). The QRP needed more levels in the response phase, and the last level in the stabilization phase, to estimate the plateau correctly. It was concluded that the OL and the fit of the models depend on the positioning and number of the levels and on the specific characteristics of each model; levels defined near the true requirement and not too widely spaced are better for estimating the OL.
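For the QP model in particular, the optimal level is the vertex of the fitted parabola. A minimal fit-and-estimate sketch with invented dose-response data (the levels and responses below are illustrative, not the trial data):

```python
import numpy as np

# Hypothetical dose-response data: nutrient levels (%) vs. weight gain (g)
levels = np.array([0.60, 0.75, 0.90, 1.05, 1.20, 1.35])
gain = np.array([410., 455., 480., 492., 490., 478.])

# Quadratic polynomial model: gain = a + b*level + c*level**2
c, b, a = np.polyfit(levels, gain, deg=2)   # polyfit returns highest degree first
optimal_level = -b / (2.0 * c)              # vertex of the parabola = OL
```

Shifting or thinning the `levels` array and refitting makes the paper's point concrete: the estimated vertex moves with the placement of the levels, even when the underlying response is unchanged.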

  20. Development of a three dimensional homogeneous calculation model for the BFS-62 critical experiment. Preparation of adjusted equivalent measured values for sodium void reactivity values. Final report

    International Nuclear Information System (INIS)

    Manturov, G.; Semenov, M.; Seregin, A.; Lykova, L.

    2004-01-01

    The BFS-62 critical experiments are currently used as a 'benchmark' for verification of IPPE codes and nuclear data, which have been used in the study of loading a significant amount of Pu in fast reactors. The BFS-62 experiments were performed at the BFS-2 critical facility of IPPE (Obninsk). The experimental program was arranged in such a way that the effect of replacing the uranium dioxide blanket by a steel reflector, as well as the effect of replacing UOX by MOX, on the main characteristics of the reactor model was studied. A wide experimental program, including measurements of the criticality (keff), spectral indices, radial and axial fission rate distributions, control rod mock-up worth, sodium void reactivity effect (SVRE) and some other important nuclear physics parameters, was carried out in the core. A series of four BFS-62 critical assemblies was designed for studying the changes in BN-600 reactor physics from the existing state to a hybrid core. All the assemblies model the reactor state prior to refueling, i.e. with all control rod mock-ups withdrawn from the core. The following items are chosen for analysis in this report: description of the critical assembly BFS-62-3A, the 3rd assembly in the series of 4 BFS critical assemblies studying the BN-600 reactor with a MOX-UOX hybrid zone and steel reflector; development of a 3D homogeneous calculation model for the BFS-62-3A critical experiment as the mock-up of the BN-600 reactor with hybrid zone and steel reflector; evaluation of the measured nuclear physics parameters keff and SVRE (sodium void reactivity effect); preparation of adjusted equivalent measured values for keff and SVRE. The main series of calculations is performed using the 3D HEX-Z diffusion code TRIGEX in 26 groups, with the ABBN-93 cross-section set. In addition, precise calculations are made, in 299 groups and Ps-approximation in scattering, by the Monte-Carlo code MMKKENO and the discrete ordinate code TWODANT. All calculations are based on the common system

  1. Observation-Based Modeling for Model-Based Testing

    NARCIS (Netherlands)

    Kanstrén, T.; Piel, E.; Gross, H.G.

    2009-01-01

    One of the single most important reasons that modeling and model-based testing are not yet common practice in industry is the perceived difficulty of making the models up to the level of detail and quality required for their automated processing. Models unleash their full potential only through

  2. A Novel Design for Adjustable Stiffness Artificial Tendon for the Ankle Joint of a Bipedal Robot: Modeling & Simulation

    Directory of Open Access Journals (Sweden)

    Aiman Omer

    2015-12-01

    Bipedal humanoid robots are expected to play a major role in the future. Performing bipedal locomotion requires high energy due to the high torque that needs to be provided by the legs’ joints. Taking the WABIAN-2R as an example, it uses harmonic gears in its joints to increase the torque. However, using such a mechanism increases the weight of the legs and therefore increases energy consumption. Therefore, the idea of developing a mechanism with adjustable stiffness to be connected to the leg joint is introduced here. The proposed mechanism would have the ability to provide passive and active motion. The mechanism would be attached to the ankle pitch joint as an artificial tendon. Using computer simulations, the dynamical performance of the mechanism is analytically evaluated.

  3. The Effects of School-Based Maum Meditation Program on the Self-Esteem and School Adjustment in Primary School Students

    Science.gov (United States)

    Yoo, Yang Gyeong; Lee, In Soo

    2013-01-01

    Self-esteem and school adjustment of children in the lower grades of primary school, the beginning stage of school life, have a close relationship with development of personality, mental health and characters of children. Therefore, the present study aimed to verify the effect of school-based Maum Meditation program on children in the lower grades of primary school, as a personality education program. The result showed that the experimental group with application of Maum Meditation program had significant improvements in self-esteem and school adjustment, compared to the control group without the application. In conclusion, since the study provides significant evidence that the intervention of Maum Meditation program had positive effects on self-esteem and school adjustment of children in the early stage of primary school, it is suggested to actively employ Maum Meditation as a school-based meditation program for mental health promotion of children in the early school ages, the stage of formation of personalities and habits. PMID:23777717

  4. The rise and fall of divorce - a sociological adjustment of becker’s model of the marriage market

    DEFF Research Database (Denmark)

    Andersen, Signe Hald; Hansen, Lars Gårn

    Despite the strong and persistent influence of Gary Becker’s marriage model, the model does not completely explain the observed correlation between married women’s labor market participation and overall divorce rates. In this paper we show how a simple sociologically inspired extension of the model...

  5. Adolescents of the U.S. National Longitudinal Lesbian Family Study: male role models, gender role traits, and psychological adjustment

    NARCIS (Netherlands)

    Bos, H.; Goldberg, N.; van Gelderen, L.; Gartrell, N.

    2012-01-01

    This article focuses on the influence of male role models on the lives of adolescents (N = 78) in the U.S. National Longitudinal Lesbian Family Study. Half of the adolescents had male role models; those with and those without male role models had similar scores on the feminine and masculine scales

  6. Guide to APA-Based Models

    Science.gov (United States)

    Robins, Robert E.; Delisi, Donald P.

    2008-01-01

    In Robins and Delisi (2008), a linear decay model, a new IGE model by Sarpkaya (2006), and a series of APA-Based models were scored using data from three airports. This report is a guide to the APA-based models.

  7. Prostate cancer mortality reduction by prostate-specific antigen-based screening adjusted for nonattendance and contamination in the European Randomised Study of Screening for Prostate Cancer (ERSPC).

    Science.gov (United States)

    Roobol, Monique J; Kerkhof, Melissa; Schröder, Fritz H; Cuzick, Jack; Sasieni, Peter; Hakama, Matti; Stenman, Ulf Hakan; Ciatto, Stefano; Nelen, Vera; Kwiatkowski, Maciej; Lujan, Marcos; Lilja, Hans; Zappa, Marco; Denis, Louis; Recker, Franz; Berenguer, Antonio; Ruutu, Mirja; Kujala, Paula; Bangma, Chris H; Aus, Gunnar; Tammela, Teuvo L J; Villers, Arnauld; Rebillard, Xavier; Moss, Sue M; de Koning, Harry J; Hugosson, Jonas; Auvinen, Anssi

    2009-10-01

    Prostate-specific antigen (PSA) based screening for prostate cancer (PCa) has been shown to reduce PCa-specific mortality by 20% in an intention to screen (ITS) analysis in a randomised trial (European Randomised Study of Screening for Prostate Cancer [ERSPC]). This effect may be diluted by nonattendance in men randomised to the screening arm and contamination in men randomised to the control arm. To assess the magnitude of the PCa-specific mortality reduction after adjustment for nonattendance and contamination. We analysed the occurrence of PCa deaths during an average follow-up of 9 yr in 162,243 men 55-69 yr of age randomised in seven participating centres of the ERSPC. Centres were also grouped according to the type of randomisation (ie, before or after informed written consent). Nonattendance was defined as not attending the initial screening round in the ERSPC. The estimate of contamination was based on PSA use in controls in ERSPC Rotterdam. Relative risks (RRs) with 95% confidence intervals (CIs) were compared between an ITS analysis and analyses adjusting for nonattendance and contamination using a statistical method developed for this purpose. In the ITS analysis, the RR of PCa death in men allocated to the intervention arm relative to the control arm was 0.80 (95% CI, 0.68-0.96). Adjustment for nonattendance resulted in a RR of 0.73 (95% CI, 0.58-0.93), and additional adjustment for contamination using two different estimates led to estimated reductions of 0.69 (95% CI, 0.51-0.92) and 0.71 (95% CI, 0.55-0.93), respectively. Contamination data were obtained through extrapolation of single-centre data. No heterogeneity was found between the groups of centres. PSA screening reduces the risk of dying of PCa by up to 31% in men actually screened. This benefit should be weighed against a degree of overdiagnosis and overtreatment inherent in PCa screening.

  8. Teasing and social rejection among obese children enrolling in family-based behavioural treatment: effects on psychological adjustment and academic competencies.

    Science.gov (United States)

    Gunnarsdottir, T; Njardvik, U; Olafsdottir, A S; Craighead, L W; Bjarnason, R

    2012-01-01

    The first objective was to determine the prevalence of psychological maladjustment (emotional and behavioural problems), low academic competencies and teasing/social rejection among obese Icelandic children enrolling in a family-based behavioural treatment. A second objective was to explore the degree to which teasing/social rejection specifically contributes to children's psychological adjustment and academic competencies when controlling for other variables, including demographics, children's physical activity, parental depression and life-stress. Participants were 84 obese children (mean body mass index-standard deviation score=3.11, age range=7.52-13.61 years). Height and weight, demographics and measures of children's psychological adjustment, academic competencies, teasing/social rejection and physical activity were collected from children, parents and teachers. Parental depression and life-stress were self-reported. Over half the children exceeded cutoffs indicating concern on at least one measure of behavioural or emotional difficulties. Children endorsed significant levels of teasing/social rejection, with almost half acknowledging they were not popular with same-gender peers. Parent reports of peer problems were even higher, with over 90% of both boys and girls being rated by their parents as having significant peer difficulties. However, rates of low academic competencies as reported by teachers were not different from those of the general population. In regression analyses controlling for other variables, self-reported teasing/social rejection emerged as a significant contributor to explaining both child psychological adjustment and academic competencies. The results indicate that among obese children enrolled in family-based treatment, self-reported teasing/social rejection is quite high and it is associated with poorer psychological adjustment as well as lower academic competencies. Parent reports corroborate the presence of substantial peer

  9. A comparison of two sleep spindle detection methods based on all night averages: individually adjusted versus fixed frequencies

    Directory of Open Access Journals (Sweden)

    Péter Przemyslaw Ujma

    2015-02-01

    Sleep spindles are frequently studied for their relationship with state and trait cognitive variables, and they are thought to play an important role in sleep-related memory consolidation. Due to their frequent occurrence in NREM sleep, the detection of sleep spindles is only feasible using automatic algorithms, of which a large number are available. We compared subject averages of the spindle parameters computed by a fixed-frequency automatic detection algorithm (11-13 Hz for slow spindles, 13-15 Hz for fast spindles) and the individual adjustment method (IAM), which uses individual frequency bands for sleep spindle detection. Fast spindle duration and amplitude are strongly correlated between the two algorithms, but there is little overlap in fast spindle density and slow spindle parameters in general. The agreement between fixed and manually determined sleep spindle frequencies is limited, especially in the case of slow spindles. This is the most likely reason for the poor agreement between the two detection methods for slow spindle parameters. Our results suggest that while various algorithms may reliably detect fast spindles, a more sophisticated algorithm tuned to individual spindle frequencies is necessary for the detection of slow spindles, as well as of individual variations in the number of spindles in general.
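A crude illustration of such a detector, where `band` can be either the fixed range (e.g. 13-15 Hz for fast spindles) or an individually adjusted one; the thresholds and window lengths below are invented for illustration, not the algorithms compared in the study:

```python
import numpy as np
from scipy.signal import butter, filtfilt

def spindle_mask(eeg, fs, band, thresh_sd=2.0, min_dur=0.5):
    """Crude spindle detector: band-pass filter, moving RMS envelope,
    amplitude threshold, minimum-duration criterion. Illustrative only."""
    b, a = butter(4, [band[0] / (fs / 2), band[1] / (fs / 2)], btype="band")
    filtered = filtfilt(b, a, eeg)
    win = int(0.25 * fs)                 # 250 ms RMS window
    rms = np.sqrt(np.convolve(filtered**2, np.ones(win) / win, mode="same"))
    above = rms > rms.mean() + thresh_sd * rms.std()
    # keep only supra-threshold runs lasting at least `min_dur` seconds
    mask = np.zeros_like(above)
    start = None
    for idx, flag in enumerate(np.append(above, False)):
        if flag and start is None:
            start = idx
        elif not flag and start is not None:
            if idx - start >= int(min_dur * fs):
                mask[start:idx] = True
            start = None
    return mask
```

Running the same routine twice, once with the fixed band and once with a subject's individual band, and then averaging the resulting spindle parameters per subject mirrors the comparison the study performs.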

  10. A Robust and Fast Method to Compute Shallow States without Adjustable Parameters: Simulations for a Silicon-Based Qubit

    Science.gov (United States)

    Debernardi, Alberto; Fanciulli, Marco

    Within the framework of the envelope function approximation we have computed, without adjustable parameters and with reduced computational effort owing to analytical expressions for the relevant Hamiltonian terms, the energy levels of the shallow P impurity in silicon and the hyperfine and superhyperfine splitting of the ground state. We have studied the dependence of these quantities on an external electric field applied along the [001] direction. Our results correctly reproduce the experimental splitting of the impurity ground states detected at zero electric field and provide reliable predictions for values of the field where experimental data are lacking. Further, we have studied the effect of confinement on a shallow state of a P atom at the center of a spherical Si nanocrystal embedded in a SiO2 matrix. In our simulations the valley-orbit interaction of a realistically screened Coulomb potential and of the core potential is included exactly, within the numerical accuracy due to the use of a finite basis set, while band-anisotropy effects are taken into account within the effective-mass approximation.

  11. Dynamic model based on Bayesian method for energy security assessment

    International Nuclear Information System (INIS)

    Augutis, Juozas; Krikštolaitis, Ričardas; Pečiulytė, Sigita; Žutautaitė, Inga

    2015-01-01

    Highlights: • Methodology for dynamic indicator model construction and forecasting of indicators. • Application of the dynamic indicator model to energy system development scenarios. • Expert judgement involvement using the Bayesian method. - Abstract: The methodology for dynamic indicator model construction and the forecasting of indicators for the assessment of energy security level is presented in this article. An indicator is a special index, which provides numerical values for important factors in the investigated area. In real life, models of different processes take into account various factors that are time-dependent and dependent on each other. Thus, it is advisable to construct a dynamic model in order to describe these dependences. The energy security indicators are used as factors in the dynamic model. Usually, the values of indicators are obtained from statistical data. The developed dynamic model enables forecasting of indicator variation, taking into account changes in system configuration. Energy system development is usually based on the construction of a new object. Since the parameters of changes in the new system are not exactly known, information about their influence on indicators cannot be incorporated into the model by deterministic methods. Thus, the dynamic indicator model based on historical data is adjusted by a probabilistic model for the influence of new factors on indicators, using the Bayesian method
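The Bayesian adjustment of a historical forecast by uncertain expert information can be sketched with a conjugate normal-normal update; the function and all numbers below are illustrative assumptions, not the article's actual model.

```python
def bayes_update(prior_mean, prior_var, obs_mean, obs_var):
    """Conjugate normal-normal update: combine a historical indicator
    forecast (prior) with expert judgement about a new energy object
    (treated as a noisy observation)."""
    w = prior_var / (prior_var + obs_var)
    post_mean = prior_mean + w * (obs_mean - prior_mean)
    post_var = prior_var * obs_var / (prior_var + obs_var)
    return post_mean, post_var

# Historical model forecasts an indicator of 0.70 (variance 0.04); experts
# judge the new object shifts it to 0.80 (variance 0.01). Values invented.
mean, var = bayes_update(0.70, 0.04, 0.80, 0.01)
```

The posterior leans toward whichever source is more certain, and its variance is smaller than either input variance, which is the sense in which the historical model is "adjusted" by the expert information.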

  12. [Effects in the adherence treatment and psychological adjustment after the disclosure of HIV/AIDS diagnosis with the "DIRE" clinical model in Colombian children under 17].

    Science.gov (United States)

    Trejos, Ana María; Reyes, Lizeth; Bahamon, Marly Johana; Alarcón, Yolima; Gaviria, Gladys

    2015-08-01

    A study in five Colombian cities in 2006 confirms the findings of other international studies: the majority of HIV-positive children do not know their diagnosis, and caregivers are reluctant to give this information because they believe that the news will cause emotional distress to the child; the primary purpose of this study was therefore to validate a disclosure model. We implemented a clinical model, referred to as "DIRE", that hypothetically had normalizing effects on the psychological adjustment and adherence to antiretroviral treatment of HIV-seropositive children, using a quasi-experimental design. Tests were administered (a questionnaire to assess patterns of disclosure and non-disclosure of the HIV/AIDS diagnosis to children, for health professionals and participating caregivers; the Family Apgar; the EuroQol EQ-5D; the MOS Social Support Survey; a questionnaire on information about treatment for HIV/AIDS; and the Child Behavior Checklist CBCL/6-18 adapted for Latinos) before and after implementation of the model to 31 children (n=31), 30 caregivers (n=30) and 41 health professionals. Data processing was performed using the Statistical Package for the Social Sciences version 21, applying nonparametric (Friedman) and parametric (Student's t) tests. No significant differences were found in adherence to treatment (p=0.392); in psychological adjustment, significant positive differences were found at the 2-week (p=0.001), 3-month (p=0.000) and 6-month (p=0.000) follow-ups compared to baseline. The clinical model demonstrated effectiveness in normalizing psychological adjustment and maintaining treatment compliance. The process also generated confidence in caregivers and health professionals in this difficult task.

  13. LSL: a logarithmic least-squares adjustment method

    International Nuclear Information System (INIS)

    Stallmann, F.W.

    1982-01-01

To meet regulatory requirements, spectral unfolding codes must not only provide reliable estimates for spectral parameters, but must also be able to determine the uncertainties associated with these parameters. The newer codes, which are more appropriately called adjustment codes, use the least squares principle to determine estimates and uncertainties. The principle is simple and straightforward, but there are several different mathematical models to describe the unfolding problem. In addition to a sound mathematical model, ease of use and range of options are important considerations in the construction of adjustment codes. Based on these considerations, a least squares adjustment code for neutron spectrum unfolding was constructed some time ago and tentatively named LSL.
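The least-squares adjustment principle that codes like LSL apply can be shown in its simplest form: combining a prior (calculated) value and a measured value of the same positive quantity in log space, weighting each by its inverse relative variance. This is a minimal one-parameter sketch of the general idea, not the LSL code itself; the function name and numbers are illustrative.

```python
import math

def log_ls_adjust(prior, meas, rel_unc_prior, rel_unc_meas):
    """Least-squares combination of two estimates of a positive quantity,
    done in log space (relative uncertainties become absolute uncertainties
    of the logarithm); returns the adjusted value and its relative uncertainty."""
    lp, lm = math.log(prior), math.log(meas)
    wp, wm = 1.0 / rel_unc_prior**2, 1.0 / rel_unc_meas**2
    l_adj = (wp * lp + wm * lm) / (wp + wm)    # weighted mean of the logs
    rel_unc_adj = math.sqrt(1.0 / (wp + wm))   # smaller than either input
    return math.exp(l_adj), rel_unc_adj

# A 30%-uncertain calculated flux adjusted toward a 10%-uncertain measurement:
val, unc = log_ls_adjust(prior=1.0e10, meas=1.3e10,
                         rel_unc_prior=0.30, rel_unc_meas=0.10)
```

The adjusted value lands between the two inputs, pulled toward the better-known one, and the adjusted uncertainty is smaller than either input's, which is the defining property of a least-squares adjustment.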

  14. Differences among skeletal muscle mass indices derived from height-, weight-, and body mass index-adjusted models in assessing sarcopenia

    Science.gov (United States)

    Kim, Kyoung Min; Jang, Hak Chul; Lim, Soo

    2016-01-01

    Aging processes are inevitably accompanied by structural and functional changes in vital organs. Skeletal muscle, which accounts for 40% of total body weight, deteriorates quantitatively and qualitatively with aging. Skeletal muscle is known to play diverse crucial physical and metabolic roles in humans. Sarcopenia is a condition characterized by significant loss of muscle mass and strength. It is related to subsequent frailty and instability in the elderly population. Because muscle tissue is involved in multiple functions, sarcopenia is closely related to various adverse health outcomes. Along with increasing recognition of the clinical importance of sarcopenia, several international study groups have recently released their consensus on the definition and diagnosis of sarcopenia. In practical terms, various skeletal muscle mass indices have been suggested for assessing sarcopenia: appendicular skeletal muscle mass adjusted for height squared, weight, or body mass index. A different prevalence and different clinical implications of sarcopenia are highlighted by each definition. The discordances among these indices have emerged as an issue in defining sarcopenia, and a unifying definition for sarcopenia has not yet been attained. This review aims to compare these three operational definitions and to introduce an optimal skeletal muscle mass index that reflects the clinical implications of sarcopenia from a metabolic perspective. PMID:27334763

  15. Multivariate Models of Parent-Late Adolescent Gender Dyads: The Importance of Parenting Processes in Predicting Adjustment

    Science.gov (United States)

    McKinney, Cliff; Renk, Kimberly

    2008-01-01

    Although parent-adolescent interactions have been examined, relevant variables have not been integrated into a multivariate model. As a result, this study examined a multivariate model of parent-late adolescent gender dyads in an attempt to capture important predictors in late adolescents' important and unique transition to adulthood. The sample…

  16. Validation of the internalization of the Model Minority Myth Measure (IM-4) and its link to academic performance and psychological adjustment among Asian American adolescents.

    Science.gov (United States)

    Yoo, Hyung Chol; Miller, Matthew J; Yip, Pansy

    2015-04-01

    There is limited research examining psychological correlates of a uniquely racialized experience of the model minority stereotype faced by Asian Americans. The present study examined the factor structure and fit of the only published measure of the internalization of the model minority myth, the Internalization of the Model Minority Myth Measure (IM-4; Yoo et al., 2010), with a sample of 155 Asian American high school adolescents. We also examined the link between internalization of the model minority myth types (i.e., myth associated with achievement and myth associated with unrestricted mobility) and psychological adjustment (i.e., affective distress, somatic distress, performance difficulty, academic expectations stress), and the potential moderating effect of academic performance (cumulative grade point average). Results suggested the 2-factor model of the IM-4 had an acceptable fit to the data and supported the factor structure using confirmatory factor analyses. Internalizing the model minority myth of achievement related positively to academic expectations stress; however, internalizing the model minority myth of unrestricted mobility related negatively to academic expectations stress, both controlling for gender and academic performance. Finally, academic performance moderated the model minority myth associated with unrestricted mobility and affective distress link and the model minority myth associated with achievement and performance difficulty link. These findings highlight the complex ways in which the model minority myth relates to psychological outcomes. (c) 2015 APA, all rights reserved).

  17. Rule-based decision making model

    International Nuclear Information System (INIS)

    Sirola, Miki

    1998-01-01

A rule-based decision making model is designed in the G2 environment. A theoretical and methodological frame for the model is composed and motivated. The rule-based decision making model is based on object-oriented modelling, knowledge engineering and decision theory. The idea of a safety objective tree is utilized. Advanced rule-based methodologies are applied. A general decision making model, the 'decision element', is constructed. The strategy planning of the decision element is based on e.g. value theory and utility theory. A hypothetical process model is built to give input data for the decision element. The basic principle of the object model in decision making is division into tasks. Probability models are used in characterizing component availabilities. Bayes' theorem is used to recalculate the probability figures when new information is obtained. The model includes simple learning features to save the solution path. A decision analytic interpretation is given to the decision making process. (author)
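The Bayesian updating of component availabilities mentioned above can be made concrete. A minimal sketch with invented numbers (the availability and likelihood values are not from the paper):

```python
def bayes_update(prior, p_evidence_given_h, p_evidence_given_not_h):
    """Recompute a component-availability probability when new evidence
    arrives: P(H|E) = P(E|H) P(H) / (P(E|H) P(H) + P(E|~H) P(~H))."""
    numer = p_evidence_given_h * prior
    denom = numer + p_evidence_given_not_h * (1.0 - prior)
    return numer / denom

# Invented numbers: a component believed 90% available; a sensor reading
# that is 95% likely if the component works but 20% likely if it has failed.
posterior = bayes_update(0.90, 0.95, 0.20)
```

Each new observation feeds the posterior back in as the next prior, which is exactly the recalculation step the decision element performs.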

  18. Tropical cyclone losses in the USA and the impact of climate change - A trend analysis based on data from a new approach to adjusting storm losses

    International Nuclear Information System (INIS)

    Schmidt, Silvio; Kemfert, Claudia; Hoeppe, Peter

    2009-01-01

    Economic losses caused by tropical cyclones have increased dramatically. Historical changes in losses are a result of meteorological factors (changes in the incidence of severe cyclones, whether due to natural climate variability or as a result of human activity) and socio-economic factors (increased prosperity and a greater tendency for people to settle in exposed areas). This paper aims to isolate the socio-economic effects and ascertain the potential impact of climate change on this trend. Storm losses for the period 1950-2005 have been adjusted to the value of capital stock in 2005 so that any remaining trend cannot be ascribed to socio-economic developments. For this, we introduce a new approach to adjusting losses based on the change in capital stock at risk. Storm losses are mainly determined by the intensity of the storm and the material assets, such as property and infrastructure, located in the region affected. We therefore adjust the losses to exclude increases in the capital stock of the affected region. No trend is found for the period 1950-2005 as a whole. In the period 1971-2005, since the beginning of a trend towards increased intense cyclone activity, losses excluding socio-economic effects show an annual increase of 4% per annum. This increase must therefore be at least due to the impact of natural climate variability but, more likely than not, also due to anthropogenic forcings.
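The capital-stock adjustment described above reduces to a single scaling step: a historical loss is normalized to the base year by the growth of capital stock at risk in the affected region. This sketch shows only that core scaling; the study's full procedure is not reproduced here, and the numbers are invented.

```python
def normalize_loss(loss_event_year, capital_event_year, capital_base_year):
    """Scale a historical storm loss to base-year (2005) exposure by the
    change in capital stock at risk in the affected region."""
    return loss_event_year * (capital_base_year / capital_event_year)

# Invented example: a $100M loss in a region whose capital stock was 1/8
# of its 2005 value corresponds to an $800M loss at 2005 exposure.
adjusted = normalize_loss(100e6, capital_event_year=1.0, capital_base_year=8.0)
```

After this normalization, any remaining trend in the loss series can no longer be ascribed to socio-economic growth, which is the paper's central device.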

  19. A Neuron Model Based Ultralow Current Sensor System for Bioapplications

    Directory of Open Access Journals (Sweden)

    A. K. M. Arifuzzman

    2016-01-01

An ultralow current sensor system based on the Izhikevich neuron model is presented in this paper. The Izhikevich neuron model was chosen for its superior computational efficiency and greater biological plausibility compared with other well-known spiking neuron models. Of the many biological spiking features, regular spiking, chattering, and neostriatal spiny projection spiking have been reproduced by adjusting the parameters of the model. This paper also presents a modified interpretation of the regular spiking feature, in which the firing pattern is similar to regular spiking but offers an improved dynamic range. The sensor current ranges between 2 pA and 8 nA and exhibits linearity between 0.9665 and 0.9989 for the different spiking features. The system's efficacy in detecting very low currents, together with its high linearity, makes it well suited to biomedical applications.
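The spiking features listed above (regular spiking, chattering, and so on) all come from one pair of coupled equations with four parameters (a, b, c, d). A forward-Euler sketch of the standard Izhikevich model follows, using the textbook parameter values rather than anything specific to the sensor circuit:

```python
def izhikevich(a, b, c, d, current, t_ms=500.0, dt=0.5):
    """Simulate one Izhikevich neuron with forward Euler; return spike times (ms).
    v' = 0.04 v^2 + 5 v + 140 - u + I;  u' = a (b v - u);
    on v >= 30 mV: record a spike, reset v = c and u = u + d."""
    v, u = -65.0, b * -65.0
    spikes = []
    for k in range(int(t_ms / dt)):
        if v >= 30.0:
            spikes.append(k * dt)
            v, u = c, u + d
        v += dt * (0.04 * v * v + 5.0 * v + 140.0 - u + current)
        u += dt * a * (b * v - u)
    return spikes

# Different parameter sets reproduce distinct firing patterns:
rs = izhikevich(a=0.02, b=0.2, c=-65.0, d=8.0, current=10.0)  # regular spiking
ch = izhikevich(a=0.02, b=0.2, c=-50.0, d=2.0, current=10.0)  # chattering
```

Only the reset parameters c and d differ between the two calls, which is why the model is attractive for compact hardware: one circuit, several firing behaviours.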

  20. SU-F-E-09: Respiratory Signal Prediction Based On Multi-Layer Perceptron Neural Network Using Adjustable Training Samples

    Energy Technology Data Exchange (ETDEWEB)

    Sun, W; Jiang, M; Yin, F [Duke University Medical Center, Durham, NC (United States)

    2016-06-15

Purpose: Dynamic tracking of moving organs, such as lung and liver tumors, during radiation therapy requires predicting organ motion prior to delivery. The displacement of a moving organ can change considerably as the respiratory pattern varies over time. This study aims to reduce the influence of such changes using adjustable training signals and a multi-layer perceptron neural network (ASMLP). Methods: Respiratory signals obtained with a Real-time Position Management (RPM) device were used for this study. The ASMLP uses two multi-layer perceptron neural networks (MLPs) that infer respiratory position alternately, with the training samples updated over time. First, a Savitzky-Golay finite impulse response smoothing filter was applied to smooth the respiratory signal. Second, two identical MLPs were developed to estimate respiratory position from previous positions. Weights and thresholds were updated to minimize network error using the Levenberg-Marquardt optimization algorithm via backpropagation. Finally, MLP 1 predicted the 120-150 s respiratory positions from the 0-120 s training signal while MLP 2 was trained on the 30-150 s signal; MLP 2 then predicted the 150-180 s positions from the 30-150 s training signal. Prediction proceeded in this alternating fashion until the end of the signal. Results: The two methods were used to predict 2.5 minutes of respiratory signal. For predicting 1 s ahead of response time, the correlation coefficient improved from 0.8250 (MLP method) to 0.8856 (ASMLP method), and the mean absolute error was reduced by 30%, from 0.1798 (MLP, on average) to 0.1267 (ASMLP, on average). For predicting 2 s ahead of response time, the correlation coefficient improved from 0.61415 to 0.7098, and the mean absolute error was reduced by 35%, from 0.3111 (MLP, on average) to 0.2020 (ASMLP, on average). Conclusion: The preliminary results
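The adjustable-training-sample scheme described above can be separated from the network itself: one MLP predicts the next 30 s block while the other retrains on the latest 120 s window, and the roles alternate. A sketch of just that schedule (window lengths follow the abstract; the function is illustrative, not the authors' code):

```python
def alternating_windows(total_s, train_s=120, predict_s=30):
    """Return (model_index, training_interval, prediction_interval) tuples:
    each predictor trains on the most recent `train_s` seconds and then
    predicts the following `predict_s` seconds, alternating with its twin."""
    schedule, t, model = [], train_s, 0
    while t < total_s:
        schedule.append((model, (t - train_s, t),
                         (t, min(t + predict_s, total_s))))
        model = 1 - model          # hand over to the other network
        t += predict_s
    return schedule

# For a 180 s signal this reproduces the windows stated in the abstract:
sched = alternating_windows(180)
# [(0, (0, 120), (120, 150)), (1, (30, 150), (150, 180))]
```

Because the training window slides with the signal, each predictor always reflects the most recent breathing pattern, which is the intended advantage over a single fixed-training MLP.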

  1. Three-factor model of premorbid adjustment in a sample with chronic schizophrenia and first-episode psychosis.

    Science.gov (United States)

    Barajas, Ana; Usall, Judith; Baños, Iris; Dolz, Montserrat; Villalta-Gil, Victoria; Vilaplana, Miriam; Autonell, Jaume; Sánchez, Bernardo; Cervilla, Jorge A; Foix, Alexandrina; Obiols, Jordi E; Haro, Josep Maria; Ochoa, Susana

    2013-12-01

The dimensionality of premorbid adjustment (PA) has been a debated issue, with attempts to determine whether PA is a unitary construct or is composed of several independent domains characterized by differential deterioration patterns and specific outcome correlates. This study examines the factorial structure of PA, as well as the course and correlates of its domains. It is a retrospective study of 84 adult patients: 33 experiencing first-episode psychosis (FEP) and 51 individuals with schizophrenia (SCH). All patients were evaluated with a comprehensive battery of instruments covering clinical, functioning and neuropsychological variables. A principal component analysis with varimax rotation was used to examine the factor structure of the PAS-S scale. Paired t tests and Wilcoxon rank tests were used to assess changes in PAS domains over time, and bivariate correlation analyses were performed to analyse the relationship between PAS factors and clinical, social and cognitive variables. PA was better explained by three factors (71.65% of the variance): Academic PA, Social PA and Socio-sexual PA. The academic domain showed higher PA scores from childhood. Social and clinical variables were more strongly related to the Social PA and Socio-sexual PA domains, while the Academic PA domain was exclusively associated with cognitive variables. This study supports previous evidence emphasizing the validity of dividing PA into its sub-components. A differential deterioration pattern and specific correlates were observed in each PA domain, suggesting that impairments in each domain might predispose individuals to develop different expressions of psychotic dimensions. © 2013.

  2. THE TECTONICS STRESS AND STRAIN FIELD MODELING ADJUSTED FOR EVOLUTION OF GEOLOGICAL STRUCTURES (SAILAG INTRUSION, EASTERN SAYAN

    Directory of Open Access Journals (Sweden)

    V. N. Voytenko

    2013-01-01

The article describes a tectonophysical model of the structural evolution of the Sailag granodiorite massif in relation to its gold-bearing capacity. The model takes into account loading patterns according to geological data, accumulated deformation, and gravitational stresses, and provides for updating the structural-geological model of the development of the intrusive body and the ore field. The model forecasts destruction patterns in the apical and above-dome parts of the massif during the intrusion and contraction phase, the formation of a long-lived shear zone at the steeply dipping slope of the intrusive body, and subvertical fractures associated with that shear zone together with vertical mechanical 'layering' of the intrusive body.

  3. A Monte Carlo-adjusted goodness-of-fit test for parametric models describing spatial point patterns

    KAUST Repository

    Dao, Ngocanh; Genton, Marc G.

    2014-01-01

    Assessing the goodness-of-fit (GOF) for intricate parametric spatial point process models is important for many application fields. When the probability density of the statistic of the GOF test is intractable, a commonly used procedure is the Monte

  4. On performing of interference technique based on self-adjusting Zernike filters (SA-AVT method) to investigate flows and validate 3D flow numerical simulations

    Science.gov (United States)

    Pavlov, Al. A.; Shevchenko, A. M.; Khotyanovsky, D. V.; Pavlov, A. A.; Shmakov, A. S.; Golubev, M. P.

    2017-10-01

We present a method for, and results of, determining the field of integral density in a flow structure corresponding to the Mach interaction of shock waves at Mach number M = 3. The optical diagnostics of the flow were performed using an interference technique based on self-adjusting Zernike filters (the SA-AVT method). Numerical simulations were carried out using the CFS3D program package for solving the Euler and Navier-Stokes equations. Quantitative data on the distribution of integral density along the path of the probing radiation, in one direction of 3D flow transillumination, were obtained for the first time in the region of Mach interaction of shock waves.

  5. On isocentre adjustment and quality control in linear accelerator based radiosurgery with circular collimators and room lasers

    Energy Technology Data Exchange (ETDEWEB)

    Treuer, H. [Department of Stereotaxy and Functional Neurosurgery, University of Cologne, Cologne (Germany). E-mail: h.treuer at uni-koeln.de; Hoevels, M.; Luyken, K.; Gierich, A.; Sturm, V. [Department of Stereotaxy and Functional Neurosurgery, University of Cologne, Cologne (Germany); Kocher, M.; Mueller, R.-P. [Department of Radiotherapy, University of Cologne, Cologne (Germany)

    2000-08-01

    We have developed a densitometric method for measuring the isocentric accuracy and the accuracy of marking the isocentre position for linear accelerator based radiosurgery with circular collimators and room lasers. Isocentric shots are used to determine the accuracy of marking the isocentre position with room lasers and star shots are used to determine the wobble of the gantry and table rotation movement, the effect of gantry sag, the stereotactic collimator alignment, and the minimal distance between gantry and table rotation axes. Since the method is based on densitometric measurements, beam spot stability is implicitly tested. The method developed is also suitable for quality assurance and has proved to be useful in optimizing isocentric accuracy. The method is simple to perform and only requires a film box and film scanner for instrumentation. Thus, the method has the potential to become widely available and may therefore be useful in standardizing the description of linear accelerator based radiosurgical systems. (author)

  6. On isocentre adjustment and quality control in linear accelerator based radiosurgery with circular collimators and room lasers

    International Nuclear Information System (INIS)

    Treuer, H.; Kocher, M.; Mueller, R.-P.

    2000-01-01

    We have developed a densitometric method for measuring the isocentric accuracy and the accuracy of marking the isocentre position for linear accelerator based radiosurgery with circular collimators and room lasers. Isocentric shots are used to determine the accuracy of marking the isocentre position with room lasers and star shots are used to determine the wobble of the gantry and table rotation movement, the effect of gantry sag, the stereotactic collimator alignment, and the minimal distance between gantry and table rotation axes. Since the method is based on densitometric measurements, beam spot stability is implicitly tested. The method developed is also suitable for quality assurance and has proved to be useful in optimizing isocentric accuracy. The method is simple to perform and only requires a film box and film scanner for instrumentation. Thus, the method has the potential to become widely available and may therefore be useful in standardizing the description of linear accelerator based radiosurgical systems. (author)

  7. Model-based DSL frameworks

    NARCIS (Netherlands)

    Ivanov, Ivan; Bézivin, J.; Jouault, F.; Valduriez, P.

    2006-01-01

    More than five years ago, the OMG proposed the Model Driven Architecture (MDA™) approach to deal with the separation of platform dependent and independent aspects in information systems. Since then, the initial idea of MDA evolved and Model Driven Engineering (MDE) is being increasingly promoted to

  8. A longitudinal examination of the Adaptation to Poverty-Related Stress Model: predicting child and adolescent adjustment over time.

    Science.gov (United States)

    Wadsworth, Martha E; Rindlaub, Laura; Hurwich-Reiss, Eliana; Rienks, Shauna; Bianco, Hannah; Markman, Howard J

    2013-01-01

    This study tests key tenets of the Adaptation to Poverty-related Stress Model. This model (Wadsworth, Raviv, Santiago, & Etter, 2011 ) builds on Conger and Elder's family stress model by proposing that primary control coping and secondary control coping can help reduce the negative effects of economic strain on parental behaviors central to the family stress model, namely, parental depressive symptoms and parent-child interactions, which together can decrease child internalizing and externalizing problems. Two hundred seventy-five co-parenting couples with children between the ages of 1 and 18 participated in an evaluation of a brief family strengthening intervention, aimed at preventing economic strain's negative cascade of influence on parents, and ultimately their children. The longitudinal path model, analyzed at the couple dyad level with mothers and fathers nested within couple, showed very good fit, and was not moderated by child gender or ethnicity. Analyses revealed direct positive effects of primary control coping and secondary control coping on mothers' and fathers' depressive symptoms. Decreased economic strain predicted more positive father-child interactions, whereas increased secondary control coping predicted less negative mother-child interactions. Positive parent-child interactions, along with decreased parent depression and economic strain, predicted child internalizing and externalizing over the course of 18 months. Multiple-group models analyzed separately by parent gender revealed, however, that child age moderated father effects. Findings provide support for the adaptation to poverty-related stress model and suggest that prevention and clinical interventions for families affected by poverty-related stress may be strengthened by including modules that address economic strain and efficacious strategies for coping with strain.

  9. The effect of adjusting model inputs to achieve mass balance on time-dynamic simulations in a food-web model of Lake Huron

    Science.gov (United States)

    Langseth, Brian J.; Jones, Michael L.; Riley, Stephen C.

    2014-01-01

Ecopath with Ecosim (EwE) is a widely used modeling tool in fishery research and management. Ecopath requires a mass-balanced snapshot of a food web at a particular point in time, which Ecosim then uses to simulate changes in biomass over time. Initial inputs to Ecopath, including estimates for biomasses, production to biomass ratios, consumption to biomass ratios, and diets, rarely produce mass balance, and thus ad hoc changes to inputs are required to balance the model. There has been little previous research into whether ad hoc changes made to achieve mass balance affect Ecosim simulations. We constructed an EwE model for the offshore community of Lake Huron and balanced the model using four contrasting but realistic methods. The four balancing methods were based on two contrasting approaches: in the first, production of unbalanced groups was increased by increasing either biomass or the production to biomass ratio, while in the second, consumption of predators on unbalanced groups was decreased by decreasing either biomass or the consumption to biomass ratio. We compared six simulation scenarios based on three alternative assumptions about the extent to which mortality rates of prey can change in response to changes in predator biomass (i.e., vulnerabilities) under perturbations to either fishing mortality or environmental production. Changes in simulated biomass values over time were used in a principal components analysis to assess the comparative effect of balancing method, vulnerabilities, and perturbation types. Vulnerabilities explained the most variation in biomass, followed by the type of perturbation; choice of balancing method explained little of the overall variation. Under scenarios where changes in predator biomass caused large changes in mortality rates of prey (i.e., high vulnerabilities), variation in biomass was greater than when changes in predator biomass caused only small changes in mortality rates of prey (i.e., low vulnerabilities).
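Mass balance in Ecopath comes down to the condition that each group's ecotrophic efficiency, EE_i = (predation on i + catch of i) / production of i, must not exceed 1. A toy two-group check of that condition, including the first balancing approach described above (raising a group's production to biomass ratio); all names and numbers are invented:

```python
def ecotrophic_efficiency(i, B, PB, QB, diet, catch):
    """EE_i = (sum_j B_j * (Q/B)_j * DC_ji + Y_i) / (B_i * (P/B)_i);
    EE_i <= 1 is the Ecopath mass-balance condition for group i."""
    predation = sum(B[j] * QB[j] * diet[j].get(i, 0.0) for j in B)
    return (predation + catch.get(i, 0.0)) / (B[i] * PB[i])

B    = {"prey": 10.0, "pred": 4.0}            # biomass
PB   = {"prey": 1.0,  "pred": 0.5}            # production/biomass
QB   = {"prey": 0.0,  "pred": 3.0}            # consumption/biomass
diet = {"prey": {},   "pred": {"prey": 1.0}}  # predator eats only prey
catch = {}

ee = ecotrophic_efficiency("prey", B, PB, QB, diet, catch)        # 12/10 = 1.2: unbalanced
PB["prey"] = 1.3       # first balancing approach: raise the group's P/B
ee_fixed = ecotrophic_efficiency("prey", B, PB, QB, diet, catch)  # 12/13 < 1: balanced
```

Raising predator biomass or Q/B instead would push EE the other way, which is why the study's four balancing methods can, in principle, lead to different starting snapshots.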

  10. Red list of Czech spiders: 3rd edition, adjusted according to evidence-based national conservation priorities

    Czech Academy of Sciences Publication Activity Database

    Řezáč, M.; Kůrka, A.; Růžička, Vlastimil; Heneberg, P.

    2015-01-01

    Roč. 70, č. 5 (2015), s. 645-666 ISSN 0006-3088 Grant - others:MZe(CZ) RO0415 Institutional support: RVO:60077344 Keywords : evidence-based conservation * extinction risk * invertebrate surveys Subject RIV: EG - Zoology Impact factor: 0.719, year: 2015

  11. Model based design introduction: modeling game controllers to microprocessor architectures

    Science.gov (United States)

    Jungwirth, Patrick; Badawy, Abdel-Hameed

    2017-04-01

We present an introduction to model based design. Model based design is a visual representation, generally a block diagram, used to model and incrementally develop a complex system. It is a commonly used design methodology for digital signal processing, control systems, and embedded systems. Model based design's philosophy is: to solve a problem - a step at a time. The approach can be compared to a series of steps that converge to a solution. A block diagram simulation tool allows a design to be simulated with real world measurement data. For example, if an analog control system is being upgraded to a digital control system, the analog sensor input signals can be recorded, the digital control algorithm can be simulated with the real world sensor data, and the output from the simulated digital control system can then be compared to the old analog based control system. Model based design can be compared to Agile software development. The Agile goal is to develop working software in incremental steps, with progress measured in completed and tested code units; in model based design, progress is measured in completed and tested blocks. We present a concept for a video game controller and then use model based design to iterate the design towards a working system. We also describe a model based design effort to develop an OS Friendly Microprocessor Architecture based on RISC-V.

  12. EPR-based material modelling of soils

    Science.gov (United States)

    Faramarzi, Asaad; Alani, Amir M.

    2013-04-01

In the past few decades, as a result of rapid developments in computational software and hardware, alternative computer-aided pattern recognition approaches have been introduced for modelling many engineering problems, including constitutive modelling of materials. The main idea behind pattern recognition systems is that they learn adaptively from experience and extract various discriminants, each appropriate for its purpose. In this work an approach is presented for developing material models for soils based on evolutionary polynomial regression (EPR). EPR is a recently developed hybrid data mining technique that searches for structured mathematical equations (representing the behaviour of a system) using a genetic algorithm and the least squares method. Stress-strain data from triaxial tests are used to train and develop EPR-based material models for soil. The developed models are compared with some well-known conventional material models, and it is shown that EPR-based models can provide a better prediction of the behaviour of soils. The main benefits of EPR-based material models are that they provide a unified approach to constitutive modelling of all materials (i.e., all aspects of material behaviour can be implemented within the unified environment of an EPR model) and that they do not require any arbitrary choice of constitutive (mathematical) model. In EPR-based material models there are no material parameters to be identified, and because the model is trained directly from experimental data, EPR-based material models are the shortest route from experimental research (data) to numerical modelling. A further advantage is that as more experimental data become available, the quality of the EPR prediction can be improved by learning from the additional data, making the model more effective and robust. The developed EPR-based material models can be incorporated into finite element (FE) analysis.

  13. Normative weight-adjusted models for the median levels of first trimester serum biomarkers for trisomy 21 screening in a specific ethnicity.

    Directory of Open Access Journals (Sweden)

    Ounjai Kor-Anantakul

To establish normative weight-adjusted models for the median levels of first-trimester serum biomarkers for trisomy 21 screening in southern Thai women, and to compare these reference levels with Caucasian-specific and northern Thai models, a cross-sectional study was conducted in 1,150 women with normal singleton pregnancies to determine serum pregnancy-associated plasma protein-A (PAPP-A) and free β-human chorionic gonadotropin (β-hCG) concentrations in women from southern Thailand. The predicted median values were compared with published equations for Caucasians and northern Thai women. The best-fitting regression equations for the expected median serum levels of PAPP-A (mIU/L) and free β-hCG (ng/mL) according to maternal weight (Wt, in kg) and gestational age (GA, in days) were: [Formula: see text] and [Formula: see text] Both equations were selected with a statistically significant contribution (p < 0.05). Compared with the Caucasian model, the median values of PAPP-A were higher and the median values of free β-hCG were lower in the southern Thai women. Compared with the northern Thai models, the median values of both biomarkers were lower in southern Thai women. The study successfully developed maternal-weight- and gestational-age-adjusted normative median models to convert PAPP-A and free β-hCG levels into their multiple-of-the-median equivalents in southern Thai women. These models confirmed ethnic differences.
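Once median equations like those above are fitted, screening converts each measured level into its multiple-of-the-median (MoM). Since the fitted coefficients are not reproduced here (the "[Formula: see text]" placeholders), the log-linear form and coefficients below are purely hypothetical placeholders:

```python
def predicted_median(weight_kg, ga_days, coef):
    """Hypothetical log-linear median model:
    log10(median) = b0 + b1 * GA + b2 * weight.
    The b's stand in for the study's fitted, ethnicity-specific values."""
    b0, b1, b2 = coef
    return 10.0 ** (b0 + b1 * ga_days + b2 * weight_kg)

def mom(observed, weight_kg, ga_days, coef):
    """Multiple of the median: observed level divided by the expected
    median for this maternal weight and gestational age."""
    return observed / predicted_median(weight_kg, ga_days, coef)

# A marker level equal to the predicted median is 1.0 MoM by construction.
COEF = (0.5, 0.02, -0.005)  # placeholder coefficients, not the study's
m = mom(predicted_median(60.0, 80, COEF), 60.0, 80, COEF)
```

Because the median model absorbs the weight and gestational-age dependence, MoM values from different populations become comparable, which is why ethnicity-specific medians matter.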

  14. Integration of Simulink Models with Component-based Software Models

    DEFF Research Database (Denmark)

    Marian, Nicolae

    2008-01-01

    Model based development aims to facilitate the development of embedded control systems by emphasizing the separation of the design level from the implementation level. Model based design involves the use of multiple models that represent different views of a system, having different semantics...... of abstract system descriptions. Usually, in mechatronics systems, design proceeds by iterating model construction, model analysis, and model transformation. Constructing a MATLAB/Simulink model, a plant and controller behavior is simulated using graphical blocks to represent mathematical and logical...... constraints. COMDES (Component-based Design of Software for Distributed Embedded Systems) is such a component-based system framework developed by the software engineering group of Mads Clausen Institute for Product Innovation (MCI), University of Southern Denmark. Once specified, the software model has...

  15. Improving Agent Based Modeling of Critical Incidents

    Directory of Open Access Journals (Sweden)

    Robert Till

    2010-04-01

Agent Based Modeling (ABM) is a powerful method that has been used to simulate potential critical incidents in infrastructure and built environments. This paper discusses the modeling of some critical incidents currently simulated using ABM and how they may be expanded and improved by better physiological modeling, psychological modeling, modeling the actions of interveners, and introducing Geographic Information Systems (GIS) and open source models.

  16. Models for Rational Number Bases

    Science.gov (United States)

    Pedersen, Jean J.; Armbruster, Frank O.

    1975-01-01

    This article extends number bases to negative integers, then to positive rationals and finally to negative rationals. Methods and rules for operations in positive and negative rational bases greater than one or less than negative one are summarized in tables. Sample problems are explained and illustrated. (KM)

  17. Model-Based Economic Evaluation of Treatments for Depression: A Systematic Literature Review

    DEFF Research Database (Denmark)

    Kolovos, Spyros; Bosmans, Judith E.; Riper, Heleen

    2017-01-01

    eligible if they used a health economic model with quality-adjusted life-years or disability-adjusted life-years as an outcome measure. Data related to various methodological characteristics were extracted from the included studies. The available modelling techniques were evaluated based on 11 predefined......, and DES models in seven.ConclusionThere were substantial methodological differences between the studies. Since the individual history of each patient is important for the prognosis of depression, DES and ISM simulation methods may be more appropriate than the others for a pragmatic representation...

  18. A Multifactorial Intervention Based on the NICE-Adjusted Guideline in the Prevention of Delirium in Patients Hospitalized for Cardiac Surgery

    Directory of Open Access Journals (Sweden)

    Mohammad Ali Cheraghi

    2017-05-01

    Full Text Available Delirium is the most common problem in patients in intensive care units. Prevention of delirium is more important than treatment. The aim of this study is to determine the effect of the NICE-adjusted multifactorial intervention to prevent delirium in open heart surgery patients. Methods: This study is a quasi-experimental study on 88 patients (In each group, 44 patients undergoing open heart surgery in the intensive care unit of Imam Khomeini Hospital, Tehran. Subjects received usual care group, only the incidence of delirium were studied. So that patients in the two groups of second to fifth postoperative day, twice a day by the researcher, and CAM-ICU questionnaire were followed. After completion of the sampling in this group, in the intervention group also examined incidence of delirium was conducted in the same manner except that multifactorial interventions based on the intervention of NICE modified by the researcher on the second day to fifth implementation and intervention on each turn, their implementation was followed. As well as to check the quality of sleep and pain in the intervention group of CPOT and Pittsburgh Sleep assessment tools were used. Data analysis was done using the SPSS software, version 16. A T-test, a chi-square test, and a Fisher’s exact test were also carried out. Results: The incidence of delirium in the control group was 42.5%; and in the intervention group, it was 22.5%. The result showed the incidence of delirium in open heart surgery hospitalized patients after multifactorial intervention based on adjusted NICE guidelines has been significantly reduced. Conclusion: The NICE-adjusted multifactorial intervention guidelines for the prevention of delirium in cardiac surgery patients significantly reduced the incidence of delirium in these patients. So, using this method as an alternative comprehensive and reliable in preventing delirium in hospitalized patients in the ward heart surgery is recommended.

  19. Estimation of pump operational state with model-based methods

    International Nuclear Information System (INIS)

    Ahonen, Tero; Tamminen, Jussi; Ahola, Jero; Viholainen, Juha; Aranto, Niina; Kestilae, Juha

    2010-01-01

    Pumps are widely used in industry, and they account for 20% of the industrial electricity consumption. Since the speed variation is often the most energy-efficient method to control the head and flow rate of a centrifugal pump, frequency converters are used with induction motor-driven pumps. Although a frequency converter can estimate the operational state of an induction motor without external measurements, the state of a centrifugal pump or other load machine is not typically considered. The pump is, however, usually controlled on the basis of the required flow rate or output pressure. As the pump operational state can be estimated with a general model having adjustable parameters, external flow rate or pressure measurements are not necessary to determine the pump flow rate or output pressure. Hence, external measurements could be replaced with an adjustable model for the pump that uses estimates of the motor operational state. Besides control purposes, modelling the pump operation can provide useful information for energy auditing and optimization purposes. In this paper, two model-based methods for pump operation estimation are presented. Factors affecting the accuracy of the estimation methods are analyzed. The applicability of the methods is verified by laboratory measurements and tests in two pilot installations. Test results indicate that the estimation methods can be applied to the analysis and control of pump operation. The accuracy of the methods is sufficient for auditing purposes, and the methods can inform the user if the pump is driven inefficiently.
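The idea of an adjustable pump model driven by estimated motor quantities can be illustrated with a QH characteristic and the affinity laws. This is a sketch only: the quadratic coefficients below are invented for illustration, not parameters from the paper, and a real implementation would fit them to the pump's published curve.

```python
import math

# Illustrative QH-curve coefficients (H = A + B*Q + C*Q^2 at reference
# speed); made-up values, not parameters from the paper.
A, B, C = 20.0, 0.0, -0.005   # head in m, flow in m^3/h, C < 0 (drooping curve)

def estimate_flow(head, speed, n_ref=1450.0):
    """Estimate pump flow rate from an estimated head and the known
    rotational speed, using the fitted QH characteristic and the
    affinity laws -- the kind of adjustable model the paper describes."""
    ratio = n_ref / speed
    h_ref = head * ratio ** 2                   # affinity: H scales with n^2
    disc = B * B - 4.0 * C * (A - h_ref)
    if disc < 0:
        raise ValueError("operating point lies outside the fitted curve")
    q_ref = (-B - math.sqrt(disc)) / (2.0 * C)  # positive root, since C < 0
    return q_ref / ratio                        # affinity: Q scales with n
```

With the estimated head and speed available from the frequency converter, no external flow or pressure sensor is needed, which is the point the abstract makes.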

  20. School-Based Racial and Gender Discrimination among African American Adolescents: Exploring Gender Variation in Frequency and Implications for Adjustment

    OpenAIRE

    Cogburn, Courtney D.; Chavous, Tabbye M.; Griffin, Tiffany M.

    2011-01-01

    The present study examined school-based racial and gender discrimination experiences among African American adolescents in Grade 8 (n = 204 girls; n = 209 boys). A primary goal was exploring gender variation in frequency of both types of discrimination and associations of discrimination with academic and psychological functioning among girls and boys. Girls and boys did not vary in reported racial discrimination frequency, but boys reported more gender discrimination experiences. Multiple reg...

  1. Theory of Work Adjustment Personality Constructs.

    Science.gov (United States)

    Lawson, Loralie

    1993-01-01

    To measure Theory of Work Adjustment personality and adjustment style dimensions, content-based scales were analyzed for homogeneity and successively reanalyzed for reliability improvement. Three sound scales were developed: inflexibility, activeness, and reactiveness. (SK)

  2. PV panel model based on datasheet values

    DEFF Research Database (Denmark)

    Sera, Dezso; Teodorescu, Remus; Rodriguez, Pedro

    2007-01-01

    This work presents the construction of a model for a PV panel using the single-diode five-parameters model, based exclusively on data-sheet parameters. The model takes into account the series and parallel (shunt) resistance of the panel. The equivalent circuit and the basic equations of the PV cell....... Based on these equations, a PV panel model, which is able to predict the panel behavior in different temperature and irradiance conditions, is built and tested....
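The five-parameter single-diode model referred to above leads to an implicit equation for the panel current, since the current appears inside the exponential. A minimal sketch of solving it by damped fixed-point iteration follows; the parameter values used in testing are illustrative, and the paper's actual contribution, extracting the five parameters from datasheet values, is not reproduced here.

```python
import math

def pv_current(v, i_ph, i_o, r_s, r_sh, a, n_cells=72, t_cell=298.15):
    """Solve the implicit single-diode equation
        I = Iph - Io*(exp((V + I*Rs)/(a*Ns*Vt)) - 1) - (V + I*Rs)/Rsh
    for the panel current at voltage v by damped fixed-point iteration.
    i_ph: photocurrent, i_o: diode saturation current, r_s/r_sh: series
    and shunt resistances, a: diode ideality factor, n_cells: cells in
    series. All example values are illustrative, not from the paper."""
    vt = 1.380649e-23 * t_cell / 1.602176634e-19    # thermal voltage kT/q
    i = i_ph                                        # initial guess
    for _ in range(200):
        i_new = (i_ph
                 - i_o * (math.exp((v + i * r_s) / (a * n_cells * vt)) - 1.0)
                 - (v + i * r_s) / r_sh)
        if abs(i_new - i) < 1e-9:
            break
        i = 0.5 * (i + i_new)                       # damping for stability
    return i
```

Sweeping `v` from zero to the open-circuit voltage traces the full I-V curve for a given temperature and irradiance.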

  3. Traceability in Model-Based Testing

    Directory of Open Access Journals (Sweden)

    Mathew George

    2012-11-01

Full Text Available The growing complexities of software and the demand for shorter time to market are two important challenges that face today’s IT industry. These challenges demand the increase of both productivity and quality of software. Model-based testing is a promising technique for meeting these challenges. Traceability modeling is a key issue and challenge in model-based testing. Relationships between the different models will help to navigate from one model to another, and trace back to the respective requirements and the design model when the test fails. In this paper, we present an approach for bridging the gaps between the different models in model-based testing. We propose relation definition markup language (RDML) for defining the relationships between models.

  4. An Evaluation of the Adjusted DeLone and McLean Model of Information Systems Success; the case of financial information system in Ferdowsi University of Mashhad

    Directory of Open Access Journals (Sweden)

    Mohammad Lagzian

    2012-07-01

Full Text Available Assessing the success of information systems within organizations has been identified as one of the most critical subjects of information system management in both public and private organizations. It is therefore important to measure the success of information systems from the user's perspective. The purpose of the current study was to evaluate the degree of information system success using the adjusted DeLone and McLean model in the case of the financial information system (FIS) of an Iranian university. The relationships among the dimensions in an extended systems success measurement framework were tested. Data were collected by questionnaire from end-users of the financial information system at Ferdowsi University of Mashhad. The adjusted DeLone and McLean model contained five variables (system quality, information quality, system use, user satisfaction, and individual impact). The results revealed that system quality was a significant predictor of system use, user satisfaction and individual impact. Information quality was also a significant predictor of user satisfaction and individual impact, but not of system use. System use and user satisfaction were positively related to individual impact. The influence of user satisfaction on system use was insignificant.

  5. Firm Based Trade Models and Turkish Economy

    Directory of Open Access Journals (Sweden)

    Nilüfer ARGIN

    2015-12-01

Full Text Available Among all international trade models, only firm-based trade models explain firms' actions and behavior in world trade. Firm-based trade models focus on the trade behavior of the individual firms that actually conduct intra-industry trade, and they can truly explain the globalization process. These approaches also cover multinational corporations, supply chains and outsourcing. Our paper aims to explain and analyze Turkish exports in the context of firm-based trade models. We use UNCTAD data on exports by SITC Rev. 3 categorization to explain total exports and 255 products, and calculate the intensive and extensive margins of Turkish firms.

  6. Lévy-based growth models

    DEFF Research Database (Denmark)

    Jónsdóttir, Kristjana Ýr; Schmiegel, Jürgen; Jensen, Eva Bjørn Vedel

    2008-01-01

    In the present paper, we give a condensed review, for the nonspecialist reader, of a new modelling framework for spatio-temporal processes, based on Lévy theory. We show the potential of the approach in stochastic geometry and spatial statistics by studying Lévy-based growth modelling of planar o...... objects. The growth models considered are spatio-temporal stochastic processes on the circle. As a by product, flexible new models for space–time covariance functions on the circle are provided. An application of the Lévy-based growth models to tumour growth is discussed....

  7. Distributed Prognostics Based on Structural Model Decomposition

    Data.gov (United States)

    National Aeronautics and Space Administration — Within systems health management, prognostics focuses on predicting the remaining useful life of a system. In the model-based prognostics paradigm, physics-based...

  8. A Monte Carlo-adjusted goodness-of-fit test for parametric models describing spatial point patterns

    KAUST Repository

    Dao, Ngocanh

    2014-04-03

Assessing the goodness-of-fit (GOF) for intricate parametric spatial point process models is important for many application fields. When the probability density of the statistic of the GOF test is intractable, a commonly used procedure is the Monte Carlo GOF test. Additionally, if the data comprise a single dataset, a popular version of the test plugs a parameter estimate in the hypothesized parametric model to generate data for the Monte Carlo GOF test. In this case, the test is invalid because the resulting empirical level does not reach the nominal level. In this article, we propose a method consisting of nested Monte Carlo simulations which has the following advantages: the bias of the resulting empirical level of the test is eliminated, hence the empirical levels can always reach the nominal level, and information about inhomogeneity of the data can be provided. We theoretically justify our testing procedure using Taylor expansions and demonstrate that it is correctly sized through various simulation studies. In our first data application, we discover, in agreement with Illian et al., that Phlebocarya filifolia plants near Perth, Australia, can follow a homogeneous Poisson clustered process that provides insight into the propagation mechanism of these plants. In our second data application, we find, in contrast to Diggle, that a pairwise interaction model provides a good fit to the micro-anatomy data of amacrine cells designed for analyzing the developmental growth of immature retina cells in rabbits. This article has supplementary material online. © 2013 American Statistical Association, Institute of Mathematical Statistics, and Interface Foundation of North America.
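The nested (double) Monte Carlo construction can be sketched with a toy model. The sketch below replaces the spatial point process with an exponential sample and a simple mean-based statistic, purely to show the structure: an inner plug-in Monte Carlo p-value, and an outer level that calibrates that p-value by re-estimating the parameter on every replicate.

```python
import random
import statistics

def gof_stat(data, rate):
    """Toy GOF statistic: |sample mean - model mean| for an
    exponential(rate) model. Stands in for a spatial summary statistic."""
    return abs(statistics.mean(data) - 1.0 / rate)

def nested_mc_pvalue(data, n_outer=99, n_inner=99, seed=1):
    """Nested Monte Carlo GOF test (structure only, toy model): the inner
    level gives a plug-in p-value; the outer level simulates from the
    fitted model, re-estimates the parameter each time, and compares the
    observed plug-in p-value against its simulated distribution, which
    removes the plug-in bias in the test's level."""
    rng = random.Random(seed)
    n = len(data)

    def p_plugin(sample):
        rate_hat = 1.0 / statistics.mean(sample)
        t_obs = gof_stat(sample, rate_hat)
        hits = sum(
            gof_stat([rng.expovariate(rate_hat) for _ in range(n)],
                     rate_hat) >= t_obs
            for _ in range(n_inner))
        return (hits + 1) / (n_inner + 1)

    p0 = p_plugin(data)
    rate_hat = 1.0 / statistics.mean(data)
    outer = [p_plugin([rng.expovariate(rate_hat) for _ in range(n)])
             for _ in range(n_outer)]
    return (sum(p <= p0 for p in outer) + 1) / (n_outer + 1)
```

The cost is quadratic in the number of simulations (inner times outer), which is the price paid for a correctly sized test.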

  9. Measuring demand for flat water recreation using a two-stage/disequilibrium travel cost model with adjustment for overdispersion and self-selection

    Science.gov (United States)

    McKean, John R.; Johnson, Donn; Taylor, R. Garth

    2003-04-01

    An alternate travel cost model is applied to an on-site sample to estimate the value of flat water recreation on the impounded lower Snake River. Four contiguous reservoirs would be eliminated if the dams are breached to protect endangered Pacific salmon and steelhead trout. The empirical method applies truncated negative binomial regression with adjustment for endogenous stratification. The two-stage decision model assumes that recreationists allocate their time among work and leisure prior to deciding among consumer goods. The allocation of time and money among goods in the second stage is conditional on the predetermined work time and income. The second stage is a disequilibrium labor market which also applies if employers set work hours or if recreationists are not in the labor force. When work time is either predetermined, fixed by contract, or nonexistent, recreationists must consider separate prices and budgets for time and money.

  10. Assessment of regression models for adjustment of iron status biomarkers for inflammation in children with moderate acute malnutrition in Burkina Faso

    DEFF Research Database (Denmark)

    Cichon, Bernardette; Ritz, Christian; Fabiansen, Christian

    2017-01-01

    BACKGROUND: Biomarkers of iron status are affected by inflammation. In order to interpret them in individuals with inflammation, the use of correction factors (CFs) has been proposed. OBJECTIVE: The objective of this study was to investigate the use of regression models as an alternative to the CF...... approach. METHODS: Morbidity data were collected during clinical examinations with morbidity recalls in a cross-sectional study in children aged 6-23 mo with moderate acute malnutrition. C-reactive protein (CRP), α1-acid glycoprotein (AGP), serum ferritin (SF), and soluble transferrin receptor (sTfR) were......TfR with the use of the best-performing model led to a 17% point increase and iron deficiency. CONCLUSION: Regression analysis is an alternative to adjust SF and may be preferable in research settings, because it can take morbidity and severity...

  11. Burden of Disease Measured by Disability-Adjusted Life Years and a Disease Forecasting Time Series Model of Scrub Typhus in Laiwu, China

    Science.gov (United States)

    Yang, Li-Ping; Liang, Si-Yuan; Wang, Xian-Jun; Li, Xiu-Jun; Wu, Yan-Ling; Ma, Wei

    2015-01-01

Background Laiwu District is recognized as a hyper-endemic region for scrub typhus in Shandong Province, but the seriousness of this problem has been neglected in public health circles. Methodology/Principal Findings A disability-adjusted life years (DALYs) approach was adopted to measure the burden of scrub typhus in Laiwu, China during the period 2006 to 2012. A multiple seasonal autoregressive integrated moving average (SARIMA) model was used to identify the most suitable forecasting model for scrub typhus in Laiwu. Results showed that the disease burden of scrub typhus is increasing yearly in Laiwu, and is higher in females than in males. For both females and males, DALY rates were highest for the 60–69 age group. Of all the SARIMA models tested, the SARIMA(2,1,0)(0,1,0)12 model was the best fit for scrub typhus cases in Laiwu. Human infections occurred mainly in autumn with peaks in October. Conclusions/Significance Females, especially those of 60 to 69 years of age, were at highest risk of developing scrub typhus in Laiwu, China. The SARIMA(2,1,0)(0,1,0)12 model was the best-fit forecasting model for scrub typhus in Laiwu, China. These data are useful for developing public health education and intervention programs to reduce disease. PMID:25569248
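The DALY burden estimate follows the standard decomposition into years of life lost (YLL) and years lived with disability (YLD). A minimal sketch without age weighting or discounting; the numbers in the test are illustrative, not the Laiwu estimates.

```python
def dalys(deaths, residual_life_expectancy, cases, disability_weight,
          mean_duration_years):
    """Burden in disability-adjusted life years, DALY = YLL + YLD:
    years of life lost to premature mortality plus years lived with
    disability. Standard formulation, no age weighting or discounting."""
    yll = deaths * residual_life_expectancy
    yld = cases * disability_weight * mean_duration_years
    return yll + yld
```

Dividing by the population size and scaling (e.g. per 100,000) gives the DALY rates that the study compares across age and sex groups.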

  12. The impact of resolution on the adjustment and decadal variability of the Atlantic meridional overturning circulation in a coupled climate model

    Energy Technology Data Exchange (ETDEWEB)

    Hodson, Daniel L.R.; Sutton, Rowan T. [University of Reading, NCAS-Climate, Department of Meteorology, Earley Gate, PO Box 243, Reading (United Kingdom)

    2012-12-15

Variations in the Atlantic meridional overturning circulation (MOC) exert an important influence on climate, particularly on decadal time scales. Simulation of the MOC in coupled climate models is compromised, to a degree that is unknown, by their lack of fidelity in resolving some of the key processes involved. There is an overarching need to increase the resolution and fidelity of climate models, but also to assess how increases in resolution influence the simulation of key phenomena such as the MOC. In this study we investigate the impact of significantly increasing the (ocean and atmosphere) resolution of a coupled climate model on the simulation of MOC variability by comparing high and low resolution versions of the same model. In both versions, decadal variability of the MOC is closely linked to density anomalies that propagate from the Labrador Sea southward along the deep western boundary. We demonstrate that the MOC adjustment proceeds more rapidly in the higher resolution model due to the increased speed of western boundary waves. However, the response of the Atlantic sea surface temperatures to MOC variations is relatively robust - in pattern if not in magnitude - across the two resolutions. The MOC also excites a coupled ocean-atmosphere response in the tropical Atlantic in both model versions. In the higher resolution model, but not the lower resolution model, there is evidence of a significant response in the extratropical atmosphere over the North Atlantic 6 years after a maximum in the MOC. In both models there is evidence of a weak negative feedback on deep density anomalies in the Labrador Sea, and hence on the MOC (with a time scale of approximately ten years). Our results highlight the need for further work to understand the decadal variability of the MOC and its simulation in climate models. (orig.)

  13. The Safety, Effectiveness and Concentrations of Adjusted Lopinavir/Ritonavir in HIV-Infected Adults on Rifampicin-Based Antitubercular Therapy

    Science.gov (United States)

    Decloedt, Eric H.; Maartens, Gary; Smith, Peter; Merry, Concepta; Bango, Funeka; McIlleron, Helen

    2012-01-01

    Objective Rifampicin co-administration dramatically reduces plasma lopinavir concentrations. Studies in healthy volunteers and HIV-infected patients showed that doubling the dose of lopinavir/ritonavir (LPV/r) or adding additional ritonavir offsets this interaction. However, high rates of hepatotoxicity were observed in healthy volunteers. We evaluated the safety, effectiveness and pre-dose concentrations of adjusted doses of LPV/r in HIV infected adults treated with rifampicin-based tuberculosis treatment. Methods Adult patients on a LPV/r-based antiretroviral regimen and rifampicin-based tuberculosis therapy were enrolled. Doubled doses of LPV/r or an additional 300 mg of ritonavir were used to overcome the inducing effect of rifampicin. Steady-state lopinavir pre-dose concentrations were evaluated every second month. Results 18 patients were enrolled with a total of 79 patient months of observation. 11/18 patients were followed up until tuberculosis treatment completion. During tuberculosis treatment, the median (IQR) pre-dose lopinavir concentration was 6.8 (1.1–9.2) mg/L and 36/47 (77%) were above the recommended trough concentration of 1 mg/L. Treatment was generally well tolerated with no grade 3 or 4 toxicity: 8 patients developed grade 1 or 2 transaminase elevation, 1 patient defaulted additional ritonavir due to nausea and 1 patient developed diarrhea requiring dose reduction. Viral loads after tuberculosis treatment were available for 11 patients and 10 were undetectable. Conclusion Once established on treatment, adjusted doses of LPV/r co-administered with rifampicin-based tuberculosis treatment were tolerated and LPV pre-dose concentrations were adequate. PMID:22412856

  14. Effect of Rocket (Eruca sativa) Extract on MRSA Growth and Proteome: Metabolic Adjustments in Plant-Based Media

    Directory of Open Access Journals (Sweden)

    Agapi I. Doulgeraki

    2017-05-01

Full Text Available The emergence of methicillin-resistant Staphylococcus aureus (MRSA) in food has provoked great concern about the presence of MRSA in associated foodstuffs. Although MRSA is often detected in various retail meat products, it seems that food handlers are more strongly associated with this type of food contamination. Thus, it can easily be postulated that any food could be contaminated with this pathogen in an industrial environment or in the household and cause food poisoning. To this end, the effect of rocket (Eruca sativa) extract on MRSA growth and proteome was examined in the present study. This goal was achieved through a comparative study of the proteome of MRSA strain COL cultivated in rocket extract versus the standard Luria-Bertani growth medium. The obtained results showed that MRSA was able to grow in rocket extract. In addition, proteome analysis using the 2-DE method showed that MRSA strain COL takes advantage of the sugar-, lipid-, and vitamin-rich substrate in the liquid rocket extract, although its growth was delayed in rocket extract compared to Luria-Bertani medium. This work could initiate further research into bacterial metabolism in plant-based media and defense mechanisms against plant-derived antibacterials.

  15. The Effectiveness of Using Limited Gauge Measurements for Bias Adjustment of Satellite-Based Precipitation Estimation over Saudi Arabia

    Science.gov (United States)

    Alharbi, Raied; Hsu, Kuolin; Sorooshian, Soroosh; Braithwaite, Dan

    2018-01-01

Precipitation is a key input variable for hydrological and climate studies. Rain gauges are capable of providing reliable precipitation measurements at point scale. However, the uncertainty of rain measurements increases when the rain gauge network is sparse. Satellite-based precipitation estimations appear to be an alternative source of precipitation measurements, but they are influenced by systematic bias. In this study, a method for removing the bias from the Precipitation Estimation from Remotely Sensed Information using Artificial Neural Networks-Cloud Classification System (PERSIANN-CCS) over a region where the rain gauge network is sparse is investigated. The method consists of monthly empirical quantile mapping, climate classification, and inverse-distance weighting. Daily PERSIANN-CCS is selected to test the capability of the method for removing the bias over Saudi Arabia during the period 2010 to 2016. The first six years (2010 - 2015) are used for calibration and 2016 for validation. The results show that the yearly correlation coefficient was enhanced by 12% and the yearly mean bias was reduced by 93% during the validation year. The root mean square error was reduced by 73% during the validation year. The correlation coefficient, mean bias, and root mean square error show that the proposed method removes the bias in PERSIANN-CCS effectively, and that it can be applied to other regions where the rain gauge network is sparse.
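The monthly empirical quantile mapping step can be sketched as follows. This illustrates the general technique, not the paper's exact implementation (which also applies climate classification and inverse-distance weighting): a satellite estimate is placed on the empirical CDF of the satellite calibration sample, then mapped to the same quantile of the gauge sample.

```python
def quantile_map(sat_cal, gauge_cal, value):
    """Empirical quantile mapping for bias adjustment. sat_cal and
    gauge_cal are paired calibration samples (here, daily values for one
    month at one location); `value` is a new satellite estimate to be
    corrected toward the gauge distribution."""
    sat_sorted = sorted(sat_cal)
    gauge_sorted = sorted(gauge_cal)
    # empirical non-exceedance probability of `value` in the satellite sample
    p = sum(1 for s in sat_sorted if s <= value) / len(sat_sorted)
    # read the same quantile off the gauge sample
    idx = min(int(p * len(gauge_sorted)), len(gauge_sorted) - 1)
    return gauge_sorted[idx]
```

A production version would interpolate between order statistics rather than snap to the nearest one, but the mapping of CDF to CDF is the essential idea.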

  16. The Effect of Overconfidence and Experience on Belief Adjustment Model in Investment Judgement (P.39-47)

    Directory of Open Access Journals (Sweden)

    Luciana Spica Almilia

    2017-02-01

Full Text Available This study examines the effect of overconfidence and experience on increasing or reducing the order effect of information in investment decision making. The subject criteria in this research are: professional investors (who have knowledge and experience in the field of investment and the stock market) and nonprofessional investors (who have knowledge in the field of investment and the stock market). Based on these criteria, the subjects include accounting students, capital market participants and investors. The research uses a 2 x 2 between-subjects experimental method, conducted via the web. The individual characteristic (high or low confidence) is measured by a calibration test. The independent variables consist of two active (manipulated) variables: (1) pattern of information presentation (step by step or end of sequence); and (2) presentation order (good news – bad news or bad news – good news). The dependent variable is the revision of the investment decision made by the research subject. Participants in this study were 78 nonprofessional investors and 48 professional investors. The results are consistent with the prediction that individuals with a high level of confidence tend to ignore the available information, so that individuals with a high level of confidence are spared from the effects of information order. Keywords: step by step, end of sequence, investment judgement, overconfidence, experimental method

  17. Annual Adjustment Factors

    Data.gov (United States)

    Department of Housing and Urban Development — The Department of Housing and Urban Development establishes the rent adjustment factors - called Annual Adjustment Factors (AAFs) - on the basis of Consumer Price...

  18. Drought mitigation in perennial crops by fertilization and adjustments of regional yield models for future climate variability

    Science.gov (United States)

    Kantola, I. B.; Blanc-Betes, E.; Gomez-Casanovas, N.; Masters, M. D.; Bernacchi, C.; DeLucia, E. H.

    2017-12-01

    Increased variability and intensity of precipitation in the Midwest agricultural belt due to climate change is a major concern. The success of perennial bioenergy crops in replacing maize for bioethanol production is dependent on sustained yields that exceed maize, and the marketing of perennial crops often emphasizes the resilience of perennial agriculture to climate stressors. Land conversion from maize for bioethanol to Miscanthus x giganteus (miscanthus) increases yields and annual evapotranspiration rates (ET). However, establishment of miscanthus also increases biome water use efficiency (the ratio between net ecosystem productivity after harvest and ET), due to greater belowground biomass in miscanthus than in maize or soybean. In 2012, a widespread drought reduced the yield of 5-year-old miscanthus plots in central Illinois by 36% compared to the previous two years. Eddy covariance data indicated continued soil water deficit during the hydrologically-normal growing season in 2013 and miscanthus yield failed to rebound as expected, lagging behind pre-drought yields by an average of 53% over the next three years. In early 2014, nitrogen fertilizer was applied to half of mature (7-year-old) miscanthus plots in an effort to improve yields. In plots with annual post-emergence application of 60 kg ha-1 of urea, peak biomass was 29% greater than unfertilized miscanthus in 2014, and 113% greater in 2015, achieving statistically similar yields to the pre-drought average. Regional-scale models of perennial crop productivity use 30-year climate averages that are inadequate for predicting long-term effects of short-term extremes on perennial crops. Modeled predictions of perennial crop productivity incorporating repeated extreme weather events, observed crop response, and the use of management practices to mitigate water deficit demonstrate divergent effects on predicted yields.

  19. Population PK modelling and simulation based on fluoxetine and norfluoxetine concentrations in milk: a milk concentration-based prediction model.

    Science.gov (United States)

    Tanoshima, Reo; Bournissen, Facundo Garcia; Tanigawara, Yusuke; Kristensen, Judith H; Taddio, Anna; Ilett, Kenneth F; Begg, Evan J; Wallach, Izhar; Ito, Shinya

    2014-10-01

    Population pharmacokinetic (pop PK) modelling can be used for PK assessment of drugs in breast milk. However, complex mechanistic modelling of a parent and an active metabolite using both blood and milk samples is challenging. We aimed to develop a simple predictive pop PK model for milk concentration-time profiles of a parent and a metabolite, using data on fluoxetine (FX) and its active metabolite, norfluoxetine (NFX), in milk. Using a previously published data set of drug concentrations in milk from 25 women treated with FX, a pop PK model predictive of milk concentration-time profiles of FX and NFX was developed. Simulation was performed with the model to generate FX and NFX concentration-time profiles in milk of 1000 mothers. This milk concentration-based pop PK model was compared with the previously validated plasma/milk concentration-based pop PK model of FX. Milk FX and NFX concentration-time profiles were described reasonably well by a one compartment model with a FX-to-NFX conversion coefficient. Median values of the simulated relative infant dose on a weight basis (sRID: weight-adjusted daily doses of FX and NFX through breastmilk to the infant, expressed as a fraction of therapeutic FX daily dose per body weight) were 0.028 for FX and 0.029 for NFX. The FX sRID estimates were consistent with those of the plasma/milk-based pop PK model. A predictive pop PK model based on only milk concentrations can be developed for simultaneous estimation of milk concentration-time profiles of a parent (FX) and an active metabolite (NFX). © 2014 The British Pharmacological Society.
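The parent-metabolite structure of such a model can be sketched with the standard one-compartment (Bateman) solution. The sketch below is a generic illustration, not the published model or its parameter estimates: it assumes first-order elimination of FX at rate k_fx, conversion to NFX governed by a conversion coefficient f_conv, and first-order elimination of NFX at rate k_nfx (with k_nfx != k_fx).

```python
import math

def milk_profiles(c0, k_fx, k_nfx, f_conv, times):
    """One-compartment sketch of parent (FX) and metabolite (NFX)
    concentration-time curves in milk, using the standard Bateman
    solution with an FX-to-NFX conversion coefficient. All constants
    are illustrative, not the published population estimates."""
    fx = [c0 * math.exp(-k_fx * t) for t in times]
    nfx = [f_conv * k_fx * c0 / (k_nfx - k_fx)
           * (math.exp(-k_fx * t) - math.exp(-k_nfx * t))
           for t in times]
    return fx, nfx
```

Integrating such curves over a dosing interval and multiplying by milk intake per kg of infant body weight is how a relative infant dose of the kind reported above (sRID) is computed from the simulated profiles.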

  20. NET SALARY ADJUSTMENT

    CERN Multimedia

    Finance Division

    2001-01-01

    On 15 June 2001 the Council approved the correction of the discrepancy identified in the net salary adjustment implemented on 1st January 2001 by retroactively increasing the scale of basic salaries to achieve the 2.8% average net salary adjustment approved in December 2000. We should like to inform you that the corresponding adjustment will be made to your July salary. Full details of the retroactive adjustments will consequently be shown on your pay slip.