WorldWideScience

Sample records for adjustment model based

  1. Contact Angle Adjustment in Equation of States Based Pseudo-Potential Model

    CERN Document Server

    Hu, Anjie; Uddin, Rizwan

    2015-01-01

The single-component pseudo-potential lattice Boltzmann model has been widely applied in multiphase simulation due to its simplicity and stability. In many studies, it has been claimed that this model can be stable for density ratios larger than 1000; however, its application is still limited to small density ratios when the contact angle is considered. The reason is that the original contact angle adjustment method influences the stability of the model. Moreover, simulation results in the present work show that, by applying the contact angle adjustment method, the density distribution near the wall is artificially changed, and the contact angle depends on the surface tension. Hence, it is very inconvenient to apply this method with a fixed contact angle, and the accuracy of the model cannot be guaranteed. To solve these problems, a contact angle adjustment method based on geometry analysis is proposed and numerically compared with the original method. Simulation results show that, with the new...

  2. Contact angle adjustment in equation-of-state-based pseudopotential model.

    Science.gov (United States)

    Hu, Anjie; Li, Longjian; Uddin, Rizwan; Liu, Dong

    2016-05-01

    The single component pseudopotential lattice Boltzmann model has been widely applied in multiphase simulation due to its simplicity and stability. In many studies, it has been claimed that this model can be stable for density ratios larger than 1000. However, the application of the model is still limited to small density ratios when the contact angle is considered. The reason is that the original contact angle adjustment method influences the stability of the model. Moreover, simulation results in the present work show that, by applying the original contact angle adjustment method, the density distribution near the wall is artificially changed, and the contact angle is dependent on the surface tension. Hence, it is very inconvenient to apply this method with a fixed contact angle, and the accuracy of the model cannot be guaranteed. To solve these problems, a contact angle adjustment method based on the geometry analysis is proposed and numerically compared with the original method. Simulation results show that, with our contact angle adjustment method, the stability of the model is highly improved when the density ratio is relatively large, and it is independent of the surface tension.
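The geometric idea behind such schemes can be illustrated with the spherical-cap relation commonly used to measure contact angles from simulated droplet profiles (a sketch of the general relation, not the authors' actual adjustment method): for a sessile droplet of cap height h and contact-line radius a, tan(θ/2) = h/a.

```python
import math

def contact_angle_deg(height, base_radius):
    """Contact angle of a sessile droplet assumed to be a spherical cap:
    tan(theta/2) = h / a, hence theta = 2 * atan(h / a). Returns degrees."""
    return math.degrees(2.0 * math.atan2(height, base_radius))
```

A hemispherical droplet (h = a) gives 90 degrees; a flat droplet (h near 0) gives an angle near zero, i.e. nearly complete wetting.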

  3. Adjustment Criterion and Algorithm in Adjustment Model with Uncertain

    Directory of Open Access Journals (Sweden)

    SONG Yingchun

    2015-02-01

Uncertainty often exists in the process of obtaining measurement data, which affects the reliability of parameter estimation. This paper establishes a new adjustment model in which uncertainty is incorporated into the function model as a parameter. A new adjustment criterion and its iterative algorithm are given, based on the uncertainty propagation law for the residual error, in which the maximum possible uncertainty is minimized. This paper also analyzes, with examples, the different adjustment criteria and the features of the optimal solutions of least-squares adjustment, uncertainty adjustment and total least-squares adjustment. Existing error theory is extended with a new method for processing observational data with uncertainty.
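For context, the classical least-squares adjustment that the uncertainty-based criterion extends solves the normal equations of the functional model A x = l + v. A minimal two-parameter sketch in plain Python (the baseline method, not the paper's uncertainty algorithm):

```python
def lsq_adjust_2(A, l):
    """Least-squares adjustment for a 2-parameter model via the normal
    equations N x = b, with N = A^T A and b = A^T l.
    A is a list of (a1, a2) rows; l is the list of observations."""
    n11 = sum(a[0] * a[0] for a in A)
    n12 = sum(a[0] * a[1] for a in A)
    n22 = sum(a[1] * a[1] for a in A)
    b1 = sum(a[0] * li for a, li in zip(A, l))
    b2 = sum(a[1] * li for a, li in zip(A, l))
    det = n11 * n22 - n12 * n12
    x = ((n22 * b1 - n12 * b2) / det, (n11 * b2 - n12 * b1) / det)
    # residuals v = A x - l
    v = [a[0] * x[0] + a[1] * x[1] - li for a, li in zip(A, l)]
    return x, v
```

Fitting a line y = 2 + 3t through three exact observations recovers the parameters with zero residuals.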

  4. Convexity Adjustments for ATS Models

    DEFF Research Database (Denmark)

    Murgoci, Agatha; Gaspar, Raquel M.

Practitioners are used to valuing a broad class of exotic interest rate derivatives simply by performing what is known as convexity adjustments (or convexity corrections). We start by exploiting the relations between various interest rate models and their connections to measure changes. As a result we classify convexity adjustments into forward adjustments and swap adjustments. We then focus on affine term structure (ATS) models and, in this context, conjecture that convexity adjustments should be related to affine functionals. In the case of forward adjustments, we show how to obtain exact formulas. Concretely, for LIBOR in arrears (LIA) contracts, we derive the system of Riccati ODEs one needs to solve to obtain the exact adjustment. Based upon the ideas of Schrager and Pelsser (2006) we are also able to derive general swap adjustments useful, in particular, when dealing with constant...

  5. Remote Sensing-based Methodologies for Snow Model Adjustments in Operational Streamflow Prediction

    Science.gov (United States)

    Bender, S.; Miller, W. P.; Bernard, B.; Stokes, M.; Oaida, C. M.; Painter, T. H.

    2015-12-01

Water management agencies rely on hydrologic forecasts issued by operational agencies such as NOAA's Colorado Basin River Forecast Center (CBRFC). The CBRFC has partnered with the Jet Propulsion Laboratory (JPL) under funding from NASA to incorporate research-oriented, remotely-sensed snow data into CBRFC operations and to improve the accuracy of CBRFC forecasts. The partnership has yielded valuable analysis of snow surface albedo as represented in JPL's MODIS Dust Radiative Forcing in Snow (MODDRFS) data, across the CBRFC's area of responsibility. When dust layers within a snowpack emerge, reducing the snow surface albedo, the snowmelt rate may accelerate. The CBRFC operational snow model (SNOW17) is a temperature-index model that lacks explicit representation of snowpack surface albedo. CBRFC forecasters monitor MODDRFS data for emerging dust layers and may manually adjust SNOW17 melt rates. A technique was needed for efficient and objective incorporation of the MODDRFS data into SNOW17. Initial development focused on Colorado, where dust-on-snow events frequently occur. CBRFC forecasters used retrospective JPL-CBRFC analysis and developed a quantitative relationship between MODDRFS data and mean areal temperature (MAT) data. The relationship was used to generate adjusted, MODDRFS-informed input for SNOW17. Impacts of the MODDRFS-SNOW17 MAT adjustment method on snowmelt-driven streamflow prediction varied spatially and with characteristics of the dust deposition events. The largest improvements occurred in southwestern Colorado, in years with intense dust deposition events. Application of the method in other regions of Colorado and in "low dust" years resulted in minimal impact. The MODDRFS-SNOW17 MAT technique will be implemented in CBRFC operations in late 2015, prior to spring 2016 runoff.
Collaborative investigation of remote sensing-based adjustment methods for the CBRFC operational hydrologic forecasting environment will continue over the next several years.

  6. Multi-Period Model of Portfolio Investment and Adjustment Based on Hybrid Genetic Algorithm

    Institute of Scientific and Technical Information of China (English)

    RONG Ximin; LU Meiping; DENG Lin

    2009-01-01

This paper proposes a multi-period portfolio investment model with class constraints, transaction costs, and indivisible securities. When an investor joins the securities market for the first time, he should decide on a portfolio based on the practical conditions of the securities market. In addition, investors should adjust the portfolio according to market changes, either changing or keeping the category of risky securities. The Markowitz mean-variance approach is applied to the multi-period portfolio selection problem. Because the sub-models are mixed integer programs whose objective function is not unimodal and whose feasible set has a particular structure, traditional optimization methods usually fail to find a globally optimal solution, so this paper employs a hybrid genetic algorithm to solve the problem. Investment policies that accord with the finance market and are easy for investors to operate are put forward with an illustration of application.

  7. A risk-adjusted CUSUM in continuous time based on the Cox model.

    Science.gov (United States)

    Biswas, Pinaki; Kalbfleisch, John D

    2008-07-30

In clinical practice, it is often important to monitor the outcomes associated with participating facilities. In organ transplantation, for example, it is important to monitor and assess the outcomes of the transplants performed at the participating centers and send a signal if a significant upward trend in the failure rates is detected. In manufacturing and process control contexts, the cumulative sum (CUSUM) technique has been used as a sequential monitoring scheme for some time. More recently, the CUSUM has also been suggested for use in medical contexts. In this article, we outline a risk-adjusted CUSUM procedure based on the Cox model for a failure time outcome. Theoretical approximations to the average run length are obtained for this new proposal and for some discrete-time procedures suggested in the literature. The proposed scheme and approximations are evaluated in simulations and illustrated on transplant facility data from the Scientific Registry of Transplant Recipients.
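The flavor of a risk-adjusted CUSUM can be sketched as follows (a simplified discrete illustration, not the authors' continuous-time procedure): each subject contributes a log-likelihood-ratio increment that rewards observed failures and discounts the expected number of failures under the in-control Cox model, testing against a hazard inflated by a factor rho.

```python
import math

def risk_adjusted_cusum(events, rho=2.0, threshold=2.0):
    """Risk-adjusted CUSUM sketch. `events` is a sequence of
    (expected, failed) pairs: `expected` is the subject's expected number
    of failures under the in-control (Cox) model, `failed` is 1 if a
    failure was observed, else 0. Signals when the statistic crosses
    `threshold`. Returns (signalled, statistic)."""
    w = 0.0
    for expected, failed in events:
        # log-likelihood-ratio increment for the out-of-control
        # alternative in which the hazard is multiplied by rho
        w += failed * math.log(rho) - (rho - 1.0) * expected
        w = max(w, 0.0)            # CUSUM is reflected at zero
        if w >= threshold:
            return True, w
    return False, w
```

With rho = 2, each observed failure adds log(2) ≈ 0.693 while each unit of risk-adjusted expectation subtracts 1, so a run of failures among low-risk subjects drives the statistic up quickly, whereas failure-free follow-up keeps it at zero.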

  8. Optimal Scheme Selection of Agricultural Production Structure Adjustment Based on DEA Model: Punjab (Pakistan)

    Institute of Scientific and Technical Information of China (English)

    Zeeshan Ahmad; Meng Jun; Muhammad Abdullah; Mazhar Nadeem Ishaq; Majid Lateef; Imran Khan

    2015-01-01

This paper used the modern evaluation method of DEA (Data Envelopment Analysis) to assess the comparative efficiency of multiple schemes of agricultural production structure adjustment and, on that basis, to choose the optimal scheme. Based on the results of the DEA model, we dissected the scale advantages of each candidate scheme and traced the underlying reasons why some schemes were not DEA-efficient, which clarified how those schemes could be improved. Finally, another method was proposed to rank the schemes and select the optimal one. The research provides guidance for practice when adjustment of the agricultural production structure is carried out.
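The core DEA idea, scoring each scheme against the best achievable ratio of outputs to inputs, reduces in the single-input, single-output case to a simple normalization (a sketch only; multi-factor DEA solves one linear program per scheme):

```python
def ccr_efficiency(inputs, outputs):
    """CCR (constant returns to scale) DEA efficiency, sketched for the
    single-input / single-output case: each scheme's output-to-input
    ratio is scaled by the best ratio, so frontier schemes score 1.0."""
    ratios = [o / i for i, o in zip(inputs, outputs)]
    best = max(ratios)
    return [r / best for r in ratios]
```

For three hypothetical schemes with inputs [10, 20, 30] and outputs [10, 30, 30], the second scheme defines the frontier (efficiency 1.0) and the others score 2/3, flagging them as candidates for improvement.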

  9. Bulk Density Adjustment of Resin-Based Equivalent Material for Geomechanical Model Test

    Directory of Open Access Journals (Sweden)

    Pengxian Fan

    2015-01-01

An equivalent material is of significance to the simulation of prototype rock in geomechanical model tests. Researchers attempt to ensure that the bulk density of the equivalent material is equal to that of the prototype rock. In this work, barite sand was used to increase the bulk density of a resin-based equivalent material. The variation law of the bulk density was revealed for the simulation of prototype rocks of different bulk densities. Over 300 specimens were made for uniaxial compression tests. Test results indicated that substituting barite sand for quartz sand had no apparent influence on the uniaxial compressive strength and elastic modulus of the specimens but increased the bulk density in proportion to the coarse aggregate content. A good linear relationship was found between the barite sand substitution ratio and the bulk density. The relationship between the bulk density and the amounts of coarse aggregate and barite sand is also presented. The test results provide insight into the bulk density adjustment of resin-based equivalent materials.

  10. Nonlinear relative-proportion-based route adjustment process for day-to-day traffic dynamics: modeling, equilibrium and stability analysis

    Science.gov (United States)

    Zhu, Wenlong; Ma, Shoufeng; Tian, Junfang; Li, Geng

    2016-11-01

Travelers' route adjustment behaviors in a congested road traffic network are acknowledged as a dynamic game process between them. Proportional-Switch Adjustment Process (PSAP) models have been extensively investigated to characterize travelers' route choice behaviors, since PSAP has a concise structure and an intuitive behavior rule. Unfortunately, most of them have limitations, e.g., the flow over-adjustment problem of the discrete PSAP model and the problem of route adjustment driven by absolute cost differences. This paper proposes a relative-Proportion-based Route Adjustment Process (rePRAP) that maintains the advantages of PSAP and overcomes these limitations. The rePRAP describes the situation in which travelers on a higher-cost route switch to alternatives with lower cost at a rate that depends solely on the relative cost differences between the higher-cost route and its alternatives. It is verified to be consistent with the principle of the rational behavior adjustment process. The equivalence among user equilibrium, the stationary path flow pattern and the stationary link flow pattern is established, which can be applied to judge whether a given network traffic flow has reached user equilibrium (UE) by detecting whether the link flow pattern is stationary. The stability theorem is proved by the Lyapunov function approach. A simple example is tested to demonstrate the effectiveness of the rePRAP model.
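A minimal day-to-day iteration in this spirit (a hypothetical two-route example with an assumed step size, not the paper's exact rePRAP dynamics) shifts flow from costlier routes to cheaper ones in proportion to the relative, rather than absolute, cost gap:

```python
def route_adjustment(flows, cost_fn, alpha=0.5, steps=200):
    """Day-to-day relative-proportion route adjustment sketch.
    `cost_fn(i, f)` returns the cost of route i carrying flow f.
    Each day, flow on a costlier route i shifts to a cheaper route j
    at a rate proportional to the relative gap (c_i - c_j) / c_i."""
    for _ in range(steps):
        costs = [cost_fn(i, f) for i, f in enumerate(flows)]
        new = flows[:]
        for i, ci in enumerate(costs):
            for j, cj in enumerate(costs):
                if ci > cj:
                    shift = alpha * flows[i] * (ci - cj) / ci
                    new[i] -= shift   # total flow is conserved:
                    new[j] += shift   # every unit leaving i arrives at j
        flows = new
    return flows
```

For two parallel routes with costs 10 + f and 15 + f and a total demand of 20, the iteration converges to the user-equilibrium split (12.5, 7.5), at which both routes cost 22.5.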

  11. Controlling fractional order chaotic systems based on Takagi-Sugeno fuzzy model and adaptive adjustment mechanism

    Energy Technology Data Exchange (ETDEWEB)

    Zheng Yongai, E-mail: zhengyongai@163.co [Department of Computer, Yangzhou University, Yangzhou, 225009 (China); Nian Yibei [School of Energy and Power Engineering, Yangzhou University, Yangzhou, 225009 (China); Wang Dejin [Department of Computer, Yangzhou University, Yangzhou, 225009 (China)

    2010-12-01

In this Letter, a novel kind of model, called the generalized Takagi-Sugeno (T-S) fuzzy model, is first developed by extending the conventional T-S fuzzy model. Then, a simple but efficient method to control fractional-order chaotic systems is proposed using the generalized T-S fuzzy model and an adaptive adjustment mechanism (AAM). Sufficient conditions are derived to guarantee chaos control from the stability criterion of linear fractional-order systems. The proposed approach offers a systematic design procedure for stabilizing a large class of fractional-order chaotic systems from the chaos research literature. The effectiveness of the approach is tested on the fractional-order Rössler system and the fractional-order Lorenz system.

  12. Droop Control with an Adjustable Complex Virtual Impedance Loop based on Cloud Model Theory

    DEFF Research Database (Denmark)

    Li, Yan; Shuai, Zhikang; Xu, Qinming

    2016-01-01

...not only avoids active/reactive power coupling, but also reduces the output voltage drop at the PCC. The proposed adjustable complex virtual impedance loop is incorporated into the conventional P/Q droop control to overcome the difficulty of obtaining the line impedance, which may change over time. Cloud model theory is applied to obtain the changing line impedance value online, relying on the response of the reactive power to the changing line impedance. The proposed control strategy is verified by simulation of a low-voltage microgrid in Matlab.

  13. Adjustment or updating of models

    Indian Academy of Sciences (India)

    D J Ewins

    2000-06-01

In this paper, a review of the terminology used in model adjustment or updating is first presented. This is followed by an outline of the major updating algorithms currently available, together with a discussion of the advantages and disadvantages of each, and the current state of the art of this important application area of optimum design technology.

  14. Dietary Reference Intakes for Zinc May Require Adjustment for Phytate Intake Based upon Model Predictions

    OpenAIRE

    Hambidge, K Michael; Miller, Leland V.; Westcott, Jamie E.; Krebs, Nancy F

    2008-01-01

The quantities of total dietary zinc (Zn) and phytate are the principal determinants of the quantity of absorbed Zn. Recent estimates of Dietary Reference Intakes (DRI) for Zn by the Institute of Medicine (IOM) were based on data from low-phytate or phytate-free diets. The objective of this project was to estimate the effects of increasing quantities of dietary phytate on these DRI. We used a trivariate model of the quantity of Zn absorbed as a function of dietary Zn and phytate with updated pa...

  15. Modelling and control of Base Plate Loading subsystem for The Motorized Adjustable Vertical Platform

    Science.gov (United States)

    Norsahperi, N. M. H.; Ahmad, S.; Fuad, A. F. M.; Mahmood, I. A.; Toha, S. F.; Akmeliawati, R.; Darsivan, F. J.

    2017-03-01

The Malaysian National Space Agency, ANGKASA, is an organization that conducts intensive research, especially on space. In 2011, ANGKASA built the Satellite Assembly, Integration and Test Centre (AITC) for spacecraft development and testing. A satellite undergoes numerous tests, one of which is the thermal test in the Thermal Vacuum Chamber (TVC). The TVC is located in a cleanroom and on a platform, and the only available facility for loading and unloading the satellite is an overhead crane, whose use can jeopardize the safety of the satellite. Therefore, the Motorized Adjustable Vertical Platform (MAVeP), capable of transferring the satellite into the TVC while operating under cleanroom conditions and in limited space, is proposed to facilitate the test. The MAVeP combines several mechanisms to produce horizontal and vertical motions, with the ability to transfer the satellite from the loading bay into the TVC. The integration of both motions to elevate and transfer heavy loads with high precision can deliver major contributions to various industries, such as aerospace and automotive. The base plate subsystem produces the horizontal motion by converting the angular motion of a motor into linear motion using a rack-and-pinion mechanism. In general, a system can be modelled by physical modelling from a schematic diagram or through system identification techniques. Both approaches are time consuming and require a comprehensive understanding of the system, which may be error-prone, especially for complex mechanisms. Therefore, a 3D virtual modelling technique has been implemented to represent the system in a real-world environment (i.e., with gravity) to simulate control performance. The main purpose of this technique is to provide a better model for analysing system performance, capable of evaluating the dynamic behaviour of the system with visualization, where a 3D prototype was designed and assembled in Solidworks

  16. Adjusting for overdispersion in piecewise exponential regression models to estimate excess mortality rate in population-based research

    Directory of Open Access Journals (Sweden)

    Miguel Angel Luque-Fernandez

    2016-10-01

Background: In population-based cancer research, piecewise exponential regression models are used to derive adjusted estimates of excess mortality due to cancer using the Poisson generalized linear modelling framework. However, the assumption that the conditional mean and variance of the rate parameter given the set of covariates x_i are equal is strong and may fail to account for overdispersion given the variability of the rate parameter (the variance exceeds the mean). Using an empirical example, we aimed to describe simple methods to test and correct for overdispersion. Methods: We used a regression-based score test for overdispersion under the relative survival framework and proposed different approaches to correct for overdispersion, including quasi-likelihood, robust standard error estimation, negative binomial regression and flexible piecewise modelling. Results: All piecewise exponential regression models showed significant inherent overdispersion (p-value <0.001). However, the flexible piecewise exponential model showed the smallest overdispersion parameter (3.2, versus 21.3 for non-flexible piecewise exponential models). Conclusion: We showed that there were no major differences between methods. However, flexible piecewise regression modelling, with either quasi-likelihood or robust standard errors, was the best approach, as it deals with both overdispersion due to model misspecification and true (inherent) overdispersion.
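A regression-based score test of this kind is closely related to the Cameron-Trivedi auxiliary regression. A minimal sketch of the dispersion estimate under the assumed variance form Var(y) = mu + alpha*mu^2 (an illustration, not the authors' exact relative-survival test):

```python
def overdispersion_alpha(y, mu):
    """Cameron-Trivedi auxiliary regression sketch: regress
    z_i = ((y_i - mu_i)^2 - y_i) / mu_i on mu_i with no intercept.
    The OLS slope through the origin estimates alpha in
    Var(y) = mu + alpha * mu^2; alpha > 0 indicates overdispersion."""
    z = [((yi - mi) ** 2 - yi) / mi for yi, mi in zip(y, mu)]
    return sum(mi * zi for mi, zi in zip(mu, z)) / sum(mi * mi for mi in mu)
```

Counts scattered far from their fitted means yield a positive alpha (overdispersion, suggesting quasi-likelihood or negative binomial corrections), while counts tighter than Poisson yield a negative alpha.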

  17. Model-based Adjustment of Droplet Characteristic for 3D Electronic Printing

    Directory of Open Access Journals (Sweden)

    Lin Na

    2017-01-01

The major challenge in 3D electronic printing is print resolution and accuracy. In this paper, a typical model, the lumped element modeling (LEM) method, is adopted to simulate the droplet jetting characteristics. This modeling method can quickly obtain the droplet velocity and volume with high accuracy. Experimental results show that LEM has a simple structure with sufficient simulation and prediction accuracy.

  18. Agricultural Production Structure Adjustment Scheme Evaluation and Selection Based on DEA Model for Punjab (Pakistan)

    Institute of Scientific and Technical Information of China (English)

    Zeeshan Ahmad; Meng Jun

    2015-01-01

DEA is a nonparametric method used in operations research and economics for the evaluation of the production frontier. It is intrinsically well suited to assessment problems with multiple inputs and, in particular, multiple outputs. This paper used the DεC2R model of DEA to assess the comparative efficiency of multiple schemes of agricultural industrial structure, and in the end we chose the most favorable, or "optimal", scheme. In addition, using functional insights from the DEA model, non-optimal or less optimal schemes were also improved to some extent. Assessment and selection of optimal schemes of agricultural industrial structure using the DEA model gave a better insight into agricultural industrial structure and was the first such research in Pakistan.

  19. Dynamic gauge adjustment of high-resolution X-band radar data for convective rain storms: Model-based evaluation against measured combined sewer overflow

    DEFF Research Database (Denmark)

    Borup, Morten; Grum, Morten; Linde, Jens Jørgen

    2016-01-01

Numerous studies have shown that radar rainfall estimates need to be adjusted against rain gauge measurements in order to be useful for hydrological modelling. In the current study we investigate if adjustment can improve radar rainfall estimates to the point where they can be used for modelling overflows from urban drainage systems, and we furthermore investigate the importance of the aggregation period of the adjustment scheme. This is done by continuously adjusting X-band radar data based on the previous 5–30 min of rain data recorded by multiple rain gauges and propagating the rainfall estimates through a hydraulic urban drainage model. The performance is assessed at a highly impermeable, well defined, 64 ha urban catchment, for nine overflow generating rain events. The dynamically adjusted radar data perform best when the aggregation period is as small as 10–20 min, in which case it performs much better than statically adjusted radar data and data from rain gauges situated 2–3 km away.

  20. Army Physical Therapy Productivity According to the Performance Based Adjustment Model

    Science.gov (United States)

    2008-05-02

...Therapy Association (APTA) does not have guidelines for determining appropriate productivity standards. According to the APTA, "productivity standards are generally determined by facilities, based on the specifics of their population, staffing mix, etc." (APTA, 2007). Hence, the PBAM benchmarking methodology is at odds with the APTA's view of how to establish productivity standards. Despite the lack of APTA productivity guidance, the...

  1. On-board adaptive model for state of charge estimation of lithium-ion batteries based on Kalman filter with proportional integral-based error adjustment

    Science.gov (United States)

    Wei, Jingwen; Dong, Guangzhong; Chen, Zonghai

    2017-10-01

With the rapid development of battery-powered electric vehicles, the lithium-ion battery plays a critical role in the reliability of the vehicle system. In order to provide timely management and protection for battery systems, it is necessary to develop a reliable battery model and accurate battery parameter estimation to describe battery dynamic behaviors. Therefore, this paper focuses on an on-board adaptive model for state-of-charge (SOC) estimation of lithium-ion batteries. First, a first-order equivalent circuit battery model is employed to describe battery dynamic characteristics. Then, the recursive least squares algorithm and an off-line identification method are used to provide good initial values of the model parameters, to ensure filter stability and reduce the convergence time. Third, an extended Kalman filter (EKF) is applied to estimate battery SOC and model parameters on-line. Considering that the EKF is essentially a first-order Taylor approximation of the battery model, and thus contains inevitable model errors, a proportional-integral-based error adjustment technique is employed to improve the performance of the EKF method and correct the model parameters. Finally, the experimental results on lithium-ion batteries indicate that the proposed EKF with proportional-integral-based error adjustment can provide a robust and accurate battery model and on-line parameter estimation.
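The structure of such an estimator can be sketched with a one-state EKF that combines Coulomb counting with a voltage-based correction (hypothetical parameters and a linear OCV curve for illustration only; the paper's model additionally identifies parameters online and adds the PI error correction):

```python
def soc_ekf(soc0, currents, voltages, dt=1.0, capacity=3600.0,
            r0=0.05, q=1e-7, r=1e-3):
    """One-state EKF for battery state of charge (sketch, hypothetical
    parameters). State: SOC, predicted by Coulomb counting; measurement:
    terminal voltage v = ocv(soc) - r0*i, with an assumed linear OCV
    curve ocv(s) = 3.0 + 1.2*s. Currents in A (discharge positive)."""
    def ocv(s):
        return 3.0 + 1.2 * s

    soc, p = soc0, 1e-2              # state estimate and its variance
    for i, v in zip(currents, voltages):
        soc -= i * dt / capacity     # predict: Coulomb counting
        p += q                       # inflate variance by process noise
        h = 1.2                      # measurement Jacobian d(ocv)/d(soc)
        k = p * h / (h * p * h + r)  # Kalman gain
        soc += k * (v - (ocv(soc) - r0 * i))  # correct with voltage residual
        p = (1 - k * h) * p
    return soc
```

Even with a deliberately wrong initial guess, the voltage correction pulls the SOC estimate onto the true trajectory within a few steps, which is the practical advantage over open-loop Coulomb counting.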

  2. Improved constraints on models of glacial isostatic adjustment: a review of the contribution of ground-based geodetic observations

    NARCIS (Netherlands)

    King, M.A.; Altamimi, Z.; Boehm, J.; Bos, M.; Dach, R.; Elosegui, P.; Fund, F.; Hernández-Pajares, M.; Lavallee, D.; Riva, E.M.; et al.

    2010-01-01

The provision of accurate models of Glacial Isostatic Adjustment (GIA) is presently a priority need in climate studies, largely due to the potential of the Gravity Recovery and Climate Experiment (GRACE) data to be used to determine accurate and continent-wide assessments of ice mass change and hydr...

  3. Dynamic gauge adjustment of high-resolution X-band radar data for convective rain storms: Model-based evaluation against measured combined sewer overflow

    Science.gov (United States)

    Borup, Morten; Grum, Morten; Linde, Jens Jørgen; Mikkelsen, Peter Steen

    2016-08-01

Numerous studies have shown that radar rainfall estimates need to be adjusted against rain gauge measurements in order to be useful for hydrological modelling. In the current study we investigate if adjustment can improve radar rainfall estimates to the point where they can be used for modelling overflows from urban drainage systems, and we furthermore investigate the importance of the aggregation period of the adjustment scheme. This is done by continuously adjusting X-band radar data based on the previous 5-30 min of rain data recorded by multiple rain gauges and propagating the rainfall estimates through a hydraulic urban drainage model. The model is built entirely from physical data, without any calibration, to avoid bias towards any specific type of rainfall estimate. The performance is assessed by comparing measured and modelled water levels at a weir downstream of a highly impermeable, well defined, 64 ha urban catchment, for nine overflow generating rain events. The dynamically adjusted radar data perform best when the aggregation period is as small as 10-20 min, in which case it performs much better than statically adjusted radar data and data from rain gauges situated 2-3 km away.
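A minimal version of such a dynamic adjustment (a sketch with a single mean-field bias factor; the study uses multiple gauges and propagates the result through a hydraulic model) rescales each radar estimate by the gauge-to-radar accumulation ratio over the preceding aggregation window:

```python
def gauge_adjust(radar, gauges, window=10):
    """Dynamic mean-field bias adjustment sketch: scale each radar
    rainfall value by the ratio of gauge to radar accumulation over the
    preceding `window` time steps. `radar` and `gauges` are aligned
    per-time-step rainfall series at the gauge location."""
    adjusted = []
    for t, value in enumerate(radar):
        lo = max(0, t - window)
        g_sum = sum(gauges[lo:t])
        r_sum = sum(radar[lo:t])
        # fall back to no adjustment until radar accumulation exists
        factor = g_sum / r_sum if r_sum > 0 else 1.0
        adjusted.append(value * factor)
    return adjusted
```

If the radar consistently reads twice the gauge value, the factor settles at 0.5 once the window fills, bringing the adjusted series onto the gauge scale while still following the radar's spatial and temporal pattern.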

  4. Behavioral modeling of Digitally Adjustable Current Amplifier

    Directory of Open Access Journals (Sweden)

    Josef Polak

    2015-03-01

This article presents the digitally adjustable current amplifier (DACA) and its analog behavioral model (ABM), which is suitable for both ideal and advanced analyses of function blocks using DACA as the active element. There are four levels of this model, each suitable for simulating a certain degree of electronic circuit design (e.g. filters, oscillators, generators). Each model is presented through a schematic wiring in the simulation program OrCAD, including a description of the equations representing specific functions at the given level of the simulation model. The design of individual levels is always verified using PSpice simulations. The ABM model has been developed based on practically measured values of a number of DACA amplifier samples. The simulation results for the proposed levels of the ABM model are shown and compared with the results of real measurements of the active element DACA.

  5. Assessment and adjustment of sea surface salinity products from Aquarius in the southeast Indian Ocean based on in situ measurement and MyOcean modeled data

    Institute of Scientific and Technical Information of China (English)

    XIA Shenzhen; KE Changqing; ZHOU Xiaobing; ZHANG Jie

    2016-01-01

The in situ sea surface salinity (SSS) measurements from a scientific cruise to the western zone of the southeast Indian Ocean, covering 30°-60°S, 80°-120°E, are used to assess the SSS retrieved from Aquarius (Aquarius SSS). Wind speed and sea surface temperature (SST) affect SSS estimates based on passive microwave radiation within the mid- to low-latitude southeast Indian Ocean. The relationships among the in situ SSS, Aquarius SSS and wind-SST corrections are used to adjust the Aquarius SSS. The adjusted Aquarius SSS are compared with SSS data from the MyOcean model. Results show that: (1) Before adjustment, compared with MyOcean SSS, the Aquarius SSS is higher in most of the sea areas, but lower in the low-temperature areas south of 55°S and west of 98°E; overall, the Aquarius SSS is higher by 0.42 on average for the southeast Indian Ocean. (2) After adjustment, the adjustment largely counteracts the impact of high wind speeds and improves the overall accuracy of the retrieved salinity (the mean absolute error of the zonal mean is improved by 0.06, and the mean error is -0.05 compared with MyOcean SSS). Near latitude 42°S, the adjusted SSS agrees well with MyOcean, with a difference of approximately 0.004.

  6. Adjustment of endogenous concentrations in pharmacokinetic modeling.

    Science.gov (United States)

    Bauer, Alexander; Wolfsegger, Martin J

    2014-12-01

    Estimating pharmacokinetic parameters in the presence of an endogenous concentration is not straightforward as cross-reactivity in the analytical methodology prevents differentiation between endogenous and dose-related exogenous concentrations. This article proposes a novel intuitive modeling approach which adequately adjusts for the endogenous concentration. Monte Carlo simulations were carried out based on a two-compartment population pharmacokinetic (PK) model fitted to real data following intravenous administration. A constant and a proportional error model were assumed. The performance of the novel model and the method of straightforward subtraction of the observed baseline concentration from post-dose concentrations were compared in terms of terminal half-life, area under the curve from 0 to infinity, and mean residence time. Mean bias in PK parameters was up to 4.5 times better with the novel model assuming a constant error model and up to 6.5 times better assuming a proportional error model. The simulation study indicates that this novel modeling approach results in less biased and more accurate PK estimates than straightforward subtraction of the observed baseline concentration and overcomes the limitations of previously published approaches.

  7. Behavioral modeling of Digitally Adjustable Current Amplifier

    OpenAIRE

    Josef Polak; Lukas Langhammer; Jan Jerabek

    2015-01-01

    This article presents the digitally adjustable current amplifier (DACA) and its analog behavioral model (ABM), which is suitable for both ideal and advanced analyses of function blocks using the DACA as the active element. There are four levels of this model, each suitable for simulating a certain stage of electronic circuit design (e.g. filters, oscillators, generators). Each model is presented through a schematic wiring diagram in the simulation program OrCAD, including a description of equat...

  8. Effect of Flux Adjustments on Temperature Variability in Climate Models

    Energy Technology Data Exchange (ETDEWEB)

    Duffy, P.; Bell, J.; Covey, C.; Sloan, L.

    1999-12-27

    It has been suggested that "flux adjustments" in climate models suppress simulated temperature variability. If true, this might invalidate the conclusion that at least some of the observed temperature increases since 1860 are anthropogenic, since this conclusion is based in part on estimates of natural temperature variability derived from flux-adjusted models. We assess variability of surface air temperatures in 17 simulations of internal temperature variability submitted to the Coupled Model Intercomparison Project. By comparing variability in flux-adjusted vs. non-flux-adjusted simulations, we find no evidence that flux adjustments suppress temperature variability in climate models; other, largely unknown, factors are much more important in determining simulated temperature variability. Therefore the conclusion that at least some of the observed temperature increases are anthropogenic cannot be questioned on the grounds that it is based in part on results of flux-adjusted models. Also, reducing or eliminating flux adjustments would probably do little to improve simulations of temperature variability.

  9. Burden of Six Healthcare-Associated Infections on European Population Health: Estimating Incidence-Based Disability-Adjusted Life Years through a Population Prevalence-Based Modelling Study.

    Directory of Open Access Journals (Sweden)

    Alessandro Cassini

    2016-10-01

    Estimating the burden of healthcare-associated infections (HAIs) compared to other communicable diseases is an ongoing challenge given the need for good quality data on the incidence of these infections and the involved comorbidities. Based on the methodology of the Burden of Communicable Diseases in Europe (BCoDE) project and 2011-2012 data from the European Centre for Disease Prevention and Control (ECDC) point prevalence survey (PPS) of HAIs and antimicrobial use in European acute care hospitals, we estimated the burden of six common HAIs. The included HAIs were healthcare-associated pneumonia (HAP), healthcare-associated urinary tract infection (HA UTI), surgical site infection (SSI), healthcare-associated Clostridium difficile infection (HA CDI), healthcare-associated neonatal sepsis, and healthcare-associated primary bloodstream infection (HA primary BSI). The burden of these HAIs was measured in disability-adjusted life years (DALYs). Evidence relating to the disease progression pathway of each type of HAI was collected through systematic literature reviews, in order to estimate the risks attributable to HAIs. For each of the six HAIs, gender and age group prevalence from the ECDC PPS was converted into incidence rates by applying the Rhame and Sudderth formula. We adjusted for reduced life expectancy within the hospital population using three severity groups based on McCabe score data from the ECDC PPS. We estimated that 2,609,911 new cases of HAI occur every year in the European Union and European Economic Area (EU/EEA). The cumulative burden of the six HAIs was estimated at 501 DALYs per 100,000 general population each year in the EU/EEA. HAP and HA primary BSI were associated with the highest burden and represented more than 60% of the total burden, with 169 and 145 DALYs per 100,000 total population, respectively. HA UTI, SSI, HA CDI, and HA primary BSI ranked as the third to sixth syndromes in terms of burden of disease. HAP and HA primary BSI were ...
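The prevalence-to-incidence conversion mentioned in the abstract can be sketched in a few lines. This is a minimal illustration of the Rhame and Sudderth formula as it is commonly stated, I = P × LA/LN; the numerical values below are hypothetical, not taken from the ECDC PPS:

```python
def rhame_sudderth_incidence(prevalence, mean_stay, mean_infection_duration):
    """Convert an HAI point prevalence into an incidence proportion.

    Rhame-Sudderth formula (common statement): I = P * (LA / LN), where
      P  = observed point prevalence (fraction of patients infected),
      LA = mean length of hospital stay of all patients (days),
      LN = mean duration of infection (days).
    """
    return prevalence * mean_stay / mean_infection_duration

# Hypothetical numbers, for illustration only:
p = 0.02     # 2% of surveyed patients have an HAI
la = 8.0     # mean length of stay, days
ln_ = 16.0   # mean duration of infection, days
incidence = rhame_sudderth_incidence(p, la, ln_)
print(round(incidence, 3))  # 0.01
```

Age- and gender-stratified prevalences would each be converted this way before DALY computation.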

  10. BUNDLE ADJUSTMENTS CCD CAMERA CALIBRATION BASED ON COLLINEARITY EQUATION

    Institute of Scientific and Technical Information of China (English)

    Liu Changying; Yu Zhijing; Che Rensheng; Ye Dong; Huang Qingcheng; Yang Dingning

    2004-01-01

    A solid-template CCD camera calibration method using bundle adjustment based on the collinearity equations is presented, considering the characteristics of large-dimension on-line space measurement. The method adopts a more comprehensive camera model, based on the pinhole model extended with distortion corrections. In the calibration process, precision is improved by imaging at different locations throughout the measurement space, taking multiple images at each location, and optimizing by bundle adjustment. The calibration experiment proves that the method fulfills the calibration requirements of CCD cameras applied to vision measurement.
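For reference, the collinearity equations underlying such bundle-adjustment calibration relate an image point (x, y) to an object point (X, Y, Z) through the projection centre (X_s, Y_s, Z_s), the rotation matrix entries r_ij, the principal distance f and the principal point (x_0, y_0); distortion corrections are added on top of this pinhole core. A standard textbook form (not quoted from the paper itself) is:

```latex
x - x_0 = -f\,\frac{r_{11}(X - X_s) + r_{12}(Y - Y_s) + r_{13}(Z - Z_s)}
                   {r_{31}(X - X_s) + r_{32}(Y - Y_s) + r_{33}(Z - Z_s)}, \qquad
y - y_0 = -f\,\frac{r_{21}(X - X_s) + r_{22}(Y - Y_s) + r_{23}(Z - Z_s)}
                   {r_{31}(X - X_s) + r_{32}(Y - Y_s) + r_{33}(Z - Z_s)}
```

Bundle adjustment then minimizes the reprojection residuals of these equations over all camera poses and template points simultaneously.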

  11. Risk-adjusted capitation based on the Diagnostic Cost Group Model: an empirical evaluation with health survey information

    NARCIS (Netherlands)

    L.M. Lamers (Leida)

    1999-01-01

    OBJECTIVE: To evaluate the predictive accuracy of the Diagnostic Cost Group (DCG) model using health survey information. DATA SOURCES/STUDY SETTING: Longitudinal data collected for a sample of members of a Dutch sickness fund. In the Netherlands the sickness ...

  12. R.M. Solow Adjusted Model of Economic Growth

    Directory of Open Access Journals (Sweden)

    Ion Gh. Rosca

    2007-05-01

    Besides the models of M. Keynes, R.F. Harrod, E. Domar, D. Romer, Ramsey-Cass-Koopmans etc., the R.M. Solow model belongs to the category which characterizes economic growth. The paper proposes the study of the R.M. Solow adjusted model of economic growth, the adjustment consisting in adapting the model to the Romanian economic characteristics. The article is the first of a three-paper series dedicated to the macroeconomic modelling theme using the R.M. Solow model, the others being "Measurement of the economic growth and extensions of the R.M. Solow adjusted model" and "Evolution scenarios at the Romanian economy level using the R.M. Solow adjusted model". The analysis part of the model is based on the study of the equilibrium in the continuous case, with some interpretations of the discrete one, by using the state diagram. The optimization problem at the economy level is also used; it is built up of a specified number of representative consumers and firms in order to reveal the interaction between these elements.
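The equilibrium analysis described in the abstract can be illustrated numerically. A minimal sketch with Cobb-Douglas technology f(k) = k^α (all parameter values are illustrative, not the paper's Romanian calibration): capital per effective worker follows k_{t+1} = k_t + s·k_t^α − (n+g+δ)·k_t and converges to the steady state k* = (s/(n+g+δ))^{1/(1−α)}.

```python
# Minimal Solow growth sketch with Cobb-Douglas technology f(k) = k**alpha.
# Parameter values are illustrative, not the paper's Romanian calibration.
s, alpha = 0.25, 0.33            # saving rate, capital share
n, g, delta = 0.01, 0.02, 0.05   # population growth, tech growth, depreciation

def step(k):
    """One period of capital accumulation per effective worker."""
    return k + s * k**alpha - (n + g + delta) * k

k = 1.0
for _ in range(1000):
    k = step(k)

k_star = (s / (n + g + delta)) ** (1 / (1 - alpha))
print(round(k, 4), round(k_star, 4))  # the two printed values agree
```

The iteration reproduces the closed-form steady state, which is what the state-diagram equilibrium analysis establishes graphically.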

  13. Evaluation and adjustment of description of denitrification in the DailyDayCent and COUP models based on N2 and N2O laboratory incubation system measurements

    Science.gov (United States)

    Grosz, Balázs; Well, Reinhard; Dannenmann, Michael; Dechow, René; Kitzler, Barbara; Michel, Kerstin; Reent Köster, Jan

    2017-04-01

    … data-sets are needed in view of the extreme spatio-temporal heterogeneity of denitrification. DASIM will provide such data based on laboratory incubations, including measurement of N2O and N2 fluxes and determination of the relevant drivers. Here, we present how we will use these data to evaluate common biogeochemical process models (DailyDayCent, Coup) with respect to modelled NO, N2O and N2 fluxes from denitrification. The models are used with different settings. The first approximation is the basic "factory" setting of the models. The next step shows the precision of the modelling results after adjusting the appropriate parameters to the measured values and the "factory" results. The better adjustment and the well-controlled measured input and output parameters could provide a better understanding of the probable shortcomings of the tested models, which will be a basis for future model improvement.

  14. An optimization model for regional air pollutants mitigation based on the economic structure adjustment and multiple measures: A case study in Urumqi city, China.

    Science.gov (United States)

    Sun, Xiaowei; Li, Wei; Xie, Yulei; Huang, Guohe; Dong, Changjuan; Yin, Jianguang

    2016-11-01

    A model based on economic structure adjustment and pollutant mitigation was proposed and applied in Urumqi. Best-worst case analysis and scenario analysis were performed in the model to guarantee parameter accuracy and to analyze the effect of changes in emission reduction styles. Results indicated that pollutant mitigation in the electric power industry, the iron and steel industry, and traffic relied mainly on technological transformation measures, engineering transformation measures and structural emission reduction measures, respectively; pollutant mitigation in the cement industry relied mainly on structural emission reduction and technological transformation measures; and pollutant mitigation in the thermal industry relied on all four mitigation measures. They also indicated that structural emission reduction was a better measure for pollutant mitigation in Urumqi. The iron and steel industry contributed greatly to SO2, NOx and PM (particulate matter) emission reduction and should be given special attention. In addition, the scale of the iron and steel industry should be reduced with the decrease of SO2 mitigation amounts; the scale of traffic and the electric power industry should be reduced with the decrease of NOx mitigation amounts; and the scale of the cement industry and the iron and steel industry should be reduced with the decrease of PM mitigation amounts. The study can provide references for pollutant mitigation schemes to decision-makers for regional economic and environmental development under the 12th Five-Year Plan on National Economic and Social Development of Urumqi.

  15. DESIGN OF 3D MODEL OF CUSTOMIZED ANATOMICALLY ADJUSTED IMPLANTS

    OpenAIRE

    Miodrag Manić; Zoran Stamenković; Milorad Mitković; Miloš Stojković; Duncan E.T. Shephard

    2015-01-01

    Design and manufacturing of customized implants is a field that has been rapidly developing in recent years. This paper presents an originally developed method for designing a 3D model of customized anatomically adjusted implants. The method is based upon a CT scan of a bone fracture. A CT scan is used to generate a 3D bone model and a fracture model. Using these scans, an indicated location for placing the implant is recognized and the design of a 3D model of customized implants is made. Wit...

  16. A NEW SOLUTION MODEL OF NONLINEAR DYNAMIC LEAST SQUARE ADJUSTMENT

    Institute of Scientific and Technical Information of China (English)

    陶华学; 郭金运

    2000-01-01

    The nonlinear least squares adjustment is a key problem studied in technical fields. The paper studies the non-derivative solution to the nonlinear dynamic least squares adjustment and puts forward a new algorithm model and its solution model. The method has little computational load and is simple. This opens up a theoretical method to solve the nonlinear dynamic least squares adjustment.
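As a concrete illustration of avoiding analytical derivatives in nonlinear least squares (a generic finite-difference Gauss-Newton sketch, not the algorithm proposed in the paper), consider fitting the two-parameter model y = a·exp(b·x):

```python
import math

def fd_gauss_newton(xs, ys, a, b, h=1e-6, iters=100):
    """Gauss-Newton for y = a*exp(b*x) with a finite-difference Jacobian,
    so no analytical derivatives are required."""
    def model(a, b, x):
        return a * math.exp(b * x)
    for _ in range(iters):
        # residuals and finite-difference Jacobian columns
        r = [y - model(a, b, x) for x, y in zip(xs, ys)]
        ja = [(model(a + h, b, x) - model(a, b, x)) / h for x in xs]
        jb = [(model(a, b + h, x) - model(a, b, x)) / h for x in xs]
        # normal equations (J^T J) d = J^T r, solved by Cramer's rule (2x2)
        aa = sum(v * v for v in ja)
        ab = sum(u * v for u, v in zip(ja, jb))
        bb = sum(v * v for v in jb)
        ra = sum(u * v for u, v in zip(ja, r))
        rb = sum(u * v for u, v in zip(jb, r))
        det = aa * bb - ab * ab
        if abs(det) < 1e-15:
            break
        a += (ra * bb - rb * ab) / det
        b += (aa * rb - ab * ra) / det
    return a, b

# Noise-free synthetic data generated with a = 2.0, b = 0.5:
xs = [0.0, 0.5, 1.0, 1.5, 2.0]
ys = [2.0 * math.exp(0.5 * x) for x in xs]
a, b = fd_gauss_newton(xs, ys, a=1.5, b=0.3)
print(round(a, 4), round(b, 4))
```

On this noise-free example the iteration recovers a ≈ 2.0 and b ≈ 0.5 from the perturbed start, without ever forming an analytical derivative.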

  17. CERAMIC: Case-Control Association Testing in Samples with Related Individuals, Based on Retrospective Mixed Model Analysis with Adjustment for Covariates.

    Directory of Open Access Journals (Sweden)

    Sheng Zhong

    2016-10-01

    We consider the problem of genetic association testing of a binary trait in a sample that contains related individuals, where we adjust for relevant covariates and allow for missing data. We propose CERAMIC, an estimating equation approach that can be viewed as a hybrid of logistic regression and linear mixed-effects model (LMM) approaches. CERAMIC extends the recently proposed CARAT method to allow samples with related individuals and to incorporate partially missing data. In simulations, we show that CERAMIC outperforms existing LMM and generalized LMM approaches, maintaining high power and correct type 1 error across a wider range of scenarios. CERAMIC results in a particularly large power increase over existing methods when the sample includes related individuals with some missing data (e.g., when some individuals with phenotype and covariate information have missing genotype), because CERAMIC is able to make use of the relationship information to incorporate partially missing data in the analysis while correcting for dependence. Because CERAMIC is based on a retrospective analysis, it is robust to misspecification of the phenotype model, resulting in better control of type 1 error and higher power than that of prospective methods, such as GMMAT, when the phenotype model is misspecified. CERAMIC is computationally efficient for genome-wide analysis in samples of related individuals of almost any configuration, including small families, unrelated individuals and even large, complex pedigrees. We apply CERAMIC to data on type 2 diabetes (T2D) from the Framingham Heart Study. In a genome scan, 9 of the 10 smallest CERAMIC p-values occur in or near either known T2D susceptibility loci or plausible candidates, verifying that CERAMIC is able to home in on the important loci in a genome scan.

  18. Comparison of multiplicative heterogeneous variance adjustment models for genetic evaluations.

    Science.gov (United States)

    Márkus, Sz; Mäntysaari, E A; Strandén, I; Eriksson, J-Å; Lidauer, M H

    2014-06-01

    Two heterogeneous variance adjustment methods and two variance models were compared in a simulation study. The method used for heterogeneous variance adjustment in the Nordic test-day model, which is a multiplicative method based on Meuwissen (J. Dairy Sci., 79, 1996, 310), was compared with a restricted multiplicative method where the fixed effects were not scaled. Both methods were tested with two different variance models, one with a herd-year and the other with a herd-year-month random effect. The simulation study was built on two field data sets from Swedish Red dairy cattle herds. For both data sets, 200 herds with test-day observations over a 12-year period were sampled. For one data set, herds were sampled randomly, while for the other, each herd was required to have at least 10 first-calving cows per year. The simulations supported the applicability of both methods and models, but the multiplicative mixed model was more sensitive in the case of small strata sizes. Estimation of variance components for the variance models resulted in different parameter estimates, depending on the applied heterogeneous variance adjustment method and variance model combination. Our analyses showed that the assumption of a first-order autoregressive correlation structure between random-effect levels is reasonable when within-herd heterogeneity is modelled by year classes, but less appropriate for within-herd heterogeneity by month classes. Of the studied alternatives, the multiplicative method and a variance model with a random herd-year effect were found most suitable for the Nordic test-day model for dairy cattle evaluation.

  19. Burden of Six Healthcare-Associated Infections on European Population Health : Estimating Incidence-Based Disability-Adjusted Life Years through a Population Prevalence-Based Modelling Study

    NARCIS (Netherlands)

    Cassini, Alessandro; Plachouras, Diamantis; Eckmanns, Tim; Abu Sin, Muna; Blank, Hans-Peter; Ducomble, Tanja; Haller, Sebastian; Harder, Thomas; Klingeberg, Anja; Sixtensson, Madlen; Velasco, Edward; Weiß, Bettina; Kramarz, Piotr; Monnet, Dominique L; Kretzschmar, Mirjam|info:eu-repo/dai/nl/075187981; Suetens, Carl

    2016-01-01

    BACKGROUND: Estimating the burden of healthcare-associated infections (HAIs) compared to other communicable diseases is an ongoing challenge given the need for good quality data on the incidence of these infections and the involved comorbidities. Based on the methodology of the Burden of Communicable ...

  20. Methodological aspects of journaling a dynamic adjusting entry model

    Directory of Open Access Journals (Sweden)

    Vlasta Kašparovská

    2011-01-01

    This paper expands the discussion of the importance and function of adjusting entries for loan receivables. Discussion of the cyclical development of adjusting entries, their negative impact on the business cycle and potential solutions has intensified during the financial crisis. These discussions are still ongoing and continue to be relevant to members of the professional public, banking regulators and representatives of international accounting institutions. The objective of this paper is to evaluate a method of journaling dynamic adjusting entries under current accounting law. It also expresses the authors' opinions on the potential for consistently implementing basic accounting principles in journaling adjusting entries for loan receivables under a dynamic model.

  1. Size-Adjustable Microdroplets Generation Based on Microinjection

    Directory of Open Access Journals (Sweden)

    Shibao Li

    2017-03-01

    Microinjection is a promising tool for microdroplet generation, although microdroplet generation by microinjection remains a challenging issue due to the Laplace pressure at the micropipette opening. Here, we apply a simple and robust substrate-contacting microinjection method to microdroplet generation, presenting a size-adjustable microdroplet generation method based on a critical injection (CI) model. Firstly, the micropipette is adjusted to a preset injection pressure. Secondly, the micropipette is moved down to contact the substrate; then the Laplace pressure in the droplet is no longer relevant and the liquid flows out immediately. The liquid flows out continuously until the micropipette is lifted, ending the substrate-contacting situation, which restores the Laplace pressure at the micropipette opening and terminates the liquid injection. We carried out five groups of experiments, capturing 1600 images within each group and detecting the microdroplet radius in each image. We then determined the relationship among microdroplet radius, radius at the micropipette opening, time, and pressure, and two more experiments were conducted to verify this relationship. To verify the effectiveness of the substrate-contacting method and the relationship, we conducted two further experiments with six desired microdroplet radii set in each: adjusting the injection time at a given pressure, and adjusting the injection pressure at a given time. Six arrays of microdroplets were obtained in each experiment. The results show that the standard errors of the microdroplet radii are less than 2% and the experimental errors fall in the range of ±5%. The average operating speed is 20 microdroplets/min and the minimum radius of the microdroplets is 25 μm. This method has a simple experimental setup that enables easy manipulation and lower cost.
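The Laplace pressure that opposes outflow at the micropipette opening scales inversely with the opening radius: ΔP = 2γ/r for a spherical meniscus. A small sketch of this relation (the surface tension and tip radius below are illustrative values, not parameters from the paper):

```python
def laplace_pressure(surface_tension, radius):
    """Laplace pressure dP = 2*gamma/r across a spherical liquid-air interface."""
    return 2.0 * surface_tension / radius

gamma = 0.072   # N/m, roughly water at room temperature
r_tip = 5e-6    # m, a hypothetical 5 um micropipette opening
print(round(laplace_pressure(gamma, r_tip)))  # 28800 Pa
```

This back-pressure of tens of kilopascals at micron-scale openings is why ordinary (non-substrate-contacting) injection of such small droplets is difficult.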

  2. Development of a computational framework to adjust the pre-impact spine posture of a whole-body model based on cadaver tests data.

    Science.gov (United States)

    Poulard, David; Subit, Damien; Donlon, John-Paul; Kent, Richard W

    2015-02-26

    A method was developed to adjust the posture of a human numerical model to match the pre-impact posture of a human subject. The method involves pulling cables to prescribe the position and orientation of the head, spine and pelvis during a simulation. Six postured models matching the pre-impact postures measured on subjects tested in previous studies were created from a human numerical model. Posture scalars were measured before and after applying the method to evaluate its efficiency. The lateral leaning angle θL, defined between T1 and the pelvis in the coronal plane, was found to be significantly improved after application, with an average difference from the PMHS of 0.1±0.1° (4.6±2.7° before application). This method will be applied in further studies to analyze independently the contribution of pre-impact posture to impact response using human numerical models. Copyright © 2015 Elsevier Ltd. All rights reserved.
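A posture scalar such as θL can be computed from landmark coordinates. A hedged sketch (reading θL as the angle of the pelvis-to-T1 line from vertical in the coronal plane, which is our interpretation of the abstract; the coordinates are hypothetical):

```python
import math

def lateral_leaning_angle(t1_yz, pelvis_yz):
    """Angle (degrees) between the pelvis->T1 line and the vertical axis,
    measured in the coronal (y-z) plane; y lateral, z vertical."""
    dy = t1_yz[0] - pelvis_yz[0]
    dz = t1_yz[1] - pelvis_yz[1]
    return math.degrees(math.atan2(dy, dz))

# T1 sitting 40 mm lateral and 480 mm above the pelvis (hypothetical):
print(round(lateral_leaning_angle((40.0, 480.0), (0.0, 0.0)), 2))  # 4.76
```

Comparing this scalar on the model before and after cable-based repositioning is the kind of check the study uses to quantify posture-matching error.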

  3. Model for Adjustment of Aggregate Forecasts using Fuzzy Logic

    Directory of Open Access Journals (Sweden)

    Taracena–Sanz L. F.

    2010-07-01

    This research suggests a contribution to the implementation of forecasting models. The proposed model is developed with the aim of fitting the demand projection to the firm's environment, based on three considerations that often cause demand forecasts to differ from reality: 1) one of the problems most difficult to model in forecasting is the uncertainty related to the available information; 2) the methods traditionally used by firms for demand projection are mainly based on past market behavior (historical demand); and 3) these methods do not consider in their analysis the factors that are influencing the observed behavior. Therefore, the proposed model is based on the implementation of fuzzy logic, integrating the main variables that affect the behavior of market demand and which are not considered in the classical statistical methods. The model was applied to a bottler of carbonated beverages, and with the adjustment of the demand projection a more reliable forecast was obtained.

  4. DESIGN OF 3D MODEL OF CUSTOMIZED ANATOMICALLY ADJUSTED IMPLANTS

    Directory of Open Access Journals (Sweden)

    Miodrag Manić

    2015-12-01

    Design and manufacturing of customized implants is a field that has been rapidly developing in recent years. This paper presents an originally developed method for designing a 3D model of customized anatomically adjusted implants. The method is based upon a CT scan of a bone fracture. The CT scan is used to generate a 3D bone model and a fracture model. Using these models, an indicated location for placing the implant is recognized and the design of a 3D model of the customized implant is made. With this method it is possible to design volumetric implants used for replacing a part of the bone, or plate-type implants for fixation of a bone part. The side of the implant lying on the bone is fully aligned with the anatomical shape of the bone surface neighboring the fracture. The model is designed for implant production by any method, and it is ideal for 3D printing of implants.

  5. The relationship of values to adjustment in illness: a model for nursing practice.

    Science.gov (United States)

    Harvey, R M

    1992-04-01

    This paper proposes a model of the relationship between values, in particular health value, and adjustment to illness. The importance of values, as well as the need for value change, is described in the literature related to adjustment to physical disability and chronic illness. An empirical model that explains the relationship of values to adjustment or adaptation, however, has not been found by this researcher. Balance theory and its application to the abstract and perceived cognitions of health value and health perception are described here to explain the relationship of values like health value to outcomes associated with adjustment or adaptation to illness. The proposed model is based on the balance theories of Heider, Festinger and Feather. Hypotheses based on the model were tested and supported in a study of 100 adults with visible and invisible chronic illness. Nursing interventions based on the model are described and suggestions for further research discussed.

  6. Constructing stochastic models from deterministic process equations by propensity adjustment

    Directory of Open Access Journals (Sweden)

    Wu Jialiang

    2011-11-01

    Background: Gillespie's stochastic simulation algorithm (SSA) for chemical reactions admits three kinds of elementary processes, namely, mass action reactions of 0th, 1st or 2nd order. All other types of reaction processes, for instance those containing non-integer kinetic orders or following other types of kinetic laws, are assumed to be convertible to one of the three elementary kinds, so that SSA can validly be applied. However, the conversion to elementary reactions is often difficult, if not impossible. Within deterministic contexts, a strategy of model reduction is often used. Such a reduction simplifies the actual system of reactions by merging or approximating intermediate steps and omitting reactants such as transient complexes. It would be valuable to adopt a similar reduction strategy for stochastic modelling. Indeed, efforts have been devoted to manipulating the chemical master equation (CME) in order to achieve a proper propensity function for a reduced stochastic system. However, manipulations of the CME are almost always complicated, and successes have been limited to relatively simple cases. Results: We propose a rather general strategy for converting a deterministic process model into a corresponding stochastic model and characterize the mathematical connections between the two. The deterministic framework is assumed to be a generalized mass action system and the stochastic analogue is in the format of the chemical master equation. The analysis identifies situations where a direct conversion is valid; where internal noise affecting the system needs to be taken into account; and where the propensity function must be mathematically adjusted. The conversion from deterministic to stochastic models is illustrated with several representative examples, including reversible reactions with feedback controls, Michaelis-Menten enzyme kinetics, a genetic regulatory motif, and stochastic focusing. Conclusions: The construction of a stochastic ...
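For context, the elementary SSA that the paper builds on fits in a few lines. A minimal birth-death example (the rate constants and this stdlib-only implementation are ours, not the paper's):

```python
import random

def ssa_birth_death(k_birth, k_death, x0, t_end, seed=1):
    """Gillespie SSA for the birth-death system: 0 -> X (0th-order rate k_birth),
    X -> 0 (1st-order propensity k_death * x). Returns (times, copy numbers)."""
    rng = random.Random(seed)
    t, x = 0.0, x0
    times, xs = [t], [x]
    while t < t_end:
        a1 = k_birth          # birth propensity (0th order)
        a2 = k_death * x      # death propensity (1st order)
        a0 = a1 + a2
        if a0 == 0.0:
            break
        t += rng.expovariate(a0)                   # time to next reaction
        x += 1 if rng.random() * a0 < a1 else -1   # choose which reaction fires
        times.append(t)
        xs.append(x)
    return times, xs

times, xs = ssa_birth_death(k_birth=10.0, k_death=1.0, x0=0, t_end=50.0)
print(min(xs) >= 0, all(b > a for a, b in zip(times, times[1:])))  # True True
```

The paper's contribution concerns systems whose kinetics do not decompose into such elementary propensities, where the propensity function must be adjusted rather than read off directly.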

  7. Bayes linear covariance matrix adjustment for multivariate dynamic linear models

    CERN Document Server

    Wilkinson, Darren J

    2008-01-01

    A methodology is developed for the adjustment of the covariance matrices underlying a multivariate constant time series dynamic linear model. The covariance matrices are embedded in a distribution-free inner-product space of matrix objects which facilitates such adjustment. This approach helps to make the analysis simple, tractable and robust. To illustrate the methods, a simple model is developed for a time series representing sales of certain brands of a product from a cash-and-carry depot. The covariance structure underlying the model is revised, and the benefits of this revision on first order inferences are then examined.
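The Bayes linear adjustment underlying such covariance revision has a standard form: the adjusted expectation and variance of a quantity B given observed data D use only first- and second-order belief specifications (this is the generic formula, not the matrix-object extension developed in the book):

```latex
\mathrm{E}_D(B) = \mathrm{E}(B) + \mathrm{Cov}(B, D)\,\mathrm{Var}(D)^{-1}\bigl(D - \mathrm{E}(D)\bigr),
\qquad
\mathrm{Var}_D(B) = \mathrm{Var}(B) - \mathrm{Cov}(B, D)\,\mathrm{Var}(D)^{-1}\,\mathrm{Cov}(D, B)
```

Embedding the covariance matrices themselves in an inner-product space lets the same adjustment machinery revise second-order structure, which is the step the work develops.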

  8. Storm Water Management Model Climate Adjustment Tool (SWMM-CAT)

    Science.gov (United States)

    The US EPA’s newest tool, the Stormwater Management Model (SWMM) – Climate Adjustment Tool (CAT) is meant to help municipal stormwater utilities better address potential climate change impacts affecting their operations. SWMM, first released in 1971, models hydrology and hydrauli...

  9. PERMINTAAN BERAS DI PROVINSI JAMBI (Penerapan Partial Adjustment Model

    Directory of Open Access Journals (Sweden)

    Wasi Riyanto

    2013-07-01

    The purpose of this study is to determine the effect of the price of rice, the price of wheat flour, population, population income and the previous year's rice demand on rice demand, the elasticity of rice demand, and the rice demand prediction in Jambi Province. This study uses secondary data, comprising time series data for 22 years, from 1988 until 2009. The study used several variables: rice demand (Qdt), the price of rice (Hb), the price of wheat flour (Hg), population (Jp), population income (PDRB) and the previous year's rice demand (Qdt-1). The methods of this study are multiple regression and dynamic analysis with a Partial Adjustment Model, where rice demand is the dependent variable and the price of rice, the price of flour, population, population income and the previous year's rice demand are the independent variables. The Partial Adjustment Model analysis showed that the effects of changes in the prices of rice and flour on changes in rice demand are not significant. Population and the previous year's rice demand have a positive and significant impact on rice demand, while population income has a negative and significant impact. The price of rice, population income and the price of flour are inelastic with respect to rice demand, because rice is not a normal good but a necessity, so there is no substitution (replacement of rice with other commodities) in Jambi Province. Based on the analysis, it is recommended that the government control the rate of population increase, given that population is one of the factors affecting rice demand. It is expected that the government will also begin to promote non-rice food consumption to control the increasing demand for rice. Finally, the government should develop a diversification of staple foods other than rice.
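The dynamics of the partial adjustment (Nerlove) model used here can be made concrete: the estimated coefficient θ on lagged demand implies an adjustment speed λ = 1 − θ, and each short-run coefficient scales to its long-run counterpart by dividing by λ. A sketch with illustrative numbers, not the paper's estimates:

```python
def long_run_effects(short_run, theta):
    """Partial adjustment model: Qd_t = a + sum(b_i * X_it) + theta * Qd_{t-1}.
    Adjustment speed is lam = 1 - theta; long-run effect of X_i is b_i / lam."""
    lam = 1.0 - theta
    return lam, {name: b / lam for name, b in short_run.items()}

# Hypothetical short-run coefficients for a rice demand equation:
sr = {"rice_price": -0.10, "income": -0.05, "population": 0.60}
lam, lr = long_run_effects(sr, theta=0.5)
print(lam, lr["population"])  # 0.5 1.2
```

With θ = 0.5, only half of the gap between desired and actual demand closes each year, so long-run responses are twice the short-run ones.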

  10. A model-based approach to adjust microwave observations for operational applications: results of a campaign at Munich Airport in winter 2011/2012

    Directory of Open Access Journals (Sweden)

    J. Güldner

    2013-10-01

    In the frame of the project "LuFo iPort VIS", which focuses on the implementation of a site-specific visibility forecast, a field campaign was organised to offer detailed information to a numerical fog model. As part of additional observing activities, a 22-channel microwave radiometer profiler (MWRP) was operating at the Munich Airport site in Germany from October 2011 to February 2012 in order to provide vertical temperature and humidity profiles as well as cloud liquid water information. Independently from the model-related aims of the campaign, the MWRP observations were used to study their capability to work in operational meteorological networks. Over the past decade a growing number of MWRPs have been introduced and a user community (MWRnet) was established to encourage activities directed at the set-up of an operational network. On that account, the comparability of observations from different network sites plays a fundamental role for any applications in climatology and numerical weather forecast. In practice, however, systematic temperature and humidity differences (bias) between MWRP retrievals and co-located radiosonde profiles were observed and reported by several authors. This bias can be caused by instrumental offsets and by the absorption model used in the retrieval algorithms, as well as by applying a non-representative training data set. At the Lindenberg observatory, besides a neural network provided by the manufacturer, a measurement-based regression method was developed to reduce the bias. These regression operators are calculated on the basis of coincident radiosonde observations and MWRP brightness temperature (TB) measurements. However, MWRP applications in a network require comparable results at any site, even if no radiosondes are available. The motivation of this work is directed to a verification of the suitability of the operational local forecast model COSMO-EU of the Deutscher Wetterdienst (DWD) for the calculation ...

  11. A price adjustment process in a model of monopolistic competition

    NARCIS (Netherlands)

    J. Tuinstra

    2004-01-01

    We consider a price adjustment process in a model of monopolistic competition. Firms have incomplete information about the demand structure. When they set a price they observe the amount they can sell at that price and they observe the slope of the true demand curve at that price. With this information ...

  12. The high-density lipoprotein-adjusted SCORE model worsens SCORE-based risk classification in a contemporary population of 30 824 Europeans

    DEFF Research Database (Denmark)

    Mortensen, Martin B; Afzal, Shoaib; Nordestgaard, Børge G

    2015-01-01

    AIMS: Recent European guidelines recommend including high-density lipoprotein (HDL) cholesterol in risk assessment for primary prevention of cardiovascular disease (CVD), using a SCORE-based risk model (SCORE-HDL). We compared the predictive performance of SCORE-HDL with SCORE in an independent, contemporary, 'low-risk' European population, focusing on the ability to identify those in need of intensified CVD prevention. METHODS AND RESULTS: Between 2003 and 2008, 46,092 individuals without CVD, diabetes, or statin use were enrolled in the Copenhagen General Population Study (CGPS). During a mean of 6.8 years of follow-up, 339 individuals died of CVD. In the SCORE target population (age 40-65; n = 30,824), fewer individuals were at baseline categorized as high risk (≥5% 10-year risk of fatal CVD) using SCORE-HDL compared with SCORE (10 vs. 17% in men, 1 vs. 3% in women). SCORE-HDL did not improve...

  13. Adjustment problems and maladaptive relational style: a mediational model of sexual coercion in intimate relationships.

    Science.gov (United States)

    Salwen, Jessica K; O'Leary, K Daniel

    2013-07-01

    Four hundred and fifty-three married or cohabiting couples participated in the current study. A mediational model of men's perpetration of sexual coercion within an intimate relationship was examined, based on past theories and known correlates of rape and sexual coercion. The latent constructs of adjustment problems and maladaptive relational style were examined. Adjustment problem variables included perceived stress, perceived low social support, and marital discord. Maladaptive relational style variables included psychological aggression, dominance, and jealousy. Sexual coercion was a combined measure of men's reported perpetration and women's reported victimization. As hypothesized, adjustment problems significantly predicted sexual coercion. Within the mediational model, adjustment problems were significantly correlated with maladaptive relational style, and maladaptive relational style significantly predicted sexual coercion. Once maladaptive relational style was introduced as a mediator, adjustment problems no longer significantly predicted sexual coercion. Implications for treatment, limitations, and future research are discussed.

  14. Permintaan Beras di Provinsi Jambi (Penerapan Partial Adjustment Model

    Directory of Open Access Journals (Sweden)

    Wasi Riyanto

    2013-07-01

    Full Text Available The purpose of this study is to determine the effects of the price of rice, the price of wheat flour, population size, population income, and the previous year's rice demand on current rice demand, to estimate rice demand elasticities, and to predict rice demand in Jambi Province. The study uses secondary time-series data covering 22 years, from 1988 to 2009, and the following variables: rice demand (Qdt), the price of rice (Hb), the price of wheat flour (Hg), population (Jp), population income (PDRB) and the previous year's rice demand (Qdt-1). The methods used are multiple regression and a dynamic Partial Adjustment Model, in which rice demand is the dependent variable and the price of rice, the price of flour, population, income and last year's rice demand are the independent variables. The Partial Adjustment Model analysis shows that changes in the prices of rice and flour have no significant effect on changes in rice demand. Population and the previous year's rice demand have positive and significant effects on rice demand, while population income has a negative and significant effect. The price of rice, population income and the price of flour are inelastic with respect to rice demand, because rice is not a normal good but a necessity, so there is no substitution of rice with other commodities in Jambi Province. Based on the analysis, it is recommended that the government control the rate of population increase, given that population is one of the factors affecting rice demand. The government is also expected to promote non-rice food consumption to curb the growing demand for rice, and to develop a diversification of staple foods other than rice. Keywords: Demand, Rice, Income, Population
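
The Partial Adjustment Model used above boils down to regressing current demand on current regressors plus last year's demand; the coefficient on the lag gives the speed of adjustment and converts short-run into long-run effects. A minimal sketch on synthetic data (all coefficients and series are invented for illustration, not the Jambi figures):

```python
import numpy as np

# Synthetic partial-adjustment data: demand Qd depends on price Hb,
# population Jp, and last year's demand (true lag coefficient 0.5).
rng = np.random.default_rng(0)
T = 22
Hb = 100 + rng.normal(0, 5, T)          # rice price index (invented)
Jp = np.linspace(2.0, 2.6, T)           # population, millions (invented)
Qd = np.empty(T)
Qd[0] = 300.0
for t in range(1, T):
    Qd[t] = 50 - 0.5 * Hb[t] + 60 * Jp[t] + 0.5 * Qd[t - 1] + rng.normal(0, 0.5)

# OLS fit of Qd_t = a + b*Hb_t + c*Jp_t + lam*Qd_{t-1}
X = np.column_stack([np.ones(T - 1), Hb[1:], Jp[1:], Qd[:-1]])
a, b, c, lam = np.linalg.lstsq(X, Qd[1:], rcond=None)[0]

speed_of_adjustment = 1 - lam            # fraction of the gap closed per year
long_run_price_effect = b / (1 - lam)    # short-run effect scaled to long run
```

With a lag coefficient between 0 and 1, only part of the desired change in demand is realized each year, which is what distinguishes the PAM from a static regression.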

  15. Design of motion adjusting system for space camera based on ultrasonic motor

    Science.gov (United States)

    Xu, Kai; Jin, Guang; Gu, Song; Yan, Yong; Sun, Zhiyuan

    2011-08-01

    Drift angle is the transverse intersection angle of the image motion vector of a space camera; adjusting this angle reduces its influence on image quality. The ultrasonic motor (USM) is a new type of actuator driven by ultrasonic waves excited in piezoelectric ceramics, and it has many advantages over conventional electromagnetic motors. In this paper, improvements to the control system of a drift adjusting mechanism are presented. The drift adjusting system was designed around the ultrasonic motor T-60 and is composed of the drift adjusting mechanical frame, the ultrasonic motor, the motor driver, a photoelectric encoder and the drift adjusting controller. A TMS320F28335 DSP was adopted as the calculation and control processor, the photoelectric encoder serves as the sensor of the position closed loop, and a voltage driving circuit was designed as the ultrasonic wave generator. A mathematical model of the drive circuit of the ultrasonic motor T-60 was built using Matlab modules. In order to verify the validity of the drift adjusting system, a disturbance source was introduced and simulation analysis was performed. The motor drive control system for drift adjustment was designed with improved PID control. The drift angle adjusting system has advantages such as small size, simple configuration, high position control precision, fine repeatability, a self-locking property and low power consumption. The results show that the system can accomplish the drift angle adjusting mission excellently.
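
The PID position loop described above can be sketched as a plain discrete PID acting on a first-order actuator model. The gains, sample time, and plant below are illustrative assumptions, not the identified T-60 dynamics:

```python
# Discrete PID position loop driving a first-order actuator (illustrative).
Kp, Ki, Kd = 4.0, 1.5, 0.2     # proportional, integral, derivative gains
dt, tau = 0.01, 0.05           # sample time [s], actuator time constant [s]

target = 1.0                   # desired drift angle (normalized units)
angle = rate = integ = prev_err = 0.0
for _ in range(2000):          # simulate 20 s
    err = target - angle
    integ += err * dt
    deriv = (err - prev_err) / dt
    prev_err = err
    u = Kp * err + Ki * integ + Kd * deriv
    rate += (u - rate) * dt / tau   # first-order lag: rate tracks command u
    angle += rate * dt              # integrate rate to angle
```

The integral term removes steady-state error, while the derivative term damps the response; in a real encoder-based loop the derivative is usually filtered to suppress quantization noise.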

  16. Automatic Adjustment of Wide-Base Google Street View Panoramas

    Science.gov (United States)

    Boussias-Alexakis, E.; Tsironisa, V.; Petsa, E.; Karras, G.

    2016-06-01

    This paper focuses on the issue of sparse matching in cases of extremely wide-base panoramic images such as those acquired by Google Street View in narrow urban streets. In order to effectively use affine point operators for bundle adjustment, panoramas must be suitably rectified to simulate affinity. To this end, a custom piecewise planar projection (triangular prism projection) is applied. On the assumption that the image baselines run parallel to the street façades, the estimated locations of the vanishing lines of the façade plane allow effectively removing projectivity and applying the ASIFT point operator on panorama pairs. Results from comparisons with multi-panorama adjustment, based on manually measured image points, and ground truth indicate that such an approach, if further elaborated, may well provide a realistic answer to the matching problem in the case of demanding panorama configurations.

  17. A simple approach to adjust tidal forcing in fjord models

    Science.gov (United States)

    Hjelmervik, Karina; Kristensen, Nils Melsom; Staalstrøm, André; Røed, Lars Petter

    2017-07-01

    Accurate tidal forcing is of great importance when modelling currents in a fjord. Due to complex topography with narrow and shallow straits, the tides in the innermost parts of a fjord are both shifted in phase and altered in amplitude compared to the tides in the open water outside the fjord. Commonly, coastal tide information extracted from global or regional models is used on the boundary of the fjord model. Since tides vary over short distances in shallower waters close to the coast, the global and regional tidal forcings are usually too coarse to achieve sufficiently accurate tides in fjords. We present a straightforward method to remedy this problem by simply adjusting the tides to fit the observed tides at the entrance of the fjord. To evaluate the method, we present results from the Oslofjord, Norway. A model for the fjord is first run using raw tidal forcing on its open boundary. By comparing modelled and observed time series of water level at a tide gauge station close to the open boundary of the model, a factor for the amplitude and a shift in phase are computed. The amplitude factor and the phase shift are then applied to produce adjusted tidal forcing at the open boundary. Next, we rerun the fjord model using the adjusted tidal forcing. The results from the two runs are then compared to independent observations inside the fjord in terms of amplitudes and phases of the various tidal components, the total tidal water level, and the depth-integrated tidal currents. The results show improvements in the modelled tides in both the outer and, more importantly, the inner parts of the fjord.
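
The amplitude-factor/phase-shift correction can be illustrated for a single tidal constituent: fit amplitude and phase to both the observed and the raw-model series at the gauge, then rescale and shift the boundary forcing. The constituent, amplitudes, and phases below are invented for illustration, not Oslofjord values:

```python
import numpy as np

omega = 2 * np.pi / 12.42          # M2 angular frequency [rad/h]
t = np.arange(0.0, 15 * 24, 0.5)   # 15 days of half-hourly samples [h]

observed = 0.80 * np.cos(omega * t - 0.40)   # "true" tide at the gauge
raw_model = 0.65 * np.cos(omega * t - 0.10)  # model run with raw forcing

def harmonic(y):
    """Least-squares fit y ~ a*cos(wt) + b*sin(wt); return amplitude, phase."""
    A = np.column_stack([np.cos(omega * t), np.sin(omega * t)])
    a, b = np.linalg.lstsq(A, y, rcond=None)[0]
    return np.hypot(a, b), np.arctan2(b, a)

amp_obs, ph_obs = harmonic(observed)
amp_mod, ph_mod = harmonic(raw_model)

factor = amp_obs / amp_mod         # amplitude factor
shift = ph_obs - ph_mod            # phase shift [rad]

# Apply the correction to the boundary constituent and verify at the gauge.
adjusted = factor * 0.65 * np.cos(omega * t - 0.10 - shift)
```

In practice each significant constituent (M2, S2, K1, ...) would get its own factor and shift from a harmonic analysis of the gauge record.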

  18. Modeling wind adjustment factor and midflame wind speed for Rothermel's surface fire spread model

    Science.gov (United States)

    Patricia L. Andrews

    2012-01-01

    Rothermel's surface fire spread model was developed to use a value for the wind speed that affects surface fire, called midflame wind speed. Models have been developed to adjust 20-ft wind speed to midflame wind speed for sheltered and unsheltered surface fuel. In this report, Wind Adjustment Factor (WAF) model equations are given, and the BehavePlus fire modeling...

  19. Adjustment in mothers of children with Asperger syndrome: an application of the double ABCX model of family adjustment.

    Science.gov (United States)

    Pakenham, Kenneth I; Samios, Christina; Sofronoff, Kate

    2005-05-01

    The present study examined the applicability of the double ABCX model of family adjustment in explaining maternal adjustment to caring for a child diagnosed with Asperger syndrome. Forty-seven mothers completed questionnaires at a university clinic while their children were participating in an anxiety intervention. The children were aged between 10 and 12 years. Results of correlations showed that each of the model components was related to one or more domains of maternal adjustment in the direction predicted, with the exception of problem-focused coping. Hierarchical regression analyses demonstrated that, after controlling for the effects of relevant demographics, stressor severity, pile-up of demands and coping were related to adjustment. Findings indicate the utility of the double ABCX model in guiding research into parental adjustment when caring for a child with Asperger syndrome. Limitations of the study and clinical implications are discussed.

  20. Modeling and Control of the Redundant Parallel Adjustment Mechanism on a Deployable Antenna Panel.

    Science.gov (United States)

    Tian, Lili; Bao, Hong; Wang, Meng; Duan, Xuechao

    2016-10-01

    With the aim of developing multiple input and multiple output (MIMO) coupling systems with a redundant parallel adjustment mechanism on the deployable antenna panel, a structural control integrated design methodology is proposed in this paper. Firstly, the modal information from the finite element model of the structure of the antenna panel is extracted, and then the mathematical model is established with the Hamilton principle; Secondly, the discrete Linear Quadratic Regulator (LQR) controller is added to the model in order to control the actuators and adjust the shape of the panel. Finally, the engineering practicality of the modeling and control method based on finite element analysis simulation is verified.

  1. Modeling and Control of the Redundant Parallel Adjustment Mechanism on a Deployable Antenna Panel

    Directory of Open Access Journals (Sweden)

    Lili Tian

    2016-10-01

    Full Text Available With the aim of developing multiple input and multiple output (MIMO coupling systems with a redundant parallel adjustment mechanism on the deployable antenna panel, a structural control integrated design methodology is proposed in this paper. Firstly, the modal information from the finite element model of the structure of the antenna panel is extracted, and then the mathematical model is established with the Hamilton principle; Secondly, the discrete Linear Quadratic Regulator (LQR controller is added to the model in order to control the actuators and adjust the shape of the panel. Finally, the engineering practicality of the modeling and control method based on finite element analysis simulation is verified.
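
The discrete LQR step described in both records can be sketched on a toy two-state modal model: solve the discrete algebraic Riccati equation (here by simple fixed-point iteration rather than a library solver) and check that the closed loop is stable. The matrices are illustrative, not the antenna panel's finite element modes:

```python
import numpy as np

# Toy single-mode model: state = [modal position, modal velocity].
dt = 0.1
A = np.array([[1.0, dt], [-0.5 * dt, 1.0 - 0.1 * dt]])
B = np.array([[0.0], [dt]])
Q = np.diag([10.0, 1.0])           # state weighting
R = np.array([[0.1]])              # actuator effort weighting

# Discrete algebraic Riccati equation via fixed-point (value) iteration.
P = Q.copy()
for _ in range(5000):
    K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)
    P_next = Q + A.T @ P @ (A - B @ K)
    if np.max(np.abs(P_next - P)) < 1e-12:
        P = P_next
        break
    P = P_next

K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)   # LQR gain
eigs = np.linalg.eigvals(A - B @ K)                 # closed-loop poles
```

All closed-loop eigenvalues must lie inside the unit circle; in a production design one would use a dedicated DARE solver (e.g. `scipy.linalg.solve_discrete_are`) instead of the iteration shown here.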

  2. Engine control system having fuel-based adjustment

    Science.gov (United States)

    Willi, Martin L.; Fiveland, Scott B.; Montgomery, David T.; Gong, Weidong

    2011-03-15

    A control system for an engine having a cylinder is disclosed having an engine valve configured to affect a fluid flow of the cylinder, an actuator configured to move the engine valve, and an in-cylinder sensor configured to generate a signal indicative of a characteristic of fuel entering the cylinder. The control system also has a controller in communication with the actuator and the sensor. The controller is configured to determine the characteristic of the fuel based on the signal and selectively regulate the actuator to adjust a timing of the engine valve based on the characteristic of the fuel.

  3. Thickness and Shape Synthetical Adjustment for DC Mill Based on Dynamic Nerve-Fuzzy Control

    Institute of Scientific and Technical Information of China (English)

    JIA Chun-yu; WANG Ying-rui; ZHOU Hui-feng

    2004-01-01

    Due to the complexity of the thickness and shape synthetical adjustment system and the difficulty of building a mathematical model, a thickness and shape synthetical adjustment scheme for a DC mill based on dynamic nerve-fuzzy control was put forward, and a self-organizing fuzzy control model was established. The structure of the network can be optimized dynamically: during learning, the network automatically adjusts its structure to the specific problem so as to make it optimal. The inputs and outputs of the network are fuzzy sets, and the trained network implements the composite relation, i.e., fuzzy inference. To reduce the off-line training time of the BP network, the fuzzy sets are encoded. The simulation results indicate that the self-organizing fuzzy control based on a dynamic neural network outperforms traditional decoupling PID control.

  4. Variance-based fingerprint distance adjustment algorithm for indoor localization

    Institute of Scientific and Technical Information of China (English)

    Xiaolong Xu; Yu Tang; Xinheng Wang; Yun Zhang

    2015-01-01

    The multipath effect and movements of people in indoor environments lead to inaccurate localization. Through tests, calculation and analysis of the received signal strength indication (RSSI) and its variance, we propose a novel variance-based fingerprint distance adjustment algorithm (VFDA). Based on the rule that variance decreases as the RSSI mean increases, VFDA estimates the RSSI variance from the mean value of the received RSSIs and derives a correction weight, which is used to adjust the fingerprint distances. In addition, a threshold value is applied to VFDA to further improve its performance. VFDA and VFDA with the threshold value were evaluated in two typical real indoor environments deployed with several Wi-Fi access points: a quadrate lab room and a long, narrow corridor of a building. Experimental results and performance analysis show that in indoor environments, both VFDA and VFDA with the threshold achieve better positioning accuracy and environmental adaptability than the typical positioning methods based on the k-nearest neighbor algorithm and the weighted k-nearest neighbor algorithm, with similar computational costs.
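
As a rough illustration of variance-aware fingerprint matching, the sketch below down-weights high-variance RSSI readings inside a weighted-kNN position estimate. The inverse-variance weighting is a generic stand-in, not the exact VFDA correction-weight formula from the paper, and the fingerprint map is invented:

```python
import numpy as np

fingerprints = {                      # location (x, y) -> mean RSSI per AP [dBm]
    (0.0, 0.0): np.array([-40.0, -70.0, -60.0]),
    (5.0, 0.0): np.array([-70.0, -40.0, -65.0]),
    (0.0, 5.0): np.array([-60.0, -65.0, -45.0]),
}

def estimate_position(rssi_samples, k=2):
    mean = rssi_samples.mean(axis=0)
    var = rssi_samples.var(axis=0) + 1e-6     # per-AP variance of the scan
    w = 1.0 / var                              # down-weight noisy APs
    scored = []
    for loc, fp in fingerprints.items():
        d = np.sqrt(np.sum(w * (mean - fp) ** 2) / w.sum())
        scored.append((d, loc))
    scored.sort()
    # WkNN: inverse-distance weighted average of the k nearest fingerprints
    inv = np.array([1.0 / (d + 1e-6) for d, _ in scored[:k]])
    pts = np.array([loc for _, loc in scored[:k]])
    return (inv[:, None] * pts).sum(axis=0) / inv.sum()

rng = np.random.default_rng(1)
scan = np.array([-40.0, -70.0, -60.0]) + rng.normal(0, 2, size=(20, 3))
pos = estimate_position(scan)     # should land near (0, 0)
```

The point of the weighting is that an access point whose readings fluctuate strongly (multipath, passing people) contributes less to the fingerprint distance than a stable one.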

  5. Capacitance-Based Frequency Adjustment of Micro Piezoelectric Vibration Generator

    Directory of Open Access Journals (Sweden)

    Xinhua Mao

    2014-01-01

    Full Text Available Micro piezoelectric vibration generators have wide application in the field of microelectronics. Their natural frequency is fixed once manufactured. However, resonance cannot occur when the natural frequency of the piezoelectric generator and the frequency of the vibration source are not consistent; the output voltage of the generator then declines sharply, and it can no longer supply power to electronic devices normally. In order to make the natural frequency of the generator approach the frequency of the vibration source, capacitance FM (frequency adjustment) technology is adopted in this paper. Different capacitance FM schemes are designed according to the location of the adjustment layer, and the corresponding capacitance FM models are established. The characteristics and effect of the capacitance FM have been simulated with the FM model. Experimental results show that the natural frequency of the generator varies from 46.5 Hz to 42.4 Hz as the bypass capacitance increases from 0 nF to 30 nF. The natural frequency of a piezoelectric vibration generator can thus be continuously adjusted by this method.

  6. On the hydrologic adjustment of climate-model projections: The potential pitfall of potential evapotranspiration

    Science.gov (United States)

    Milly, P.C.D.; Dunne, K.A.

    2011-01-01

    Hydrologic models often are applied to adjust projections of hydroclimatic change that come from climate models. Such adjustment includes climate-bias correction, spatial refinement ("downscaling"), and consideration of the roles of hydrologic processes that were neglected in the climate model. Described herein is a quantitative analysis of the effects of hydrologic adjustment on the projections of runoff change associated with projected twenty-first-century climate change. In a case study including three climate models and 10 river basins in the contiguous United States, the authors find that relative (i.e., fractional or percentage) runoff change computed with hydrologic adjustment more often than not was less positive (or, equivalently, more negative) than what was projected by the climate models. The dominant contributor to this decrease in runoff was a ubiquitous change in runoff (median −11%) caused by the hydrologic model's apparent amplification of the climate-model-implied growth in potential evapotranspiration. Analysis suggests that the hydrologic model, on the basis of the empirical, temperature-based modified Jensen-Haise formula, calculates a change in potential evapotranspiration that is typically 3 times the change implied by the climate models, which explicitly track surface energy budgets. In comparison with the amplification of potential evapotranspiration, central tendencies of other contributions from hydrologic adjustment (spatial refinement, climate-bias adjustment, and process refinement) were relatively small. The authors' findings highlight the need for caution when projecting changes in potential evapotranspiration for use in hydrologic models or drought indices to evaluate climate-change impacts on water. Copyright © 2011, Paper 15-001; 35,952 words, 3 Figures, 0 Animations, 1 Table.

  7. Structural-Parameter-Based Jumping-Height-and-Distance Adjustment and Obstacle Sensing of a Bio-Inspired Jumping Robot

    Directory of Open Access Journals (Sweden)

    Jun Zhang

    2015-06-01

    Full Text Available Jumping-height-and-distance (JHD) active adjustment capability is important for jumping robots to overcome obstacles of different sizes. This paper proposes a new structural-parameter-based JHD active adjustment approach for our previous jumping robot. First, JHD adjustment by modifying the lengths of the robot's different legs is modelled and simulated. Second, three leg-length adjustment mechanisms are proposed and compared, and the screw-and-nut mechanism is selected; among the structural parameters that can be adjusted with this mechanism, the one with the best JHD adjusting performance and the lowest mechanical complexity is adopted. Third, an obstacle-distance-and-height (ODH) detection method using only one infrared sensor is designed. Finally, the performance of the proposed methods is tested. Experimental results show that the adjustable ranges of jumping height and jumping distance are 0.11 m and 0.96 m, respectively, which validates the effectiveness of the proposed JHD adjustment method.

  8. Glacial isostatic adjustment model with composite 3-D Earth rheology for Fennoscandia

    NARCIS (Netherlands)

    Van der Wal, W.; Barnhoorn, A.; Stocchi, P.; Gradmann, S.; Wu, P.; Drury, M.; Vermeersen, L.L.A.

    2013-01-01

    Models for glacial isostatic adjustment (GIA) can provide constraints on rheology of the mantle if past ice thickness variations are assumed to be known. The Pleistocene ice loading histories that are used to obtain such constraints are based on an a priori 1-D mantle viscosity profile that assumes

  9. A Four-Part Model of Autonomy during Emerging Adulthood: Associations with Adjustment

    Science.gov (United States)

    Lamborn, Susie D.; Groh, Kelly

    2009-01-01

    We found support for a four-part model of autonomy that links connectedness, separation, detachment, and agency to adjustment during emerging adulthood. Based on self-report surveys of 285 American college students, expected associations among the autonomy variables were found. In addition, agency, as measured by self-reliance, predicted lower…

  10. Adjusting a cancer mortality-prediction model for disease status-related eligibility criteria

    Directory of Open Access Journals (Sweden)

    Kimmel Marek

    2011-05-01

    Full Text Available Abstract Background Volunteering participants in disease studies tend to be healthier than the general population partially due to specific enrollment criteria. Using modeling to accurately predict outcomes of cohort studies enrolling volunteers requires adjusting for the bias introduced in this way. Here we propose a new method to account for the effect of a specific form of healthy volunteer bias resulting from imposing disease status-related eligibility criteria, on disease-specific mortality, by explicitly modeling the length of the time interval between the moment when the subject becomes ineligible for the study, and the outcome. Methods Using survival time data from 1190 newly diagnosed lung cancer patients at MD Anderson Cancer Center, we model the time from clinical lung cancer diagnosis to death using an exponential distribution to approximate the length of this interval for a study where lung cancer death serves as the outcome. Incorporating this interval into our previously developed lung cancer risk model, we adjust for the effect of disease status-related eligibility criteria in predicting the number of lung cancer deaths in the control arm of CARET. The effect of the adjustment using the MD Anderson-derived approximation is compared to that based on SEER data. Results Using the adjustment developed in conjunction with our existing lung cancer model, we are able to accurately predict the number of lung cancer deaths observed in the control arm of CARET. Conclusions The resulting adjustment was accurate in predicting the lower rates of disease observed in the early years while still maintaining reasonable prediction ability in the later years of the trial. This method could be used to adjust for, or predict the duration and relative effect of any possible biases related to disease-specific eligibility criteria in modeling studies of volunteer-based cohorts.
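
The interval-modelling step, approximating the time from clinical diagnosis to death with an exponential distribution, can be sketched as follows; the data are synthetic and the mean is an assumption, not the MD Anderson estimate:

```python
import numpy as np

# Synthetic times from clinical diagnosis to death (years); the sample size
# mirrors the 1190 patients mentioned above, but the values are invented.
rng = np.random.default_rng(42)
true_mean = 1.4
times = rng.exponential(true_mean, size=1190)

rate = 1.0 / times.mean()        # exponential MLE: lambda = 1 / sample mean

def survival(t):
    """P(time from diagnosis to death > t) under the fitted model."""
    return np.exp(-rate * t)

# e.g. the fraction of deaths expected to occur more than 2 years after
# the moment the subject became ineligible for the study:
frac_beyond_2y = survival(2.0)
```

Shifting predicted deaths by draws from this fitted distribution is what lets the model reproduce the lower early-year mortality of a volunteer cohort screened by disease status.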

  11. Emotional closeness to parents and grandparents: A moderated mediation model predicting adolescent adjustment.

    Science.gov (United States)

    Attar-Schwartz, Shalhevet

    2015-09-01

    Warm and emotionally close relationships with parents and grandparents have been found in previous studies to be linked with better adolescent adjustment. The present study, informed by Family Systems Theory and Intergenerational Solidarity Theory, uses a moderated mediation model analyzing the contribution of the dynamics of these intergenerational relationships to adolescent adjustment. Specifically, it examines the mediating role of emotional closeness to the closest grandparent in the relationship between emotional closeness to a parent (the offspring of the closest grandparent) and adolescent adjustment difficulties. The model also examines the moderating role of emotional closeness to parents in the relationship between emotional closeness to grandparents and adjustment difficulties. The study was based on a sample of 1,405 Jewish Israeli secondary school students (ages 12-18) who completed a structured questionnaire. It was found that emotional closeness to the closest grandparent was more strongly associated with reduced adjustment difficulties among adolescents with higher levels of emotional closeness to their parents. In addition, adolescent adjustment and emotional closeness to parents was partially mediated by emotional closeness to grandparents. Examining the family conditions under which adolescents' relationships with grandparents is stronger and more beneficial for them can help elucidate variations in grandparent-grandchild ties and expand our understanding of the mechanisms that shape child outcomes.

  12. Pitch Based Wind Turbine Intelligent Speed Setpoint Adjustment Algorithms

    Directory of Open Access Journals (Sweden)

    Asier González-González

    2014-06-01

    Full Text Available This work is aimed at optimizing the wind turbine rotor speed setpoint algorithm. Several intelligent adjustment strategies have been investigated in order to improve a reward function that takes into account the power captured from the wind and the turbine speed error. After trying different approaches, including Reinforcement Learning, the best results were obtained using a Particle Swarm Optimization (PSO)-based wind turbine speed setpoint algorithm. A reward improvement of up to 10.67% has been achieved using PSO compared to a constant approach, and 0.48% compared to a conventional approach. We conclude that the pitch angle is the most adequate input variable for the turbine speed setpoint algorithm, compared to others such as rotor speed or rotor angular acceleration.
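
A particle swarm optimizer of the kind used for the setpoint algorithm can be sketched in a few lines. The reward surrogate and all PSO hyper-parameters below are assumptions for illustration; the paper's actual reward combines captured power and turbine speed error:

```python
import numpy as np

rng = np.random.default_rng(7)

def reward(x):
    # Illustrative concave reward surrogate with optimum at setpoint = 1.8
    return -(x - 1.8) ** 2

n, iters = 20, 100
pos = rng.uniform(0.0, 3.0, n)      # candidate setpoint parameters
vel = np.zeros(n)
pbest = pos.copy()                  # per-particle best positions
pbest_val = reward(pbest)
gbest = pbest[np.argmax(pbest_val)] # swarm-wide best

w, c1, c2 = 0.7, 1.5, 1.5           # inertia, cognitive, social weights
for _ in range(iters):
    r1, r2 = rng.random(n), rng.random(n)
    vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
    pos = np.clip(pos + vel, 0.0, 3.0)
    val = reward(pos)
    better = val > pbest_val
    pbest[better], pbest_val[better] = pos[better], val[better]
    gbest = pbest[np.argmax(pbest_val)]
```

In the turbine setting each reward evaluation would be a simulation (or field trial) of the controller with that setpoint parameterization, which is why a derivative-free optimizer like PSO fits the problem.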

  13. Study on the Level Adjustment of Urban Land Based on Digital Land Price Model

    Institute of Scientific and Technical Information of China (English)

    王增军; 孔小勇; 朱丽玲

    2009-01-01

    Taking Fuzhou City as an example, a digital land price model was introduced to adjust land grade boundaries when updating urban benchmark land prices. Supported by the ArcView GIS software and based on vector data of transacted land price points, a grid-based modelling approach with a suitable spatial interpolation model was used to generate a three-dimensional land price model that intuitively displays the spatial pattern of land prices. The land price of every cell in the area was then interpolated, a land grade map was generated according to defined land price bands, and the grade boundaries were determined and adjusted against the existing land grades. The results may serve as a reference for the government in regulating the land market.
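
Grid interpolation of transacted land price points, the core of such a digital land price model, is commonly done with inverse-distance weighting. The paper does not specify its interpolation model, so IDW here is an assumption, and the points and prices are invented:

```python
import numpy as np

pts = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0], [10.0, 10.0]])
price = np.array([5000.0, 3000.0, 4000.0, 2000.0])   # yuan per m^2 (invented)

def idw(x, y, power=2.0):
    """Inverse-distance-weighted price at (x, y)."""
    d2 = np.sum((pts - np.array([x, y])) ** 2, axis=1)
    if np.any(d2 == 0):                 # exactly on a sample point
        return price[np.argmin(d2)]
    w = 1.0 / d2 ** (power / 2.0)
    return float(np.sum(w * price) / np.sum(w))

# Evaluate on a grid; land grades then follow from price bands.
grid = np.array([[idw(x, y) for x in np.linspace(0, 10, 11)]
                 for y in np.linspace(0, 10, 11)])
```

Because IDW produces a convex combination of the sample prices, every grid cell stays within the observed price range, which keeps the derived grade map consistent with the transaction data.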

  14. Adjustable box-wing model for solar radiation pressure impacting GPS satellites

    Science.gov (United States)

    Rodriguez-Solano, C. J.; Hugentobler, U.; Steigenberger, P.

    2012-04-01

    One of the major uncertainty sources affecting Global Positioning System (GPS) satellite orbits is the direct solar radiation pressure. In this paper a new model for the solar radiation pressure on GPS satellites is presented that is based on a box-wing satellite model, and assumes nominal attitude. The box-wing model is based on the physical interaction between solar radiation and satellite surfaces, and can be adjusted to fit the GPS tracking data. To compensate the effects of solar radiation pressure, the International GNSS Service (IGS) analysis centers employ a variety of approaches, ranging from purely empirical models based on in-orbit behavior, to physical models based on pre-launch spacecraft structural analysis. It has been demonstrated, however, that the physical models fail to predict the real orbit behavior with sufficient accuracy, mainly due to deviations from nominal attitude, inaccurately known optical properties, or aging of the satellite surfaces. The adjustable box-wing model presented in this paper is an intermediate approach between the physical/analytical models and the empirical models. The box-wing model fits the tracking data by adjusting mainly the optical properties of the satellite's surfaces. In addition, the so called Y-bias and a parameter related to a rotation lag angle of the solar panels around their rotation axis (about 1.5° for Block II/IIA and 0.5° for Block IIR) are estimated. This last parameter, not previously identified for GPS satellites, is a key factor for precise orbit determination. For this study GPS orbits are generated based on one year (2007) of tracking data, with the processing scheme derived from the Center for Orbit Determination in Europe (CODE). Two solutions are computed, one using the adjustable box-wing model and one using the CODE empirical model. Using this year of data the estimated parameters and orbits are analyzed. 
The performance of the models is comparable, when looking at orbit overlap and orbit
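
The physical interaction underlying a box-wing model is the radiation force on each flat plate. A standard flat-plate formula (absorbed, specularly reflected, and diffusely reflected fractions) can be sketched as below; the area, mass, and optical coefficients are illustrative, not the values fitted to GPS tracking data:

```python
import numpy as np

P_SUN = 4.56e-6     # solar radiation pressure at 1 AU [N/m^2]

def plate_accel(area, mass, normal, sun_dir, rho, delta):
    """Acceleration of one flat plate. rho: specular reflectivity,
    delta: diffuse reflectivity; absorption alpha = 1 - rho - delta."""
    n = normal / np.linalg.norm(normal)
    s = sun_dir / np.linalg.norm(sun_dir)     # unit vector plate -> Sun
    cos_t = np.dot(n, s)
    if cos_t <= 0.0:                          # plate not illuminated
        return np.zeros(3)
    alpha = 1.0 - rho - delta
    return -P_SUN * (area / mass) * cos_t * (
        (alpha + delta) * s + (2.0 * rho * cos_t + 2.0 * delta / 3.0) * n)

# Sun along +x, plate facing the Sun (illustrative numbers):
a = plate_accel(area=13.0, mass=1000.0, normal=np.array([1.0, 0.0, 0.0]),
                sun_dir=np.array([1.0, 0.0, 0.0]), rho=0.2, delta=0.1)
```

An adjustable box-wing model sums such terms over the bus faces and solar panels and estimates the optical coefficients (plus a Y-bias and a panel lag angle) from the tracking data instead of fixing them a priori.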

  15. Singularity-sensitive gauge-based radar rainfall adjustment methods for urban hydrological applications

    Directory of Open Access Journals (Sweden)

    L.-P. Wang

    2015-02-01

    Full Text Available Gauge-based radar rainfall adjustment techniques have been widely used to improve the applicability of radar rainfall estimates to large-scale hydrological modelling. However, their use for urban hydrological applications is limited, as they were mostly developed based upon Gaussian approximations and therefore tend to smooth off so-called "singularities" (features of a non-Gaussian field) that can be observed in the fine-scale rainfall structure. Overlooking the singularities could be critical, given that their distribution is highly consistent with that of local extreme magnitudes. This deficiency may cause large errors in the subsequent urban hydrological modelling. To address this limitation and improve the applicability of adjustment techniques at urban scales, a method is proposed herein which incorporates a local singularity analysis into existing adjustment techniques and allows the preservation of the singularity structures throughout the adjustment process. In this paper the proposed singularity analysis is incorporated into the Bayesian merging technique, and the performance of the resulting singularity-sensitive method is compared with that of the original (non-singularity-sensitive) Bayesian technique and the commonly used mean field bias adjustment. This test is conducted using as a case study four storm events observed in the Portobello catchment (53 km²) in Edinburgh, UK, during 2011, for which radar estimates, dense rain gauge and sewer flow records, as well as a recently calibrated urban drainage model, were available. The results suggest that, in general, the proposed singularity-sensitive method can effectively preserve the non-normality in local rainfall structure, while retaining the ability of the original adjustment techniques to generate nearly unbiased estimates. Moreover, the ability of the singularity-sensitive technique to preserve the non-normality in rainfall estimates often leads to better reproduction of the urban
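
The mean field bias adjustment used as a baseline above is a single multiplicative factor that matches the radar total to the gauge total over the gauged pixels; the gauge and radar values below are invented:

```python
import numpy as np

gauge = np.array([4.2, 1.0, 7.5, 2.8])      # gauge accumulations [mm]
radar = np.array([3.0, 0.8, 5.0, 2.1])      # radar estimates at gauge pixels [mm]

bias = gauge.sum() / radar.sum()            # single multiplicative factor
adjusted_field = bias * radar               # applied to the whole radar field
```

Because one factor scales the entire field uniformly, the adjustment is unbiased on average but leaves the fine-scale spatial structure (including any singularities) exactly as the radar saw it.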

  16. Evaluating changes in matrix based, recovery-adjusted concentrations in paired data for pesticides in groundwater

    Science.gov (United States)

    Zimmerman, Tammy M.; Breen, Kevin J.

    2012-01-01

    Pesticide concentration data for waters from selected carbonate-rock aquifers in agricultural areas of Pennsylvania were collected in 1993–2009 for occurrence and distribution assessments. A set of 30 wells was visited once in 1993–1995 and again in 2008–2009 to assess concentration changes. The data include censored matched pairs (nondetections of a compound in one or both samples of a pair). A potentially improved approach for assessing concentration changes is presented where (i) concentrations are adjusted with models of matrix-spike recovery and (ii) area-wide temporal change is tested by use of the paired Prentice-Wilcoxon (PPW) statistical test. The PPW results for atrazine, simazine, metolachlor, prometon, and an atrazine degradate, deethylatrazine (DEA), are compared using recovery-adjusted and unadjusted concentrations. Results for adjusted compared with unadjusted concentrations in 2008–2009 compared with 1993–1995 were similar for atrazine and simazine (significant decrease; 95% confidence level) and metolachlor (no change) but differed for DEA (adjusted, decrease; unadjusted, increase) and prometon (adjusted, decrease; unadjusted, no change). The PPW results were different on recovery-adjusted compared with unadjusted concentrations. Not accounting for variability in recovery can mask a true change, misidentify a change when no true change exists, or assign a direction opposite of the true change in concentration that resulted from matrix influences on extraction and laboratory method performance. However, matrix-based models of recovery derived from a laboratory performance dataset from multiple studies for national assessment, as used herein, rather than time- and study-specific recoveries may introduce uncertainty in recovery adjustments for individual samples that should be considered in assessing change.
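
    The recovery adjustment described above is, at its core, a division of each measured concentration by the modeled fractional matrix-spike recovery for that sample. A minimal sketch with hypothetical concentrations and recoveries (not the study's data); note that the PPW test additionally handles censored pairs, which this sketch does not:

```python
def recovery_adjust(measured, recovery):
    """Return recovery-adjusted concentrations (same units as `measured`).

    `measured` are detected concentrations (e.g., ug/L); `recovery` are the
    modeled fractional matrix-spike recoveries (0 < r <= 1) for each sample.
    """
    if len(measured) != len(recovery):
        raise ValueError("paired inputs must have equal length")
    return [m / r for m, r in zip(measured, recovery)]

# A hypothetical detected atrazine pair: 0.12 ug/L measured at 80% recovery,
# 0.30 ug/L measured at 96% recovery.
adjusted = recovery_adjust([0.12, 0.30], [0.80, 0.96])
print(adjusted)  # [0.15, 0.3125]
```

    The adjusted values are then carried into the paired change test in place of the raw measurements.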

  17. Simulation-based coefficients for adjusting climate impact on energy consumption of commercial buildings

    Energy Technology Data Exchange (ETDEWEB)

    Wang, Na; Makhmalbaf, Atefe; Srivastava, Viraj; Hathaway, John E.

    2016-11-23

    This paper presents a new technique for and the results of normalizing building energy consumption to enable a fair comparison among various types of buildings located near different weather stations across the U.S. The method was developed for the U.S. Building Energy Asset Score, a whole-building energy efficiency rating system focusing on building envelope, mechanical systems, and lighting systems. The Asset Score is calculated based on simulated energy use under standard operating conditions. Existing weather normalization methods, such as those based on heating and cooling degree days, are not robust enough to adjust for all climatic factors, such as humidity and solar radiation. In this work, over 1000 sets of climate coefficients were developed to separately adjust building heating, cooling, and fan energy use at each weather station in the United States. This paper also presents a robust, standardized weather station mapping based on climate similarity rather than choosing the closest weather station. This proposed simulation-based climate adjustment was validated through testing on several hundred thousand modeled buildings. Results indicated that the developed climate coefficients can isolate and adjust for the impacts of local climate for asset rating.
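
    Since the coefficients separately rescale heating, cooling, and fan energy per weather station, applying them is a weighted sum. A minimal sketch; the station IDs and coefficient values below are invented for illustration and are not the Asset Score's actual coefficients:

```python
# Hypothetical per-station climate coefficients: multipliers relative to a
# reference climate for (heating, cooling, fan) energy use.
CLIMATE_COEFFS = {
    "KSEA": (1.18, 0.62, 0.95),  # cool marine climate: more heating, little cooling
    "KPHX": (0.41, 1.73, 1.10),  # hot desert climate: little heating, heavy cooling
}

def normalize_energy(station, heating_kwh, cooling_kwh, fan_kwh, other_kwh):
    """Divide each climate-sensitive end use by its station coefficient,
    leaving climate-insensitive loads (lighting, plugs) untouched."""
    ch, cc, cf = CLIMATE_COEFFS[station]
    return heating_kwh / ch + cooling_kwh / cc + fan_kwh / cf + other_kwh

normalized = normalize_energy("KPHX", 10_000, 80_000, 12_000, 30_000)
print(round(normalized, 1))
```

    The normalized total can then be compared across buildings regardless of their local climates.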

  18. Incremental Training for SVM-Based Classification with Keyword Adjusting

    Institute of Scientific and Technical Information of China (English)

    SUN Jin-wen; YANG Jian-wu; LU Bin; XIAO Jian-guo

    2004-01-01

    This paper analyzes the theory of incremental learning for SVM (support vector machine) and points out a shortcoming of present research on SVM incremental learning: only the optimization of support vectors is considered. Based on the significance of keywords in training, a new incremental training method that takes keyword adjusting into account is proposed, which eliminates the difference between incremental learning and batch learning through the keyword adjusting. The experimental results show that the improved method outperforms the method without keyword adjusting and achieves the same precision as the batch method.

  19. Disaster Hits Home: A Model of Displaced Family Adjustment after Hurricane Katrina

    Science.gov (United States)

    Peek, Lori; Morrissey, Bridget; Marlatt, Holly

    2011-01-01

    The authors explored individual and family adjustment processes among parents (n = 30) and children (n = 55) who were displaced to Colorado after Hurricane Katrina. Drawing on in-depth interviews with 23 families, this article offers an inductive model of displaced family adjustment. Four stages of family adjustment are presented in the model: (a)…

  20. R.M. Solow Adjusted Model of Economic Growth

    Directory of Open Access Journals (Sweden)

    Ion Gh. Rosca

    2007-05-01

    The analysis part of the model is based on the study of the equilibrium in the continuous case, with some interpretations of the discrete one by using the state diagram. The optimization problem at the economic level is also used; it is built up of a specified number of representative consumers and firms in order to reveal the interaction between these elements.

  1. Systematic review of risk adjustment models of hospital length of stay (LOS).

    Science.gov (United States)

    Lu, Mingshan; Sajobi, Tolulope; Lucyk, Kelsey; Lorenzetti, Diane; Quan, Hude

    2015-04-01

    Policy decisions in health care, such as hospital performance evaluation and performance-based budgeting, require an accurate prediction of hospital length of stay (LOS). This paper provides a systematic review of risk adjustment models for hospital LOS, and focuses primarily on studies that use administrative data. MEDLINE, EMBASE, Cochrane, PubMed, and EconLit were searched for studies that tested the performance of risk adjustment models in predicting hospital LOS. We included studies that tested models developed for the general inpatient population, and excluded those that analyzed risk factors only correlated with LOS, impact analyses, or those that used disease-specific scales and indexes to predict LOS. Our search yielded 3973 abstracts, of which 37 were included. These studies used various disease groupers and severity/morbidity indexes to predict LOS. Few models were developed specifically for explaining hospital LOS; most focused primarily on explaining resource spending and the costs associated with hospital LOS, and applied these models to hospital LOS. We found a large variation in predictive power across different LOS predictive models. The best model performance for most studies fell in the range of 0.30-0.60, approximately. The current risk adjustment methodologies for predicting LOS are still limited in terms of models, predictors, and predictive power. One possible approach to improving the performance of LOS risk adjustment models is to include more disease-specific variables, such as disease-specific or condition-specific measures, and functional measures. For this approach, however, more comprehensive and standardized data are urgently needed. In addition, statistical methods and evaluation tools more appropriate to LOS should be tested and adopted.

  2. Setting of Agricultural Insurance Premium Rate and the Adjustment Model

    Institute of Scientific and Technical Information of China (English)

    HUANG Ya-lin

    2012-01-01

    First, using the law of large numbers, I analyze the setting principle of the agricultural insurance premium rate, taking the setting of the adult sow premium rate as a case study, and draw the conclusion that with the continuous promotion of agricultural insurance, the increase in the types of agricultural insurance, and the increase in the number of the insured, the premium rate should also be adjusted opportunely. Then, on the basis of Bayes' theorem, I adjust and calibrate the claim frequency and the average claim in order to correctly adjust the agricultural insurance premium rate, and take forest insurance as a case for premium rate adjustment analysis. In the setting and adjustment of agricultural insurance premium rates, in order to bring the expected results close to the real results, it is necessary to apply probability estimates over a large number of risk units and to focus on establishing an agricultural risk database, so that premium rates can be adjusted in a timely manner.
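
    The Bayes-theorem adjustment of claim frequency can be sketched with a standard Poisson-Gamma credibility update. The distributional choice, the loading factor, and all numbers below are assumptions for illustration, not taken from the paper:

```python
def bayes_claim_frequency(prior_alpha, prior_beta, claims, exposures):
    """Posterior mean claim frequency under a Gamma(alpha, beta) prior on the
    frequency and Poisson claim counts observed over `exposures` policy-years."""
    return (prior_alpha + claims) / (prior_beta + exposures)

def premium_rate(frequency, average_claim, insured_value, loading=0.2):
    """Pure premium rate: expected loss per unit insured value, plus a loading."""
    return frequency * average_claim / insured_value * (1 + loading)

# Prior belief: about 2 claims per 100 policy-years. New data: 30 claims
# observed over 1000 policy-years. The posterior blends the two.
freq = bayes_claim_frequency(2.0, 100.0, 30, 1000)
print(round(freq, 4))
```

    As more exposure data accumulate, the posterior is dominated by the observed experience, which is the mechanism by which the rate is "adjusted opportunely" as the insured pool grows.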

  3. Adjusting Felder-Silverman learning styles model for application in adaptive e-learning

    OpenAIRE

    Mihailović Đorđe; Despotović-Zrakić Marijana; Bogdanović Zorica; Barać Dušan; Vujin Vladimir

    2012-01-01

    This paper presents an approach for adjusting the Felder-Silverman learning styles model for application in the development of adaptive e-learning systems. The main goal of the paper is to improve existing e-learning courses by developing a method for adaptation based on learning styles. The proposed method includes analysis of data related to students' characteristics and applies the concept of personalization in creating e-learning courses. The research has been conducted at the Faculty of organi...

  4. Hydrologic modeling using elevationally adjusted NARR and NARCCAP regional climate-model simulations: Tucannon River, Washington

    Science.gov (United States)

    Praskievicz, Sarah; Bartlein, Patrick

    2014-09-01

    An emerging approach to downscaling the projections from General Circulation Models (GCMs) to scales relevant for basin hydrology is to use output of GCMs to force higher-resolution Regional Climate Models (RCMs). With spatial resolution often in the tens of kilometers, however, even RCM output will likely fail to resolve local topography that may be climatically significant in high-relief basins. Here we develop and apply an approach for downscaling RCM output using local topographic lapse rates (empirically-estimated spatially and seasonally variable changes in climate variables with elevation). We calculate monthly local topographic lapse rates from the 800-m Parameter-elevation Regressions on Independent Slopes Model (PRISM) dataset, which is based on regressions of observed climate against topographic variables. We then use these lapse rates to elevationally correct two sources of regional climate-model output: (1) the North American Regional Reanalysis (NARR), a retrospective dataset produced from a regional forecasting model constrained by observations, and (2) a range of baseline climate scenarios from the North American Regional Climate Change Assessment Program (NARCCAP), which is produced by a series of RCMs driven by GCMs. By running a calibrated and validated hydrologic model, the Soil and Water Assessment Tool (SWAT), using observed station data and elevationally-adjusted NARR and NARCCAP output, we are able to estimate the sensitivity of hydrologic modeling to the source of the input climate data. Topographic correction of regional climate-model data is a promising method for modeling the hydrology of mountainous basins for which no weather station datasets are available or for simulating hydrology under past or future climates.
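
    The elevational correction described above amounts to adding the local lapse rate times the elevation difference between the target location and the regional climate-model grid cell. A minimal sketch with a hypothetical station elevation and a hypothetical monthly lapse rate (the actual rates are estimated from the PRISM dataset per month and location):

```python
def lapse_rate_adjust(value_grid, elev_grid_m, elev_target_m, lapse_per_km):
    """Elevationally adjust a gridded climate value using a local topographic
    lapse rate, expressed in units of `value_grid` per km of elevation gain."""
    return value_grid + lapse_per_km * (elev_target_m - elev_grid_m) / 1000.0

# Hypothetical January temperature: RCM grid cell at 800 m reports 2.0 degC;
# the basin outlet sits at 1450 m; the local January lapse rate is -5.2 degC/km.
t_adj = lapse_rate_adjust(2.0, 800.0, 1450.0, -5.2)
print(round(t_adj, 2))  # colder at the higher elevation
```

    The same correction, with its own seasonally varying rate, applies to precipitation or any other downscaled variable.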

  5. Detailed Theoretical Model for Adjustable Gain-Clamped Semiconductor Optical Amplifier

    Directory of Open Access Journals (Sweden)

    Lin Liu

    2012-01-01

    Full Text Available The adjustable gain-clamped semiconductor optical amplifier (AGC-SOA) uses two SOAs in a ring-cavity topology: one to amplify the signal and the other to control the gain. The device was designed to maximize the output saturated power while adjusting gain to regulate power differences between packets without loss of linearity. This type of subsystem can be used for power equalisation and linear amplification in packet-based dynamic systems such as passive optical networks (PONs). A detailed theoretical model is presented in this paper to simulate the operation of the AGC-SOA, which gives a better understanding of the underlying gain-clamping mechanism. Simulations and comparisons with steady-state and dynamic gain-modulation experimental performance are given, which validate the model.

  6. Block adjustment of airborne InSAR based on interferogram phase and POS data

    Science.gov (United States)

    Yue, Xijuan; Zhao, Yinghui; Han, Chunming; Dou, Changyong

    2015-12-01

    High-precision surface elevation information over large areas can be obtained efficiently by an airborne Interferometric Synthetic Aperture Radar (InSAR) system, which has recently become an important tool for acquiring remote sensing data and performing mapping applications in areas where surveying and mapping are difficult to accomplish by spaceborne satellite or fieldwork. Based on the study of the three-dimensional (3D) positioning model using interferogram phase and Position and Orientation System (POS) data, and of the block adjustment error model, a block adjustment method to produce a seamless wide-area mosaic product from airborne InSAR data is proposed in this paper. The effect of six parameters, including the trajectory and attitude of the aircraft, baseline length and inclination angle, slant range, and interferometric phase, on the 3D positioning accuracy is quantitatively analyzed. Using the data acquired in the field campaign conducted in Mianyang County, Sichuan Province, China, in June 2011, a seamless mosaic Digital Elevation Model (DEM) product was generated from 76 images in 4 flight strips by the proposed block adjustment model. The residuals of ground control points (GCPs), the absolute positioning accuracy of check points (CPs), and the relative positioning accuracy of tie points (TPs), both in the same and in adjacent strips, were assessed. The experimental results suggest that the DEM and Digital Orthophoto Map (DOM) products generated from the airborne InSAR data with sparse GCPs can meet the mapping accuracy requirement at a scale of 1:10 000.

  7. Processing Approach of Non-linear Adjustment Models in the Space of Non-linear Models

    Institute of Scientific and Technical Information of China (English)

    LI Chaokui; ZHU Qing; SONG Chengfang

    2003-01-01

    This paper investigates the mathematical features of non-linear models and discusses the processing of the non-linear factors that contribute to the non-linearity of a non-linear model. On the basis of the error definition, this paper puts forward a new adjustment criterion, SGPE. Last, this paper investigates the solution of a non-linear regression model in the non-linear model space and compares the estimated values in non-linear model space with those in linear model space.

  8. FLC based adjustable speed drives for power quality enhancement

    Directory of Open Access Journals (Sweden)

    Sukumar Darly

    2010-01-01

    Full Text Available This study describes a new fuzzy-algorithm-based approach to suppressing the current harmonic content in the output of an inverter. Inverter systems using fuzzy controllers provide ride-through capability during voltage sags, reduce harmonics, improve power factor, and offer high reliability, less electromagnetic interference noise, low common-mode noise, and an extended output voltage range. A feasibility test is implemented by building a model of a three-phase impedance-source inverter, which is designed and controlled on the basis of the proposed considerations. It is verified from the practical point of view that these new approaches are more effective and acceptable for minimizing harmonic distortion and improving power quality. Due to the complexity of the algorithm, its realization often calls for a compromise between cost and performance. The proposed optimizing strategies may be applied in variable-frequency dc-ac inverters, UPSs, and ac drives.

  9. Using Green's Functions to initialize and adjust a global, eddying ocean biogeochemistry general circulation model

    Science.gov (United States)

    Brix, H.; Menemenlis, D.; Hill, C.; Dutkiewicz, S.; Jahn, O.; Wang, D.; Bowman, K.; Zhang, H.

    2015-11-01

    The NASA Carbon Monitoring System (CMS) Flux Project aims to attribute changes in the atmospheric accumulation of carbon dioxide to spatially resolved fluxes by utilizing the full suite of NASA data, models, and assimilation capabilities. For the oceanic part of this project, we introduce ECCO2-Darwin, a new ocean biogeochemistry general circulation model based on combining the following pre-existing components: (i) a full-depth, eddying, global-ocean configuration of the Massachusetts Institute of Technology general circulation model (MITgcm), (ii) an adjoint-method-based estimate of ocean circulation from the Estimating the Circulation and Climate of the Ocean, Phase II (ECCO2) project, (iii) the MIT ecosystem model "Darwin", and (iv) a marine carbon chemistry model. Air-sea gas exchange coefficients and initial conditions of dissolved inorganic carbon, alkalinity, and oxygen are adjusted using a Green's Functions approach in order to optimize modeled air-sea CO2 fluxes. Data constraints include observations of carbon dioxide partial pressure (pCO2) for 2009-2010, global air-sea CO2 flux estimates, and the seasonal cycle of the Takahashi et al. (2009) Atlas. The model sensitivity experiments (or Green's Functions) include simulations that start from different initial conditions as well as experiments that perturb air-sea gas exchange parameters and the ratio of particulate inorganic to organic carbon. The Green's Functions approach yields a linear combination of these sensitivity experiments that minimizes model-data differences. The resulting initial conditions and gas exchange coefficients are then used to integrate the ECCO2-Darwin model forward. Despite the small number (six) of control parameters, the adjusted simulation is significantly closer to the data constraints (37% cost function reduction, i.e., reduction in the model-data difference, relative to the baseline simulation) and to independent observations (e.g., alkalinity). The adjusted air-sea gas
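
    The Green's Functions step is, in essence, a least-squares fit: find the weights on the sensitivity experiments whose linear combination best explains the baseline model-data misfit. A minimal two-parameter sketch using the normal equations (the paper's optimization over six control parameters is analogous; the vectors below are synthetic):

```python
def greens_function_weights(g1, g2, misfit):
    """Solve the 2-parameter normal equations (G^T G) x = G^T d by Cramer's
    rule, where g1 and g2 are the model responses to unit perturbations of
    two control parameters and `misfit` is the baseline model-minus-data
    vector to be explained."""
    a11 = sum(x * x for x in g1)
    a12 = sum(x * y for x, y in zip(g1, g2))
    a22 = sum(y * y for y in g2)
    b1 = sum(x * d for x, d in zip(g1, misfit))
    b2 = sum(y * d for y, d in zip(g2, misfit))
    det = a11 * a22 - a12 * a12
    return ((b1 * a22 - b2 * a12) / det, (a11 * b2 - a12 * b1) / det)

# Synthetic check: the misfit is exactly 2*g1 + 3*g2, so the fit recovers (2, 3).
g1, g2 = [1.0, 0.0, 2.0], [0.5, 1.0, 0.0]
w = greens_function_weights(g1, g2, [2 * a + 3 * b for a, b in zip(g1, g2)])
print(w)  # (2.0, 3.0)
```

    The fitted weights then define the adjusted initial conditions and gas exchange coefficients used in the forward integration.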

  10. Assessing climate change effects on long-term forest development: adjusting growth, phenology, and seed production in a gap model

    NARCIS (Netherlands)

    Meer, van der P.J.; Jorritsma, I.T.M.; Kramer, K.

    2002-01-01

    The sensitivity of forest development to climate change is assessed using a gap model. Process descriptions in the gap model of growth, phenology, and seed production were adjusted for climate change effects using a detailed process-based growth model and a regression analysis. Simulation runs over 4

  12. The Optimal Solution of the Model with Physical and Human Capital Adjustment Costs

    Institute of Scientific and Technical Information of China (English)

    RAO Lan-lan; CAI Dong-han

    2004-01-01

    We prove that the model with physical and human capital adjustment costs has an optimal solution when the production function exhibits increasing returns, and that the structure of the vector fields of the model changes substantially when the production function turns from decreasing to increasing returns. It is also shown that the economy improves when the coefficients of the adjustment costs become small.

  13. Family support and acceptance, gay male identity formation, and psychological adjustment: a path model.

    Science.gov (United States)

    Elizur, Y; Ziv, M

    2001-01-01

    While heterosexist family undermining has been demonstrated to be a developmental risk factor in the life of persons with same-gender orientation, the issue of protective family factors is both controversial and relatively neglected. In this study of Israeli gay males (N = 114), we focused on the interrelations of family support, family acceptance and family knowledge of gay orientation, and gay male identity formation, and their effects on mental health and self-esteem. A path model was proposed based on the hypotheses that family support, family acceptance, family knowledge, and gay identity formation have an impact on psychological adjustment, and that family support has an effect on gay identity formation that is mediated by family acceptance. The assessment of gay identity formation was based on an established stage model that was streamlined for cross-cultural practice by defining three basic processes of same-gender identity formation: self-definition, self-acceptance, and disclosure (Elizur & Mintzer, 2001). The testing of our conceptual path model demonstrated an excellent fit with the data. An alternative model that hypothesized effects of gay male identity on family acceptance and family knowledge did not fit the data. Interpreting these results, we propose that the main effect of family support/acceptance on gay identity is related to the process of disclosure, and that both general family support and family acceptance of same-gender orientation play a significant role in the psychological adjustment of gay men.

  14. A New Method for Identifying the Model Error of Adjustment System

    Institute of Scientific and Technical Information of China (English)

    TAO Benzao; ZHANG Chaoyu

    2005-01-01

    Some theoretical problems affecting parameter estimation are discussed in this paper, and the influence of, and transformation between, errors of the stochastic and functional models are pointed out as well. For choosing the best adjustment model, a formula for estimating and identifying the model error, which differs from the existing methods in the literature, is proposed. On the basis of the proposed formula, an effective approach for selecting the best model of the adjustment system is given.

  15. Rank-Defect Adjustment Model for Survey-Line Systematic Errors in Marine Survey Net

    Institute of Scientific and Technical Information of China (English)

    2002-01-01

    In this paper, the structure of systematic and random errors in a marine survey net is discussed in detail and the adjustment method for observations of a marine survey net is studied, in which the rank-defect characteristic is discovered for the first time. On the basis of the survey-line systematic error model, the formulae of the rank-defect adjustment model are deduced according to modern adjustment theory. An example calculation with real observed data is carried out to demonstrate the efficiency of this adjustment model. Moreover, it is proved that the semi-systematic error correction method used at present in marine gravimetry in China is a special case of the adjustment model presented in this paper.

  16. Benchmarking Judgmentally Adjusted Forecasts

    NARCIS (Netherlands)

    Ph.H.B.F. Franses (Philip Hans); L.P. de Bruijn (Bert)

    2017-01-01

    textabstractMany publicly available macroeconomic forecasts are judgmentally adjusted model-based forecasts. In practice, usually only a single final forecast is available, and not the underlying econometric model, nor are the size and reason for adjustment known. Hence, the relative weights given

  18. Benchmarking judgmentally adjusted forecasts

    NARCIS (Netherlands)

    Ph.H.B.F. Franses (Philip Hans); L.P. de Bruijn (Bert)

    2015-01-01

    markdownabstractMany publicly available macroeconomic forecasts are judgmentally-adjusted model-based forecasts. In practice usually only a single final forecast is available, and not the underlying econometric model, nor are the size and reason for adjustment known. Hence, the relative weights give

  19. Bundle Block Adjustment of Airborne Three-Line Array Imagery Based on Rotation Angles

    Directory of Open Access Journals (Sweden)

    Yongjun Zhang

    2014-05-01

    Full Text Available Amid the rapid developments in electronic instruments and remote sensing technologies, airborne three-line array sensors and their applications are being widely promoted, and plentiful research related to data processing and high-precision geo-referencing technologies is under way. The exterior orientation parameters (EOPs), which are measured by the integrated positioning and orientation system (POS) of airborne three-line sensors, however, have inevitable systematic errors, so direct geo-referencing is not sufficiently accurate for surveying and mapping applications. Consequently, a few ground control points are necessary to refine the exterior orientation parameters, and this paper discusses bundle block adjustment models based on systematic error compensation and on the orientation image, considering the principle of the image sensor and the characteristics of the integrated POS. Unlike the models available in the literature, which mainly use a quaternion to represent the rotation matrix of exterior orientation, three rotation angles are directly used in order to effectively model and eliminate the systematic errors of the POS observations. Very good experimental results have been achieved with several real datasets, which verify the correctness and effectiveness of the proposed adjustment models.
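
    The three rotation angles parameterize the exterior-orientation rotation matrix directly. A sketch using the common photogrammetric sequence R = Rx(omega) Ry(phi) Rz(kappa); the paper's exact angle conventions are an assumption here:

```python
import math

def rotation_matrix(omega, phi, kappa):
    """Build R = Rx(omega) @ Ry(phi) @ Rz(kappa), a common photogrammetric
    convention (the paper's exact angle definitions may differ)."""
    co, so = math.cos(omega), math.sin(omega)
    cp, sp = math.cos(phi), math.sin(phi)
    ck, sk = math.cos(kappa), math.sin(kappa)
    rx = [[1, 0, 0], [0, co, -so], [0, so, co]]
    ry = [[cp, 0, sp], [0, 1, 0], [-sp, 0, cp]]
    rz = [[ck, -sk, 0], [sk, ck, 0], [0, 0, 1]]

    def matmul(a, b):
        return [[sum(a[i][k] * b[k][j] for k in range(3)) for j in range(3)]
                for i in range(3)]

    return matmul(matmul(rx, ry), rz)

R = rotation_matrix(0.01, -0.02, 1.57)  # small roll/pitch, ~90 degree heading
# Any valid rotation matrix is orthonormal, so distinct rows are orthogonal:
dot = sum(R[0][j] * R[1][j] for j in range(3))
print(abs(dot) < 1e-12)  # True
```

    Using the angles (rather than a quaternion) as the adjusted unknowns makes the POS attitude errors appear as simple additive corrections in the bundle adjustment.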

  20. Study on Posture Adjusting System of Spacecraft Based on Stewart Mechanism

    Science.gov (United States)

    Gao, Feng; Feng, Wei; Dai, Wei-Bing; Yi, Wang-Min; Liu, Guang-Tong; Zheng, Sheng-Yu

    In this paper, the design principles of adjusting parallel mechanisms are introduced, covering the mechanical subsystem, control subsystem, and software subsystem. According to the design principles, key technologies for systems of adjusting parallel mechanisms are analyzed. Finally, design specifications for systems of adjusting parallel mechanisms are proposed based on the requirements of spacecraft integration; they can be applied to cabin docking, solar array panel docking, and camera docking.

  1. Demography-adjusted tests of neutrality based on genome-wide SNP data

    KAUST Repository

    Rafajlović, Marina

    2014-08-01

    Tests of the neutral evolution hypothesis are usually built on the standard model which assumes that mutations are neutral and the population size remains constant over time. However, it is unclear how such tests are affected if the last assumption is dropped. Here, we extend the unifying framework for tests based on the site frequency spectrum, introduced by Achaz and Ferretti, to populations of varying size. Key ingredients are the first two moments of the site frequency spectrum. We show how these moments can be computed analytically if a population has experienced two instantaneous size changes in the past. We apply our method to data from ten human populations gathered in the 1000 genomes project, estimate their demographies and define demography-adjusted versions of Tajima's D, Fay & Wu's H, and Zeng's E. Our results show that demography-adjusted test statistics facilitate the direct comparison between populations and that most of the differences among populations seen in the original unadjusted tests can be explained by their underlying demographies. Upon carrying out whole-genome screens for deviations from neutrality, we identify candidate regions of recent positive selection. We provide track files with values of the adjusted and unadjusted tests for upload to the UCSC genome browser. © 2014 Elsevier Inc.
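
    For reference, the unadjusted Tajima's D is computed from the site frequency spectrum using constant-size moments; the paper's contribution is to replace those moments with ones derived under the estimated demography. A sketch of the standard statistic, with a toy spectrum for illustration:

```python
import math

def tajimas_d(sfs):
    """Standard (constant-population-size) Tajima's D from an unfolded site
    frequency spectrum, where sfs[i] is the number of sites with derived-allele
    count i+1 in a sample of n = len(sfs)+1 sequences. Returns (pi, theta_w, D)."""
    n = len(sfs) + 1
    S = sum(sfs)
    a1 = sum(1.0 / i for i in range(1, n))
    a2 = sum(1.0 / i**2 for i in range(1, n))
    # Pairwise diversity pi and Watterson's theta, the two theta estimators
    pi = sum(x * (i + 1) * (n - i - 1) for i, x in enumerate(sfs)) / (n * (n - 1) / 2)
    theta_w = S / a1
    # Constant-size variance terms (Tajima 1989)
    b1 = (n + 1) / (3.0 * (n - 1))
    b2 = 2.0 * (n * n + n + 3) / (9.0 * n * (n - 1))
    c1 = b1 - 1.0 / a1
    c2 = b2 - (n + 2) / (a1 * n) + a2 / a1**2
    e1, e2 = c1 / a1, c2 / (a1**2 + a2)
    var = e1 * S + e2 * S * (S - 1)
    return pi, theta_w, (pi - theta_w) / math.sqrt(var)

pi, tw, d = tajimas_d([2, 1, 0])  # toy spectrum: n = 4 sequences, 3 segregating sites
print(round(d, 3))
```

    The demography-adjusted version keeps the same pi minus theta_w numerator but normalizes with the spectrum's first two moments computed under the fitted size history.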

  2. Nonparametric randomization-based covariate adjustment for stratified analysis of time-to-event or dichotomous outcomes.

    Science.gov (United States)

    Hussey, Michael A; Koch, Gary G; Preisser, John S; Saville, Benjamin R

    2016-01-01

    Time-to-event or dichotomous outcomes in randomized clinical trials often have analyses using the Cox proportional hazards model or conditional logistic regression, respectively, to obtain covariate-adjusted log hazard (or odds) ratios. Nonparametric Randomization-Based Analysis of Covariance (NPANCOVA) can be applied to unadjusted log hazard (or odds) ratios estimated from a model containing treatment as the only explanatory variable. These adjusted estimates are stratified population-averaged treatment effects and only require a valid randomization to the two treatment groups and avoid key modeling assumptions (e.g., proportional hazards in the case of a Cox model) for the adjustment variables. The methodology has application in the regulatory environment where such assumptions cannot be verified a priori. Application of the methodology is illustrated through three examples on real data from two randomized trials.

  3. RGB-D SLAM Based on Extended Bundle Adjustment with 2D and 3D Information

    Directory of Open Access Journals (Sweden)

    Kaichang Di

    2016-08-01

    Full Text Available In the study of the SLAM problem using an RGB-D camera, depth information and visual information, as two types of primary measurement data, are rarely tightly coupled during refinement of the camera pose estimation. In this paper, a new RGB-D camera SLAM method is proposed based on extended bundle adjustment with integrated 2D and 3D information on the basis of a new projection model. First, the geometric relationship between the image-plane coordinates and the depth values is constructed through RGB-D camera calibration. Then, 2D and 3D feature points are automatically extracted and matched between consecutive frames to build a continuous image network. Finally, extended bundle adjustment based on the new projection model, which takes both image and depth measurements into consideration, is applied to the image network for high-precision pose estimation. Field experiments show that the proposed method performs notably better than the traditional method, and the experimental results demonstrate its effectiveness in improving localization accuracy.

  4. RGB-D SLAM Based on Extended Bundle Adjustment with 2D and 3D Information.

    Science.gov (United States)

    Di, Kaichang; Zhao, Qiang; Wan, Wenhui; Wang, Yexin; Gao, Yunjun

    2016-08-13

    In the study of the SLAM problem using an RGB-D camera, depth information and visual information, as two types of primary measurement data, are rarely tightly coupled during refinement of the camera pose estimation. In this paper, a new RGB-D camera SLAM method is proposed based on extended bundle adjustment with integrated 2D and 3D information on the basis of a new projection model. First, the geometric relationship between the image-plane coordinates and the depth values is constructed through RGB-D camera calibration. Then, 2D and 3D feature points are automatically extracted and matched between consecutive frames to build a continuous image network. Finally, extended bundle adjustment based on the new projection model, which takes both image and depth measurements into consideration, is applied to the image network for high-precision pose estimation. Field experiments show that the proposed method performs notably better than the traditional method, and the experimental results demonstrate its effectiveness in improving localization accuracy.
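
    The projection model at the heart of the method ties image-plane coordinates to depth values. The basic pinhole back-projection underlying any such RGB-D model can be sketched as follows; the intrinsics used are hypothetical values loosely typical of a consumer RGB-D camera, not the paper's calibration:

```python
def backproject(u, v, depth, fx, fy, cx, cy):
    """Pinhole back-projection of a depth pixel (u, v, Z) to a 3D point in the
    camera frame. This is the basic relation underlying RGB-D projection
    models; the paper's calibrated model additionally couples depth
    measurements with the image-plane observations in the adjustment."""
    x = (u - cx) * depth / fx
    y = (v - cy) * depth / fy
    return x, y, depth

# Hypothetical intrinsics: focal lengths fx = fy = 525 px, principal point (320, 240)
X = backproject(420.0, 260.0, 1.5, fx=525.0, fy=525.0, cx=320.0, cy=240.0)
print(X)
```

    In the extended bundle adjustment, both the reprojected 2D residuals and the 3D residuals of such back-projected points contribute to the cost being minimized.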

  5. ASPECTS OF DESIGN PROCESS AND CAD MODELLING OF AN ADJUSTABLE CENTRIFUGAL COUPLING

    Directory of Open Access Journals (Sweden)

    Adrian BUDALĂ

    2015-05-01

    Full Text Available The paper deals with constructive and functional elements of an adjustable coupling with friction shoes and adjustable driving. The paper also presents several stages of the design process, some advantages of using CAD software, and some comparative results for the prototype vs. the CAD model.

  6. Parental Support, Coping Strategies, and Psychological Adjustment: An Integrative Model with Late Adolescents.

    Science.gov (United States)

    Holahan, Charles J.; And Others

    1995-01-01

    An integrative predictive model was applied to responses of 241 college freshmen to examine interrelationships among parental support, adaptive coping strategies, and psychological adjustment. Social support from both parents and a nonconflictual parental relationship were positively associated with adolescents' psychological adjustment. (SLD)

  7. Lower extremity EMG-driven modeling of walking with automated adjustment of musculoskeletal geometry.

    Science.gov (United States)

    Meyer, Andrew J; Patten, Carolynn; Fregly, Benjamin J

    2017-01-01

    Neuromusculoskeletal disorders affecting walking ability are often difficult to manage, in part due to limited understanding of how a patient's lower extremity muscle excitations contribute to the patient's lower extremity joint moments. To assist in the study of these disorders, researchers have developed electromyography (EMG) driven neuromusculoskeletal models utilizing scaled generic musculoskeletal geometry. While these models can predict individual muscle contributions to lower extremity joint moments during walking, the accuracy of the predictions can be hindered by errors in the scaled geometry. This study presents a novel EMG-driven modeling method that automatically adjusts surrogate representations of the patient's musculoskeletal geometry to improve prediction of lower extremity joint moments during walking. In addition to commonly adjusted neuromusculoskeletal model parameters, the proposed method adjusts model parameters defining muscle-tendon lengths, velocities, and moment arms. We evaluated our EMG-driven modeling method using data collected from a high-functioning hemiparetic subject walking on an instrumented treadmill at speeds ranging from 0.4 to 0.8 m/s. EMG-driven model parameter values were calibrated to match inverse dynamic moments for five degrees of freedom in each leg while keeping musculoskeletal geometry close to that of an initial scaled musculoskeletal model. We found that our EMG-driven modeling method incorporating automated adjustment of musculoskeletal geometry predicted net joint moments during walking more accurately than did the same method without geometric adjustments. Geometric adjustments improved moment prediction errors by 25% on average and up to 52%, with the largest improvements occurring at the hip. Predicted adjustments to musculoskeletal geometry were comparable to errors reported in the literature between scaled generic geometric models and measurements made from imaging data. Our results demonstrate that with

  8. Uncertainties in Tidally Adjusted Estimates of Sea Level Rise Flooding (Bathtub Model for the Greater London

    Directory of Open Access Journals (Sweden)

    Ali P. Yunus

    2016-04-01

    Full Text Available Sea-level rise (SLR) from global warming may have severe consequences for coastal cities, particularly when combined with predicted increases in the strength of tidal surges. Predicting the regional impact of SLR flooding is strongly dependent on the modelling approach and accuracy of topographic data. Here, the areas under risk of sea water flooding for London boroughs were quantified based on the projected SLR scenarios reported in the Intergovernmental Panel on Climate Change (IPCC) fifth assessment report (AR5) and UK climatic projections 2009 (UKCP09), using a tidally-adjusted bathtub modelling approach. Medium- to very-high-resolution digital elevation models (DEMs) are used to evaluate inundation extents as well as uncertainties. Depending on the SLR scenario and DEMs used, it is estimated that 3%–8% of the area of Greater London could be inundated by 2100. The boroughs with the largest areas at risk of flooding are Newham, Southwark, and Greenwich. The differences in inundation areas estimated from a digital terrain model and a digital surface model are much greater than the root mean square error differences observed between the two data types, which may be attributed to processing levels. Flood models from SRTM data underestimate the inundation extent, so their results may not be reliable for constructing flood risk maps. This analysis provides a broad-scale estimate of the potential consequences of SLR and uncertainties in the DEM-based bathtub type flood inundation modelling for London boroughs.
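The bathtub approach referred to above can be sketched in a few lines: a DEM cell is flagged as inundated when its elevation does not exceed the projected sea level plus a tidal adjustment. This is an illustrative simplification (no hydrological connectivity check), and all names here are invented:

```python
def bathtub_inundation(dem, sea_level, tidal_surge=0.0):
    """Tidally adjusted bathtub flood mask: a cell floods when its elevation
    is at or below projected sea level plus the surge term.
    dem is a 2D grid of elevations (same vertical datum as sea_level)."""
    threshold = sea_level + tidal_surge
    # 1 marks an inundated cell, 0 a dry cell
    return [[1 if z <= threshold else 0 for z in row] for row in dem]
```

Inundated area then follows by summing the mask and multiplying by the cell area; connectivity-aware variants additionally require a flooded path to the sea.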

  9. A New Method of Risk Measurement Based on Liquidity-Adjusted CAViaR Models

    Institute of Scientific and Technical Information of China (English)

    闫昌荣

    2012-01-01

    The measurement and management of liquidity risk, one of the major risks faced by investors, has long been among the most difficult problems in both academia and practice. In this paper, we propose a new risk measurement method, liquidity-adjusted CAViaR models, to help investors better manage future risks, especially liquidity risk. This method directly reflects the impact of liquidity changes on future risk, and the liquidity-adjusted VaR can be calculated simultaneously. Empirical studies suggest that the model characterizes the dynamic behavior of liquidity risk in the Chinese stock market well, and that a substantial decline in stock liquidity can lead to a significant increase in future risk. The effect of positive liquidity tends to be more significant than that of negative liquidity, and it is therefore more worthy of investors' attention.
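As one concrete reading of the approach, a liquidity regressor can be appended to a symmetric-absolute-value CAViaR recursion. The specification and coefficient layout below are assumptions for illustration, not the paper's exact model; in practice the coefficients would be estimated by quantile regression:

```python
def liquidity_adjusted_caviar(returns, liquidity, beta, var0):
    """Illustrative liquidity-adjusted CAViaR recursion:
        VaR_t = b0 + b1 * VaR_{t-1} + b2 * |r_{t-1}| + b3 * liq_{t-1}
    beta = (b0, b1, b2, b3) are assumed pre-estimated coefficients,
    var0 is the initial VaR level."""
    b0, b1, b2, b3 = beta
    var_path = [var0]
    for r, liq in zip(returns, liquidity):
        var_path.append(b0 + b1 * var_path[-1] + b2 * abs(r) + b3 * liq)
    return var_path
```

The recursion makes the channel explicit: a liquidity deterioration entering through b3 raises the next period's VaR directly, in addition to the usual autoregressive and return-magnitude terms.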

  10. Modeling of an Adjustable Beam Solid State Light Project

    Science.gov (United States)

    Clark, Toni

    2015-01-01

    This proposal is for the development of a computational model of a prototype variable beam light source using optical modeling software, Zemax Optics Studio. The variable beam light source would be designed to generate flood, spot, and directional beam patterns, while maintaining the same average power usage. The optical model would demonstrate the possibility of such a light source and its ability to address several issues: commonality of design, human task variability, and light source design process improvements. An adaptive lighting solution that utilizes the same electronics footprint and power constraints while addressing variability of lighting needed for the range of exploration tasks can save costs and allow for the development of common avionics for lighting controls.

  11. Mixed continuous/discrete time modelling with exact time adjustments

    NARCIS (Netherlands)

    Rovers, K.C.; Kuper, Jan; van de Burgwal, M.D.; Kokkeler, Andre B.J.; Smit, Gerardus Johannes Maria

    2011-01-01

    Many systems interact with their physical environment. Design of such systems needs a modelling and simulation tool which can deal with both the continuous and discrete aspects. However, most current tools are not adequately able to do so, as they implement both continuous and discrete time signals

  12. Procedures for adjusting regional regression models of urban-runoff quality using local data

    Science.gov (United States)

    Hoos, A.B.; Sisolak, J.K.

    1993-01-01

    Statistical operations termed model-adjustment procedures (MAPs) can be used to incorporate local data into existing regression models to improve the prediction of urban-runoff quality. Each MAP is a form of regression analysis in which the local database is used as a calibration data set. Regression coefficients are determined from the local database, and the resulting 'adjusted' regression models can then be used to predict storm-runoff quality at unmonitored sites. The response variable in the regression analyses is the observed load or mean concentration of a constituent in storm runoff for a single storm. The set of explanatory variables used in the regression analyses is different for each MAP, but always includes the predicted value of load or mean concentration from a regional regression model. The four MAPs examined in this study were: single-factor regression against the regional model prediction, P (termed MAP-1F-P); regression against P (termed MAP-R-P); regression against P and additional local variables (termed MAP-R-P+nV); and a weighted combination of P and a local-regression prediction (termed MAP-W). The procedures were tested by means of split-sample analysis, using data from three cities included in the Nationwide Urban Runoff Program: Denver, Colorado; Bellevue, Washington; and Knoxville, Tennessee. The MAP that provided the greatest predictive accuracy for the verification data set differed among the three test databases and among model types (MAP-W for Denver and Knoxville, MAP-1F-P and MAP-R-P for Bellevue load models, and MAP-R-P+nV for Bellevue concentration models) and, in many cases, was not clearly indicated by the values of standard error of estimate for the calibration data set. A scheme to guide MAP selection, based on exploratory data analysis of the calibration data set, is presented and tested. The MAPs were tested for sensitivity to the size of a calibration data set. As expected, predictive accuracy of all MAPs for
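The single-factor procedure can be sketched as an origin-constrained regression of local observations on the regional prediction P; the fitted slope then adjusts P at unmonitored sites. The helper below is illustrative only and its names are invented:

```python
def map_single_factor(local_obs, regional_pred):
    """Single-factor model-adjustment sketch: fit local storm loads against
    the regional-model predictions P with a slope through the origin,
    then return a function that adjusts P at unmonitored sites."""
    # Least-squares slope through the origin: sum(o*p) / sum(p*p)
    num = sum(o * p for o, p in zip(local_obs, regional_pred))
    den = sum(p * p for p in regional_pred)
    slope = num / den

    def adjusted(p_new):
        return slope * p_new  # locally adjusted regional prediction
    return adjusted
```

The other procedures differ only in the design matrix: adding an intercept, additional local explanatory variables, or a weighted blend with a purely local regression.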

  13. Last deglacial relative sea level variations in Antarctica derived from glacial isostatic adjustment modelling

    Directory of Open Access Journals (Sweden)

    Jun'ichi Okuno

    2013-11-01

    Full Text Available We present relative sea level (RSL) curves in Antarctica derived from glacial isostatic adjustment (GIA) predictions based on the melting scenarios of the Antarctic ice sheet since the Last Glacial Maximum (LGM) given in previous works. Simultaneously, Holocene-age RSL observations obtained at the raised beaches along the coast of Antarctica are shown to be in agreement with the GIA predictions. The differences from previously published ice-loading models regarding the spatial distribution and total mass change of the melted ice are significant. These models were also derived from GIA modelling; the variations can be attributed to the lack of geological and geographical evidence regarding the history of crustal movement due to ice sheet evolution. Next, we summarise the previously published ice load models and demonstrate the RSL curves based on combinations of different ice and earth models. The RSL curves calculated by GIA models indicate that the model dependence of both the ice and earth models is significantly large at several sites where RSL observations were obtained. In particular, GIA predictions based on a thin lithospheric thickness show spatial distributions that depend on the melted ice thickness at each site. These characteristics result from the short-wavelength deformation of the Earth. However, our predictions strongly suggest that it is possible to find the average ice model despite the use of different models of lithospheric thickness. With sea level and crustal movement observations, we can deduce the geometry of the post-LGM ice sheets in detail and remove the GIA contribution from the crustal deformation and gravity change observed by space geodetic techniques, such as GPS and GRACE, for the estimation of Antarctic ice mass change associated with recent global warming.

  14. Second-Order Polynomial Equation-Based Block Adjustment for Orthorectification of DISP Imagery

    Directory of Open Access Journals (Sweden)

    Guoqing Zhou

    2016-08-01

    Full Text Available Due to the lack of ground control points (GCPs) and parameters of satellite orbits, as well as the interior and exterior orientation parameters of cameras, in historical declassified intelligence satellite photography (DISP) imagery, a second-order polynomial equation-based block adjustment model is proposed for orthorectification of DISP imagery. With the proposed model, 355 DISP images from four missions and five orbits are orthorectified, with an approximate accuracy of 2.0–3.0 m. The 355 orthorectified images are assembled into a seamless, full-coverage mosaic image map of the karst area of Guangxi, China. The accuracy of the mosaicked image map is within 2.0–4.0 m when compared to 78 checkpoints measured by Real-Time Kinematic (RTK) GPS surveys. The assembled image map will be delivered to the Guangxi Geological Library and released to the public domain and the research community.
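A second-order polynomial mapping of the kind named in the title can be fitted by least squares over ground control points. The sketch below assumes a generic 6-term quadratic per ground axis; it illustrates the idea, not the paper's exact block-adjustment formulation:

```python
import numpy as np

def fit_second_order_polynomial(img_xy, ground_xy):
    """Least-squares fit of a second-order polynomial mapping from image
    coordinates (x, y) to ground coordinates (X, Y):
        X = a0 + a1*x + a2*y + a3*x*y + a4*x**2 + a5*y**2  (likewise for Y).
    Needs at least 6 well-distributed GCPs per image."""
    x, y = img_xy[:, 0], img_xy[:, 1]
    A = np.column_stack([np.ones_like(x), x, y, x * y, x**2, y**2])
    coeffs, *_ = np.linalg.lstsq(A, ground_xy, rcond=None)
    return coeffs  # shape (6, 2): one coefficient column per ground axis

def apply_polynomial(coeffs, x, y):
    """Evaluate the fitted mapping at one image point."""
    basis = np.array([1.0, x, y, x * y, x**2, y**2])
    return basis @ coeffs
```

A block adjustment additionally ties overlapping images together through shared tie points, solving all images' coefficients in one system rather than per image.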

  15. Risk Adjustment for Determining Surgical Site Infection in Colon Surgery: Are All Models Created Equal?

    Science.gov (United States)

    Muratore, Sydne; Statz, Catherine; Glover, J J; Kwaan, Mary; Beilman, Greg

    2016-04-01

    Colon surgical site infections (SSIs) are being utilized increasingly as a quality measure for hospital reimbursement and public reporting. The Centers for Medicare and Medicaid Services (CMS) now require reporting of colon SSI, which is entered through the U.S. Centers for Disease Control and Prevention's National Healthcare Safety Network (NHSN). However, the CMS's model for determining expected SSIs uses different risk adjustment variables than does NHSN. We hypothesize that CMS's colon SSI model will predict lower expected infection rates than will NHSN. Colon SSI data were reported prospectively to NHSN from 2012-2014 for the six Fairview Hospitals (1,789 colon procedures). We compared expected quarterly SSIs and standardized infection ratios (SIRs) generated by CMS's risk-adjustment model (age and American Society of Anesthesiologist [ASA] classification) vs. NHSN's (age, ASA classification, procedure duration, endoscope [including laparoscope] use, medical school affiliation, hospital bed number, and incision class). The patients with more complex colon SSIs were more likely to be male (60% vs. 44%; p = 0.011), to have contaminated/dirty incisions (21% vs. 10%; p = 0.005), and to have longer operations (235 min vs. 156 min; p < 0.001) and were more likely to be at a medical school-affiliated hospital (53% vs. 40%; p = 0.032). For Fairview Hospitals combined, CMS calculated a lower number of expected quarterly SSIs than did the NHSN (4.58 vs. 5.09 SSIs/quarter; p = 0.002). This difference persisted in a university hospital (727 procedures; 2.08 vs. 2.33; p = 0.002) and a smaller, community-based hospital (565 procedures; 1.31 vs. 1.42; p = 0.002). There were two quarters in which CMS identified Fairview's SIR as an outlier for complex colon SSIs (p = 0.05 and 0.04), whereas NHSN did not (p = 0.06 and 0.06). The CMS's current risk-adjustment model using age and ASA classification predicts lower rates of expected colon

  16. Study of Traffic Flow Adjustment Methods for Congestion Areas Based on TransModeler

    Institute of Scientific and Technical Information of China (English)

    杨慧; 成卫; 肖海承; 潘云伟; 张东明

    2011-01-01

    For traffic congestion arising under temporary traffic-control measures in small and medium-sized cities, the microscopic traffic simulation software TransModeler is applied to build a road-network model of the congested area. Dynamic traffic-organization measures are used to control the total traffic volume in the congested area and to balance traffic pressure across the network. Taking the dynamic traffic-organization optimization of the congested downtown area of Qujing city as an example, simulation results are used to evaluate traffic flow on each congested road section before and after the scheme's implementation. This demonstrates that the flow-adjustment method is scientifically effective in relieving traffic conditions in congested areas.

  17. Dynamic Air-Route Adjustments - Model,Algorithm,and Sensitivity Analysis

    Institute of Scientific and Technical Information of China (English)

    GENG Rui; CHENG Peng; CUI Deguang

    2009-01-01

    Dynamic airspace management (DAM) is an important approach to extend limited airspace resources by using them more efficiently and flexibly. This paper analyzes the use of the dynamic air-route adjustment (DARA) method as a core procedure in DAM systems. The DARA method makes dynamic decisions on when and how to adjust the current air-route network with minimum cost. This model differs from the air traffic flow management (ATFM) problem because it considers dynamic opening and closing of air-route segments instead of only arranging flights on a given air traffic network, and it takes into account several new constraints, such as the shortest opening time constraint. The DARA problem is solved using a two-step heuristic algorithm. The sensitivities of important coefficients in the model are analyzed to determine proper values for these coefficients. The computational results based on practical data from the Beijing ATC region show that the two-step heuristic algorithm gives results as good as CPLEX in less or equal time in most cases.

  18. Risk-based surveillance: Estimating the effect of unwarranted confounder adjustment

    DEFF Research Database (Denmark)

    Willeberg, Preben; Nielsen, Liza Rosenbaum; Salman, Mo

    2011-01-01

    We estimated the effects of confounder adjustment, as part of the underlying quantitative risk assessments, on the performance of a hypothetical risk-based surveillance system in which a single risk factor would be used to identify high-risk sampling units for testing. The differences between estimates of surveillance system performance with and without unwarranted confounder adjustment were shown to be of both numerical and economical significance. Analytical procedures applied to multiple-risk-factor datasets that yield confounder-adjusted risk estimates should be carefully considered for their appropriateness if the risk estimates are to be used for informing risk-based surveillance systems.

  19. Adjusting Felder-Silverman learning styles model for application in adaptive e-learning

    Directory of Open Access Journals (Sweden)

    Mihailović Đorđe

    2012-01-01

    Full Text Available This paper presents an approach for adjusting the Felder-Silverman learning styles model for application in the development of adaptive e-learning systems. The main goal of the paper is to improve existing e-learning courses by developing a method for adaptation based on learning styles. The proposed method includes analysis of data related to student characteristics and applies the concept of personalization in creating e-learning courses. The research was conducted at the Faculty of Organizational Sciences, University of Belgrade, during the winter semester of 2009/10, on a sample of 318 students. The students from the experimental group were divided into three clusters, based on data about their styles identified using an adjusted Felder-Silverman questionnaire. Data about learning styles collected during the research were used to determine typical groups of students and then to classify students into these groups. The classification was performed using data mining techniques. Adaptation of the e-learning courses was implemented according to the results of the data analysis. Evaluation showed a statistically significant difference in the results of students who attended the course adapted using the described method, compared with the results of students who attended the course that was not adapted.

  20. Parametric Adjustments to the Rankine Vortex Wind Model for Gulf of Mexico Hurricanes

    Science.gov (United States)

    2012-11-01

    Rankine Vortex (RV) model [25], the SLOSH model [28], the Holland model [29], the vortex simulation model [30], and the Willoughby and Rahn model [31] ... where Pn = Pc − 0.69 + 1.33Vm + 0.11u (3). Willoughby et al. [34] provide an alternative formula to estimate Rm as a function of ... MacAfee and Pearson [26], and Willoughby et al. [34] also made adjustments which were tailored for mid-latitude applications. 3 Adjustments to the RV

  1. On the compensation between cloud feedback and cloud adjustment in climate models

    Science.gov (United States)

    Chung, Eui-Seok; Soden, Brian J.

    2017-04-01

    Intermodel compensation between cloud feedback and rapid cloud adjustment has important implications for the range of model-inferred climate sensitivity. Although this negative intermodel correlation exists in both realistic (e.g., coupled ocean-atmosphere models) and idealized (e.g., aqua-planet) model configurations, the compensation appears to be stronger in the latter. The cause of the compensation between feedback and adjustment, and its dependence on model configuration remain poorly understood. In this study, we examine the characteristics of the cloud feedback and adjustment in model simulations with differing complexity, and analyze the causes responsible for their compensation. We show that in all model configurations, the intermodel compensation between cloud feedback and cloud adjustment largely results from offsetting changes in marine boundary-layer clouds. The greater prevalence of these cloud types in aqua-planet models is a likely contributor to the larger correlation between feedback and adjustment in those configurations. It is also shown that differing circulation changes in the aqua-planet configuration of some models act to amplify the intermodel range and sensitivity of the cloud radiative response by about a factor of 2.

  2. A model of the western Laurentide Ice Sheet, using observations of glacial isostatic adjustment

    Science.gov (United States)

    Gowan, Evan J.; Tregoning, Paul; Purcell, Anthony; Montillet, Jean-Philippe; McClusky, Simon

    2016-05-01

    We present the results of a new numerical model of the late glacial western Laurentide Ice Sheet, constrained by observations of glacial isostatic adjustment (GIA), including relative sea level indicators, uplift rates from permanent GPS stations, contemporary differential lake level change, and postglacial tilt of glacial lake level indicators. The latter two datasets have been underutilized in previous GIA-based ice sheet reconstructions. The ice sheet model, called NAICE, is constructed using simple ice physics on the basis of changing margin location and basal shear stress conditions in order to produce ice volumes required to match GIA. The model matches the majority of the observations, while maintaining a relatively realistic ice sheet geometry. Our model has a peak volume at 18,000 yr BP, with a dome located just east of Great Slave Lake with peak thickness of 4000 m, and surface elevation of 3500 m. The modelled ice volume loss between 16,000 and 14,000 yr BP amounts to about 7.5 m of sea level equivalent, which is consistent with the hypothesis that a large portion of Meltwater Pulse 1A was sourced from this part of the ice sheet. The southern part of the ice sheet was thin and had a low elevation profile. This model provides an accurate representation of ice thickness and paleo-topography, and can be used to assess present day uplift and infer past climate.

  3. Study on an adjustable production function model

    Institute of Scientific and Technical Information of China (English)

    葛新权

    2003-01-01

    The Cobb-Douglas production function is the most frequently used nonlinear model and can be changed into a linear model. The reasonableness of this logarithmic linearization has rarely been questioned. On the basis of deeper analysis, this paper puts forward a new proposition that this linearization has a defect, and an adjustable production function model is proposed to eliminate it.

  4. Effect of the spray volume adjustment model on the efficiency of fungicides and residues in processing tomato

    Energy Technology Data Exchange (ETDEWEB)

    Ratajkiewicz, H.; Kierzek, R.; Raczkowski, M.; Hołodyńska-Kulas, A.; Łacka, A.; Wójtowicz, A.; Wachowiak, M.

    2016-11-01

    This study compared the effects of a proportionate spray volume (PSV) adjustment model and a fixed model (300 L/ha) on the infestation of processing tomato with potato late blight (Phytophthora infestans (Mont.) de Bary) (PLB) and on azoxystrobin and chlorothalonil residues in fruits in three consecutive seasons. The fungicides were applied in an alternating system with or without two spreader adjuvants. The proportionate spray volume adjustment model was based on the number of leaves on plants and a spray volume index. The modified Quick, Easy, Cheap, Effective, Rugged, and Safe (QuEChERS) method was optimized and validated for extraction of azoxystrobin and chlorothalonil residues. Gas chromatography with a nitrogen and phosphorus detector and an electron capture detector was used for the analysis of fungicides. The results showed that higher fungicide residues were associated with lower infestation of tomato with PLB. The PSV adjustment model resulted in lower infestation of tomato than the fixed model (300 L/ha) when fungicides were applied at half the dose without adjuvants. Higher expected spray interception into the tomato canopy with the PSV system was recognized as the reason for the better control of PLB. The spreader adjuvants did not have a positive effect on the biological efficacy of the spray volume application systems. The results suggest that the PSV adjustment model can be used to determine the spray volume for fungicide application in processing tomato crops. (Author)
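Since the abstract gives no explicit formula, the proportionate idea can only be hedged as a sketch: the spray volume grows with the canopy size, here expressed as the mean leaf count scaled by the spray volume index. The linear form, the function name, and the unit of the index are assumptions:

```python
def proportionate_spray_volume(mean_leaves_per_plant, spray_volume_index):
    """Sketch of the proportionate spray volume (PSV) idea: applied volume
    scales with canopy size. spray_volume_index is assumed to be the
    volume contribution per leaf (e.g. L/ha per leaf); the fixed-model
    alternative would simply return a constant 300 L/ha."""
    return mean_leaves_per_plant * spray_volume_index
```

The contrast with the fixed model is that early in the season, with few leaves, the PSV volume is far below 300 L/ha, concentrating spray interception on the actual canopy.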

  5. Effect of the spray volume adjustment model on the efficiency of fungicides and residues in processing tomato

    Directory of Open Access Journals (Sweden)

    Henryk Ratajkiewicz

    2016-08-01

    Full Text Available This study compared the effects of a proportionate spray volume (PSV) adjustment model and a fixed model (300 L/ha) on the infestation of processing tomato with potato late blight (Phytophthora infestans (Mont.) de Bary) (PLB) and on azoxystrobin and chlorothalonil residues in fruits in three consecutive seasons. The fungicides were applied in an alternating system with or without two spreader adjuvants. The proportionate spray volume adjustment model was based on the number of leaves on plants and a spray volume index. The modified Quick, Easy, Cheap, Effective, Rugged, and Safe (QuEChERS) method was optimized and validated for extraction of azoxystrobin and chlorothalonil residues. Gas chromatography with a nitrogen and phosphorus detector and an electron capture detector was used for the analysis of fungicides. The results showed that higher fungicide residues were associated with lower infestation of tomato with PLB. The PSV adjustment model resulted in lower infestation of tomato than the fixed model (300 L/ha) when fungicides were applied at half the dose without adjuvants. Higher expected spray interception into the tomato canopy with the PSV system was recognized as the reason for the better control of PLB. The spreader adjuvants did not have a positive effect on the biological efficacy of the spray volume application systems. The results suggest that the PSV adjustment model can be used to determine the spray volume for fungicide application in processing tomato crops.

  6. UPDATING THE FREIGHT TRUCK STOCK ADJUSTMENT MODEL: 1997 VEHICLE INVENTORY AND USE SURVEY DATA

    Energy Technology Data Exchange (ETDEWEB)

    Davis, S.C.

    2000-11-16

    The Energy Information Administration's (EIA's) National Energy Modeling System (NEMS) Freight Truck Stock Adjustment Model (FTSAM) was created in 1995 relying heavily on input data from the 1992 Economic Census, Truck Inventory and Use Survey (TIUS). The FTSAM is part of the NEMS Transportation Sector Model, which provides baseline energy projections and analyzes the impacts of various technology scenarios on consumption, efficiency, and carbon emissions. The base data for the FTSAM can be updated every five years as new Economic Census information is released. Because of expertise in using the TIUS database, Oak Ridge National Laboratory (ORNL) was asked to assist the EIA when the new Economic Census data were available. ORNL provided the necessary base data from the 1997 Vehicle Inventory and Use Survey (VIUS) and other sources to update the FTSAM. The next Economic Census will be in the year 2002. When those data become available, the EIA will again want to update the FTSAM using the VIUS. This report, which details the methodology of estimating and extracting data from the 1997 VIUS Microdata File, should be used as a guide for generating the data from the next VIUS so that the new data will be as compatible as possible with the data in the model.

  7. Steps in the construction and verification of an explanatory model of psychosocial adjustment

    Directory of Open Access Journals (Sweden)

    Arantzazu Rodríguez-Fernández

    2016-06-01

    Full Text Available The aim of the present study was to empirically test an explanatory model of psychosocial adjustment during adolescence, with psychosocial adjustment during this stage being understood as a combination of school adjustment (or school engagement) and subjective well-being. According to the hypothesized model, psychosocial adjustment depends on self-concept and resilience, which in turn act as mediators of the influence of perceived social support (from family, peers, and teachers) on this adjustment. Participants were 1250 secondary school students (638 girls and 612 boys) aged between 12 and 15 years (Mean = 13.72; SD = 1.09). The results provided evidence of: (a) the influence of all three types of perceived support on subjects' resilience and self-concept, with perceived family support being particularly important in this respect; (b) the influence of the support received from teachers on school adjustment and of support received from the family on psychological well-being; and (c) the absence of any direct influence of peer support on psychosocial adjustment, although indirect influence was observed through the psychological variables studied. These results are discussed from an educational perspective and in terms of future research.

  8. Adjustment of regional climate model output for modeling the climatic mass balance of all glaciers on Svalbard.

    Science.gov (United States)

    Möller, Marco; Obleitner, Friedrich; Reijmer, Carleen H; Pohjola, Veijo A; Głowacki, Piotr; Kohler, Jack

    2016-05-27

    Large-scale modeling of glacier mass balance relies often on the output from regional climate models (RCMs). However, the limited accuracy and spatial resolution of RCM output pose limitations on mass balance simulations at subregional or local scales. Moreover, RCM output is still rarely available over larger regions or for longer time periods. This study evaluates the extent to which it is possible to derive reliable region-wide glacier mass balance estimates, using coarse resolution (10 km) RCM output for model forcing. Our data cover the entire Svalbard archipelago over one decade. To calculate mass balance, we use an index-based model. Model parameters are not calibrated, but the RCM air temperature and precipitation fields are adjusted using in situ mass balance measurements as reference. We compare two different calibration methods: root mean square error minimization and regression optimization. The obtained air temperature shifts (+1.43°C versus +2.22°C) and precipitation scaling factors (1.23 versus 1.86) differ considerably between the two methods, which we attribute to inhomogeneities in the spatiotemporal distribution of the reference data. Our modeling suggests a mean annual climatic mass balance of -0.05 ± 0.40 m w.e. a^-1 for Svalbard over 2000-2011 and a mean equilibrium line altitude of 452 ± 200 m above sea level. We find that the limited spatial resolution of the RCM forcing with respect to real surface topography and the usage of spatially homogeneous RCM output adjustments and mass balance model parameters are responsible for much of the modeling uncertainty. Sensitivity of the results to model parameter uncertainty is comparably small and of minor importance.
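The two calibration quantities compared above, an additive temperature shift and a multiplicative precipitation factor, can be illustrated in closed form. For a purely additive correction, the RMSE-minimizing shift reduces to the mean difference between reference and RCM values; the scaling factor below is a least-squares slope through the origin. Both helpers are illustrative sketches, not the paper's calibration code:

```python
def rmse_minimizing_shift(rcm_temps, reference_temps):
    """For an additive correction T_adj = T_rcm + s, the shift s that
    minimizes RMSE against the reference is the mean difference."""
    n = len(rcm_temps)
    return sum(ref - rcm for rcm, ref in zip(rcm_temps, reference_temps)) / n

def scaling_factor(rcm_precip, reference_precip):
    """Least-squares multiplicative factor through the origin for
    precipitation: P_adj = k * P_rcm."""
    num = sum(r * p for r, p in zip(reference_precip, rcm_precip))
    den = sum(p * p for p in rcm_precip)
    return num / den
```

With spatially inhomogeneous reference data, a regression-based optimization can land on noticeably different values than RMSE minimization, which is consistent with the spread the abstract reports.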

  9. Adjustment model of thermoluminescence experimental data; Modelo de ajuste de datos experimentales de termoluminiscencia

    Energy Technology Data Exchange (ETDEWEB)

    Moreno y Moreno, A. [Departamento de Apoyo en Ciencias Aplicadas, Benemerita Universidad Autonoma de Puebla, 4 Sur 104, Centro Historico, 72000 Puebla (Mexico); Moreno B, A. [Facultad de Ciencias Quimicas, UNAM, 04510 Mexico D.F. (Mexico)

    2002-07-01

    This model adjusts thermoluminescence experimental data according to the equation I(T) = Σᵢ aᵢ · exp(−(T − cᵢ)²/bᵢ), where aᵢ, bᵢ and cᵢ are the parameters of the i-th peak, each peak being adjusted to a Gaussian curve. The curve adjustments can be performed manually or analytically using the macro function and the Solver.xla add-in previously installed in the computational system. This work presents: 1. experimental data from a LiF glow curve obtained from the Physics Institute of UNAM, to which the data adjustment model is applied using the macro; 2. a four-peak LiF curve based on Harshaw data, simulated in Microsoft Excel and discussed in previous works, used as a reference without the macro. (Author)
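A Gaussian multi-peak adjustment of a glow curve, as described above, can also be reproduced outside the spreadsheet with a standard least-squares fit. The sketch below assumes the peak form I(T) = Σᵢ aᵢ·exp(−(T − cᵢ)²/bᵢ) (reconstructed from the garbled source equation) and fits synthetic two-peak data; values and names are illustrative, not the LiF data of the record.

```python
import numpy as np
from scipy.optimize import curve_fit

def glow_curve(T, *params):
    """Sum of Gaussian peaks: I(T) = sum_i a_i * exp(-(T - c_i)**2 / b_i).
    params is a flat sequence (a1, b1, c1, a2, b2, c2, ...)."""
    I = np.zeros_like(T, dtype=float)
    for a, b, c in zip(params[0::3], params[1::3], params[2::3]):
        I += a * np.exp(-(T - c) ** 2 / b)
    return I

# Synthetic two-peak curve (illustrative values only)
T = np.linspace(300.0, 500.0, 400)
true_params = [1.0, 200.0, 360.0, 0.6, 300.0, 440.0]
I_obs = glow_curve(T, *true_params)

# Rough initial guesses stand in for the manual adjustment step
p0 = [0.8, 150.0, 350.0, 0.5, 250.0, 430.0]
popt, _ = curve_fit(glow_curve, T, I_obs, p0=p0)
```

This plays the role of the Solver-based analytic adjustment: each triple (aᵢ, bᵢ, cᵢ) is refined until the modeled curve matches the measured one.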

  10. Adjustment and Development of Health User’s Mental Model Completeness Scale in Search Engines

    Directory of Open Access Journals (Sweden)

    Maryam Nakhoda

    2016-10-01

    Introduction: Users' performance and their interaction with information retrieval systems can be observed in the development of their mental models. Users, especially users of health information, use mental models to facilitate their interactions with these systems, and incomplete or incorrect models can cause problems for them. The aim of this study was the adjustment and development of a health user's mental model completeness scale in search engines. Method: This quantitative study uses the Delphi method. Among various scales for users' mental model completeness, Li's scale was selected, and some items were added to this scale based on previous valid literature. Delphi panel members were selected using the purposeful sampling method, consisting of 20 and 18 participants in the first and second rounds, respectively. Kendall's coefficient of concordance in SPSS version 16 was used as the basis for agreement (95% confidence). Results: The Kendall coefficient of concordance (W) was calculated to be 0.261 (P-value < 0.001) for the first and 0.336 (P-value < 0.001) for the second round. Therefore, the study was found to be statistically significant with 95% confidence. Since the increase in the coefficient over two consecutive rounds was very small (equal to 0.075), surveying of the panel members was stopped based on the second Schmidt criterion, and the Delphi method was stopped after the second round. Finally, the dimensions of Li's scale (existence and nature, search characteristics, and levels of interaction) were confirmed again, but "indexing of pages or websites" was eliminated, and "difference between results of different search engines", "possibility of access to similar or related webpages", and "possibility of search for special formats and multimedia" were added to Li's scale. Conclusion: In this study, the scale for mental model completeness of health users was adjusted and developed; it can help the designers of information retrieval systems in systematic
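Kendall's coefficient of concordance, used above as the panel-agreement criterion, can be computed directly from the raters' scores. A minimal sketch without the tie-correction term (the helper name is illustrative):

```python
import numpy as np
from scipy.stats import rankdata

def kendalls_w(ratings):
    """Kendall's coefficient of concordance W (no tie correction).

    ratings: array-like of shape (m raters, n items).
    W = 12*S / (m^2 * (n^3 - n)), where S is the sum of squared
    deviations of the item rank sums from their mean.
    """
    ratings = np.asarray(ratings, dtype=float)
    m, n = ratings.shape
    ranks = np.apply_along_axis(rankdata, 1, ratings)  # rank items per rater
    rank_sums = ranks.sum(axis=0)
    s = np.sum((rank_sums - rank_sums.mean()) ** 2)
    return 12.0 * s / (m ** 2 * (n ** 3 - n))
```

W ranges from 0 (no agreement) to 1 (perfect agreement); the record's rise from 0.261 to 0.336 between rounds is the quantity this function computes.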

  11. Adjusting kinematics and kinetics in a feedback-controlled toe walking model

    Directory of Open Access Journals (Sweden)

    Olenšek Andrej

    2012-08-01

    Background: In clinical gait assessment, the correct interpretation of gait kinematics and kinetics has a decisive impact on the success of the therapeutic programme. Due to the vast amount of information from which primary anomalies should be identified and separated from secondary compensatory changes, as well as the biomechanical complexity and redundancy of the human locomotion system, this task is considerably challenging and requires the attention of an experienced interdisciplinary team of experts. Ongoing research in the field of biomechanics suggests that mathematical modeling may facilitate this task. This paper explores the possibility of generating a family of toe walking gait patterns by systematically changing selected parameters of a feedback-controlled model. Methods: From the selected clinical case of toe walking we identified typical toe walking characteristics and encoded them as a set of gait-oriented control objectives to be achieved in a feedback-controlled walking model. They were defined as fourth-order polynomials and imposed via feedback control at the within-step control level. At the between-step control level, stance leg lengthening velocity at the end of the single support phase was adaptively adjusted after each step so as to facilitate gait velocity control. Each time the gait velocity settled at the desired value, selected intra-step gait characteristics were modified by adjusting the polynomials so as to mimic the effect of a typical therapeutic intervention - inhibitory casting. Results: By systematically adjusting the set of control parameters we were able to generate a family of gait kinematic and kinetic patterns that exhibit similar principal toe walking characteristics, as they were recorded by means of an instrumented gait analysis system in the selected clinical case of toe walking. We further acknowledge that they to some extent follow similar improvement tendencies as those which one can

  12. A Water Hammer Protection Method for Mine Drainage System Based on Velocity Adjustment of Hydraulic Control Valve

    Directory of Open Access Journals (Sweden)

    Yanfei Kou

    2016-01-01

    Water hammer analysis is a fundamental part of the pipeline design process for water distribution networks. The main characteristics of mine drainage systems are limited space and the high cost of changing equipment and pipelines. In order to solve the problem of protecting mine drainage systems against valve-closing water hammer, a water hammer protection method based on velocity adjustment of the HCV (Hydraulic Control Valve) is proposed in this paper. The mathematical model of water hammer fluctuations is established based on the characteristic line method. Then, boundary conditions of water hammer control for mine drainage systems are determined and a simplified model is established. The optimal adjustment strategy is solved from the mathematical model of multistage valve-closing. Taking a mine drainage system as an example, comparison between simulation and experimental results shows that the proposed method and the optimized valve-closing strategy are effective.
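At an interior pipe node, the characteristic line method underlying the model above combines the C+ and C− compatibility equations into an explicit update for head and flow. A minimal sketch (B denotes the pipe impedance a/(gA) and R a lumped friction coefficient, in the usual textbook notation; the function name is illustrative):

```python
def moc_interior_node(hA, qA, hB, qB, B, R):
    """One time-step update at an interior node by the method of
    characteristics.

    (hA, qA): head and flow at the upstream neighbor at the previous step
    (hB, qB): head and flow at the downstream neighbor at the previous step
    B       : pipe impedance a/(g*A); R: lumped friction coefficient
    """
    cP = hA + B * qA - R * qA * abs(qA)   # C+ characteristic from upstream
    cM = hB - B * qB + R * qB * abs(qB)   # C- characteristic from downstream
    qP = (cP - cM) / (2.0 * B)            # new flow at the node
    hP = (cP + cM) / 2.0                  # new head at the node
    return hP, qP
```

Sweeping this update over all nodes, with valve-closure boundary conditions at the HCV, yields the pressure fluctuations that the multistage closing strategy is optimized against.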

  13. Evaluating changes in matrix-based, recovery-adjusted concentrations in paired data for pesticides in groundwater.

    Science.gov (United States)

    Zimmerman, Tammy M; Breen, Kevin J

    2012-01-01

    Pesticide concentration data for waters from selected carbonate-rock aquifers in agricultural areas of Pennsylvania were collected in 1993-2009 for occurrence and distribution assessments. A set of 30 wells was visited once in 1993-1995 and again in 2008-2009 to assess concentration changes. The data include censored matched pairs (nondetections of a compound in one or both samples of a pair). A potentially improved approach for assessing concentration changes is presented where (i) concentrations are adjusted with models of matrix-spike recovery and (ii) area-wide temporal change is tested by use of the paired Prentice-Wilcoxon (PPW) statistical test. The PPW results for atrazine, simazine, metolachlor, prometon, and an atrazine degradate, deethylatrazine (DEA), are compared using recovery-adjusted and unadjusted concentrations. Results for adjusted compared with unadjusted concentrations in 2008-2009 compared with 1993-1995 were similar for atrazine and simazine (significant decrease; 95% confidence level) and metolachlor (no change) but differed for DEA (adjusted, decrease; unadjusted, increase) and prometon (adjusted, decrease; unadjusted, no change). The PPW results were different on recovery-adjusted compared with unadjusted concentrations. Not accounting for variability in recovery can mask a true change, misidentify a change when no true change exists, or assign a direction opposite of the true change in concentration that resulted from matrix influences on extraction and laboratory method performance. However, matrix-based models of recovery derived from a laboratory performance dataset from multiple studies for national assessment, as used herein, rather than time- and study-specific recoveries may introduce uncertainty in recovery adjustments for individual samples that should be considered in assessing change.

  14. Evaluation of the Stress Adjustment and Adaptation Model among Families Reporting Economic Pressure

    Science.gov (United States)

    Vandsburger, Etty; Biggerstaff, Marilyn A.

    2004-01-01

    This research evaluates the Stress Adjustment and Adaptation Model (double ABCX model), examining the effects of resiliency resources on family functioning when families experience economic pressure. Families (N = 128) with incomes at or below the poverty line from a rural area of a southern state completed measures of perceived economic pressure,…

  15. A Model of Divorce Adjustment for Use in Family Service Agencies.

    Science.gov (United States)

    Faust, Ruth Griffith

    1987-01-01

    Presents a combined educationally and therapeutically oriented model of treatment to (1) control and lessen disruptive experiences associated with divorce; (2) enable individuals to improve their skill in coping with adjustment reactions to divorce; and (3) modify the pressures and response of single parenthood. Describes the model's four-session…

  16. Modeling Quality-Adjusted Life Expectancy Loss Resulting from Tobacco Use in the United States

    Science.gov (United States)

    Kaplan, Robert M.; Anderson, John P.; Kaplan, Cameron M.

    2007-01-01

    Purpose: To describe the development of a model for estimating the effects of tobacco use upon Quality Adjusted Life Years (QALYs) and to estimate the impact of tobacco use on health outcomes for the United States (US) population using the model. Method: We obtained estimates of tobacco consumption from 6 years of the National Health Interview…

  17. Controller Design and Analysis of Spacecraft Automatic Levelling and Equalizing Hoist Device based on Hanging Point Adjustment

    Directory of Open Access Journals (Sweden)

    Tang Laiying

    2016-01-01

    Spacecraft Automatic Levelling and Equalizing Hoist Device (SALEHD) is a hoisting device developed for level-adjusting of eccentric spacecraft, based on hanging-point adjustment utilizing an XY-workbench. To make the device adjust the spacecraft to level automatically, a controller for SALEHD was designed in this paper. Through geometric and mechanical analysis of SALEHD and the spacecraft, the mathematical model of the controller is established. Adaptive control and variable structure control terms were then added to the controller to accommodate unknown parameters and eliminate the interference of the support vehicle. The stability of the controller was analysed by constructing a Lyapunov energy function. It was proved that the controller system is asymptotically stable and converges to the origin, which is the equilibrium point, so the controller can be applied to SALEHD effectively and safely.

  18. Assessment and indirect adjustment for confounding by smoking in cohort studies using relative hazards models.

    Science.gov (United States)

    Richardson, David B; Laurier, Dominique; Schubauer-Berigan, Mary K; Tchetgen Tchetgen, Eric; Cole, Stephen R

    2014-11-01

    Workers' smoking histories are not measured in many occupational cohort studies. Here we discuss the use of negative control outcomes to detect and adjust for confounding in analyses that lack information on smoking. We clarify the assumptions necessary to detect confounding by smoking and the additional assumptions necessary to indirectly adjust for such bias. We illustrate these methods using data from 2 studies of radiation and lung cancer: the Colorado Plateau cohort study (1950-2005) of underground uranium miners (in which smoking was measured) and a French cohort study (1950-2004) of nuclear industry workers (in which smoking was unmeasured). A cause-specific relative hazards model is proposed for estimation of indirectly adjusted associations. Among the miners, the proposed method suggests no confounding by smoking of the association between radon and lung cancer--a conclusion supported by adjustment for measured smoking. Among the nuclear workers, the proposed method suggests substantial confounding by smoking of the association between radiation and lung cancer. Indirect adjustment for confounding by smoking resulted in an 18% decrease in the adjusted estimated hazard ratio, yet this cannot be verified because smoking was unmeasured. Assumptions underlying this method are described, and a cause-specific proportional hazards model that allows easy implementation using standard software is presented.
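In its simplest multiplicative form, the indirect adjustment described above divides the observed hazard ratio by the bias factor estimated from the negative control outcome. A sketch of that arithmetic (the function name is illustrative, and the result is only as good as the strong, untestable assumptions the paper spells out):

```python
def indirectly_adjusted_hr(hr_exposure, hr_negative_control):
    """Indirect adjustment for unmeasured confounding by smoking.

    hr_exposure         : observed hazard ratio for the exposure of interest
    hr_negative_control : hazard ratio for a negative control outcome,
                          taken as an estimate of the multiplicative bias
                          from smoking (assumes the control is affected by
                          smoking but not by the exposure).
    """
    return hr_exposure / hr_negative_control
```

For example, a bias factor of about 1.22 from the negative control would turn an observed hazard ratio of 1.50 into an indirectly adjusted value near 1.23, i.e. the kind of ~18% reduction reported for the nuclear workers.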

  19. Design and Implementation of a Web-Based Course Adjustment System in B/S Mode Based on C#

    Institute of Scientific and Technical Information of China (English)

    官虎; 谢艳新; 彭晓峰; 张志阳; 张旭; 王胜

    2012-01-01

    With the rapid development of computer technology, the demands placed on computerized modern educational administration are increasingly high. In modern universities, course rescheduling is highly variable, and conflicts over classrooms and time slots can arise during the rescheduling process. This design mainly solves such conflicts in course adjustment and exchange.

  20. Performance Evaluation of Electronic Inductor-Based Adjustable Speed Drives with Respect to Line Current Interharmonics

    DEFF Research Database (Denmark)

    Soltani, Hamid; Davari, Pooya; Zare, Firuz;

    2017-01-01

    Electronic Inductor (EI)-based front-end rectifiers have a large potential to become the prominent next generation of Active Front End (AFE) topology used in many applications including Adjustable Speed Drives (ASDs) for systems having unidirectional power flow. The EI-based ASD is mostly attract...

  1. Conference Innovations in Derivatives Market : Fixed Income Modeling, Valuation Adjustments, Risk Management, and Regulation

    CERN Document Server

    Grbac, Zorana; Scherer, Matthias; Zagst, Rudi

    2016-01-01

    This book presents 20 peer-reviewed chapters on current aspects of derivatives markets and derivative pricing. The contributions, written by leading researchers in the field as well as experienced authors from the financial industry, present the state of the art in: • Modeling counterparty credit risk: credit valuation adjustment, debit valuation adjustment, funding valuation adjustment, and wrong way risk. • Pricing and hedging in fixed-income markets and multi-curve interest-rate modeling. • Recent developments concerning contingent convertible bonds, the measuring of basis spreads, and the modeling of implied correlations. The recent financial crisis has cast tremendous doubts on the classical view on derivative pricing. Now, counterparty credit risk and liquidity issues are integral aspects of a prudent valuation procedure and the reference interest rates are represented by a multitude of curves according to their different periods and maturities. A panel discussion included in the book (featuring D...

  2. Development of a Risk-adjustment Model for the Inpatient Rehabilitation Facility Discharge Self-care Functional Status Quality Measure.

    Science.gov (United States)

    Deutsch, Anne; Pardasaney, Poonam; Iriondo-Perez, Jeniffer; Ingber, Melvin J; Porter, Kristie A; McMullen, Tara

    2017-07-01

    Functional status measures are important patient-centered indicators of inpatient rehabilitation facility (IRF) quality of care. We developed a risk-adjusted self-care functional status measure for the IRF Quality Reporting Program. This paper describes the development and performance of the measure's risk-adjustment model. Our sample included IRF Medicare fee-for-service patients from the Centers for Medicare & Medicaid Services' 2008-2010 Post-Acute Care Payment Reform Demonstration. Data sources included the Continuity Assessment Record and Evaluation Item Set, IRF-Patient Assessment Instrument, and Medicare claims. Self-care scores were based on 7 Continuity Assessment Record and Evaluation items. The model was developed using discharge self-care score as the dependent variable, and generalized linear modeling with generalized estimation equation to account for patient characteristics and clustering within IRFs. Patient demographics, clinical characteristics at IRF admission, and clinical characteristics related to the recent hospitalization were tested as risk adjusters. A total of 4769 patient stays from 38 IRFs were included. Approximately 57% of the sample was female; 38.4%, 75-84 years; and 31.0%, 65-74 years. The final model, containing 77 risk adjusters, explained 53.7% of variance in discharge self-care scores (P<0.0001). Admission self-care function was the strongest predictor, followed by admission cognitive function and IRF primary diagnosis group. The range of expected and observed scores overlapped very well, with little bias across the range of predicted self-care functioning. Our risk-adjustment model demonstrated strong validity for predicting discharge self-care scores. Although the model needs validation with national data, it represents an important first step in evaluation of IRF functional outcomes.
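Risk adjustment of this kind regresses the discharge self-care score on admission characteristics and compares observed with expected scores. The sketch below substitutes ordinary least squares for the paper's generalized linear model with generalized estimating equations, so it ignores clustering within IRFs; the variable and function names are illustrative.

```python
import numpy as np

def fit_risk_adjustment(X, y):
    """OLS stand-in for the risk-adjustment model.

    X: (n patients, k risk adjusters) admission covariates
    y: (n,) observed discharge self-care scores
    Returns coefficients [intercept, b1, ..., bk].
    """
    Xd = np.column_stack([np.ones(len(y)), np.asarray(X, float)])
    beta, *_ = np.linalg.lstsq(Xd, np.asarray(y, float), rcond=None)
    return beta

def expected_score(x_row, beta):
    """Risk-adjusted (expected) discharge score for one patient."""
    return beta[0] + np.dot(np.asarray(x_row, float), beta[1:])
```

A facility's quality signal is then the gap between its patients' observed scores and these expected scores, rather than the raw discharge scores.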

  3. Development of a GIA (Glacial Isostatic Adjustment) - Fault Model of Greenland

    Science.gov (United States)

    Steffen, R.; Lund, B.

    2015-12-01

    The increase in sea level due to climate change is an intensely discussed phenomenon, while less attention is being paid to the change in earthquake activity that may accompany disappearing ice masses. The melting of the Greenland Ice Sheet, for example, induces changes in the crustal stress field, which could result in the activation of existing faults and the generation of destructive earthquakes. Such glacially induced earthquakes are known to have occurred in Fennoscandia 10,000 years ago. Within a new project ("Glacially induced earthquakes in Greenland", starting in October 2015), we will analyse the potential for glacially induced earthquakes in Greenland due to the ongoing melting. The objectives include the development of a three-dimensional (3D) subsurface model of Greenland, which is based on geologic, geophysical and geodetic datasets, and which also fulfils the boundary conditions of glacial isostatic adjustment (GIA) modelling. Here we present an overview of the project, including the most recently available datasets and the methodologies needed for model construction and the simulation of GIA-induced earthquakes.

  4. Executive function and psychosocial adjustment in healthy children and adolescents: A latent variable modelling investigation.

    Science.gov (United States)

    Cassidy, Adam R

    2016-01-01

    The objective of this study was to establish latent executive function (EF) and psychosocial adjustment factor structure, to examine associations between EF and psychosocial adjustment, and to explore potential developmental differences in EF-psychosocial adjustment associations in healthy children and adolescents. Using data from the multisite National Institutes of Health (NIH) magnetic resonance imaging (MRI) Study of Normal Brain Development, the current investigation examined latent associations between theoretically and empirically derived EF factors and emotional and behavioral adjustment measures in a large, nationally representative sample of children and adolescents (7-18 years old; N = 352). Confirmatory factor analysis (CFA) was the primary method of data analysis. CFA results revealed that, in the whole sample, the proposed five-factor model (Working Memory, Shifting, Verbal Fluency, Externalizing, and Internalizing) provided a close fit to the data, χ²(66) = 114.48, p … psychosocial adjustment associations. Findings indicate that childhood EF skills are best conceptualized as a constellation of interconnected yet distinguishable cognitive self-regulatory skills. Individual differences in certain domains of EF track meaningfully and in expected directions with emotional and behavioral adjustment indices. Externalizing behaviors, in particular, are associated with latent Working Memory and Verbal Fluency factors.

  5. A comparison of administrative and physiologic predictive models in determining risk adjusted mortality rates in critically ill patients.

    Directory of Open Access Journals (Sweden)

    Kyle B Enfield

    BACKGROUND: Hospitals are increasingly compared based on clinical outcomes adjusted for severity of illness. Multiple methods exist to adjust for differences between patients. The challenge for consumers of this information, both the public and healthcare providers, is interpreting differences in risk adjustment models, particularly when models differ in their use of administrative and physiologic data. We set out to examine how administrative and physiologic models compare to each other when applied to critically ill patients. METHODS: We prospectively abstracted variables for a physiologic and an administrative model of mortality from two intensive care units in the United States. Predicted mortality was compared through the Pearson product-moment coefficient and Bland-Altman analysis. A subgroup of patients admitted directly from the emergency department was analyzed to remove potential confounding from changes in condition prior to ICU admission. RESULTS: We included 556 patients from two academic medical centers in this analysis. The administrative and physiologic models' predicted mortalities for the combined cohort were 15.3% (95% CI 13.7%, 16.8%) and 24.6% (95% CI 22.7%, 26.5%), respectively (t-test p-value < 0.001). The r² for these models was 0.297. The Bland-Altman plot suggests that at low predicted mortality there was good agreement; however, as mortality increased the models diverged. Similar results were found when analyzing a subgroup of patients admitted directly from the emergency department. When comparing the two hospitals, there was a statistical difference when using the administrative model but not the physiologic model. Unexplained mortality, defined as those patients who died who had a predicted mortality less than 10%, was a rare event by either model.
    CONCLUSIONS: In conclusion, while it has been shown that administrative models provide estimates of mortality that are similar to physiologic models in non-critically ill patients with pneumonia, our results
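The Bland-Altman comparison used above reduces to the mean difference between paired predictions and its 95% limits of agreement. A minimal sketch (the function name is illustrative):

```python
import numpy as np

def bland_altman(model_a, model_b):
    """Bland-Altman statistics for paired predictions from two models.

    Returns (bias, (lower LoA, upper LoA), pairwise means), where bias is
    the mean difference a - b and the limits of agreement are
    bias +/- 1.96 * SD of the differences.
    """
    a = np.asarray(model_a, float)
    b = np.asarray(model_b, float)
    diff = a - b
    mean = (a + b) / 2.0              # x-axis of the Bland-Altman plot
    bias = diff.mean()
    sd = diff.std(ddof=1)
    return bias, (bias - 1.96 * sd, bias + 1.96 * sd), mean
```

Plotting `diff` against `mean` is what reveals the divergence at high predicted mortality reported in the record: agreement that is good near zero but widens as the pairwise mean grows.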

  6. Bias adjustment of satellite-based precipitation estimation using gauge observations: A case study in Chile

    Science.gov (United States)

    Yang, Zhongwen; Hsu, Kuolin; Sorooshian, Soroosh; Xu, Xinyi; Braithwaite, Dan; Verbist, Koen M. J.

    2016-04-01

    Satellite-based precipitation estimates (SPEs) are promising alternative precipitation data for climatic and hydrological applications, especially for regions where ground-based observations are limited. However, existing satellite-based rainfall estimations are subject to systematic biases. This study aims to adjust the biases in the Precipitation Estimation from Remotely Sensed Information using Artificial Neural Networks-Cloud Classification System (PERSIANN-CCS) rainfall data over Chile, using gauge observations as reference. A novel bias adjustment framework, termed QM-GW, is proposed based on the nonparametric quantile mapping approach and a Gaussian weighting interpolation scheme. The PERSIANN-CCS precipitation estimates (daily, 0.04°×0.04°) over Chile are adjusted for the period of 2009-2014. The historical data (satellite and gauge) for 2009-2013 are used to calibrate the methodology; nonparametric cumulative distribution functions of satellite and gauge observations are estimated at every 1°×1° box region. One year (2014) of gauge data was used for validation. The results show that the biases of the PERSIANN-CCS precipitation data are effectively reduced. The spatial patterns of adjusted satellite rainfall show high consistency to the gauge observations, with reduced root-mean-square errors and mean biases. The systematic biases of the PERSIANN-CCS precipitation time series, at both monthly and daily scales, are removed. The extended validation also verifies that the proposed approach can be applied to adjust SPEs into the future, without further need for ground-based measurements. This study serves as a valuable reference for the bias adjustment of existing SPEs using gauge observations worldwide.
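The nonparametric quantile mapping at the core of the QM-GW framework maps each satellite value through the empirical satellite CDF onto the gauge quantile function. A minimal sketch of that step alone, omitting the Gaussian weighting interpolation across 1°×1° boxes (names are illustrative):

```python
import numpy as np

def quantile_map(sat_train, gauge_train, sat_new):
    """Empirical quantile mapping bias adjustment.

    sat_train, gauge_train: calibration-period samples used to build the
    empirical CDFs; sat_new: satellite values to adjust. Each new value is
    sent to its nonexceedance probability under the satellite CDF, then to
    the gauge value with that same probability.
    """
    sat_sorted = np.sort(np.asarray(sat_train, float))
    gauge_sorted = np.sort(np.asarray(gauge_train, float))
    p_sat = (np.arange(1, sat_sorted.size + 1) - 0.5) / sat_sorted.size
    p_gauge = (np.arange(1, gauge_sorted.size + 1) - 0.5) / gauge_sorted.size
    p_new = np.interp(sat_new, sat_sorted, p_sat)     # value -> probability
    return np.interp(p_new, p_gauge, gauge_sorted)    # probability -> value
```

Because only the calibration-period CDFs are stored, the mapping can be applied to future satellite estimates without further gauge data, which is the property the extended validation in the record exploits.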

  7. On adjustment for auxiliary covariates in additive hazard models for the analysis of randomized experiments

    DEFF Research Database (Denmark)

    Vansteelandt, S.; Martinussen, Torben; Tchetgen, E. J Tchetgen

    2014-01-01

    We consider additive hazard models (Aalen, 1989) for the effect of a randomized treatment on a survival outcome, adjusting for auxiliary baseline covariates. We demonstrate that the Aalen least-squares estimator of the treatment effect parameter is asymptotically unbiased, even when the hazard's dependence on time or on the auxiliary covariates is misspecified, and even away from the null hypothesis of no treatment effect. We furthermore show that adjustment for auxiliary baseline covariates does not change the asymptotic variance of the estimator of the effect of a randomized treatment. We conclude that, in view of its robustness against model misspecification, Aalen least-squares estimation is attractive for evaluating treatment effects on a survival outcome in randomized experiments, and the primary reasons to consider baseline covariate adjustment in such settings could be interest in subgroup…

  8. Variational assimilation of streamflow into operational distributed hydrologic models: effect of spatiotemporal adjustment scale

    Directory of Open Access Journals (Sweden)

    H. Lee

    2012-01-01

    State updating of distributed rainfall-runoff models via streamflow assimilation is subject to overfitting because the large dimensionality of the state space of the model may render the assimilation problem seriously under-determined. To examine the issue in the context of operational hydrology, we carry out a set of real-world experiments in which streamflow data is assimilated into gridded Sacramento Soil Moisture Accounting (SAC-SMA) and kinematic-wave routing models of the US National Weather Service (NWS) Research Distributed Hydrologic Model (RDHM) with the variational data assimilation technique. Study basins include four basins in Oklahoma and five basins in Texas. To assess the sensitivity of data assimilation performance to dimensionality reduction in the control vector, we used nine different spatiotemporal adjustment scales, where state variables are adjusted in a lumped, semi-distributed, or distributed fashion and biases in precipitation and potential evaporation (PE) are adjusted hourly, 6-hourly, or kept time-invariant. For each adjustment scale, three different streamflow assimilation scenarios are explored, where streamflow observations at basin interior points, at the basin outlet, or at both interior points and the outlet are assimilated. The streamflow assimilation experiments with nine different basins show that the optimum spatiotemporal adjustment scale varies from one basin to another and may be different for streamflow analysis and prediction in all of the three streamflow assimilation scenarios. The most preferred adjustment scale for seven out of nine basins is found to be the distributed, hourly scale, despite the fact that several independent validation results at this adjustment scale indicated the occurrence of overfitting. Basins with highly correlated interior and outlet flows tend to be less sensitive to the adjustment scale and could benefit more from streamflow assimilation.
In comparison to outlet flow assimilation

  9. Dynamic Adjustment of the Asset-Liability Ratio of Manufacturing Listed Companies in China: An Analysis Based on a Partial Adjustment Model and Dynamic Panel Data

    Institute of Scientific and Technical Information of China (English)

    王亮

    2012-01-01

    This paper studies the adjustment behavior of the asset-liability ratio of manufacturing listed companies in China from a dynamic perspective. Based on panel data for 285 Chinese manufacturing listed companies over the 2001-2008 sample period, a dynamic panel data model is estimated with system GMM. The results show that asset liquidity, non-debt tax shields, growth and net profit margin are significantly negatively correlated with the asset-liability ratio of Chinese manufacturing listed companies, while company size and asset structure are significantly positively correlated with it; changes in macroeconomic conditions also have a significant impact on the asset-liability ratio. Compared with the static model, the dynamic model has stronger explanatory power, and the asset-liability ratio of Chinese manufacturing listed companies adjusts dynamically toward its target optimal value.
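The partial adjustment model named in the title posits D_t − D_{t−1} = λ(D* − D_{t−1}), so the leverage ratio converges geometrically toward its target D* at speed λ. A deterministic single-firm toy sketch of simulating the process and recovering λ (the study itself estimates a stochastic panel version with system GMM; names are illustrative):

```python
import numpy as np

def simulate_partial_adjustment(d0, d_star, lam, n_periods):
    """Partial adjustment: D_t = D_{t-1} + lam * (D* - D_{t-1})."""
    d = [d0]
    for _ in range(n_periods):
        d.append(d[-1] + lam * (d_star - d[-1]))
    return np.array(d)

def adjustment_speed(d):
    """Recover lam from the AR(1) form D_t = lam*D* + (1 - lam)*D_{t-1}:
    the slope of D_t on D_{t-1} equals 1 - lam."""
    slope, _intercept = np.polyfit(d[:-1], d[1:], 1)
    return 1.0 - slope
```

In the panel setting, the coefficient on the lagged leverage ratio plays the same role: one minus that coefficient is the estimated speed of adjustment toward the target.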

  10. AMESim-Based Modeling and Simulation of a Turboshaft Engine Fuel Regulator

    Institute of Scientific and Technical Information of China (English)

    傅强

    2013-01-01

    The turboshaft engine is the power plant of a helicopter, and the mechanical-hydraulic regulator is an important component of the engine control system. Taking the mechanical-hydraulic regulator of an in-service aviation turboshaft engine as the research object, and the mechanical-hydraulic system modeling and simulation software AMESim as the research platform, an AMESim model of the regulator was established and simulation studies were carried out. First, the composition and basic working principle of the regulator are analysed in detail. Second, based on the structure of its components and the principles of flow continuity and force balance, a mathematical model of the regulator is established. Finally, the model is debugged in simulation following the regulator's actual debugging procedure. Comparison with actual engine test data shows that the AMESim model of the regulator meets its various performance specifications. The model and the simulation results can not only serve as a reference for the regulator's debugging process, but also provide a basis for parameter optimization to enhance or improve its performance.

  11. Model Minority Stereotyping, Perceived Discrimination, and Adjustment Among Adolescents from Asian American Backgrounds.

    Science.gov (United States)

    Kiang, Lisa; Witkow, Melissa R; Thompson, Taylor L

    2016-07-01

    The model minority image is a common and pervasive stereotype that Asian American adolescents must navigate. Using multiwave data from 159 adolescents from Asian American backgrounds (mean age at initial recruitment = 15.03, SD = .92; 60% female; 74% US-born), the current study targeted unexplored aspects of the model minority experience in conjunction with more traditionally measured experiences of negative discrimination. When examining normative changes, perceptions of model minority stereotyping increased over the high school years while perceptions of discrimination decreased. The two experiences were not associated with each other, suggesting independent forms of social interaction. Model minority stereotyping generally promoted academic and socioemotional adjustment, whereas discrimination hindered these outcomes. Moreover, in terms of academic adjustment, the model minority stereotype appears to protect against the detrimental effect of discrimination. Implications of the complex duality of adolescents' social interactions are discussed.

  12. Finite element model for arch bridge vibration dynamics considering effect of suspender length adjustment on geometry stiffness matrix

    Institute of Scientific and Technical Information of China (English)

    ZHONG Yi-feng; WANG Rui; YING Xue-gang; CHEN Huai

    2006-01-01

    In this paper, we established a finite element (FEM) model to analyze the dynamic characteristics of arch bridges. In this model, the effects of adjustment to the length of a suspender on its geometry stiffness matrix are stressed. The FEM equations of mechanics characteristics, natural frequency and main mode are set up based on the first-order matrix perturbation theory. Application of the proposed model to a real arch bridge demonstrated improved simulation precision of the bridge's dynamic characteristics when the effects of suspender length variation are considered.

  13. Hierarchical 3D mechanical parts matching based-on adjustable geometry and topology similarity measurements

    Institute of Scientific and Technical Information of China (English)

    马嵩华; 田凌

    2014-01-01

    A hierarchical scheme of feature-based model similarity measurement, named CSG_D2, was proposed, in which both geometry similarity and topology similarity are applied. The features of a 3D mechanical part are constructed from a series of primitive features with a tree structure, in the form of a constructive solid geometry (CSG) tree. The D2 shape distributions of these features are extracted for geometry similarity measurement, and the pose vector and non-disappeared proportion of each leaf node are obtained for topology similarity measurement. Based on these, the dissimilarity between the query and the candidate is assessed by level-by-level CSG tree comparisons. With adjustable weights, the scheme accommodates different emphases on geometry or topology similarity. The assessment results show that CSG_D2 is more discriminative than D2 in precision-recall and similarity matrix analyses. Finally, an experimental search engine applying CSG_D2 is used for mechanical parts reuse, which is convenient for the mechanical design process.

  14. A Threshold Model of Social Support, Adjustment, and Distress after Breast Cancer Treatment

    Science.gov (United States)

    Mallinckrodt, Brent; Armer, Jane M.; Heppner, P. Paul

    2012-01-01

    This study examined a threshold model that proposes that social support exhibits a curvilinear association with adjustment and distress, such that support in excess of a critical threshold level has decreasing incremental benefits. Women diagnosed with a first occurrence of breast cancer (N = 154) completed survey measures of perceived support…

  15. Designing a model to improve first year student adjustment to university

    Directory of Open Access Journals (Sweden)

    Nasrin Nikfal Azar

    2014-05-01

    Full Text Available The increase in the number of universities in Iran over the last decade increases the need for higher education institutions to manage their enrollment more effectively. The purpose of this study is to design a model to improve first-year university student adjustment by examining the effects of academic self-efficacy, academic motivation, satisfaction, high school GPA and demographic variables on students' adjustment to university. The study selected a sample of 357 students out of 4585 first-year bachelor students enrolled in different programs. Three questionnaires were used for data collection: academic self-efficacy, academic motivation and student satisfaction with university. Structural equation modeling was employed using AMOS version 7.16 to test the adequacy of the hypothesized model. Inclusion of additional relationships in the initial model improved the goodness-of-fit indices considerably. The results suggest that academic self-efficacy was related positively to adjustment, both directly (B=0.35) and indirectly through student satisfaction (B=0.14) and academic motivation (B=0.9). The results indicate a need to develop programs that effectively promote the self-efficacy of first-year students to increase college adjustment and consequently the retention rate.

  16. A Study of Perfectionism, Attachment, and College Student Adjustment: Testing Mediational Models.

    Science.gov (United States)

    Hood, Camille A.; Kubal, Anne E.; Pfaller, Joan; Rice, Kenneth G.

    Mediational models predicting college students' adjustment were tested using regression analyses. Contemporary adult attachment theory was employed to explore the cognitive/affective mechanisms by which adult attachment and perfectionism affect various aspects of psychological functioning. Consistent with theoretical expectations, results…

  17. Evaluation and Selection of the Optimal Scheme of Industrial Structure Adjustment Based on DEA

    Institute of Scientific and Technical Information of China (English)

    FU Lifang; GE Jiaqi; MENG Jun

    2006-01-01

    In this paper, the assessment method of DEA (Data Envelopment Analysis) is used to evaluate the relative efficiency of optional schemes of agricultural industrial structure adjustment and to select the optimal one. According to the results of the DEA models, we analyzed the scale benefits of each optional scheme and probed the underlying reasons why some schemes are not DEA-efficient, which clarified the method and approach to improve them. Finally, a new method is proposed to rank the schemes and select the optimal one. The research is significant for directing the practice of agricultural industrial structure adjustment.

  18. Dimensionality of the Chinese Dyadic Adjustment Scale Based on Confirmatory Factor Analyses

    Science.gov (United States)

    Shek, Daniel T. L.; Cheung, C. K.

    2008-01-01

    Based on the responses of 1,501 Chinese married adults to the Chinese version of the Dyadic Adjustment Scale (C-DAS), confirmatory factor analyses showed that four factors were abstracted from the C-DAS (Dyadic Consensus, Dyadic Cohesion, Dyadic Satisfaction and Affectional Expression) and these four primary factors were subsumed under a…

  19. Assessment on Different Schemes of Industrial Structure Adjustment Based on DEA

    Institute of Scientific and Technical Information of China (English)

    FU Li-fang; MENG Jun

    2004-01-01

    DEA is a relatively new research field in operations research. It has unique virtues in dealing with assessment problems involving multiple inputs and, especially, multiple outputs. In this paper, the DEA model Dεc2R has been applied to evaluate the relative efficiency of several different schemes of agricultural industrial structure adjustment and finally to select the optimal scheme. Furthermore, the inferior schemes have been improved according to insights obtained from the DEA model.
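The efficiency score at the heart of such DEA assessments can be computed with a small linear program. A minimal input-oriented CCR sketch is given below (the data are illustrative, not the paper's schemes, and the non-Archimedean ε refinement is omitted):

```python
import numpy as np
from scipy.optimize import linprog

def ccr_efficiency(X, Y, o):
    """Input-oriented CCR efficiency score for DMU index o.

    X: (n_dmu, n_inputs), Y: (n_dmu, n_outputs).
    Decision variables: [theta, lambda_1 .. lambda_n].
    """
    n, m = X.shape
    _, s = Y.shape
    c = np.zeros(1 + n)
    c[0] = 1.0                                  # minimise theta
    # inputs: sum_j lambda_j * x_ij <= theta * x_io
    A_in = np.hstack([-X[o].reshape(m, 1), X.T])
    b_in = np.zeros(m)
    # outputs: sum_j lambda_j * y_rj >= y_ro  ->  -sum <= -y_ro
    A_out = np.hstack([np.zeros((s, 1)), -Y.T])
    b_out = -Y[o]
    res = linprog(c, A_ub=np.vstack([A_in, A_out]),
                  b_ub=np.concatenate([b_in, b_out]),
                  bounds=[(0, None)] * (1 + n), method="highs")
    return res.fun

# two schemes, one input, one output: scheme B uses twice the input for the same output
X = np.array([[2.0], [4.0]])
Y = np.array([[2.0], [2.0]])
print([round(ccr_efficiency(X, Y, o), 3) for o in (0, 1)])  # [1.0, 0.5]
```

A score of 1.0 marks a scheme on the efficient frontier; scores below 1.0 quantify how far an inferior scheme could proportionally shrink its inputs, which is the kind of insight used to improve it.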

  20. Use of generalised Procrustes analysis for the photogrammetric block adjustment by independent models

    Science.gov (United States)

    Crosilla, Fabio; Beinat, Alberto

    The paper reviews at first some aspects of the generalised Procrustes analysis (GP) and outlines the analogies with the block adjustment by independent models. On this basis, an innovative solution of the block adjustment problem by Procrustes algorithms and the related computer program implementation are presented and discussed. The main advantage of the new proposed method is that it avoids the conventional least squares solution. For this reason, linearisation algorithms and the knowledge of a priori approximate values for the unknown parameters are not required. Once the model coordinates of the tie points are available and at least three control points are known, the Procrustes algorithms can directly provide, without further information, the tie point ground coordinates and the exterior orientation parameters. Furthermore, some numerical block adjustment solutions obtained by the new method in different areas of North Italy are compared to the conventional solution. The very simple data input process, the less memory requirements, the low computing time and the same level of accuracy that characterise the new algorithm with respect to a conventional one are verified with these tests. A block adjustment of 11 models, with 44 tie points and 14 control points, takes just a few seconds on an Intel PIII 400 MHz computer, and the total data memory required is less than twice the allocated space for the input data. This is because most of the computations are carried out on data matrices of limited size, typically 3×3.
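The core of the Procrustes approach — a closed-form similarity transform obtained without linearisation or a priori approximate values — can be sketched as follows (synthetic tie points; the weighting and multi-model machinery of generalised Procrustes analysis is omitted):

```python
import numpy as np

def similarity_procrustes(X, Y):
    """Least-squares similarity transform (s, R, t) with Y ≈ s * X @ R.T + t.

    X, Y: (n_points, 3) matched coordinates (model vs. ground control).
    Closed-form SVD solution; no iteration or initial values needed.
    """
    mx, my = X.mean(axis=0), Y.mean(axis=0)
    Xc, Yc = X - mx, Y - my
    U, S, Vt = np.linalg.svd(Yc.T @ Xc)          # 3x3 cross-covariance
    D = np.eye(3)
    D[2, 2] = np.sign(np.linalg.det(U @ Vt))     # guard against reflections
    R = U @ D @ Vt
    s = np.trace(np.diag(S) @ D) / np.sum(Xc ** 2)
    t = my - s * R @ mx
    return s, R, t

# recover a known transform from four synthetic tie points
rng = np.random.default_rng(0)
X = rng.normal(size=(4, 3))
angle = 0.3
R_true = np.array([[np.cos(angle), -np.sin(angle), 0.0],
                   [np.sin(angle),  np.cos(angle), 0.0],
                   [0.0, 0.0, 1.0]])
Y = 2.5 * X @ R_true.T + np.array([10.0, -5.0, 3.0])
s, R, t = similarity_procrustes(X, Y)
print(round(s, 3))  # 2.5
```

With at least three control points known in both systems, the same closed form yields the exterior orientation directly, which is why no linearisation or starting values are required.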

  1. The Prediction of Social Adjustment Based on Emotional and Spiritual Intelligence

    Directory of Open Access Journals (Sweden)

    Kazem Geram

    2016-08-01

    Full Text Available This correlational study attempts to shed light on the prediction of social adjustment from the emotional and spiritual intelligence of male teachers in Arak City. The statistical population consists of all 650 male high school teachers in Arak City. A sample of 300 participants was drawn by cluster random sampling. Three instruments were used: the Social Adjustment Scale (SAS), the Bar-On Emotional Quotient Inventory (EQ-i) and the spiritual intelligence questionnaire by Badii et al. The hypotheses were tested with the Pearson correlation coefficient, Levene's test was used to check homogeneity of variance, and the collected data were analyzed in SPSS. The results show that the relation of social adjustment with emotional intelligence and spiritual intelligence is significant.

  2. An adjustable flow restrictor for implantable infusion pumps based on porous ceramics.

    Science.gov (United States)

    Jannsen, Holger; Klein, Stephan; Nestler, Bodo

    2015-08-01

    This paper describes an adjustable flow restrictor for use in gas-driven implantable infusion pumps, which is based on the resistance of a flow through a porous ceramic material. The flow inside the walls of a ceramic tube can be adjusted between 270 nl/min and 1260 nl/min by changing the flow path length in the ceramic over a distance of 14 mm. The long-term stability of the flow restrictor has been analyzed. A drift of -8% from the nominal value was observed, which lies within the required tolerance of ±10% after 30 days. The average time needed to change the flow rate is 40 s. In addition, the maximum adjustment time was 110 s, which also lies within the specification.

  3. An adjustable focusing system for a 2 MeV H- ion beam line based on permanent magnet quadrupoles

    CERN Document Server

    Nirkko, M; Ereditato, A; Kreslo, I; Scampoli, P; Weber, M

    2012-01-01

    A compact adjustable focusing system for a 2 MeV H- RFQ Linac is designed, constructed and tested based on four permanent magnet quadrupoles (PMQ). A PMQ model is realised using finite element simulations, providing an integrated field gradient of 2.35 T with a maximal field gradient of 57 T/m. A prototype is constructed and the magnetic field is measured, demonstrating good agreement with the simulation. Particle track simulations provide initial values for the quadrupole positions. Accordingly, four PMQs are constructed and assembled on the beam line, their positions are then tuned to obtain a minimal beam spot size of (1.2 x 2.2) mm^2 on target. This paper describes an adjustable PMQ beam line for an external ion beam. The novel compact design based on commercially available NdFeB magnets allows high flexibility for ion beam applications.

  4. Evolution Scenarios at the Romanian Economy Level, Using the R.M. Solow Adjusted Model

    Directory of Open Access Journals (Sweden)

    Stelian Stancu

    2008-06-01

    Full Text Available Alongside the models of M. Keynes, R.F. Harrod, E. Domar, D. Romer, the Ramsey-Cass-Koopmans model etc., the R.M. Solow model belongs to the category of models that characterize economic growth. The paper presents the R.M. Solow adjusted model with specific simulation characteristics and an economic growth scenario. On this basis, the values obtained at the economy level from the simulations are presented: the ratio of capital to output volume, the output volume per employee (equal to the current labour efficiency), as well as the labour efficiency value.
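The Solow dynamics behind such simulation scenarios can be sketched in a few lines; the Cobb-Douglas technology and all parameter values below are illustrative, not the paper's Romanian estimates:

```python
import numpy as np

# Illustrative parameters: capital share, saving rate, population growth,
# technology growth, depreciation (not taken from the paper).
alpha, s, n, g, delta = 0.33, 0.25, 0.01, 0.02, 0.05

def simulate_k(k0, periods):
    """Capital per effective worker: k' = k + s*k**alpha - (n + g + delta)*k."""
    k = k0
    for _ in range(periods):
        k = k + s * k ** alpha - (n + g + delta) * k
    return k

# analytic steady state: s*k**alpha = (n+g+delta)*k  =>  k* = (s/(n+g+delta))**(1/(1-alpha))
k_star = (s / (n + g + delta)) ** (1 / (1 - alpha))
print(round(simulate_k(1.0, 500), 4), round(k_star, 4))
```

Whatever the initial capital stock, the simulated path settles at the analytic steady state, and steady-state ratios such as capital to output follow directly from `k_star`.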

  5. Modelling the rate of change in a longitudinal study with missing data, adjusting for contact attempts.

    Science.gov (United States)

    Akacha, Mouna; Hutton, Jane L

    2011-05-10

    The Collaborative Ankle Support Trial (CAST) is a longitudinal trial of treatments for severe ankle sprains in which interest lies in the rate of improvement, the effectiveness of reminders and potentially informative missingness. A model is proposed for continuous longitudinal data with non-ignorable or informative missingness, taking into account the nature of attempts made to contact initial non-responders. The model combines a non-linear mixed model for the outcome model with logistic regression models for the reminder processes. A sensitivity analysis is used to contrast this model with the traditional selection model, where we adjust for missingness by modelling the missingness process. The conclusions that recovery is slower, and less satisfactory with age and more rapid with below knee cast than with a tubular bandage do not alter materially across all models investigated. The results also suggest that phone calls are most effective in retrieving questionnaires.

  6. Directional Variance Adjustment: Bias Reduction in Covariance Matrices Based on Factor Analysis with an Application to Portfolio Optimization

    Science.gov (United States)

    Bartz, Daniel; Hatrick, Kerr; Hesse, Christian W.; Müller, Klaus-Robert; Lemm, Steven

    2013-01-01

    Robust and reliable covariance estimates play a decisive role in financial and many other applications. An important class of estimators is based on factor models. Here, we show by extensive Monte Carlo simulations that covariance matrices derived from the statistical Factor Analysis model exhibit a systematic error, which is similar to the well-known systematic error of the spectrum of the sample covariance matrix. Moreover, we introduce the Directional Variance Adjustment (DVA) algorithm, which diminishes the systematic error. In a thorough empirical study for the US, European, and Hong Kong stock market we show that our proposed method leads to improved portfolio allocation. PMID:23844016
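A generic statistical-factor covariance estimator of the kind the authors start from can be sketched as follows; this is the plain k-factor construction (common part from the top principal components plus a diagonal idiosyncratic part), not the DVA correction itself:

```python
import numpy as np

def factor_covariance(returns, k):
    """k-factor covariance estimate: keep the top-k principal components,
    replace the residual by its diagonal (idiosyncratic variances).
    A generic statistical-factor sketch, not the paper's DVA algorithm."""
    S = np.cov(returns, rowvar=False)
    w, V = np.linalg.eigh(S)                  # eigenvalues in ascending order
    B = V[:, -k:] * np.sqrt(w[-k:])           # factor loadings
    common = B @ B.T
    idio = np.diag(np.clip(np.diag(S - common), 1e-12, None))
    return common + idio

# 10 synthetic assets driven by two latent factors plus idiosyncratic noise
rng = np.random.default_rng(1)
f = rng.normal(size=(500, 2))
B_true = rng.normal(size=(10, 2))
X = f @ B_true.T + 0.3 * rng.normal(size=(500, 10))
Sigma = factor_covariance(X, k=2)
print(Sigma.shape)  # (10, 10)
```

The resulting matrix is symmetric positive definite by construction, which is what portfolio optimizers require; the paper's contribution is a directional correction of the systematic bias that this basic estimator still carries.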

  9. NKG201xGIA - first results for a new model of glacial isostatic adjustment in Fennoscandia

    Science.gov (United States)

    Steffen, Holger; Barletta, Valentina; Kollo, Karin; Milne, Glenn A.; Nordman, Maaria; Olsson, Per-Anders; Simpson, Matthew J. R.; Tarasov, Lev; Ågren, Jonas

    2016-04-01

    Glacial isostatic adjustment (GIA) is a dominant process in northern Europe, which is observed with several geodetic and geophysical methods. The observed land uplift due to this process amounts to about 1 cm/year in the northern Gulf of Bothnia. GIA affects the establishment and maintenance of reliable geodetic and gravimetric reference networks in the Nordic countries. To support a high level of accuracy in the determination of position, adequate corrections have to be applied with dedicated models. Currently, there are efforts within a Nordic Geodetic Commission (NKG) activity towards a model of glacial isostatic adjustment for Fennoscandia. The new model, NKG201xGIA, to be developed in the near future will complement the forthcoming empirical NKG land uplift model, which will substitute the currently used empirical land uplift model NKG2005LU (Ågren & Svensson, 2007). Together, the models will be a reference for vertical and horizontal motion, gravity and geoid change and more. NKG201xGIA will also provide uncertainty estimates for each field. Following former investigations, the GIA model is based on a combination of an ice and an earth model. The selected reference ice model, GLAC, for Fennoscandia, the Barents/Kara seas and the British Isles is provided by Lev Tarasov and co-workers. Tests of different ice and earth models will be performed based on the expertise of each involved modeler. This includes studies on high resolution ice sheets, different rheologies, lateral variations in lithosphere and mantle viscosity and more. This will also be done in co-operation with scientists outside NKG who help in the development and testing of the model. References Ågren, J., Svensson, R. (2007): Postglacial Land Uplift Model and System Definition for the New Swedish Height System RH 2000. Reports in Geodesy and Geographical Information Systems Rapportserie, LMV-Rapport 4, Lantmäteriet, Gävle.

  10. Adjusting thresholds of satellite-based convective initiation interest fields based on the cloud environment

    Science.gov (United States)

    Jewett, Christopher P.; Mecikalski, John R.

    2013-11-01

    The Time-Space Exchangeability (TSE) concept states that similar characteristics of a given property are closely related statistically for objects or features within close proximity. Here, the objects considered are growing cumulus clouds, and the data sets treated statistically are geostationary satellite infrared (IR) fields that help describe cloud growth rates, cloud top heights, and whether cloud tops contain significant amounts of frozen hydrometeors. The TSE concept is applied to alter otherwise static thresholds of the IR interest fields used within a satellite-based convective initiation (CI) nowcasting algorithm. The convective environment in which the clouds develop dictates growth rate and precipitation processes, and cumuli growing within similar mesoscale environments should have similar growth characteristics. Using environmental information provided by regional statistics of the interest fields, the thresholds are examined for adjustment toward improving the accuracy of 0-1 h CI nowcasts. Growing cumulus clouds are observed within a CI algorithm through IR fields for many thousands of cumulus cloud objects, from which statistics are generated on mesoscales. Initial results show a reduction in the number of false alarms of ~50%, at the cost of eliminating approximately ~20% of the correct CI forecasts. For comparison, static thresholds (i.e., the same threshold values applied across the entire satellite domain) within the CI algorithm often produce a relatively high probability of detection, with false alarms being a significant problem. In addition to increased algorithm performance, a benefit of using a method like TSE is that a variety of unknown variables that influence cumulus cloud growth can be accounted for without the need for explicit near-cloud observations, which can be difficult to obtain.
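The threshold-adjustment idea — replacing one domain-wide static threshold with thresholds derived from each mesoscale region's own distribution of an interest field — can be sketched as follows. The region names, the cooling-rate field, and the percentile choice are all illustrative, not the algorithm's actual values:

```python
import numpy as np

# Synthetic samples of a cloud-top cooling-rate interest field (K / 15 min)
# for two hypothetical mesoscale regions with different convective environments.
rng = np.random.default_rng(3)
field = {
    "moist_region": rng.normal(-8.0, 2.0, size=1000),
    "dry_region": rng.normal(-4.0, 2.0, size=1000),
}

static_threshold = -6.0  # one fixed value applied across the whole domain

# TSE-style adjustment: derive each region's threshold from its own statistics,
# here the 25th percentile of the regional distribution (illustrative choice).
regional = {name: np.percentile(vals, 25) for name, vals in field.items()}
for name, thr in regional.items():
    print(name, round(thr, 2))
```

In this toy setup the static value is too lax for the vigorously cooling moist region and too strict for the dry one, whereas the regional percentiles adapt automatically — the mechanism by which false alarms can drop without per-cloud environmental observations.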

  11. Calibration and adjustment of center of mass (COM) based on EKF during in-flight phase

    Institute of Scientific and Technical Information of China (English)

    DONG Feng; LIAO He; JIA ChengLong; XIA XiaoJing

    2009-01-01

    The electrostatic accelerometer assembled on a gravity satellite serves to measure all non-gravitational accelerations, caused by atmospheric drag, solar radiation pressure, etc. The proof-mass center of the accelerometer needs to be precisely positioned at the center of mass of the satellite; otherwise, the offset between them will introduce measurement disturbances due to the satellite's angular acceleration and the gravity gradient. Because of installation and measurement errors on the ground, fuel consumption during the in-flight phase and other adverse factors, the offset between the proof-mass center and the satellite center of mass is usually large enough to affect the measurement accuracy of the accelerometer, or even to exceed its range. Therefore, the offset needs to be measured or estimated, and then controlled within the measurement requirement of the accelerometer by the center of mass (COM) adjustment mechanism during the life of the satellite. An estimation algorithm based on the EKF, which uses the measurements of the accelerometer, gyro and magnetometer, is put forward to estimate the offset; the COM adjustment mechanism then adjusts the satellite center of mass to make the offset meet the requirement. With its special configuration layout, the COM adjustment mechanism, driven by stepper motors, can regulate the X, Y and Z axes separately. The associated simulation shows that the offset can be controlled to better than 0.03 mm on all axes with the method described above.
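Because the offset-induced disturbance acceleration is linear in the unknown offset r, the estimation problem can be illustrated with a simplified noise-free batch least-squares sketch (the paper itself uses an EKF fed by accelerometer, gyro and magnetometer data; the offset value below is illustrative):

```python
import numpy as np

def skew(v):
    """Skew-symmetric matrix so that skew(a) @ b == np.cross(a, b)."""
    return np.array([[0.0, -v[2], v[1]],
                     [v[2], 0.0, -v[0]],
                     [-v[1], v[0], 0.0]])

# Offset disturbance: a = wdot x r + w x (w x r) = (skew(wdot) + skew(w) @ skew(w)) @ r,
# which is linear in r, so stacked samples give an overdetermined linear system.
rng = np.random.default_rng(2)
r_true = np.array([2e-4, -1e-4, 3e-4])       # assumed 0.2 / -0.1 / 0.3 mm offset
A_rows, b_rows = [], []
for _ in range(50):
    w, wdot = rng.normal(size=3), rng.normal(size=3)
    M = skew(wdot) + skew(w) @ skew(w)
    A_rows.append(M)
    b_rows.append(M @ r_true)                # noise-free simulated disturbance
A, b = np.vstack(A_rows), np.concatenate(b_rows)
r_est, *_ = np.linalg.lstsq(A, b, rcond=None)
print(np.max(np.abs(r_est - r_true)) < 1e-9)  # True
```

In flight the right-hand side would be the accelerometer residual after removing modeled drag and radiation pressure, and the EKF additionally handles measurement noise and time-varying dynamics.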

  12. Controlling chaos using Takagi-Sugeno fuzzy model and adaptive adjustment

    Institute of Scientific and Technical Information of China (English)

    Zheng Yong-Ai

    2006-01-01

    In this paper, an approach to the control of continuous-time chaotic systems is proposed using the Takagi-Sugeno (TS) fuzzy model and adaptive adjustment. Sufficient conditions are derived to guarantee chaos control from Lyapunov stability theory. The proposed approach offers a systematic design procedure for stabilizing a large class of chaotic systems in the chaos research literature. The simulation results on Rössler's system verify the effectiveness of the proposed methods.

  13. Solution model of nonlinear integral adjustment including different kinds of observing data with different precisions

    Institute of Scientific and Technical Information of China (English)

    郭金运; 陶华学

    2003-01-01

    In order to process different kinds of observing data with different precisions, a new solution model of nonlinear dynamic integral least squares adjustment was put forward that does not depend on derivatives of the target function. The partial derivative of each component in the target function is not computed while iteratively solving the problem. Especially when the nonlinear target function is complex and the problem very difficult to solve, the method can greatly reduce the computing load.
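The derivative-free idea can be illustrated with a weighted nonlinear least-squares fit solved by a simplex method, which never evaluates partial derivatives of the target function. The model, the two precision classes, and the choice of Nelder-Mead are illustrative assumptions, not the paper's integral adjustment:

```python
import numpy as np
from scipy.optimize import minimize

# Observations of y = a * exp(b * t) with two precision classes; weights = 1 / sigma^2
# combine the different kinds of observing data in one adjustment.
t = np.linspace(0.0, 1.0, 20)
a_true, b_true = 2.0, -1.5
y = a_true * np.exp(b_true * t)
sigma = np.where(t < 0.5, 0.01, 0.05)        # two kinds of observing data
weight = 1.0 / sigma ** 2

def cost(p):
    resid = y - p[0] * np.exp(p[1] * t)
    return np.sum(weight * resid ** 2)       # no derivatives of the target needed

res = minimize(cost, x0=[1.0, 0.0], method="Nelder-Mead",
               options={"xatol": 1e-10, "fatol": 1e-12})
print(np.round(res.x, 4))
```

The optimizer only samples the weighted cost function, so an arbitrarily complicated target can be adjusted without computing any partial derivatives.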

  14. Steganography Algorithm in Different Colour Model Using an Energy Adjustment Applied with Discrete Wavelet Transform

    Directory of Open Access Journals (Sweden)

    Carvajal-Gamez

    2012-09-01

    Full Text Available When color images are processed in different color models to implement steganographic algorithms, it is important to study the quality of the host and retrieved images, since digital filters are typically used and can leave images visibly deformed. When a steganographic algorithm is applied, numerical calculations performed by the computer introduce errors and alterations in the test images, so we apply a proposed scaling factor, dependent on the number of bits of the image, to adjust for these errors.

  16. Adjustable Parameter-Based Distributed Fault Estimation Observer Design for Multiagent Systems With Directed Graphs.

    Science.gov (United States)

    Zhang, Ke; Jiang, Bin; Shi, Peng

    2017-02-01

    In this paper, a novel adjustable parameter (AP)-based distributed fault estimation observer (DFEO) is proposed for multiagent systems (MASs) with a directed communication topology. First, a relative output estimation error is defined based on the communication topology of the MASs. Then a DFEO with an AP is constructed with the purpose of improving the accuracy of fault estimation. Based on H∞ and H2 performance with pole placement, a multiconstrained design is given to calculate the gain of the DFEO. Finally, simulation results are presented to illustrate the feasibility and effectiveness of the proposed DFEO design with an AP.

  17. Adjusting for unmeasured confounding due to either of two crossed factors with a logistic regression model.

    Science.gov (United States)

    Li, Li; Brumback, Babette A; Weppelmann, Thomas A; Morris, J Glenn; Ali, Afsar

    2016-08-15

    Motivated by an investigation of the effect of surface water temperature on the presence of Vibrio cholerae in water samples collected from different fixed surface water monitoring sites in Haiti in different months, we investigated methods to adjust for unmeasured confounding due to either of the two crossed factors site and month. In the process, we extended previous methods that adjust for unmeasured confounding due to one nesting factor (such as site, which nests the water samples from different months) to the case of two crossed factors. First, we developed a conditional pseudolikelihood estimator that eliminates fixed effects for the levels of each of the crossed factors from the estimating equation. Using the theory of U-Statistics for independent but non-identically distributed vectors, we show that our estimator is consistent and asymptotically normal, but that its variance depends on the nuisance parameters and thus cannot be easily estimated. Consequently, we apply our estimator in conjunction with a permutation test, and we investigate use of the pigeonhole bootstrap and the jackknife for constructing confidence intervals. We also incorporate our estimator into a diagnostic test for a logistic mixed model with crossed random effects and no unmeasured confounding. For comparison, we investigate between-within models extended to two crossed factors. These generalized linear mixed models include covariate means for each level of each factor in order to adjust for the unmeasured confounding. We conduct simulation studies, and we apply the methods to the Haitian data. Copyright © 2016 John Wiley & Sons, Ltd.

  18. [Applying temporally-adjusted land use regression models to estimate ambient air pollution exposure during pregnancy].

    Science.gov (United States)

    Zhang, Y J; Xue, F X; Bai, Z P

    2017-03-06

    The impact of maternal air pollution exposure on offspring health has received much attention. Precise and feasible exposure estimation is particularly important for clarifying exposure-response relationships and reducing heterogeneity among studies. Temporally-adjusted land use regression (LUR) models are exposure assessment methods developed in recent years that have the advantage of high spatial-temporal resolution. Studies on the health effects of outdoor air pollution exposure during pregnancy have increasingly been carried out using this model. In China, research applying LUR models has been done mostly at the model construction stage, and findings from related epidemiological studies are rarely reported. In this paper, the sources of heterogeneity and the research progress of meta-analyses on the associations between air pollution and adverse pregnancy outcomes are analyzed, the characteristics of temporally-adjusted LUR models are introduced, and the current epidemiological studies on adverse pregnancy outcomes that applied this model are systematically summarized. Recommendations for the development and application of LUR models in China are presented. This will encourage the implementation of more valid exposure predictions during pregnancy in large-scale epidemiological studies on the health effects of air pollution in China.

  19. Biologically Inspired Visual Model With Preliminary Cognition and Active Attention Adjustment.

    Science.gov (United States)

    Qiao, Hong; Xi, Xuanyang; Li, Yinlin; Wu, Wei; Li, Fengfu

    2015-11-01

    Recently, many computational models have been proposed to simulate visual cognition process. For example, the hierarchical Max-Pooling (HMAX) model was proposed according to the hierarchical and bottom-up structure of V1 to V4 in the ventral pathway of primate visual cortex, which could achieve position- and scale-tolerant recognition. In our previous work, we have introduced memory and association into the HMAX model to simulate visual cognition process. In this paper, we improve our theoretical framework by mimicking a more elaborate structure and function of the primate visual cortex. We will mainly focus on the new formation of memory and association in visual processing under different circumstances as well as preliminary cognition and active adjustment in the inferior temporal cortex, which are absent in the HMAX model. The main contributions of this paper are: 1) in the memory and association part, we apply deep convolutional neural networks to extract various episodic features of the objects since people use different features for object recognition. Moreover, to achieve a fast and robust recognition in the retrieval and association process, different types of features are stored in separated clusters and the feature binding of the same object is stimulated in a loop discharge manner and 2) in the preliminary cognition and active adjustment part, we introduce preliminary cognition to classify different types of objects since distinct neural circuits in a human brain are used for identification of various types of objects. Furthermore, active cognition adjustment of occlusion and orientation is implemented to the model to mimic the top-down effect in human cognition process. Finally, our model is evaluated on two face databases CAS-PEAL-R1 and AR. The results demonstrate that our model exhibits its efficiency on visual recognition process with much lower memory storage requirement and a better performance compared with the traditional purely computational

  20. Improved Water Network Macroscopic Model Utilising Auto-Control Adjusting Valve by PLS

    Institute of Scientific and Technical Information of China (English)

    LI Xia; ZHAO Xinhua; WANG Xiaodong

    2005-01-01

    In order to overcome the low precision and weak applicability of current municipal water network state simulation models, the structure of the water network is studied. Since telemetry systems are increasingly applied in water networks, and in order to reflect network operating conditions more accurately, a new macroscopic water network model is developed that takes the opening state of auto-control adjusting valves into consideration. For the highly correlated or collinear independent variables in the model, the partial least squares (PLS) regression method provides a model solution that can distinguish the system information from the noisy data. Finally, a hypothetical water network is introduced to validate the model. The simulation results show that the relative error is less than 5.2%, indicating that the model is efficient and feasible and has good generalization performance.

  1. Research on the Optimal Capital Level and Adjustment of China's Commercial Banks: An Analysis Based on the Partial Adjustment Model and Fourier Unit Root Tests

    Institute of Scientific and Technical Information of China (English)

    沈沛龙; 王晓婷

    2016-01-01

    This paper studies the optimal capital level of fourteen Chinese listed commercial banks using a partial adjustment model and Fourier unit root tests, and estimates the optimal capital ratio and the capital adjustment speed for the banks for which an optimal capital level exists. The study finds that an optimal capital level exists for most of the listed banks, but the target variables of the optimal capital ratios differ with the type of bank and the size of its assets. On average, the optimal capital levels of the large commercial banks are higher, and those of the joint-stock banks are lower. Capital adjustment speeds vary widely among banks: own-funds ratios adjust fastest, while core capital adequacy ratios and capital adequacy ratios adjust more slowly.
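
The partial adjustment mechanism the paper relies on can be illustrated with a minimal simulation. The adjustment speed, target ratio, and noise level below are invented for the sketch, not estimates from the paper:

```python
# Partial adjustment model: k_t = k_{t-1} + lam*(k_star - k_{t-1}).
# Regressing k_t on k_{t-1} recovers the adjustment speed lam and the
# target ratio k_star. All numbers are illustrative.
import numpy as np

rng = np.random.default_rng(1)
lam, k_star = 0.4, 0.12          # true adjustment speed and optimal capital ratio
k = [0.08]
for _ in range(300):
    k.append(k[-1] + lam * (k_star - k[-1]) + 0.001 * rng.normal())
k = np.asarray(k)

# OLS: k_t = a + b*k_{t-1}, where b = 1-lam and a = lam*k_star
b, a = np.polyfit(k[:-1], k[1:], 1)
lam_hat = 1.0 - b
k_star_hat = a / lam_hat
print(round(lam_hat, 2), round(k_star_hat, 3))
```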

  2. [The motive force of evolution based on the principle of organismal adjustment evolution.].

    Science.gov (United States)

    Cao, Jia-Shu

    2010-08-01

    From the analysis of the existing problems of the prevalent theories of evolution, this paper discussed the motive force of evolution based on the knowledge of the principle of organismal adjustment evolution to get a new understanding of the evolution mechanism. In the guide of Schrodinger's theory - "life feeds on negative entropy", the author proposed that "negative entropy flow" actually includes material flow, energy flow and information flow, and the "negative entropy flow" is the motive force for living and development. By modifying my own theory of principle of organismal adjustment evolution (not adaptation evolution), a new theory of "regulation system of organismal adjustment evolution involved in DNA, RNA and protein interacting with environment" is proposed. According to the view that phylogenetic development is the "integral" of individual development, the difference of negative entropy flow between organisms and environment is considered to be a motive force for evolution, which is a new understanding of the mechanism of evolution. Based on such understanding, evolution is regarded as "a changing process that one subsystem passes all or part of its genetic information to the next generation in a larger system, and during the adaptation process produces some new elements, stops some old ones, and thereby lasts in the larger system". Some other controversial questions related to evolution are also discussed.

  3. Optimal fuzzy PID controller with adjustable factors based on flexible polyhedron search algorithm

    Institute of Scientific and Technical Information of China (English)

    谭冠政; 肖宏峰; 王越超

    2002-01-01

    A new kind of optimal fuzzy PID controller is proposed, consisting of two parts: an on-line fuzzy inference system and a conventional PID controller. In the fuzzy inference system, three adjustable factors xp, xi, and xd are introduced. Their function is to further modify and optimize the result of the fuzzy inference so that the controller has the optimal control effect on a given object. The optimal values of these adjustable factors are determined based on the ITAE criterion and the Nelder and Mead flexible polyhedron search algorithm. This optimal fuzzy PID controller has been used to control the executive motor of the intelligent artificial leg designed by the authors. Computer simulation results indicate that the controller is very effective and can be widely used to control different kinds of objects and processes.
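
A minimal sketch of the tuning idea, assuming a first-order plant and a plain PI control law for brevity (the paper tunes fuzzy-inference factors of a PID controller; SciPy's Nelder-Mead simplex stands in for the flexible polyhedron search):

```python
# Tune controller gains by minimizing the ITAE criterion (integral of t*|e|)
# with the Nelder-Mead simplex method. The plant (a discrete first-order lag)
# and the PI structure are illustrative assumptions.
import numpy as np
from scipy.optimize import minimize

dt, T = 0.01, 5.0
t = np.arange(0.0, T, dt)

def itae(params):
    kp, ki = params
    y = integ = cost = 0.0
    for ti in t:
        e = 1.0 - y                    # unit step setpoint
        integ += e * dt
        u = kp * e + ki * integ        # PI control law
        y += dt * (-y + u)             # plant: dy/dt = -y + u (Euler step)
        cost += ti * abs(e) * dt       # ITAE accumulation
        if abs(y) > 1e6:               # unstable gains: large penalty
            return 1e9
    return cost

res = minimize(itae, x0=[1.0, 1.0], method="Nelder-Mead")
print(res.x.round(2))
```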

  4. Model-Based Predictive Adjustment of Shopping Mall Ice Storage Central Air Conditioning

    Institute of Scientific and Technical Information of China (English)

    郭凯

    2015-01-01

    This paper presents an optimization scheme that schedules cooling capacity according to time-of-use electricity prices for the SAGA shopping mall in Xi'an. By applying system identification techniques, a simplified linear thermal model of the building was derived from a detailed building simulation previously developed in TRNSYS. The influence of various factors on the indoor temperature is then clarified and the indoor cooling power requirement is calculated. Finally, linear goal programming is used to optimize the whole day's cooling power for the SAGA shopping mall, yielding clear savings in both energy and electricity cost.

  5. [An adjustment procedure for comparing migration data based on different definitions in Japanese censuses].

    Science.gov (United States)

    Ishikawa, Y; Inoue, T; Matsunaka, R

    1998-11-01

    "Change of migration definition in [the] 1990 census of Japan prevents us from comparing the migration data reported in it with those in [the] 1970 and 1980 censuses. However, the adjustment procedure we propose in this article enables us to directly compare them.... The observed migration data based on the 1970/80 definition and the estimated data based on the 1990 definition are compared for the period of 1965-70 and 1975-80. Furthermore, temporal changes of age-specific inter-regional migration size during the period between 1965-70 and 1985-90 are explained." (EXCERPT)

  6. Improving depth resolution of diffuse optical tomography with an exponential adjustment method based on maximum singular value of layered sensitivity

    Institute of Scientific and Technical Information of China (English)

    Haijing Niu; Ping Guo; Xiaodong Song; Tianzi Jiang

    2008-01-01

    The sensitivity of diffuse optical tomography (DOT) imaging decreases exponentially with increasing photon penetration depth, which leads to poor depth resolution for DOT. In this letter, an exponential adjustment method (EAM) based on the maximum singular value of the layered sensitivity is proposed. Optimal depth resolution can be achieved by compensating for the reduced sensitivity in the deep medium. Simulations are performed using a semi-infinite model, and the results show that the EAM can substantially improve the depth resolution of objects embedded deep in the medium. Consequently, the image quality and reconstruction accuracy for these objects are largely improved.
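
The layer-wise rescaling idea can be sketched with a toy layered Jacobian. The decay rate and layer dimensions are assumptions, not values from the letter:

```python
# Equalize per-layer sensitivity by rescaling each layer's block so that its
# maximum singular value matches the shallowest layer's. The exponential decay
# factor and matrix sizes are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(2)
n_layers, n_meas, n_vox = 5, 8, 10
# toy layered Jacobian: deeper layers' blocks decay exponentially
blocks = [np.exp(-1.5 * l) * rng.normal(size=(n_meas, n_vox))
          for l in range(n_layers)]
smax = [np.linalg.svd(b, compute_uv=False)[0] for b in blocks]

weights = [smax[0] / s for s in smax]          # exponential adjustment weights
adjusted = [w * b for w, b in zip(weights, blocks)]
for b in adjusted:
    print(round(np.linalg.svd(b, compute_uv=False)[0] / smax[0], 6))
```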

  7. Error-preceding brain activity reflects (mal-)adaptive adjustments of cognitive control: a modeling study.

    Science.gov (United States)

    Steinhauser, Marco; Eichele, Heike; Juvodden, Hilde T; Huster, Rene J; Ullsperger, Markus; Eichele, Tom

    2012-01-01

    Errors in choice tasks are preceded by gradual changes in brain activity presumably related to fluctuations in cognitive control that promote the occurrence of errors. In the present paper, we use connectionist modeling to explore the hypothesis that these fluctuations reflect (mal-)adaptive adjustments of cognitive control. We considered ERP data from a study in which the probability of conflict in an Eriksen-flanker task was manipulated in sub-blocks of trials. Errors in these data were preceded by a gradual decline of N2 amplitude. After fitting a connectionist model of conflict adaptation to the data, we analyzed simulated N2 amplitude, simulated response times (RTs), and stimulus history preceding errors in the model, and found that the model produced the same pattern as obtained in the empirical data. Moreover, this pattern is not found in alternative models in which cognitive control varies randomly or in an oscillating manner. Our simulations suggest that the decline of N2 amplitude preceding errors reflects an increasing adaptation of cognitive control to specific task demands, which leads to an error when these task demands change. Taken together, these results provide evidence that error-preceding brain activity can reflect adaptive adjustments rather than unsystematic fluctuations of cognitive control, and therefore, that these errors are actually a consequence of the adaptiveness of human cognition.

  8. Performance Evaluation of Electronic Inductor-Based Adjustable Speed Drives with Respect to Line Current Interharmonics

    DEFF Research Database (Denmark)

    Soltani, Hamid; Davari, Pooya; Blaabjerg, Frede

    2017-01-01

    Electronic Inductor (EI)-based front-end rectifiers have a large potential to become the prominent next generation of Active Front End (AFE) topology in many applications, including Adjustable Speed Drives (ASDs) for systems with unidirectional power flow. The EI-based ASD is attractive mostly due to its improved harmonic performance compared to a conventional ASD. In this digest, the input currents of the EI-based ASD are investigated and compared with those of conventional ASDs with respect to interharmonics, an emerging power quality topic. First, the main causes of interharmonic distortion in ASD applications are analyzed under balanced and unbalanced load conditions. Thereafter, the key role of the EI at the DC stage is investigated in terms of high impedance and current harmonics transfer. Experimental and simulation results are obtained for both EI-based and conventional ASDs …

  9. Simply Adjustable Sinusoidal Oscillator Based on Negative Three-Port Current Conveyors

    Directory of Open Access Journals (Sweden)

    R. Sotner

    2010-09-01

    The paper deals with a sinusoidal oscillator employing two controlled second-generation negative current conveyors and two capacitors. The proposed oscillator has a simple circuit configuration. Electronic (voltage) adjustment of the oscillation frequency and of the oscillation condition is possible. The presented circuit is verified in PSpice using macro models of commercially available negative current conveyors, and is also verified by experimental measurements. Important characteristics and drawbacks of the proposed circuit, and the influence of real active elements on the designed circuit, are discussed in detail.

  10. Unbalanced Baseline in School-Based Interventions to Prevent Obesity: Adjustment Can Lead to Bias - a Systematic Review

    Directory of Open Access Journals (Sweden)

    Rosely Sichieri

    2014-06-01

    Background/Aims: Cluster designs favor unbalanced baseline measures. The aim of the present study was to determine the frequency of unbalanced baseline BMI in school-based randomized controlled trials (RCTs) aimed at obesity reduction, and to evaluate the analysis strategies. We hypothesized that the adjustment of unbalanced baseline measures may explain the great discrepancy among studies. Methods: The source of data was the Medline database from January 1995 until May 2012. Our search strategy combined key words related to school-based interventions with those related to weight, and was not limited by language. Participants' ages were restricted to 6-18 years. Results: We identified 146 school-based studies on obesity prevention (or overweight or excessive weight change). Of the 146 studies, 36 were retained for analysis after excluding reviews, feasibility studies, other outcomes, and repeated publications. 13 (35%) of the reviewed studies had statistically significant (p …) baseline imbalance. Conclusion: Adjustment for the baseline BMI is frequently done in cluster randomized studies, and there is no standardization for this procedure. Thus, procedures that disentangle the effects of group, time, and changes over time, such as mixed effects models, should be used as standard methods in school-based studies on the prevention of weight gain.

  11. Comparison of Satellite-based Basal and Adjusted Evapotranspiration for Several California Crops

    Science.gov (United States)

    Johnson, L.; Lund, C.; Melton, F. S.

    2013-12-01

    There is a continuing need to develop new sources of information on agricultural crop water consumption in the arid Western U.S. Pursuant to the California Water Conservation Act of 2009, for instance, the stakeholder community has developed a set of quantitative indicators involving measurement of evapotranspiration (ET), or crop consumptive use (Calif. Dept. Water Resources, 2012). The fraction of reference ET (that is, crop coefficients) can be estimated from a biophysical description of the crop canopy involving green fractional cover (Fc) and height, as per the FAO-56 practice standard of Allen et al. (1998). The current study involved 19 fields in California's San Joaquin Valley and Central Coast during 2011-12, growing a variety of specialty and commodity crops: lettuce, raisin, tomato, almond, melon, winegrape, garlic, peach, orange, cotton, corn and wheat. Most crops were on surface or subsurface drip, though micro-jet, sprinkler and flood irrigation were represented as well. Fc was retrospectively estimated every 8-16 days from optical satellite data and interpolated to a daily timestep. Crop height was derived as a capped linear function of Fc using published guideline maxima. These variables were used to generate daily basal crop coefficients (Kcb) per field through most or all of each respective growth cycle by the density coefficient approach of Allen & Pereira (2009). A soil water balance model for both topsoil and root zone, based on FAO-56 and using on-site measurements of applied irrigation and precipitation, was used to develop daily soil evaporation and crop water stress coefficients (Ke, Ks). Key meteorological variables (wind speed, relative humidity) were extracted from the California Irrigation Management Information System (CIMIS) for climate correction. Basal crop ET (ETcb) was then derived from Kcb using CIMIS reference ET. Adjusted crop ET (ETc_adj) was estimated by the dual coefficient approach involving Kcb and Ke, and incorporating Ks. Cumulative ETc …
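
The dual-coefficient calculation at the end of the record follows the FAO-56 form ETc_adj = (Ks·Kcb + Ke)·ETo. The coefficient values below are illustrative, not the study's field estimates:

```python
# FAO-56 dual crop coefficient arithmetic used in the study:
# basal crop ET: ETcb = Kcb * ETo
# adjusted crop ET: ETc_adj = (Ks*Kcb + Ke) * ETo
# All numeric values here are invented for illustration.
def etc_adj(kcb, ke, ks, eto):
    """Adjusted crop ET (mm/day) from basal (Kcb), soil evaporation (Ke),
    and water stress (Ks) coefficients and reference ET (ETo)."""
    return (ks * kcb + ke) * eto

etcb = 1.05 * 6.0                                # Kcb=1.05, ETo=6 mm/day
print(round(etcb, 2))                            # → 6.3
print(round(etc_adj(1.05, 0.10, 0.9, 6.0), 2))   # → 6.27
```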

  12. Small-Sized Variable Stiffness Actuator Module Based on Adjustable Moment Arm

    Energy Technology Data Exchange (ETDEWEB)

    Yu, Hongseon; Song, Jaebok [Korea Univ., Seoul (Korea, Republic of)

    2013-10-15

    In recent years, variable stiffness actuation has attracted much attention because interaction between a robot and the environment is increasingly required for various robot tasks. Several variable stiffness actuators (VSO) have been developed; however, they find limited applications owing to their size and weight. For realizing their widespread use, we developed a compact and lightweight mini-VSO. The mini-VSO consists of a control module based on an adjustable moment arm mechanism and a drive module with two motors. By controlling the relative motion of cams in the control module, the position and stiffness can be simultaneously controlled. Experimental results are presented to show its ability to change stiffness.

  13. Towards individualized dose constraints: Adjusting the QUANTEC radiation pneumonitis model for clinical risk factors

    DEFF Research Database (Denmark)

    Appelt, Ane L; Vogelius, Ivan R.; Farr, Katherina P.

    2014-01-01

    Background. Understanding the dose-response of the lung in order to minimize the risk of radiation pneumonitis (RP) is critical for optimization of lung cancer radiotherapy. We propose a method to combine the dose-response relationship for RP in the landmark QUANTEC paper with known clinical risk factors, in order to enable individual risk prediction. The approach is validated in an independent dataset. Material and methods. The prevalence of risk factors in the patient populations underlying the QUANTEC analysis was estimated, and a previously published method was used to adjust the dose-response relationship for clinical risk factors. Subdistribution cumulative incidence functions were compared for patients with high/low-risk predictions from the two models, and concordance indices (c-indices) for the prediction of RP were calculated for both the dose-only QUANTEC model and the model including risk factors. Results. The reference dose-response relationship …

  14. Return predictability and intertemporal asset allocation: Evidence from a bias-adjusted VAR model

    DEFF Research Database (Denmark)

    Engsted, Tom; Pedersen, Thomas Quistgaard

    We extend the VAR-based intertemporal asset allocation approach of Campbell et al. (2003) to the case where the VAR parameter estimates are adjusted for small-sample bias. We apply the analytical bias formula from Pope (1990) using both Campbell et al.'s dataset and an extended dataset with quarterly data from 1952 to 2006. The results show that correcting the VAR parameters for small-sample bias has both quantitatively and qualitatively important effects on the strategic intertemporal part of optimal portfolio choice, especially for bonds: for intermediate values of risk …

  15. LC Filter Design for Wide Band Gap Device Based Adjustable Speed Drives

    DEFF Research Database (Denmark)

    Vadstrup, Casper; Wang, Xiongfei; Blaabjerg, Frede

    2014-01-01

    This paper presents a simple design procedure for LC filters used in wide band gap device based adjustable speed drives. Wide band gap devices offer fast turn-on and turn-off times, thus producing high dV/dt at the motor terminals. The high dV/dt can be harmful to the motor windings and bearings, which motivates an LC filter; here the filter is designed with a higher cut-off frequency and without damping resistors. The inductance and capacitance are selected based on the capacitor voltage ripple and current ripple. The filter adds a base load to the inverter, which increases the inverter losses. It is shown how the modulation index affects the capacitor current and the inverter current.

  16. Infrared image gray adaptive adjusting enhancement algorithm based on gray redundancy histogram-dealing technique

    Science.gov (United States)

    Hao, Zi-long; Liu, Yong; Chen, Ruo-wang

    2016-11-01

    Building on the histogram equalization algorithm for image enhancement in digital image processing, an infrared image gray-level adaptive adjustment enhancement algorithm based on a gray-redundancy histogram-dealing technique is proposed. The algorithm first determines the overall gray level of the image and raises or lowers it by adding appropriate gray points, and then uses the gray-redundancy histogram equalization method to compress the gray scale of the image. The algorithm can enhance image detail. MATLAB simulations compare the algorithm with the histogram equalization method and with the plain gray-redundancy histogram-dealing algorithm, and verify its effectiveness.
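
A compact sketch of the two-step idea (shift the overall gray level, then equalize). The shift amount, the 8-bit range, and the synthetic low-contrast test image are assumptions:

```python
# Shift the overall gray level, then apply histogram equalization via a
# cumulative-distribution lookup table. Parameters are illustrative.
import numpy as np

def enhance(img, shift=20):
    img = np.clip(img.astype(np.int32) + shift, 0, 255).astype(np.uint8)
    hist = np.bincount(img.ravel(), minlength=256)
    cdf = hist.cumsum()
    cdf_min = cdf[cdf > 0][0]                    # first nonzero cdf value
    lut = np.clip(np.round((cdf - cdf_min) / (cdf[-1] - cdf_min) * 255),
                  0, 255).astype(np.uint8)
    return lut[img]

rng = np.random.default_rng(3)
img = rng.integers(60, 120, size=(64, 64), dtype=np.uint8)  # low-contrast frame
out = enhance(img)
print(int(out.min()), int(out.max()))  # → 0 255
```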

  17. Bayesian hierarchical models combining different study types and adjusting for covariate imbalances: a simulation study to assess model performance.

    Directory of Open Access Journals (Sweden)

    C Elizabeth McCarron

    BACKGROUND: Bayesian hierarchical models have been proposed to combine evidence from different types of study designs. However, when combining evidence from randomised and non-randomised controlled studies, imbalances in patient characteristics between study arms may bias the results. The objective of this study was to assess the performance of a proposed Bayesian approach to adjust for imbalances in patient level covariates when combining evidence from both types of study designs. METHODOLOGY/PRINCIPAL FINDINGS: Simulation techniques, in which the truth is known, were used to generate sets of data for randomised and non-randomised studies. Covariate imbalances between study arms were introduced in the non-randomised studies. The performance of the Bayesian hierarchical model adjusted for imbalances was assessed in terms of bias. The data were also modelled using three other Bayesian approaches for synthesising evidence from randomised and non-randomised studies. The simulations considered six scenarios aimed at assessing the sensitivity of the results to changes in the impact of the imbalances and the relative number and size of studies of each type. For all six scenarios considered, the Bayesian hierarchical model adjusted for differences within studies gave results that were unbiased and closest to the true value compared to the other models. CONCLUSIONS/SIGNIFICANCE: Where informed health care decision making requires the synthesis of evidence from randomised and non-randomised study designs, the proposed hierarchical Bayesian method adjusted for differences in patient characteristics between study arms may facilitate the optimal use of all available evidence leading to unbiased results compared to unadjusted analyses.
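
The bias that the adjustment targets can be demonstrated with a plain (non-Bayesian) simulation: when treatment allocation depends on a baseline covariate, the unadjusted contrast is biased, while a covariate-adjusted regression recovers the true effect. All numbers are synthetic:

```python
# Demonstrate confounding from covariate imbalance in a non-randomised arm
# and its removal by covariate adjustment. Pure simulation, not the paper's
# hierarchical Bayesian model.
import numpy as np

rng = np.random.default_rng(4)
n, true_effect = 5000, 1.0
x = rng.normal(size=n)                       # baseline covariate
p_treat = 1 / (1 + np.exp(-2 * x))           # allocation depends on x
t = rng.uniform(size=n) < p_treat
y = true_effect * t + 2.0 * x + rng.normal(size=n)

unadjusted = y[t].mean() - y[~t].mean()      # biased by the imbalance in x
X = np.column_stack([np.ones(n), t, x])      # adjusted: OLS of y on [1, t, x]
beta = np.linalg.lstsq(X, y, rcond=None)[0]
print(round(unadjusted, 2), round(beta[1], 2))
```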

  18. Mapping disability-adjusted life years: a Bayesian hierarchical model framework for burden of disease and injury assessment.

    Science.gov (United States)

    MacNab, Ying C

    2007-11-20

    This paper presents a Bayesian disability-adjusted life year (DALY) methodology for spatial and spatiotemporal analyses of disease and/or injury burden. A Bayesian disease mapping model framework, which blends together spatial modelling, shared-component modelling (SCM), temporal modelling, ecological modelling, and non-linear modelling, is developed for small-area DALY estimation and inference. In particular, we develop a model framework that enables SCM as well as multivariate CAR modelling of non-fatal and fatal disease or injury rates, and facilitates spline smoothing for non-linear modelling of temporal rate and risk trends. Using British Columbia (Canada) hospital admission-separation data and vital statistics mortality data on non-fatal and fatal road traffic injuries to the male population aged 20-39 for the years 1991-2000, covering 84 local health areas and 16 health service delivery areas, spatial and spatiotemporal estimation and inference on years of life lost due to premature death, years lived with disability, and DALYs are presented. Fully Bayesian estimation and inference, with Markov chain Monte Carlo implementation, are illustrated. We present a methodological framework within which the DALY and Bayesian disease mapping methodologies interface and intersect. Its development brings the relative importance of premature mortality and disability into the assessment of community health and health needs, in order to provide reliable information and evidence for community-based public health surveillance and evaluation, disease and injury prevention, and resource provision.
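
The DALY arithmetic underlying the framework is DALY = YLL + YLD. A minimal sketch with invented inputs (not the British Columbia injury data):

```python
# DALY = YLL + YLD, where
#   YLL = deaths * standard life expectancy remaining at age of death
#   YLD = incident cases * disability weight * average duration
# Input numbers below are purely illustrative.
def daly(deaths, life_exp_remaining, cases, disability_weight, duration):
    yll = deaths * life_exp_remaining
    yld = cases * disability_weight * duration
    return yll + yld, yll, yld

total, yll, yld = daly(deaths=10, life_exp_remaining=45.0,
                       cases=200, disability_weight=0.2, duration=2.5)
print(yll, yld, total)  # → 450.0 100.0 550.0
```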

  19. Validation, replication, and sensitivity testing of Heckman-type selection models to adjust estimates of HIV prevalence.

    Directory of Open Access Journals (Sweden)

    Samuel J Clark

    A recent study using Heckman-type selection models to adjust for non-response in the Zambia 2007 Demographic and Health Survey (DHS) found a large correction in HIV prevalence for males. We aim to validate this finding, replicate the adjustment approach in other DHSs, apply the adjustment approach in an external empirical context, and assess the robustness of the technique to different adjustment approaches. We used six DHSs and an HIV prevalence study from rural South Africa to validate and replicate the adjustment approach. We also developed an alternative, systematic model of selection processes and applied it to all surveys. We decomposed corrections from both approaches into rate-change and age-structure-change components. We are able to reproduce the adjustment approach for the 2007 Zambia DHS and derive results comparable with the original findings. We are able to replicate the approach in several other DHSs. The approach also yields reasonable adjustments for a survey in rural South Africa, and the technique is relatively robust to how the adjustment approach is specified. The Heckman selection model is a useful tool for assessing the possibility and extent of selection bias in HIV prevalence estimates from sample surveys.
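
A sketch of the two-step Heckman correction on synthetic data with correlated selection and outcome errors. The instrument, coefficients, and error correlation are assumptions, and the probit is fit by direct likelihood maximization rather than a packaged routine:

```python
# Two-step Heckman correction: (1) probit for response, (2) outcome regression
# augmented with the inverse Mills ratio. Synthetic data, not the DHS analysis.
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

rng = np.random.default_rng(5)
n = 20000
z = rng.normal(size=n)                         # selection instrument
u, e = rng.multivariate_normal([0, 0], [[1, .6], [.6, 1]], size=n).T
responded = (0.5 + 1.0 * z + u) > 0            # selection equation
y = 1.0 + e                                    # outcome, observed if responded

# Step 1: probit of response on z, by maximum likelihood
def nll(b):
    p = np.clip(norm.cdf(b[0] + b[1] * z), 1e-10, 1 - 1e-10)
    return -(responded * np.log(p) + (~responded) * np.log(1 - p)).sum()

b = minimize(nll, [0.0, 0.0], method="Nelder-Mead").x
idx = b[0] + b[1] * z
mills = norm.pdf(idx) / norm.cdf(idx)          # inverse Mills ratio

# Step 2: OLS of y on [1, mills] among responders
X = np.column_stack([np.ones(responded.sum()), mills[responded]])
beta = np.linalg.lstsq(X, y[responded], rcond=None)[0]
print(round(y[responded].mean(), 2), round(beta[0], 2))  # naive vs corrected
```

The naive responder mean overstates the population mean because the selection and outcome errors are positively correlated; the intercept of the Mills-augmented regression recovers it.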

  20. New Strategy for Congestion Control based on Dynamic Adjustment of Congestion Window

    Directory of Open Access Journals (Sweden)

    Gamal Attiya

    2012-03-01

    This paper presents a new mechanism for end-to-end congestion control, called EnewReno. The proposed mechanism enhances both the congestion avoidance and the fast recovery algorithms of TCP NewReno so as to improve its performance. The basic idea is to adjust the congestion window of the TCP sender dynamically, based on the level of congestion in the network, so as to allow more packets to be transferred to the destination. The performance of the proposed mechanism is evaluated and compared with the most recent mechanisms in simulation studies using the well-known network simulator NS-2 and the realistic topology generator GT-ITM.
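
The general idea of congestion-level-dependent window adjustment can be sketched as below. The thresholds and update rules are illustrative assumptions, not EnewReno's actual rules:

```python
# Sketch: grow the congestion window when measured congestion is low and
# back off when it is high. Thresholds and rules are invented for illustration.
def adjust_cwnd(cwnd, ssthresh, congestion_level):
    """congestion_level in [0, 1], e.g. derived from RTT/loss observations."""
    if congestion_level > 0.8:            # heavy congestion: halve the window
        cwnd = max(1.0, cwnd / 2.0)
        ssthresh = cwnd
    elif cwnd < ssthresh:                 # slow start: exponential growth
        cwnd *= 2.0
    else:                                 # congestion avoidance, scaled by headroom
        cwnd += (1.0 - congestion_level)
    return cwnd, ssthresh

cwnd, ssthresh = 1.0, 16.0
for level in [0.1, 0.1, 0.1, 0.1, 0.9, 0.2]:
    cwnd, ssthresh = adjust_cwnd(cwnd, ssthresh, level)
print(cwnd, ssthresh)
```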

  1. Adjustment of automatic control systems of production facilities at coal processing plants using multivariant physico- mathematical models

    Science.gov (United States)

    Evtushenko, V. F.; Myshlyaev, L. P.; Makarov, G. V.; Ivushkin, K. A.; Burkova, E. V.

    2016-10-01

    The structure of multivariant physical and mathematical models of a control system is proposed, together with its application to the adjustment of the automatic control systems (ACS) of production facilities, using a coal processing plant as an example.

  2. Measurement of the Economic Growth and Add-on of the R.M. Solow Adjusted Model

    Directory of Open Access Journals (Sweden)

    Ion Gh. Rosca

    2007-08-01

    Alongside the models of M. Keynes, R.F. Harrod, E. Domar, D. Romer, the Ramsey-Cass-Koopmans model, and others, the R.M. Solow model belongs to the category of models that characterize economic growth. The aim of the paper is the measurement of economic growth and an extension of the adjusted R.M. Solow model.

  3. Adaptive Kalman Filter Based on Adjustable Sampling Interval in Burst Detection for Water Distribution System

    Directory of Open Access Journals (Sweden)

    Doo Yong Choi

    2016-04-01

    Rapid detection of bursts and leaks in water distribution systems (WDSs) can reduce the social and economic costs incurred through direct loss of water into the ground, additional energy demand for water supply, and service interruptions. Many real-time burst detection models have been developed in accordance with the use of supervisory control and data acquisition (SCADA) systems and the establishment of district meter areas (DMAs). Nonetheless, no consideration has been given to how frequently a flow meter should measure and transmit data for predicting breaks and leaks in pipes. This paper analyzes the effect of the sampling interval when an adaptive Kalman filter is used for detecting bursts in a WDS. A new sampling algorithm is presented that adjusts the sampling interval depending on the normalized residuals of flow after filtering. The proposed algorithm is applied to a virtual sinusoidal flow curve and to real DMA flow data obtained from Jeongeup city in South Korea. The simulation results prove that the self-adjusting algorithm for determining the sampling interval is efficient and maintains reasonable accuracy in burst detection. The proposed sampling method has significant potential for water utilities to build and operate real-time DMA monitoring systems combined with smart customer metering systems.
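
A scalar sketch of the filter-plus-adaptive-sampling loop. The noise variances, thresholds, and simulated burst are invented for illustration:

```python
# Scalar Kalman filter on DMA flow; shorten the sampling interval when the
# normalized innovation (residual) grows, suggesting a burst. All parameters
# and the burst signal are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(6)
q, r = 1e-4, 0.25                 # process / measurement noise variances
x, p = 10.0, 1.0                  # state (flow, L/s) and its variance
interval = 15                     # minutes between samples

flows = np.r_[10 + 0.5 * rng.normal(size=40),   # normal flow
              15 + 0.5 * rng.normal(size=10)]   # burst from sample 40 on
intervals = []
for z in flows:
    p += q * interval             # predict: variance grows with elapsed time
    s = p + r
    resid = (z - x) / np.sqrt(s)  # normalized innovation
    k = p / s
    x += k * (z - x)              # update state estimate
    p *= (1 - k)
    interval = 1 if abs(resid) > 3 else 15      # adaptive sampling rule
    intervals.append(interval)
print(intervals[0], intervals[40])
```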

  4. Improvement for Speech Signal based on Post Wiener Filter and Adjustable Beam-Former

    Directory of Open Access Journals (Sweden)

    Xiaorong Tong

    2013-06-01

    In this study, a two-stage filter structure is introduced for speech enhancement. The first stage is an adjustable filter-and-sum beam-former with a four-microphone array. The beam-forming filter is controlled by adjusting only a single variable. Unlike an adaptive beam-forming filter, the proposed structure does not introduce adaptive error noise, and therefore does not complicate the second stage of speech signal processing. The second stage is a Wiener filter. The signal power spectrum for the Wiener filter is estimated by cross-correlation between the primary outputs of two adjacent directional beams. This estimation is based on the assumption that the noise outputs of the two adjacent directional beams come from independent noise sources, while the speech outputs come from the same speech source. The simulation results show that the proposed algorithm can improve the signal-to-noise ratio (SNR) by about 6 dB.

  5. A Comparative Study of CAPM and Seven Factors Risk Adjusted Return Model

    Directory of Open Access Journals (Sweden)

    Madiha Riaz Bhatti

    2014-12-01

    This study compares and contrasts the predictive power of two asset pricing models, the CAPM and a seven-factor risk-adjusted return model, in explaining the cross-section of stock returns in the financial sector listed on the Karachi Stock Exchange (KSE). To test the models, daily returns from January 2013 to February 2014 were taken, and the excess returns of portfolios were regressed on the explanatory variables. The results indicate that the models are valid and applicable in the financial market of Pakistan during the period under study, as the intercepts are not significantly different from zero. It is consequently established from the findings that the explanatory variables explain stock returns in the financial sector of the KSE. In addition, the results show that adding more explanatory variables to the single-factor CAPM yields reasonably high values of R2. These results provide substantial support to fund managers, investors, and financial analysts in making investment decisions.
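
The single-factor CAPM test amounts to regressing portfolio excess returns on market excess returns and checking that the intercept (alpha) is indistinguishable from zero. A simulated sketch (not KSE data):

```python
# CAPM-style test: regress portfolio excess returns on market excess returns;
# a valid model implies an intercept (alpha) near zero. Returns are simulated.
import numpy as np

rng = np.random.default_rng(7)
n = 250
mkt = 0.0004 + 0.01 * rng.normal(size=n)             # daily market excess return
beta_true = 1.2
port = beta_true * mkt + 0.002 * rng.normal(size=n)  # zero-alpha portfolio

X = np.column_stack([np.ones(n), mkt])
alpha, beta = np.linalg.lstsq(X, port, rcond=None)[0]
print(round(beta, 2), abs(alpha) < 0.001)
```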

  6. A statistical adjustment approach for climate projections of snow conditions in mountain regions using energy balance land surface models

    Science.gov (United States)

    Verfaillie, Deborah; Déqué, Michel; Morin, Samuel; Lafaysse, Matthieu

    2017-04-01

    Projections of future climate change have been increasingly called for lately, as the reality of climate change has been gradually accepted and societies and governments have started to plan upcoming mitigation and adaptation policies. In mountain regions such as the Alps or the Pyrenees, where winter tourism and hydropower production are large contributors to the regional revenue, particular attention is brought to current and future snow availability. The question of the vulnerability of mountain ecosystems as well as the occurrence of climate-related hazards such as avalanches and debris-flows is also under consideration. In order to generate projections of snow conditions, however, downscaling global climate models (GCMs) by using regional climate models (RCMs) is not sufficient to capture the fine-scale processes and thresholds at play. In particular, the altitudinal resolution matters, since the phase of precipitation is mainly controlled by the temperature which is altitude-dependent. Simulations from GCMs and RCMs moreover suffer from biases compared to local observations, due to their rather coarse spatial and altitudinal resolution, and often provide outputs at too coarse time resolution to drive impact models. RCM simulations must therefore be adjusted using empirical-statistical downscaling and error correction methods, before they can be used to drive specific models such as energy balance land surface models. In this study, time series of hourly temperature, precipitation, wind speed, humidity, and short- and longwave radiation were generated over the Pyrenees and the French Alps for the period 1950-2100, by using a new approach (named ADAMONT for ADjustment of RCM outputs to MOuNTain regions) based on quantile mapping applied to daily data, followed by time disaggregation accounting for weather patterns selection. 
We first introduce a thorough evaluation of the method using model runs from the ALADIN RCM driven by a global reanalysis over the
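The quantile-mapping core of an ADAMONT-style adjustment can be illustrated on synthetic data. This is a minimal sketch assuming simple empirical quantile mapping of a biased model series onto an observed reference over a common historical period; function and variable names are ours, not the paper's.

```python
import numpy as np

def quantile_map(model_hist, obs_hist, model_series, n_quantiles=99):
    """Empirical quantile mapping: map each model value to its historical
    quantile, then read off the corresponding observed value."""
    probs = np.linspace(0.01, 0.99, n_quantiles)
    mq = np.quantile(model_hist, probs)   # model quantiles (historical)
    oq = np.quantile(obs_hist, probs)     # observed quantiles
    return np.interp(model_series, mq, oq)  # piecewise-linear transfer

rng = np.random.default_rng(0)
obs = rng.normal(0.0, 2.0, 5000)      # "observed" daily temperatures
model = rng.normal(1.5, 2.0, 5000)    # model run with a +1.5 degree bias
corrected = quantile_map(model, obs, model)
print(abs(corrected.mean() - obs.mean()) < abs(model.mean() - obs.mean()))
```

After correction, the model series inherits the observed distribution, removing the mean bias; the real method additionally disaggregates the corrected daily values to hourly resolution by weather pattern.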

  7. A data-driven model of present-day glacial isostatic adjustment in North America

    Science.gov (United States)

    Simon, Karen; Riva, Riccardo

    2016-04-01

    Geodetic measurements of gravity change and vertical land motion are incorporated into an a priori model of present-day glacial isostatic adjustment (GIA) via least-squares inversion. The result is an updated model of present-day GIA wherein the final predicted signal is informed by both observational data with realistic errors, and prior knowledge of GIA inferred from forward models. This method and other similar techniques have been implemented within a limited but growing number of GIA studies (e.g., Hill et al. 2010). The combination method allows calculation of the uncertainties of predicted GIA fields, and thus offers a significant advantage over predictions from purely forward GIA models. Here, we show the results of using the combination approach to predict present-day rates of GIA in North America through the incorporation of both GPS-measured vertical land motion rates and GRACE-measured gravity observations into the prior model. In order to assess the influence of each dataset on the final GIA prediction, the vertical motion and gravimetry datasets are incorporated into the model first independently (i.e., one dataset only), then simultaneously. Because the a priori GIA model and its associated covariance are developed by averaging predictions from a suite of forward models that varies aspects of the Earth rheology and ice sheet history, the final GIA model is not independent of forward model predictions. However, we determine the sensitivity of the final model result to the prior GIA model information by using different representations of the input model covariance. We show that when both datasets are incorporated into the inversion, the final model adequately predicts available observational constraints, minimizes the uncertainty associated with the forward modelled GIA inputs, and includes a realistic estimation of the formal error associated with the GIA process. 
Along parts of the North American coastline, improved predictions of the long-term (kyr
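The combination step the abstract describes, updating a forward-model prior with geodetic observations by least-squares inversion, has the familiar gain form. Below is a toy 1D sketch with invented uplift rates and diagonal covariances; the notation is the generic Kalman/least-squares one, not the study's.

```python
import numpy as np

def combine(prior, P, obs, R):
    """Least-squares update of a prior GIA prediction with observations.
    P: prior (forward-model ensemble) covariance; R: data covariance."""
    K = P @ np.linalg.inv(P + R)            # gain: weights prior vs. data
    post = prior + K @ (obs - prior)        # updated GIA rates
    P_post = (np.eye(len(prior)) - K) @ P   # reduced posterior covariance
    return post, P_post

prior = np.array([10.0, 5.0, -2.0])   # mm/yr, forward-model mean (invented)
P = np.diag([4.0, 4.0, 4.0])          # prior variance
obs = np.array([12.0, 4.0, -1.0])     # GPS vertical rates (invented)
R = np.diag([1.0, 1.0, 1.0])          # observation variance
post, P_post = combine(prior, P, obs, R)
print(post)  # posterior sits between prior and data, closer to the data
```

Because the data are more certain than the prior here (R smaller than P), the posterior moves most of the way toward the observations, and the posterior variance drops below both inputs, mirroring the uncertainty reduction the abstract highlights.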

  8. An amino acid substitution-selection model adjusts residue fitness to improve phylogenetic estimation.

    Science.gov (United States)

    Wang, Huai-Chun; Susko, Edward; Roger, Andrew J

    2014-04-01

Standard protein phylogenetic models use fixed rate matrices of amino acid interchange derived from analyses of large databases. Differences between the stationary amino acid frequencies of these rate matrices and those of a data set of interest are typically adjusted for by matrix multiplication that converts the empirical rate matrix to an exchangeability matrix which is then postmultiplied by the amino acid frequencies in the alignment. The result is a time-reversible rate matrix with stationary amino acid frequencies equal to the data set frequencies. On the basis of population genetics principles, we develop an amino acid substitution-selection model that parameterizes the fitness of an amino acid as the logarithm of the ratio of the frequency of the amino acid to the frequency of the same amino acid under no selection. The model gives rise to a different sequence of matrix multiplications to convert an empirical rate matrix to one that has stationary amino acid frequencies equal to the data set frequencies. We incorporated the substitution-selection model with an improved amino acid class frequency mixture (cF) model to partially take into account site-specific amino acid frequencies in the phylogenetic models. We show that 1) the selection models fit data significantly better than corresponding models without selection for most of the 21 test data sets; 2) both cF and cF selection models favored the phylogenetic trees that were inferred under current sophisticated models and methods for three difficult phylogenetic problems (the positions of microsporidia and breviates in eukaryote phylogeny and the position of the root of the angiosperm tree); and 3) for data simulated under site-specific residue frequencies, the cF selection models estimated trees closer to the generating trees than a standard Γ model or cF without selection. We also explored several ways of estimating amino acid frequencies under neutral evolution that are required for these selection
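The standard construction the abstract refers to, turning a symmetric exchangeability matrix and data-set amino acid frequencies into a time-reversible rate matrix with those frequencies as its stationary distribution, can be sketched on a toy three-letter alphabet (all numbers invented):

```python
import numpy as np

# Toy "exchangeability" matrix S (symmetric, zero diagonal) and
# data-set frequencies pi for a three-letter alphabet.
S = np.array([[0.0, 2.0, 1.0],
              [2.0, 0.0, 3.0],
              [1.0, 3.0, 0.0]])
pi = np.array([0.5, 0.3, 0.2])

Q = S * pi                          # q_ij = s_ij * pi_j for i != j
np.fill_diagonal(Q, -Q.sum(axis=1))  # rows sum to zero

# Detailed balance pi_i q_ij = pi_j q_ji confirms time reversibility,
# and pi is stationary: pi @ Q = 0.
flux = pi[:, None] * Q
print(np.allclose(flux, flux.T), np.allclose(pi @ Q, 0.0))
```

The selection model in the paper changes how frequencies enter this product (via log-ratio fitnesses against a neutral baseline); the sketch above shows only the baseline exchangeability-times-frequency construction.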

  9. The Trauma Outcome Process Assessment Model: A Structural Equation Model Examination of Adjustment

    Science.gov (United States)

    Borja, Susan E.; Callahan, Jennifer L.

    2009-01-01

    This investigation sought to operationalize a comprehensive theoretical model, the Trauma Outcome Process Assessment, and test it empirically with structural equation modeling. The Trauma Outcome Process Assessment reflects a robust body of research and incorporates known ecological factors (e.g., family dynamics, social support) to explain…

  10. The use of satellites in gravity field determination and model adjustment

    Science.gov (United States)

    Visser, Petrus Nicolaas Anna Maria

    1992-06-01

    Methods to improve gravity field models of the Earth with available data from satellite observations are proposed and discussed. In principle, all types of satellite observations mentioned give information of the satellite orbit perturbations and in conjunction the Earth's gravity field, because the satellite orbits are affected most by the Earth's gravity field. Therefore, two subjects are addressed: representation forms of the gravity field of the Earth and the theory of satellite orbit perturbations. An analytical orbit perturbation theory is presented and shown to be sufficiently accurate for describing satellite orbit perturbations if certain conditions are fulfilled. Gravity field adjustment experiments using the analytical orbit perturbation theory are discussed using real satellite observations. These observations consisted of Seasat laser range measurements and crossover differences, and of Geosat altimeter measurements and crossover differences. A look into the future, particularly relating to the ARISTOTELES (Applications and Research Involving Space Techniques for the Observation of the Earth's field from Low Earth Orbit Spacecraft) mission, is given.

  11. Performance analysis of adjustable window based FIR filter for noisy ECG Signal Filtering

    Directory of Open Access Journals (Sweden)

    N. Mahawar

    2013-09-01

Full Text Available Recording of the electrical activity associated with heart functioning is known as the electrocardiogram (ECG). The ECG is a quasi-periodic, rhythmic signal synchronized by the function of the heart, which acts as a generator of bioelectric events. ECG signals are low-level signals and are sensitive to external contamination. Electrocardiogram signals are often corrupted by noise of electrical or electrophysiological origin. The noise tends to alter the signal morphology, thereby hindering correct diagnosis. In order to remove the unwanted noise, a digital filtering technique based on adjustable windows is proposed in this paper. A Finite Impulse Response (FIR) low-pass filter is designed using the windowing method for the ECG signal. The results obtained from different techniques are compared on the basis of popularly used signal error measures such as SNR, PRD, PRD1, and MSE.
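The windowing method for FIR low-pass design can be sketched with an adjustable Kaiser window, whose beta parameter trades transition width against stop-band attenuation. This is a generic illustration on a synthetic "ECG-like" tone with 50 Hz power-line interference, not the paper's exact filter.

```python
import numpy as np

def fir_lowpass(cutoff, fs, numtaps=101, beta=6.0):
    """Windowed-sinc FIR low-pass: ideal sinc shaped by a Kaiser window."""
    n = np.arange(numtaps) - (numtaps - 1) / 2
    fc = cutoff / fs                      # normalized cutoff (cycles/sample)
    h = 2 * fc * np.sinc(2 * fc * n)      # ideal low-pass impulse response
    h *= np.kaiser(numtaps, beta)         # adjustable window
    return h / h.sum()                    # normalize to unit DC gain

fs = 500.0                                # typical ECG sampling rate (Hz)
t = np.arange(0, 2, 1 / fs)
clean = np.sin(2 * np.pi * 5 * t)         # low-frequency "ECG-like" component
noisy = clean + 0.5 * np.sin(2 * np.pi * 50 * t)  # power-line interference
h = fir_lowpass(cutoff=20.0, fs=fs)
filtered = np.convolve(noisy, h, mode="same")
mse = np.mean((filtered[100:-100] - clean[100:-100]) ** 2)
print(mse < 0.01)
```

Because the taps are symmetric (linear phase), centering the convolution with mode="same" keeps the filtered signal aligned with the clean reference, so the MSE measures distortion rather than delay.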

  12. A High-Precision Registration Technology Based on Bundle Adjustment in Structured Light Scanning System

    Directory of Open Access Journals (Sweden)

    Jianying Yuan

    2014-01-01

    Full Text Available The multiview 3D data registration precision will decrease with the increasing number of registrations when measuring a large scale object using structured light scanning. In this paper, we propose a high-precision registration method based on multiple view geometry theory in order to solve this problem. First, a multiview network is constructed during the scanning process. The bundle adjustment method from digital close range photogrammetry is used to optimize the multiview network to obtain high-precision global control points. After that, the 3D data under each local coordinate of each scan are registered with the global control points. The method overcomes the error accumulation in the traditional registration process and reduces the time consumption of the following 3D data global optimization. The multiview 3D scan registration precision and efficiency are increased. Experiments verify the effectiveness of the proposed algorithm.
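The per-scan step of such a pipeline, registering each local scan against the global control points, amounts to a least-squares rigid alignment (Kabsch/SVD). The full bundle adjustment over the multiview network is considerably more involved; the sketch below only shows this rigid-registration step on synthetic points.

```python
import numpy as np

def rigid_align(src, dst):
    """Least-squares rigid transform (Kabsch/SVD) mapping src onto dst."""
    sc, dc = src.mean(axis=0), dst.mean(axis=0)
    H = (src - sc).T @ (dst - dc)              # cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))     # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = dc - R @ sc
    return R, t

rng = np.random.default_rng(1)
ctrl = rng.uniform(-1.0, 1.0, (6, 3))          # global control points
theta = 0.3                                    # known rotation + translation
R_true = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                   [np.sin(theta),  np.cos(theta), 0.0],
                   [0.0, 0.0, 1.0]])
local = ctrl @ R_true.T + np.array([0.1, -0.2, 0.05])  # scan in local frame
R, t = rigid_align(local, ctrl)                # register scan to global frame
aligned = local @ R.T + t
print(np.abs(aligned - ctrl).max() < 1e-9)
```

Registering every scan directly to globally optimized control points, rather than chaining scan-to-scan transforms, is exactly what prevents the error accumulation the abstract describes.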

  13. Posture Adjustment of Microphone Based on Image Recognition in Automatic Welding System

    Institute of Scientific and Technical Information of China (English)

    Wang Jin'e; Gao Ping; Huang Haibo; Li Xiangpeng; Zheng Liang; Xu Wenkui; Chen Liguo

    2015-01-01

As the requirements of the production process become higher while product volume shrinks, automation of microphone production is urgently needed to improve production efficiency. The most important part is studied, and a precise algorithm for calculating the deviation angle of four types of microphones is proposed, based on feature extraction and visual detection. Pretreatment is performed to obtain the real-time microphone image. Canny edge detection and typical feature extraction are used to distinguish the four types of microphones, categorizing them as type M1 and type M2, and Hough transformation is used to extract the image features of the microphone. The deviation angle between the posture of the microphone and the ideal posture in the 2D plane can thus be obtained. Based on this angle, the system drives a motor to adjust the posture of the microphone. The final purpose is to realize high-efficiency welding of the four different types of microphones.

  14. A multiphase three-dimensional multi-relaxation time (MRT) lattice Boltzmann model with surface tension adjustment

    Science.gov (United States)

    Ammar, Sami; Pernaudat, Guillaume; Trépanier, Jean-Yves

    2017-08-01

    The interdependence of surface tension and density ratio is a weakness of pseudo-potential based lattice Boltzmann models (LB). In this paper, we propose a 3D multi-relaxation time (MRT) model for multiphase flows at large density ratios. The proposed model is capable of adjusting the surface tension independently of the density ratio. We also present the 3D macroscopic equations recovered by the proposed forcing scheme. A high order of isotropy for the interaction force is used to reduce the amplitude of spurious currents. The proposed 3D-MRT model is validated by verifying Laplace's law and by analyzing its thermodynamic consistency and the oscillation period of a deformed droplet. The model is then applied to the simulation of the impact of a droplet on a dry surface. Impact dynamics are determined and the maximum spread factor calculated for different Reynolds and Weber numbers. The numerical results are in agreement with data published in the literature. The influence of surface wettability on the spread factor is also investigated. Finally, our 3D-MRT model is applied to the simulation of the impact of a droplet on a wet surface. The propagation of transverse waves is observed on the liquid surface.

  15. Model-based geostatistics

    CERN Document Server

    Diggle, Peter J

    2007-01-01

Model-based geostatistics refers to the application of general statistical principles of modeling and inference to geostatistical problems. This volume provides a treatment of model-based geostatistics with an emphasis on statistical methods and applications. It also features analyses of datasets from a range of scientific contexts.

  16. Towards individualized dose constraints: Adjusting the QUANTEC radiation pneumonitis model for clinical risk factors

    DEFF Research Database (Denmark)

    Appelt, Ane L; Vogelius, Ivan R.; Farr, Katherina P.;

    2014-01-01

Background. Understanding the dose-response of the lung in order to minimize the risk of radiation pneumonitis (RP) is critical for optimization of lung cancer radiotherapy. We propose a method to combine the dose-response relationship for RP in the landmark QUANTEC paper with known clinical risk factors, in order to enable individual risk prediction. The approach is validated in an independent dataset. Material and methods. The prevalence of risk factors in the patient populations underlying the QUANTEC analysis was estimated, and a previously published method to adjust dose-response relationships for clinical risk factors was employed. Effect size estimates (odds ratios) for risk factors were drawn from a recently published meta-analysis. Baseline values for D50 and γ50 were found. The method was tested in an independent dataset (103 patients), comparing the predictive power of the dose-only QUANTEC model and the model including risk factors. Subdistribution cumulative incidence functions were compared for patients with high/low-risk predictions from the two models, and concordance indices (c-indices) for the prediction of RP were calculated. Results. The reference dose-response relationship
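The risk-factor adjustment idea, shifting a logistic dose-response by multiplying the baseline odds with published odds ratios, can be sketched as follows. All parameter values (D50, γ50, the odds ratios) are illustrative placeholders, not the study's fitted estimates.

```python
import math

def ntcp(mld, d50, gamma50):
    """Logistic dose-response in mean lung dose (QUANTEC-style form)."""
    return 1.0 / (1.0 + math.exp(4.0 * gamma50 * (1.0 - mld / d50)))

def adjust_for_risk_factors(p, odds_ratios):
    """Shift a baseline risk by multiplying its odds with each
    risk-factor odds ratio (values here are purely illustrative)."""
    odds = p / (1.0 - p)
    for o in odds_ratios:
        odds *= o
    return odds / (1.0 + odds)

base = ntcp(mld=20.0, d50=31.4, gamma50=0.97)        # placeholder parameters
with_rf = adjust_for_risk_factors(base, [1.6, 2.0])  # two hypothetical factors
print(0.0 < base < with_rf < 1.0)
```

Working on the odds scale keeps the adjusted value a valid probability for any combination of odds ratios, which is why odds ratios from a meta-analysis compose cleanly in this kind of individualized prediction.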

  17. DYNAMIC LAYOUT ADJUSTMENT AND NAVIGATION FOR ENTERPRISE GIS BASED ON OBJECT MARK RECOGNITION

    Institute of Scientific and Technical Information of China (English)

    2003-01-01

In this paper a new method is developed for dynamic layout adjustment and navigation for an enterprise Geographic Information System (GIS) based on object mark recognition. The extraction of object mark images is based on morphological structural patterns, which are described by morphological structural points, contour properties, and other geometrical data in a binary image of an enterprise geographic information map. Several pre-processing methods are introduced: contour smooth following, linearization, and extraction patterns of structural points. If a specific object is selected in a GIS map, all the information around it is obtained; that is, similar enterprises around the selected region are investigated to analyse whether it is appropriate to establish the object enterprise at that place. To navigate the GIS map further, the user moves from one region to another. Each time, a region is formed and displayed based on the user's focus. When a focus point of the map is selected, a dynamic layout and navigation diagram is constructed from the extracted object mark images. When the user changes the focus (i.e. clicks a node in the navigation mode), a new sub-diagram is formed by dropping old nodes and adding new nodes. The prototype system provides effective interfaces that support GIS image navigation, detailed local image/map viewing, and enterprise information browsing.

  18. Voltage adjusting characteristics in terahertz transmission through Fabry-Pérot-based metamaterials

    Directory of Open Access Journals (Sweden)

    Jun Luo

    2015-10-01

Full Text Available Metallic electric split-ring resonators (SRRs) with feature sizes on the micrometer scale, connected by thin metal wires, are patterned to form a periodically distributed planar array. The arrayed metallic SRRs are fabricated on an n-doped gallium arsenide (n-GaAs) layer grown directly over a semi-insulating gallium arsenide (SI-GaAs) wafer. The patterned metal microstructures and the n-GaAs layer form a Schottky diode, which supports an external voltage applied to modify the device properties. The developed architectures present typical functional metamaterial characteristics, and are thus proposed to reveal voltage adjusting characteristics in the transmission of terahertz waves at normal incidence. We also demonstrate the terahertz transmission characteristics of the voltage-controlled Fabry-Pérot-based metamaterial device, which is composed of arrayed metallic SRRs. To date, many metamaterials developed in earlier works have been used to regulate the transmission amplitude or phase at specific frequencies in the terahertz wavelength range, mainly dominated by the inductance-capacitance (LC) resonance mechanism. In our work, however, an external voltage controlled metamaterial device is developed, and extraordinary transmission regulation characteristics based on both the Fabry-Pérot (FP) resonance and a relatively weak surface plasmon polariton (SPP) resonance in the 0.025-1.5 THz range are presented. Our research therefore shows a potential application of the dual-mode-resonance-based metamaterial for improving terahertz transmission regulation.

  19. Development and Testing of a Decision Making Based Method to Adjust Automatically the Harrowing Intensity

    Directory of Open Access Journals (Sweden)

    Roland Gerhards

    2013-05-01

Full Text Available Harrowing is often used to reduce weed competition, generally using a constant intensity across a whole field. The efficacy of weed harrowing in wheat and barley can be optimized if site-specific conditions of soil, weed infestation and crop growth stage are taken into account. This study aimed to develop and test an algorithm to automatically adjust the harrowing intensity by varying the tine angle and number of passes. The field variability of crop leaf cover, weed density and soil density was acquired with geo-referenced sensors to investigate the harrowing selectivity and crop recovery. Crop leaf cover and weed density were assessed using bispectral cameras through differential image analysis. The draught force of the soil opposite to the direction of travel was measured with an electronic load cell sensor connected to a rigid tine mounted in front of the harrow. Optimal harrowing intensity levels were derived in previously implemented experiments, based on the weed control efficacy and yield gain. The assessments of crop leaf cover, weed density and soil density were combined via rules with the aforementioned optimal intensities in a linguistic fuzzy inference system (LFIS). The system was evaluated in two field experiments that compared constant intensities with variable intensities inferred by the system. A higher weed density reduction could be achieved when the harrowing intensity was not kept constant along the cultivated plot. Varying the intensity tended to reduce the crop leaf cover, though slightly improving crop yield. Real-time intensity adjustment with this system is achievable if the cameras are attached at the front and at the rear or sides of the harrow.

  20. Civilian Reuse of Former Military Bases. Summary of Completed Military Base Economic Adjustment Projects

    Science.gov (United States)

    1990-06-01


  1. A covariate-adjustment regression model approach to noninferiority margin definition.

    Science.gov (United States)

    Nie, Lei; Soon, Guoxing

    2010-05-10

    To maintain the interpretability of the effect of experimental treatment (EXP) obtained from a noninferiority trial, current statistical approaches often require the constancy assumption. This assumption typically requires that the control treatment effect in the population of the active control trial is the same as its effect presented in the population of the historical trial. To prevent constancy assumption violation, clinical trial sponsors were recommended to make sure that the design of the active control trial is as close to the design of the historical trial as possible. However, these rigorous requirements are rarely fulfilled in practice. The inevitable discrepancies between the historical trial and the active control trial have led to debates on many controversial issues. Without support from a well-developed quantitative method to determine the impact of the discrepancies on the constancy assumption violation, a correct judgment seems difficult. In this paper, we present a covariate-adjustment generalized linear regression model approach to achieve two goals: (1) to quantify the impact of population difference between the historical trial and the active control trial on the degree of constancy assumption violation and (2) to redefine the active control treatment effect in the active control trial population if the quantification suggests an unacceptable violation. Through achieving goal (1), we examine whether or not a population difference leads to an unacceptable violation. Through achieving goal (2), we redefine the noninferiority margin if the violation is unacceptable. This approach allows us to correctly determine the effect of EXP in the noninferiority trial population when constancy assumption is violated due to the population difference. We illustrate the covariate-adjustment approach through a case study.

  2. Risk adjustment methods for Home Care Quality Indicators (HCQIs) based on the Minimum Data Set for Home Care

    Directory of Open Access Journals (Sweden)

    Hirdes John P

    2005-01-01

Full Text Available Abstract Background There has been increasing interest in enhancing accountability in health care. As such, several methods have been developed to compare the quality of home care services. These comparisons can be problematic if client populations vary across providers and no adjustment is made to account for these differences. The current paper explores the effects of risk adjustment for a set of home care quality indicators (HCQIs) based on the Minimum Data Set for Home Care (MDS-HC). Methods A total of 22 home care providers in Ontario and the Winnipeg Regional Health Authority (WRHA) in Manitoba, Canada, gathered data on their clients using the MDS-HC. These assessment data were used to generate HCQIs for each agency and for the two regions. Three types of risk adjustment methods were contrasted: a) client covariates only; b) client covariates plus an "Agency Intake Profile" (AIP) to adjust for ascertainment and selection bias by the agency; and c) client covariates plus the intake Case Mix Index (CMI). Results The mean age and gender distribution in the two populations was very similar. Across the 19 risk-adjusted HCQIs, Ontario CCACs had a significantly higher AIP adjustment value for eight HCQIs, indicating a greater propensity to trigger on these quality issues on admission. On average, Ontario had unadjusted rates that were 0.3% higher than the WRHA. Following risk adjustment with the AIP covariate, Ontario rates were, on average, 1.5% lower than the WRHA. In the WRHA, individual agencies were likely to experience a decline in their standing, whereby they were more likely to be ranked among the worst performers following risk adjustment. The opposite was true for sites in Ontario. Conclusions Risk adjustment is essential when comparing quality of care across providers when home care agencies provide services to populations with different characteristics. While such adjustment had a relatively small effect for the two regions, it did
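A common form of the risk adjustment discussed here is indirect standardization: an agency's observed indicator rate is scaled by the ratio of a reference rate to the rate expected from its own client mix. The sketch below uses this generic construction with invented numbers; it is not the specific covariate model of the paper.

```python
def risk_adjusted_rate(observed, expected_probs, reference_rate):
    """Indirect standardization: (observed / expected) * reference rate.
    expected_probs: per-client predicted probabilities of triggering
    the quality indicator, given each client's risk profile."""
    expected = sum(expected_probs) / len(expected_probs)
    return (observed / expected) * reference_rate

# Agency serving higher-risk clients: expected trigger rate 0.25,
# observed rate 0.28, overall reference rate 0.20 (all invented).
client_risk = [0.20, 0.30, 0.25, 0.25]
adj = risk_adjusted_rate(observed=0.28, expected_probs=client_risk,
                         reference_rate=0.20)
print(round(adj, 3))
```

Because the agency's clients were expected to trigger at 0.25, its observed 0.28 adjusts down to 0.224 rather than being compared raw against the 0.20 reference, which is exactly why rankings can flip after adjustment, as the abstract reports.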

  3. Calculation of Credit Valuation Adjustment Based on Least Square Monte Carlo Methods

    Directory of Open Access Journals (Sweden)

    Qian Liu

    2015-01-01

Full Text Available Counterparty credit risk has become one of the highest-profile risks facing participants in the financial markets. Despite this, relatively little is known about how counterparty credit risk is actually priced mathematically. We examine this issue using interest rate swaps. This widely traded financial product allows us to identify well the risk profiles of both institutions and their counterparties. Concretely, the Hull-White model for rates and a mean-reverting model for default intensity have proven to correspond with reality and to be well suited for financial institutions. Besides, we find that the least square Monte Carlo method is quite efficient in the calculation of the credit valuation adjustment (CVA, for short), as it avoids the redundant step of generating inner scenarios. As a result, it accelerates the convergence speed of the CVA estimators. In the second part, we propose a new method to calculate bilateral CVA that avoids the double counting found in the existing literature, where several copula functions are adopted to describe the dependence of the two first-to-default times.
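The CVA aggregation underlying this kind of calculation can be sketched in a stripped-down unilateral form: discounted expected positive exposure summed against marginal default probabilities, scaled by the loss given default. The regression step of least square Monte Carlo is omitted here because the toy exposure is observed directly on each path; hazard rate, recovery and discounting are flat, illustrative choices, and the Brownian exposure is a stand-in for a swap's mark-to-market.

```python
import numpy as np

rng = np.random.default_rng(42)
n_paths, n_steps, T = 20000, 40, 4.0
dt = T / n_steps
times = np.linspace(dt, T, n_steps)

# Toy mark-to-market paths (Brownian stand-in for a swap's value).
mtm = np.cumsum(rng.normal(0.0, np.sqrt(dt), (n_paths, n_steps)), axis=1)
epe = np.maximum(mtm, 0.0).mean(axis=0)   # expected positive exposure

r, hazard, recovery = 0.02, 0.03, 0.4     # flat, illustrative parameters
disc = np.exp(-r * times)                 # discount factors
surv = np.exp(-hazard * times)            # survival probabilities
dpd = np.r_[1.0, surv[:-1]] - surv        # marginal default probability

# CVA ≈ (1 - R) * Σ_i disc(t_i) * EPE(t_i) * ΔPD(t_i)
cva = (1.0 - recovery) * np.sum(disc * epe * dpd)
print(cva > 0)
```

In a full LSMC implementation, the exposure at each date would instead be a conditional expectation regressed on state variables, which is precisely the step that avoids nested ("inner") simulation.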

  4. Modeling and test on height adjustment system of electrically-controlled air suspension for agricultural vehicles

    National Research Council Canada - National Science Library

    Chen Yuexia; Chen Long; Wang Ruochen; Xu Xing; Shen Yujie; Liu Yanling

    2016-01-01

      To reduce the damages of pavement, vehicle components and agricultural product during transportation, an electric control air suspension height adjustment system of agricultural transport vehicle...

  5. Models of traumatic experiences and children's psychological adjustment: the roles of perceived parenting and the children's own resources and activity.

    Science.gov (United States)

    Punamäki, R L; Qouta, S; el Sarraj, E

    1997-08-01

    The relations between traumatic events, perceived parenting styles, children's resources, political activity, and psychological adjustment were examined among 108 Palestinian boys and girls of 11-12 years of age. The results showed that exposure to traumatic events increased psychological adjustment problems directly and via 2 mediating paths. First, the more traumatic events children had experienced, the more negative parenting they experienced. And, the poorer they perceived parenting, the more they suffered from high neuroticism and low self-esteem. Second, the more traumatic events children had experienced, the more political activity they showed, and the more active they were, the more they suffered from psychological adjustment problems. Good perceived parenting protected children's psychological adjustment by making them less vulnerable in two ways. First, traumatic events decreased their intellectual, creative, and cognitive resources, and a lack of resources predicted many psychological adjustment problems in a model excluding perceived parenting. Second, political activity increased psychological adjustment problems in the same model, but not in the model including good parenting.

  6. Adjusting multistate capture-recapture models for misclassification bias: manatee breeding proportions

    Science.gov (United States)

    Kendall, W.L.; Hines, J.E.; Nichols, J.D.

    2003-01-01

Matrix population models are important tools for research and management of populations. Estimating the parameters of these models is an important step in applying them to real populations. Multistate capture-recapture methods have provided a useful means for estimating survival and parameters of transition between locations or life history states but have mostly relied on the assumption that the state occupied by each detected animal is known with certainty. Nevertheless, in some cases animals can be misclassified. Using multiple capture sessions within each period of interest, we developed a method that adjusts estimates of transition probabilities for bias due to misclassification. We applied this method to 10 years of sighting data for a population of Florida manatees (Trichechus manatus latirostris) in order to estimate the annual probability of transition from nonbreeding to breeding status. Some sighted females were unequivocally classified as breeders because they were clearly accompanied by a first-year calf. The remainder were classified, sometimes erroneously, as nonbreeders because an attendant first-year calf was not observed or was classified as more than one year old. We estimated a conditional breeding probability of 0.31 ± 0.04 (estimate ± 1 SE) when we ignored misclassification bias, and 0.61 ± 0.09 when we accounted for misclassification.
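The direction of the correction can be illustrated with a simple identity: if a true breeder is recognized as such only when her first-year calf is detected, with probability delta, then the naive breeder proportion equals beta × delta, and inverting recovers the true breeding probability. The delta value below is hypothetical (the actual method estimates misclassification from repeated within-period sightings), but it shows how a naive 0.31 can correspond to an adjusted value near 0.61.

```python
def adjusted_breeding_prob(p_obs, delta):
    """Invert p_obs = beta * delta, where delta is the probability that
    a breeder's first-year calf is detected and correctly classified.
    delta is a hypothetical illustration value, not the paper's estimate."""
    return p_obs / delta

# If roughly half of attendant calves go undetected or are misaged,
# the naive estimate of 0.31 roughly doubles once adjusted.
beta_hat = adjusted_breeding_prob(0.31, 0.51)
print(round(beta_hat, 2))
```

This one-line inversion ignores sampling error and state uncertainty; the multistate model in the paper propagates both, which is why its adjusted estimate carries a larger standard error (0.09 versus 0.04).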

  7. Dynamic Stall Prediction of a Pitching Airfoil using an Adjusted Two-Equation URANS Turbulence Model

    Directory of Open Access Journals (Sweden)

    Galih Bangga

    2017-01-01

Full Text Available The analysis of dynamic stall is becoming increasingly important due to its impact on many streamlined structures such as helicopter and wind turbine rotor blades. The present paper provides Computational Fluid Dynamics (CFD) predictions of a pitching NACA 0012 airfoil at a reduced frequency of 0.1 and at a small Reynolds number of 1.35e5. The simulations were carried out by adjusting the k − ε URANS turbulence model in order to damp the turbulence production in the near-wall region. The damping factor was introduced as a function of wall distance in the buffer zone region. Parametric studies on the variables involved were conducted and their effect on the prediction capability was shown. The results were compared with available experimental data and with CFD simulations using selected two-equation turbulence models. An improvement of the lift coefficient prediction was shown, even though the results still only roughly mimic the experimental data. The flow development at dynamic stall onset was investigated with regard to the effect of the leading and trailing edge vortices. Furthermore, the characteristics of the flow at several chord lengths downstream of the airfoil were evaluated.

  8. Stochastic Dynamic Model on the Consumption – Saving Decision for Adjusting Products and Services Supply According with Consumers` Attainability

    Directory of Open Access Journals (Sweden)

    Gabriela Prelipcean

    2014-02-01

Full Text Available The recent crisis and turbulences have significantly changed consumers' behavior, especially through their access possibilities and satisfaction, but also through the new dynamic, flexible adjustment of the supply of goods and services. Access possibility and consumer satisfaction should be analyzed in the broader context of corporate responsibility, including financial institutions. This contribution gives an answer to the current situation in Romania as an emerging country strongly affected by the global crisis. Empowering producers and harmonizing their interests with the interests of consumers requires a significant revision of the quantitative models used to study long-term consumption-saving behavior, with a new model adapted to the current conditions in Romania in the post-crisis context. Based on the general idea of the model developed by Hai, Krueger, and Postlewaite (2013), we propose a new way of exploiting the results, considering the dynamics of innovative adaptation based on Brownian motion, the integration of the cyclicality concept, the stochastic shocks analyzed by Lévy, and extensive interaction with capital markets characterized by higher returns and volatility.

  9. PV Array Driven Adjustable Speed Drive for a Lunar Base Heat Pump

    Science.gov (United States)

    Domijan, Alexander, Jr.; Buchh, Tariq Aslam

    1995-01-01

A study of various aspects of Adjustable Speed Drives (ASDs) is presented, including a summary of the relative merits of the different ASD systems presently in vogue. The advantages of using microcomputer-based ASDs are now widely understood and accepted. Of the three most popular drive systems, namely the Induction Motor Drive, the Switched Reluctance Motor Drive, and the Brushless DC Motor Drive, any one may be chosen; the choice depends on the nature of the application and its requirements. The suitability of the above-mentioned drive systems for a photovoltaic array driven ASD for an aerospace application is discussed, based on the experience of the authors, various researchers, and industry. In chapter 2, a PV array power supply scheme is proposed; this scheme will have enhanced reliability in addition to the other known advantages of the case where a stand-alone PV array feeds the heat pump. Chapter 3 presents the results of a computer simulation of a PV array driven induction motor drive system, together with a discussion of these preliminary simulation results. Chapter 4 includes a brief discussion of various control techniques for three-phase induction motors. Chapter 5 discusses different power devices and their various performance characteristics.

  10. Evaluation of heparin dosing based on adjusted body weight in obese patients.

    Science.gov (United States)

    Fan, Jingyang; John, Billee; Tesdal, Emily

    2016-10-01

    Results of a study to determine whether heparin dosing based on adjusted body weight (BWAdj) instead of actual body weight (ABW) can lead to faster achievement of therapeutic activated partial thromboplastin time (aPTT) values in obese patients are presented. A single-center retrospective cohort study was conducted to assess aPTT outcomes before and after implementation of a revised heparin protocol specifying BWAdj-based dosing for obese patients. The primary outcome was the percentage of first aPTT values within the target range after heparin initiation. Secondary outcomes included the median time to the first on-target aPTT and the rate of clinically significant bleeding. After protocol implementation, there was no significant difference between obese and nonobese patients in the primary outcome (17% and 21%, respectively, had first aPTT values in the target range) or in the median time to achieve the first on-target aPTT value. Among obese patients, on-target aPTT values were achieved significantly faster with BWAdj- versus ABW-based dosing (14 hours versus 24 hours, p = 0.002). Prior to implementation of BWAdj-based heparin dosing, obese patients had a higher rate of clinically significant bleeding than nonobese patients (11% versus 1%, p = 0.01); postimplementation bleeding rates did not differ significantly. The percentages of first aPTT values in the targeted range did not differ significantly in obese and nonobese patients before and after protocol implementation. The use of BWAdj for dose calculation in obese patients was associated with faster achievement of an aPTT value in the target range. Copyright © 2016 by the American Society of Health-System Pharmacists, Inc. All rights reserved.
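The adjusted body weight referred to in the abstract is conventionally computed as ideal body weight plus a fraction of the excess weight. The Devine ideal-body-weight formula and the 0.4 correction factor used below are common clinical conventions, not values stated in this study, so treat the sketch as illustrative:

```python
def ideal_body_weight_kg(height_cm: float, male: bool) -> float:
    """Devine formula: 50 kg (men) or 45.5 kg (women) + 2.3 kg per inch over 5 ft."""
    inches_over_5ft = max(height_cm / 2.54 - 60.0, 0.0)
    return (50.0 if male else 45.5) + 2.3 * inches_over_5ft

def adjusted_body_weight_kg(abw_kg: float, height_cm: float, male: bool,
                            correction: float = 0.4) -> float:
    """BWAdj = IBW + correction * (ABW - IBW); returns ABW when ABW <= IBW."""
    ibw = ideal_body_weight_kg(height_cm, male)
    if abw_kg <= ibw:
        return abw_kg
    return ibw + correction * (abw_kg - ibw)

# Illustrative patient: 120 kg actual weight, 175 cm, male.
bw_adj = adjusted_body_weight_kg(120.0, 175.0, True)
```

For this invented patient the dosing weight comes out near 90 kg rather than the 120 kg actual weight, which is the mechanism by which BWAdj-based protocols avoid overdosing obese patients.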

  11. ADJUSTMENT FACTORS AND ADJUSTMENT STRUCTURE

    Institute of Scientific and Technical Information of China (English)

    Tao Benzao

    2003-01-01

    In this paper, the adjustment factors J and R put forward by Professor Zhou Jiangwen are introduced, and the nature of these adjustment factors and their role in evaluating adjustment structure are discussed and proved.

  12. Convexity Adjustments

    DEFF Research Database (Denmark)

    M. Gaspar, Raquel; Murgoci, Agatha

    2010-01-01

    of particular importance to practitioners: yield convexity adjustments, forward versus futures convexity adjustments, timing and quanto convexity adjustments. We claim that the appropriate way to look into any of these adjustments is as a side effect of a measure change, as proposed by Pelsser (2003...
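The abstract above is truncated, but a standard textbook example of a forward-versus-futures convexity adjustment is Hull's approximation under the Ho-Lee model. This sketch is generic reference material, with invented example numbers, and is not the measure-change derivation of the cited paper:

```python
def futures_to_forward(futures_rate: float, sigma: float, t1: float, t2: float) -> float:
    """Ho-Lee convexity adjustment (Hull): forward = futures - 0.5*sigma^2*t1*t2.
    t1 = futures maturity and t2 = maturity of the underlying rate (both in
    years); sigma = annual standard deviation of the short rate; both rates
    continuously compounded."""
    return futures_rate - 0.5 * sigma**2 * t1 * t2

# Hypothetical 8-year Eurodollar-style contract quoting a 6.038% futures rate:
fwd = futures_to_forward(0.06038, sigma=0.012, t1=8.0, t2=8.25)
```

The adjustment grows with the product t1*t2, which is why it is negligible for short-dated contracts but material (tens of basis points) at long maturities.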

  13. An assessment of the ICE6G_C(VM5a) glacial isostatic adjustment model

    Science.gov (United States)

    Purcell, A.; Tregoning, P.; Dehecq, A.

    2016-05-01

    The recent release of the next-generation global ice history model, ICE6G_C(VM5a), is likely to be of interest to a wide range of disciplines including oceanography (sea level studies), space gravity (mass balance studies), glaciology, and, of course, geodynamics (Earth rheology studies). In this paper we make an assessment of some aspects of the ICE6G_C(VM5a) model and show that the published present-day radial uplift rates are too high along the eastern side of the Antarctic Peninsula (by ~8.6 mm/yr) and beneath the Ross Ice Shelf (by ~5 mm/yr). Furthermore, the published spherical harmonic coefficients, which are meant to represent the dimensionless present-day changes due to glacial isostatic adjustment (GIA), contain excessive power for degree ≥90, do not agree with physical expectations and do not represent accurately the ICE6G_C(VM5a) model. We show that the excessive power in the high-degree terms produces erroneous uplift rates when the empirical relationship of Purcell et al. (2011) is applied, but when correct Stokes coefficients are used, the empirical relationship produces excellent agreement with the fully rigorous computation of the radial velocity field, subject to the caveats first noted by Purcell et al. (2011). Using the Australian National University (ANU) group's CALSEA software package, we recompute the present-day GIA signal for the ice thickness history and Earth rheology used by Peltier et al. (2015) and provide dimensionless Stokes coefficients that can be used to correct satellite altimetry observations for GIA over oceans and by the space gravity community to separate GIA and present-day mass balance change signals. We denote the new data sets as ICE6G_ANU.

  14. Adjustments of the TaD electron density reconstruction model with GNSS-TEC parameters for operational application purposes

    Directory of Open Access Journals (Sweden)

    Belehaki Anna

    2012-12-01

    Full Text Available Validation results for the latest version of the TaD model (TaDv2) show realistic reconstruction of electron density profiles (EDPs) with an average error of 3 TECU, similar to the error obtained from GNSS-calculated TEC parameters. The work presented here aims to further improve the accuracy of the TaD topside reconstruction by adjusting the TEC parameter calculated by the TaD model to the TEC parameter derived from GNSS RINEX files provided by receivers co-located with the Digisondes. The performance of the new version is tested during a storm period, demonstrating further improvement with respect to the previous version. Statistical comparison of modeled and observed TEC confirms the validity of the proposed adjustment. A significant benefit of the proposed upgrade is that it facilitates the real-time implementation of TaD. The model needs a reliable measure of the scale height at the peak height, which is supposed to be provided by the Digisondes. Often, the automatic scaling software fails to correctly calculate the scale height at the peak, Hm, due to interference in the received signal; consequently, the model-estimated topside scale height is wrongly calculated, leading to unrealistic results for the modeled EDP. The proposed TEC adjustment forces the model to correctly reproduce the topside scale height despite inaccurate values of Hm. This adjustment is very important for the application of TaD in an operational environment.

  15. An Online Adjustment Method of Dynamic Equivalent Model Based on Additional Fictitious Impedances and Ant Colony Optimization Algorithm

    Institute of Scientific and Technical Information of China (English)

    周海强; 鞠平; 宋忠鹏; 金宇清; 孙国强

    2011-01-01

    A novel method for the online adjustment of a dynamic equivalent model is proposed. First, it is pointed out that unreasonable aggregation algorithms and the constant-parameter assumption applied to a time-varying system are the main sources of equivalent error. Online adjustment is then put forward to overcome the error caused by the time-varying characteristics of the system. The parameters in the equivalent model are too many to adjust directly, so additional fictitious impedances, connected to the equivalent generator and equivalent motor buses, are introduced to overcome this difficulty. Injected power matching at the boundary nodes is achieved by adjusting the fictitious impedances with an ant colony optimization (ACO) algorithm. Online adjustment is further realized by modifying the dynamic equivalent model in a timely manner according to real-time information provided by the wide area measurement system (WAMS). Finally, simulation results for the IEEE 10-generator, 39-bus test system show that both the static and transient precision can be greatly enhanced with this method, and the robustness of the equivalent model is also improved.

  16. Based on Motivation, Adjusting Strategy

    Institute of Scientific and Technical Information of China (English)

    杨洪

    2012-01-01

    There exists a relationship of supply and demand between tour operators and tourists; therefore, tour operators must understand tourist behavior and motivation well so that they can adjust their business strategy to meet the needs of tourists. In this paper, based on as thorough an understanding as possible of both the old and new motivations of tourists, the author presents a targeted study of the operating strategies of tourism operators.

  17. Image-based human age estimation by manifold learning and locally adjusted robust regression.

    Science.gov (United States)

    Guo, Guodong; Fu, Yun; Dyer, Charles R; Huang, Thomas S

    2008-07-01

    Estimating human age automatically via facial image analysis has many potential real-world applications, such as human-computer interaction and multimedia communication. However, it is still challenging for existing computer vision systems to estimate human ages automatically and effectively. The aging process is determined not only by a person's genes but also by many external factors, such as health, living style, living location, and weather conditions; males and females may also age differently. Current age estimation performance is still not good enough for practical use, and more effort has to be put into this research direction. In this paper, we introduce an age manifold learning scheme for extracting facial aging features and design a locally adjusted robust regressor for learning and predicting human ages. The novel approach improves age estimation accuracy significantly over all previous methods. The merit of the proposed approaches for image-based age estimation is shown by extensive experiments on a large internal age database and the publicly available FG-NET database.

  18. Study on Electricity Business Expansion and Electricity Sales Based on Seasonal Adjustment

    Science.gov (United States)

    Zhang, Yumin; Han, Xueshan; Wang, Yong; Zhang, Li; Yang, Guangsen; Sun, Donglei; Wang, Bolun

    2017-05-01

    Reference [1] proposed a novel analysis and forecasting method for electricity business expansion based on seasonal adjustment; we extend this work to include effects at the micro and macro levels, respectively. At the micro level, we introduce the concept of load factor to forecast the stable value of the electricity consumption of a single new consumer after the installation of new high-voltage transformer capacity. At the macro level, considering that the growth of business expansion is also stimulated by the growth of electricity sales, it is necessary to analyze the antecedent relationship between business expansion and electricity sales. First, we forecast the electricity consumption of the customer group and the release rules of expanding capacity, respectively. Second, we compare the goodness of fit and prediction accuracy to identify the antecedence relationship and analyze its cause; this comparison also shows how customer groups of different ranges influence prediction precision. Finally, simulation results indicate that the proposed method is accurate enough to help determine the value of expanding capacity and electricity consumption.

  19. Linear identification and model adjustment of a PEM fuel cell stack

    Energy Technology Data Exchange (ETDEWEB)

    Kunusch, C.; Puleston, P.F.; More, J.J. [LEICI, Departamento de Electrotecnia, Universidad Nacional de La Plata, calle 1 esq. 47 s/n, 1900 La Plata (Argentina); Consejo de Investigaciones Cientificas y Tecnicas (CONICET) (Argentina); Husar, A. [Institut de Robotica i Informatica Industrial (CSIC-UPC), c/ Llorens i Artigas 4-6, 08028 Barcelona (Spain); Mayosky, M.A. [LEICI, Departamento de Electrotecnia, Universidad Nacional de La Plata, calle 1 esq. 47 s/n, 1900 La Plata (Argentina); Comision de Investigaciones Cientificas (CIC), Provincia de Buenos Aires (Argentina)

    2008-07-15

    In the context of fuel cell stack control a major challenge is modeling the interdependence of various complex subsystem dynamics. In many cases, the interaction of states is modeled through several look-up tables, decision blocks and piecewise continuous functions. Many internal variables are inaccessible for measurement and cannot be used in control algorithms. To make significant contributions in this area, it is necessary to develop reliable models for control and design purposes. In this paper, a linear model based on experimental identification of a 7-cell stack was developed. The procedure followed to obtain a linear model of the system consisted of performing spectroscopy tests on four different single-input single-output subsystems. The inputs considered for the tests were the stack current and the cathode oxygen flow rate, while the measured outputs were the stack voltage and the cathode total pressure. The resulting model can be used either for model-based control design or for on-line analysis and error detection. (author)

  20. Property Determination and Importance Adjustment for Quality Requirements of Chain Distribution Service Based on Fuzzy Kano Model

    Institute of Scientific and Technical Information of China (English)

    林小芳; 王海船; 程林

    2016-01-01

    Based on the problems occurring in the implementation of the traditional Kano model, namely that customer requirements are fuzzy and uncertain, this paper introduces a fuzzy mathematical method into the process, designing a fuzzy Kano questionnaire and handling the data fuzzily. In addition, the original weights of the customer requirements are determined by the entropy method. Drawing on established methods of attribute adjustment for different quality demands, the paper modifies the improvement factor, uses the modified factor to calculate the final importance of each quality requirement, and applies the approach to the quality requirements of chain distribution service. The results show that the fuzzy Kano model can weaken the uncertainty of customer demand and can also solve the mixed demand attribute problems that occur with the traditional Kano questionnaire.
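The entropy method mentioned above for deriving initial requirement weights assigns larger weights to criteria whose values are more dispersed across respondents. A minimal sketch, with an invented decision matrix:

```python
import numpy as np

def entropy_weights(X: np.ndarray) -> np.ndarray:
    """Entropy-method weights for an (n samples x m criteria) matrix of
    nonnegative scores. Criteria with more dispersion (lower entropy)
    receive larger weights."""
    n, _ = X.shape
    P = X / X.sum(axis=0, keepdims=True)          # column-wise proportions
    with np.errstate(divide="ignore", invalid="ignore"):
        logP = np.where(P > 0, np.log(P), 0.0)    # convention: 0 * log(0) = 0
    e = -(P * logP).sum(axis=0) / np.log(n)       # entropy of each criterion
    d = 1.0 - e                                   # degree of diversification
    return d / d.sum()                            # normalized weights

# Invented 3-respondent x 2-criterion matrix: column 0 varies far more.
X = np.array([[0.9, 0.3],
              [0.8, 0.4],
              [0.1, 0.3]])
w = entropy_weights(X)
```

The weights sum to one, and the high-dispersion first criterion dominates, which is exactly the behavior the entropy method is chosen for.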

  1. Adjusting for Cell Type Composition in DNA Methylation Data Using a Regression-Based Approach.

    Science.gov (United States)

    Jones, Meaghan J; Islam, Sumaiya A; Edgar, Rachel D; Kobor, Michael S

    2017-01-01

    Analysis of DNA methylation in a population context has the potential to uncover novel gene and environment interactions as well as markers of health and disease. In order to find such associations it is important to control for factors which may mask or alter DNA methylation signatures. Since tissue of origin and coinciding cell type composition are major contributors to DNA methylation patterns, and can easily confound important findings, it is vital to adjust DNA methylation data for such differences across individuals. Here we describe the use of a regression method to adjust for cell type composition in DNA methylation data. We specifically discuss what information is required to adjust for cell type composition and then provide detailed instructions on how to perform cell type adjustment on high dimensional DNA methylation data. This method has been applied mainly to Illumina 450K data, but can also be adapted to pyrosequencing or genome-wide bisulfite sequencing data.
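A minimal sketch of the regression-based adjustment described above, assuming cell-type proportions have already been estimated for each sample (e.g. by a reference-based deconvolution). The simulated data and the choice to re-center residuals at the probe means are illustrative, not the chapter's exact protocol:

```python
import numpy as np

def adjust_for_cell_composition(betas: np.ndarray, props: np.ndarray) -> np.ndarray:
    """Regress each CpG's methylation values (samples x probes) on estimated
    cell-type proportions and return residuals re-centered at the probe means."""
    n = betas.shape[0]
    X = np.column_stack([np.ones(n), props])          # intercept + proportions
    coef, *_ = np.linalg.lstsq(X, betas, rcond=None)  # per-probe least squares
    fitted = X @ coef
    return betas - fitted + betas.mean(axis=0)        # keep original probe means

# Simulate one CpG whose methylation is driven by cell composition.
rng = np.random.default_rng(0)
props = rng.dirichlet([5, 3, 2], size=50)[:, :2]      # 2 of 3 cell fractions
signal = props @ np.array([[0.3], [-0.2]])            # composition effect
betas = 0.5 + signal + rng.normal(0, 0.01, (50, 1))   # beta values + noise
adj = adjust_for_cell_composition(betas, props)
```

After adjustment the probe is uncorrelated with the cell fractions, so downstream association tests no longer pick up composition differences as spurious hits.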

  2. Uncertainty-based Estimation of the Secure Range for ISO New England Dynamic Interchange Adjustment

    Energy Technology Data Exchange (ETDEWEB)

    Etingov, Pavel V.; Makarov, Yuri V.; Wu, Di; Hou, Zhangshuan; Sun, Yannan; Maslennikov, S.; Luo, Xiaochuan; Zheng, T.; George, S.; Knowland, T.; Litvinov, E.; Weaver, S.; Sanchez, E.

    2014-04-14

    The paper proposes an approach to estimate the secure range for dynamic interchange adjustment, which assists system operators in scheduling the interchange with neighboring control areas. Uncertainties associated with various sources are incorporated. The proposed method is implemented in the dynamic interchange adjustment (DINA) tool developed by Pacific Northwest National Laboratory (PNNL) for ISO New England. Simulation results are used to validate the effectiveness of the proposed method.

  3. A novel space-based observation strategy for GEO objects based on daily pointing adjustment of multi-sensors

    Science.gov (United States)

    Hu, Yun-peng; Li, Ke-bo; Xu, Wei; Chen, Lei; Huang, Jian-yu

    2016-08-01

    The space-based visible (SBV) program has proved to have a large advantage in observing geosynchronous earth orbit (GEO) objects. Since SBV observation began in 1996, many strategies have emerged for observing GEO objects more efficiently. However, it is a big challenge to visit all GEO objects in a relatively short time because of the distribution characteristics of the GEO belt and the limited field of view (FOV) of a sensor, and it is also difficult to keep high daily coverage of the GEO belt throughout a year. In this paper, a space-based observation strategy for GEO objects is designed based on the characteristics of the GEO belt. The mathematical formula of the GEO belt is deduced and the evolution of GEO objects is illustrated. There are basically two kinds of orientation strategies for most observation satellites, i.e., earth-oriented and inertially fixed; the influence of each strategy on its observation region is analyzed and the two are compared with each other. A passive optical instrument with a daily attitude-adjustment strategy is proposed to increase the daily coverage rate of GEO objects over a whole year. Furthermore, in order to observe more GEO objects in a relatively short time, a strategy using a satellite with multiple sensors is proposed. With the installation parameters between the different sensors optimized, more than 98% of GEO satellites can be observed every day, and almost all GEO satellites can be observed every two days, with three sensors (FOV: 6° × 6°) on the satellite under the daily pointing adjustment strategy.

  4. On Dynamic Adjustment of Capital Structure in Listed Companies in China Based on a Multiple Threshold Model

    Institute of Scientific and Technical Information of China (English)

    段军山; 宋贺

    2012-01-01

    From the perspective of dynamic trade-off theory, this paper employs a multiple-threshold panel model to study the dynamic adjustment path of the capital structure of A-share companies listed on the Shanghai and Shenzhen stock exchanges in China from 2000 to 2009. The results show that the adjustment of capital structure is characterized by asymmetry: as the debt ratio rises, the effect of operating performance on the adjustment speed gradually weakens. The adjustment speed in state-owned companies is lower, indicating that soft budget constraints and a lack of supervision in state-owned enterprises are problems in urgent need of solution in the reform of China's state-owned enterprises. Owing to the cost of institutional adjustment, the auditor signal transmission effect cannot exert an obvious function in the adjustment of corporate capital structure.

  5. Sensitivity assessment, adjustment, and comparison of mathematical models describing the migration of pesticides in soil using lysimetric data

    Science.gov (United States)

    Shein, E. V.; Kokoreva, A. A.; Gorbatov, V. S.; Umarova, A. B.; Kolupaeva, V. N.; Perevertin, K. A.

    2009-07-01

    The water block of physically based models of different levels (the chromatographic PEARL model and the dual-porosity MACRO model) was parameterized using laboratory experimental data and tested using the results of studying the water regime of loamy soddy-podzolic soil in large lysimeters of the Experimental Soil Station of Moscow State University. The models were adapted using a stepwise approach, which involved the sequential assessment and adjustment of each submodel. The models with an unadjusted water block underestimated the lysimeter flow and overestimated the soil water content. The theoretical necessity of the model adjustment is explained by the different scales of the experimental objects (soil samples) and the simulated phenomenon (soil profile). Adjusting the models by selecting the most sensitive hydrophysical parameters of the soils (the approximation parameters of the soil water retention curve (SWRC)) gave good agreement between the predicted moisture profiles and their actual values. In contrast to the PEARL model, the MACRO model reliably described the migration of a pesticide through the soil profile, which confirms the necessity of physically based models accounting for preferential flow in the pore space for the prediction, analysis, optimization, and management of modern agricultural technologies.

  6. Adjustment of measurements with multiplicative errors: error analysis, estimates of the variance of unit weight, and effect on volume estimation from LiDAR-type digital elevation models.

    Science.gov (United States)

    Shi, Yun; Xu, Peiliang; Peng, Junhuan; Shi, Chuang; Liu, Jingnan

    2014-01-10

    Modern observation technology has verified that measurement errors can be proportional to the true values of measurements, as with GPS, VLBI baselines and LiDAR. Observational models of this type are called multiplicative error models. This paper extends the work of Xu and Shimada published in 2000 on multiplicative error models to the analytical error analysis of quantities of practical interest and to estimates of the variance of unit weight. We analytically derive the variance-covariance matrices of the three least squares (LS) adjustments, the adjusted measurements and the corrections of measurements in multiplicative error models. For quality evaluation, we construct five estimators for the variance of unit weight in association with the three LS adjustment methods. Although LiDAR measurements are contaminated with multiplicative random errors, LiDAR-based digital elevation models (DEM) have been constructed as if the errors were additive. We simulate a model landslide, assumed to be surveyed with LiDAR, and investigate the effect of LiDAR-type multiplicative error measurements on DEM construction and on the estimate of landslide mass volume from the constructed DEM.
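The defining feature of a multiplicative error model, that the residual spread scales with the true value rather than staying constant, can be illustrated with a small simulation. The baseline lengths and error factor below are invented for the sketch and are not taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(42)
true_lengths = np.array([100.0, 1_000.0, 10_000.0])   # e.g. baselines in metres
k = 1e-4                                              # proportional error factor

# Multiplicative model: L = mu * (1 + k * eps), eps ~ N(0, 1).
obs = true_lengths * (1.0 + k * rng.standard_normal((5000, 3)))

# The absolute error std grows with the true value, so std / mu is constant:
emp_std = obs.std(axis=0)
ratio = emp_std / true_lengths                        # each entry approx. k
```

An additive-error adjustment would weight all three baselines equally; a multiplicative model downweights the long baselines, whose absolute errors are a hundred times larger here.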

  7. Modeling and simulation of M/M/c queuing pharmacy system with adjustable parameters

    Science.gov (United States)

    Rashida, A. R.; Fadzli, Mohammad; Ibrahim, Safwati; Goh, Siti Rohana

    2016-02-01

    This paper presents discrete event simulation (DES) as computer-based modelling that imitates the real system of a pharmacy unit. M/M/c queuing theory is used to model and analyse the characteristics of the queuing system at the pharmacy unit of Hospital Tuanku Fauziah, Kangar, in Perlis, Malaysia. The input of this model is based on statistical data collected over 20 working days in June 2014. Currently, patient waiting time at the pharmacy unit is more than 15 minutes. The actual operation of the pharmacy unit is a mixed queuing system following an M/M/2 model, with the pharmacists as the servers. The DES approach and ProModel simulation software are used to simulate the queuing model and to propose improvements to the queuing system at this pharmacy. The waiting time for each server is analysed; Counters 3 and 4 have the highest waiting times, 16.98 and 16.73 minutes respectively. Three scenarios, M/M/3, M/M/4 and M/M/5, are simulated, and the waiting times of the actual and experimental queuing models are compared. The simulation results show that adding a server (pharmacist) reduces patient waiting time by a reasonable margin: average patient waiting time falls by almost 50% when one pharmacist is added. However, it is not necessary to fully staff all counters, because even though M/M/4 and M/M/5 produce further reductions in patient waiting time, they are ineffective since Counter 5 is rarely used.
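The steady-state waits that such a simulation estimates can be cross-checked analytically with the Erlang C formula for an M/M/c queue. The arrival and service rates below are hypothetical, not the hospital's measured values:

```python
import math

def erlang_c(c: int, lam: float, mu: float) -> float:
    """Probability that an arrival must wait in an M/M/c queue (Erlang C)."""
    a = lam / mu                        # offered load in Erlangs
    rho = a / c                         # per-server utilization, must be < 1
    assert rho < 1, "unstable queue: need lam < c * mu"
    below = sum(a**k / math.factorial(k) for k in range(c))
    top = a**c / (math.factorial(c) * (1.0 - rho))
    return top / (below + top)

def mean_wait(c: int, lam: float, mu: float) -> float:
    """Mean time in queue: Wq = P(wait) / (c*mu - lam)."""
    return erlang_c(c, lam, mu) / (c * mu - lam)

# Hypothetical rates: 6 patients/hour arriving, each pharmacist serves 4/hour.
w2 = mean_wait(2, 6.0, 4.0)   # two pharmacists on duty
w3 = mean_wait(3, 6.0, 4.0)   # adding a third sharply cuts the queueing delay
```

With these invented rates the mean queueing delay drops from about 19 minutes with two servers to under 3 minutes with three, the same qualitative effect the simulation study reports, and it shows the diminishing returns of adding a fourth or fifth counter.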

  8. Transmission History Based Distributed Adaptive Contention Window Adjustment Algorithm Cooperating with Automatic Rate Fallback for Wireless LANs

    Science.gov (United States)

    Ogawa, Masakatsu; Hiraguri, Takefumi; Nishimori, Kentaro; Takaya, Kazuhiro; Murakawa, Kazuo

    This paper proposes and investigates a distributed adaptive contention window adjustment algorithm for wireless LANs based on transmission history, called the transmission-history-based distributed adaptive contention window adjustment (THAW) algorithm. The objective is to reduce the transmission delay and improve the channel throughput compared to conventional algorithms. The feature of THAW is that it adaptively adjusts the initial contention window (CWinit) size in the binary exponential backoff (BEB) algorithm used in the IEEE 802.11 standard according to the transmission history and the automatic rate fallback (ARF) algorithm, which is the most basic automatic rate control. The effect is to keep CWinit at a high value in a congested state. Simulation results show that the THAW algorithm outperforms conventional algorithms in terms of channel throughput and delay, even if the timer in the ARF is changed.
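A rough sketch of history-based CWinit adaptation layered on binary exponential backoff: the window bounds, thresholds, and update rule here are invented for illustration and are not the published THAW algorithm:

```python
import random

CW_MIN, CW_MAX = 16, 1024

class HistoryBasedCw:
    """Illustrative station state: raise CWinit while the recent transmission
    history shows collisions (congestion) and fall back toward CW_MIN once
    the channel clears, instead of resetting after every success as plain
    BEB does."""

    def __init__(self):
        self.cw_init = CW_MIN
        self.history = []             # last n transmission outcomes

    def record(self, success: bool, n: int = 16):
        """Log one transmission outcome and adapt CWinit to the failure rate."""
        self.history = (self.history + [success])[-n:]
        failure_rate = 1.0 - sum(self.history) / len(self.history)
        if failure_rate > 0.25:                       # congested: back off more
            self.cw_init = min(self.cw_init * 2, CW_MAX)
        elif failure_rate < 0.05:                     # clear: relax the window
            self.cw_init = max(self.cw_init // 2, CW_MIN)

    def backoff_slots(self) -> int:
        """Uniform backoff draw from the current initial contention window."""
        return random.randint(0, self.cw_init - 1)
```

Keeping CWinit high while the history is collision-heavy is the qualitative behavior the abstract attributes to THAW; the exact coupling to ARF rate decisions is omitted here.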

  9. DaMoScope and its internet graphics for the visual control of adjusting mathematical models describing experimental data

    Energy Technology Data Exchange (ETDEWEB)

    Belousov, V. I.; Ezhela, V. V.; Kuyanov, Yu. V., E-mail: Yu.Kuyanov@gmail.com; Tkachenko, N. P. [Institute for High Energy Physics, National Research Center Kurchatov Institute, COMPAS Group (Russian Federation)

    2015-12-15

    The experience of using the dynamic atlas of experimental data and the mathematical models describing them in problems of adjusting parametric models of observables as functions of kinematic variables is presented. The capability to display a large number of experimental data points together with the models describing them is shown by examples of data and models for observables determined by the amplitudes of elastic hadron scattering. The Internet implementation of the interactive tool DaMoScope and its interface with the experimental data and the codes of the adjusted parametric models, with the parameters of the best description of the data, are schematically shown. The DaMoScope codes are freely available.

  10. School-Based Racial and Gender Discrimination among African American Adolescents: Exploring Gender Variation in Frequency and Implications for Adjustment

    Science.gov (United States)

    Chavous, Tabbye M.; Griffin, Tiffany M.

    2012-01-01

    The present study examined school-based racial and gender discrimination experiences among African American adolescents in Grade 8 (n = 204 girls; n = 209 boys). A primary goal was exploring gender variation in the frequency of both types of discrimination and in the associations of discrimination with academic and psychological functioning among girls and boys. Girls and boys did not vary in reported racial discrimination frequency, but boys reported more gender discrimination experiences. Multiple regression analyses within gender groups indicated that among both girls and boys, racial discrimination and gender discrimination predicted depressive symptoms and school importance, and racial discrimination predicted self-esteem. Racial and gender discrimination were also negatively associated with grade point average among boys but were not significantly associated in the girls' analyses. Significant gender discrimination × racial discrimination interactions emerged in the girls' models predicting psychological outcomes and in the boys' models predicting academic achievement. Taken together, the findings suggest the importance of considering gender- and race-related experiences in understanding academic and psychological adjustment among African American adolescents. PMID:22837794

  11. Melt-processable hydrophobic acrylonitrile-based copolymer systems with adjustable elastic properties designed for biomedical applications.

    Science.gov (United States)

    Cui, J; Trescher, K; Kratz, K; Jung, F; Hiebl, B; Lendlein, A

    2010-01-01

    Acrylonitrile-based polymer systems (PAN) have been comprehensively explored as versatile biomaterials with various potential biomedical applications, such as membranes for extracorporeal devices or matrices for guided skin reconstruction. The surface properties (e.g., hydrophilicity or charges) of such materials can be tailored over a wide range by variation of molecular parameters such as different co-monomers or their sequence structure. Some of these materials show interesting biofunctionalities, such as a capability for selective cell cultivation. So far, the majority of AN-based copolymers investigated in physiological environments have been processed from solution (e.g., as membranes), as these materials are thermosensitive and might degrade when heated. In this work we aimed at the synthesis of hydrophobic, melt-processable AN-based copolymers with adjustable elastic properties for the preparation of model scaffolds with controlled pore geometry and size. For this purpose a series of copolymers of acrylonitrile and n-butyl acrylate (nBA) was synthesized via a free-radical copolymerization technique. The content of nBA in the copolymers varied from 45 wt% to 70 wt%, as confirmed by 1H-NMR spectroscopy. The glass transition temperatures (Tg) of the P(AN-co-nBA) copolymers determined by differential scanning calorimetry (DSC) decreased from 58 degrees C to 20 degrees C with increasing nBA content, in excellent agreement with the prediction of the Gordon-Taylor equation based on the Tgs of the homopolymers. The Young's modulus obtained in tensile tests decreased significantly with rising nBA content, from 1062 MPa to 1.2 MPa. All copolymers could be successfully processed from the melt at processing temperatures ranging from 50 degrees C to 170 degrees C, with thermally induced decomposition observed only at temperatures above 320 degrees C in thermogravimetric analysis (TGA). Finally, the melt processed P
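The Gordon-Taylor prediction invoked above can be sketched as a one-line mixing rule. The homopolymer Tg values and the fitting constant k below are illustrative assumptions, not the paper's fitted values:

```python
def gordon_taylor_tg(w2: float, tg1: float, tg2: float, k: float) -> float:
    """Gordon-Taylor mixing rule for the glass transition of a copolymer:
    Tg = (w1*Tg1 + k*w2*Tg2) / (w1 + k*w2), with mass fractions w1 = 1 - w2.
    Temperatures in kelvin; k is an empirical fitting constant."""
    w1 = 1.0 - w2
    return (w1 * tg1 + k * w2 * tg2) / (w1 + k * w2)

# Illustrative inputs: Tg(PAN) ~ 378 K, Tg(PnBA) ~ 224 K, k chosen arbitrarily.
tg_45 = gordon_taylor_tg(0.45, 378.0, 224.0, 0.3)   # 45 wt% nBA
tg_70 = gordon_taylor_tg(0.70, 378.0, 224.0, 0.3)   # 70 wt% nBA
```

The rule interpolates monotonically between the two homopolymer Tgs, reproducing the trend in the abstract of a falling copolymer Tg as the soft nBA fraction grows.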

  12. Rejection, Feeling Bad, and Being Hurt: Using Multilevel Modeling to Clarify the Link between Peer Group Aggression and Adjustment

    Science.gov (United States)

    Rulison, Kelly L.; Gest, Scott D.; Loken, Eric; Welsh, Janet A.

    2010-01-01

    The association between affiliating with aggressive peers and behavioral, social and psychological adjustment was examined. Students initially in 3rd, 4th, and 5th grade (N = 427) were followed biannually through 7th grade. Students' peer-nominated groups were identified. Multilevel modeling was used to examine the independent contributions of…

  13. Patterns of Children's Adrenocortical Reactivity to Interparental Conflict and Associations with Child Adjustment: A Growth Mixture Modeling Approach

    Science.gov (United States)

    Koss, Kalsea J.; George, Melissa R. W.; Davies, Patrick T.; Cicchetti, Dante; Cummings, E. Mark; Sturge-Apple, Melissa L.

    2013-01-01

    Examining children's physiological functioning is an important direction for understanding the links between interparental conflict and child adjustment. Utilizing growth mixture modeling, the present study examined children's cortisol reactivity patterns in response to a marital dispute. Analyses revealed three different patterns of cortisol…

  15. A Proportional Hazards Regression Model for the Subdistribution with Covariates-adjusted Censoring Weight for Competing Risks Data

    DEFF Research Database (Denmark)

    He, Peng; Eriksson, Frank; Scheike, Thomas H.

    2016-01-01

    With competing risks data, one often needs to assess the treatment and covariate effects on the cumulative incidence function. Fine and Gray proposed a proportional hazards regression model for the subdistribution of a competing risk under the assumption that the censoring distribution and the covariates are independent. Covariate-dependent censoring sometimes occurs in medical studies. In this paper, we study the proportional hazards regression model for the subdistribution of a competing risk with proper adjustments for covariate-dependent censoring. We consider a covariate-adjusted weight function by fitting the Cox model for the censoring distribution and using the predictive probability for each individual. Our simulation study shows that the covariate-adjusted weight estimator is basically unbiased when the censoring time depends on the covariates…
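The weighting idea can be illustrated with the unadjusted inverse-probability-of-censoring weights (IPCW) of the original Fine-Gray estimator, where the censoring survivor function G is estimated by Kaplan-Meier; the covariate-adjusted version described in the abstract would replace this Kaplan-Meier step with a Cox model fit (not shown). Data and helper names are hypothetical:

```python
import numpy as np

# Kaplan-Meier estimate of the censoring survivor function G(t):
# for G, a "censoring event" is an observation with event == 0.
def km_censoring_survival(times, event):
    order = np.argsort(times)
    t, e = times[order], event[order]
    at_risk = len(t) - np.arange(len(t))
    factors = np.where(e == 0, 1.0 - 1.0 / at_risk, 1.0)
    return t, np.cumprod(factors)

def ipcw(times, event):
    """Weight 1/G(T-) for each failure, 0 for censored observations."""
    t_sorted, G = km_censoring_survival(times, event)
    G_left = np.concatenate([[1.0], G[:-1]])   # left limit G(t-)
    lookup = dict(zip(t_sorted, G_left))
    return np.array([1.0 / lookup[ti] if ei else 0.0
                     for ti, ei in zip(times, event)])

times = np.array([2.0, 3.0, 5.0, 7.0, 8.0])   # toy follow-up times
event = np.array([1,   0,   1,   0,   1])      # 1 = failure, 0 = censored
print(ipcw(times, event).round(3))
```

Late failures receive larger weights because more of their comparison group has already been censored away.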

  16. Adolescent Sibling Relationship Quality and Adjustment: Sibling Trustworthiness and Modeling, as Factors Directly and Indirectly Influencing These Associations

    Science.gov (United States)

    Gamble, Wendy C.; Yu, Jeong Jin; Kuehn, Emily D.

    2011-01-01

    The main goal of this study was to examine the direct and moderating effects of trustworthiness and modeling on adolescent siblings' adjustment. Data were collected from 438 families including a mother, a younger sibling in fifth, sixth, or seventh grade (M = 11.6 years), and an older sibling (M = 14.3 years). Respondents completed Web-based…

  17. Web Based VRML Modelling

    NARCIS (Netherlands)

    Kiss, S.

    2001-01-01

    Presents a method to connect VRML (Virtual Reality Modeling Language) and Java components in a Web page using EAI (External Authoring Interface), which makes it possible to interactively generate and edit VRML meshes. The meshes used are based on regular grids, to provide an interaction and modeling

  18. Data Assimilation and Adjusted Spherical Harmonic Model of VTEC Map over Thailand

    Science.gov (United States)

    Klinngam, Somjai; Maruyama, Takashi; Tsugawa, Takuya; Ishii, Mamoru; Supnithi, Pornchai; Chiablaem, Athiwat

    2016-07-01

    The global navigation satellite system (GNSS) and high frequency (HF) communication are vulnerable to ionospheric irregularities, especially when the signal travels through the low-latitude region and around the magnetic equator, known as the equatorial ionization anomaly (EIA) region. In order to study the ionospheric effects on communications performance in this region, a regional map of the observed total electron content (TEC) can show the characteristics and irregularities of the ionosphere. In this work, we develop a two-dimensional (2D) map of vertical TEC (VTEC) over Thailand using the adjusted spherical harmonic model (ASHM) and a data assimilation technique. We calculate the VTEC from the receiver independent exchange (RINEX) files recorded by dual-frequency global positioning system (GPS) receivers on July 8th, 2012 (a quiet day) at 12 stations around Thailand (latitudes 0° to 25°N and longitudes 95° to 110°E). These stations are managed by the Department of Public Works and Town & Country Planning (DPT), Thailand, the South East Asia Low-latitude Ionospheric Network (SEALION) project operated by the National Institute of Information and Communications Technology (NICT), Japan, and King Mongkut's Institute of Technology Ladkrabang (KMITL). We compute the median observed VTEC (OBS-VTEC) in grids with a spatial resolution of 2.5°x5° in latitude and longitude and a time resolution of 2 hours. We assimilate the OBS-VTEC with the VTEC estimated from the International Reference Ionosphere model (IRI-VTEC) as well as from the ionosphere map exchange (IONEX) files provided by the International GNSS Service (IGS-VTEC). The results show that the estimation of the 15-degree ASHM can be improved when both IRI-VTEC and IGS-VTEC are weighted by latitude-dependent factors before assimilation with the OBS-VTEC. However, the IRI-VTEC assimilation improves the ASHM estimation more than the IGS-VTEC assimilation. Acknowledgment: This work is partially funded by the
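A latitude-weighted assimilation of a background model with observations can be sketched as follows; the weighting function, its parameters, and the sample values are assumptions for illustration, not the factors used in this study:

```python
import numpy as np

# Hypothetical illustration: blend a background model VTEC map (e.g. IRI)
# with sparse observed VTEC on a grid, down-weighting the model at low
# latitudes where the EIA makes climatology less reliable.
def assimilate_vtec(model_vtec, obs_vtec, lat_deg, w0=0.5, scale=15.0):
    """Weighted blend; obs_vtec may contain NaN where no data exist."""
    # Assumed latitude-dependent model weight: zero at the equator,
    # approaching w0 at high latitude (an illustrative functional form).
    w_model = w0 * (1.0 - np.exp(-np.abs(lat_deg) / scale))
    return np.where(np.isnan(obs_vtec),
                    model_vtec,
                    w_model * model_vtec + (1.0 - w_model) * obs_vtec)

lat = np.array([0.0, 10.0, 20.0])       # grid latitudes (deg N)
model = np.array([40.0, 35.0, 25.0])    # background model VTEC (TECU)
obs = np.array([50.0, np.nan, 30.0])    # median observed VTEC (TECU)
print(assimilate_vtec(model, obs, lat))
```

Where observations are missing the blend falls back to the model; where they exist, the model's influence grows with latitude.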

  19. Q-Learning-Based Adjustable Fixed-Phase Quantum Grover Search Algorithm

    Science.gov (United States)

    Guo, Ying; Shi, Wensha; Wang, Yijun; Hu, Jiankun

    2017-02-01

    We demonstrate that the rotation phase can be suitably chosen to increase the efficiency of the phase-based quantum search algorithm, leading to a dynamic balance between iterations and success probabilities of the fixed-phase quantum Grover search algorithm with Q-learning for a given number of solutions. In this search algorithm, the proposed Q-learning algorithm, which is in essence a model-free reinforcement learning strategy, is used to perform a matching between the fraction of marked items λ and the rotation phase α. After establishing the policy function α = π(λ), we complete the fixed-phase Grover algorithm, where the phase parameter is selected via the learned policy. Simulation results show that the Q-learning-based Grover search algorithm (QLGA) requires fewer iterations and yields higher success probabilities. Compared with conventional Grover algorithms, it avoids local optima, thereby enabling success probabilities to approach one.
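The effect of an adjustable rotation phase can be checked with a direct state-vector simulation. This sketch applies the same phase α to both the oracle and the diffusion operator, which is one common fixed-phase variant and not necessarily the exact operator pair used in the QLGA work:

```python
import numpy as np

# Fixed-phase Grover iteration: oracle multiplies marked amplitudes by
# e^{i*alpha}; diffusion is I + (e^{i*alpha} - 1)|s><s| about the uniform
# state |s>. With alpha = pi this reduces to the standard Grover operator.
def grover_success_prob(n_items, marked, alpha, iterations):
    """Probability of measuring a marked item after k iterations."""
    psi = np.full(n_items, 1.0 / np.sqrt(n_items), dtype=complex)
    s = psi.copy()                      # uniform superposition |s>
    mask = np.zeros(n_items, dtype=bool)
    mask[list(marked)] = True
    for _ in range(iterations):
        psi = np.where(mask, np.exp(1j * alpha) * psi, psi)          # oracle
        psi = psi + (np.exp(1j * alpha) - 1.0) * s * (s.conj() @ psi)  # diffusion
    return float(np.sum(np.abs(psi[mask]) ** 2))

# Standard Grover (alpha = pi), N = 4, one marked item, one iteration:
print(round(grover_success_prob(4, [2], np.pi, 1), 6))
```

Sweeping α between 0 and π for a given λ reproduces the iteration/success-probability trade-off that the learned policy α = π(λ) exploits.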

  20. Constraints of GRACE on the Ice Model and Mantle Rheology in Glacial Isostatic Adjustment Modeling in North-America

    Science.gov (United States)

    van der Wal, W.; Wu, P.; Sideris, M.; Wang, H.

    2009-05-01

    GRACE satellite data offer homogeneous coverage of the area covered by the former Laurentide ice sheet. The secular gravity rate estimated from the GRACE data can therefore be used to constrain the ice loading history in Laurentide and, to a lesser extent, the mantle rheology in a GIA model. The objective of this presentation is to find a best-fitting global ice model and use it to study how the ice model can be modified to fit a composite rheology, in which creep rates from a linear and a non-linear rheology are added. This is useful because all the ice models constructed from GIA assume that mantle rheology is linear, but creep experiments on rocks show that nonlinear rheology may be the dominant mechanism in some parts of the mantle. We use CSR release 4 solutions from August 2002 to October 2008, with continental water storage effects removed by the GLDAS model and filtering with a destriping and Gaussian filter. The GIA model is a radially symmetric incompressible Maxwell Earth with varying upper and lower mantle viscosity. Gravity rate misfit values are computed for a range of viscosity values with the ICE-3G, ICE-4G and ICE-5G models. The best fit is shown for models with ICE-3G and ICE-4G, and the ICE-4G model is selected for computations with a so-called composite rheology. For the composite rheology, the Coupled Laplace Finite-Element Method is used to compute the GIA response of a spherical self-gravitating incompressible Maxwell Earth. The pre-stress exponent A derived from a uniaxial stress experiment is varied between 3.3 × 10^-34, 3.3 × 10^-35 and 3.3 × 10^-36 Pa^-3 s^-1, the Newtonian viscosity η is varied between 1 and 3 × 10^21 Pa s, and the stress exponent is taken to be 3. Composite rheology in general results in geoid rates that are too small compared to GRACE observations. Therefore, simple modifications of the ICE-4G history are investigated by scaling ice heights or delaying glaciation. It is found that a delay in glaciation is a better way to adjust ice
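The composite rheology adds linear and power-law creep rates. A simplified uniaxial sketch, with parameter values taken from the ranges quoted above and the factor-of-two convention assumed, shows how the dominant mechanism switches with stress:

```python
# Illustrative sketch (uniaxial, simplified): in a composite rheology the
# Newtonian and power-law creep rates are summed. Parameter values are
# assumptions drawn from the ranges quoted in the abstract.
def composite_strain_rate(stress_pa, eta=1e21, A=3.3e-35, n=3):
    linear = stress_pa / (2.0 * eta)    # Newtonian (Maxwell) creep
    nonlinear = A * stress_pa ** n      # power-law creep
    return linear + nonlinear

# At low stress the linear term dominates; at high stress the power law wins.
for sigma in (1e5, 1e7):
    lin_frac = (sigma / (2.0 * 1e21)) / composite_strain_rate(sigma)
    print(f"stress = {sigma:.0e} Pa, linear fraction = {lin_frac:.3f}")
```

This stress dependence is why a composite rheology responds differently to the same ice load than a purely linear Maxwell model, motivating the ice-history adjustments described above.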

  1. Hypertension: Development of a prediction model to adjust self-reported hypertension prevalence at the community level

    Directory of Open Access Journals (Sweden)

    Mentz Graciela

    2012-09-01

    Full Text Available Abstract Background Accurate estimates of hypertension prevalence are critical for the assessment of population health and for planning and implementing prevention and health care programs. While self-reported data are often more economically feasible and readily available than clinically measured HBP, these reports may underestimate clinical prevalence to varying degrees. Understanding the accuracy of self-reported data and developing prediction models that correct for underreporting of hypertension in self-reported data can be critical tools in the development of more accurate population-level estimates, and in planning population-based interventions to reduce the risk of, or more effectively treat, hypertension. This study examines the accuracy of self-reported survey data in describing the prevalence of clinically measured hypertension in two racially and ethnically diverse urban samples, and evaluates a mechanism to correct self-reported data in order to more accurately reflect clinical hypertension prevalence. Methods We analyze data from the Detroit Healthy Environments Partnership (HEP) Survey conducted in 2002 and the National Health and Nutrition Examination Survey (NHANES) 2001–2002, restricted to urban areas and participants 25 years and older. We re-calibrate measures of agreement within the HEP sample drawing upon parameter estimates derived from the NHANES urban sample, and assess the quality of the proposed adjustment within the HEP sample. Results Both self-reported and clinically assessed prevalence of hypertension were higher in the HEP sample (29.7% and 40.1%, respectively) than in the NHANES urban sample (25.7% and 33.8%, respectively). In both urban samples, self-reported and clinically assessed prevalence was higher than that reported in the full NHANES sample in the same year (22.9% and 30.4%, respectively). Sensitivity, specificity and accuracy between clinical and self-reported hypertension prevalence were 'moderate to good' within
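A standard textbook correction of an apparent prevalence for known sensitivity and specificity is the Rogan-Gladen estimator, shown here as a generic illustration of the recalibration idea rather than the prediction model developed in this study; the sensitivity and specificity values are assumed:

```python
# Rogan-Gladen correction of an apparent (e.g. self-reported) prevalence
# given the instrument's sensitivity and specificity. Generic illustration;
# NOT the model fitted in the HEP/NHANES study.
def rogan_gladen(apparent_prev, sensitivity, specificity):
    adjusted = (apparent_prev + specificity - 1.0) / (sensitivity + specificity - 1.0)
    return min(1.0, max(0.0, adjusted))  # clip to a valid proportion

# E.g. a self-reported prevalence of 29.7% with assumed 75% sensitivity
# and 95% specificity implies a noticeably higher corrected prevalence.
print(round(rogan_gladen(0.297, 0.75, 0.95), 3))
```

The lower the sensitivity of self-report, the larger the upward correction, which is the pattern the abstract describes for underreported hypertension.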

  2. Capital and Portfolio Risk under Solvency Regulation: An Analysis Based on a Partial Adjustment Model of Property-Liability Companies%偿付能力监管下的资本与组合风险——基于产险公司局部联立调整模型的分析

    Institute of Scientific and Technical Information of China (English)

    王丽珍; 李秀芳

    2012-01-01

    Since 2007, capital increases and stock issuance have become a trend among insurance companies in China: the required capital size is expanding, and the frequency of capital increases is rising. Although this phenomenon is inevitable, owing to the costliness and scarcity of capital, both the insurance industry and the China Insurance Regulatory Commission have paid close attention to the upsurge in capital raising. Furthermore, if insurance companies cannot raise capital in time, they will not operate normally, which endangers social stability and the interests of insurance consumers. It is therefore meaningful to study capital under solvency regulation at this special stage of development. Capital is held to withstand unexpected losses; in this sense, the adjustment of capital corresponds to the risk level. Thus, we combine capital with risk to study the development of China's property-liability companies. Following the research paradigm of banking studies on capital structure and portfolio risk, we apply a partial adjustment model to the insurance sector. Using panel data on 34 property-liability insurance companies, this paper estimates a simultaneous equations model by three-stage least squares (3SLS) and examines capital determination and portfolio risk under solvency regulation. In addition, we consider other factors within the research framework, such as the structure of business lines, asset scale, the reinsurance ratio, and the return on assets. Robustness tests on five types of subsamples under two broad headings also present consistent results. Our key findings include four aspects. First of all, we find that well-capitalized insurers increase capital faster than under-capitalized insurers, which differs from the situation in America. This result implies that, on account of the rapid development of the insurance industry currently, insurance

  3. Architectures and Evaluation for Adjustable Control Autonomy for Space-Based Life Support Systems

    Science.gov (United States)

    Malin, Jane T.; Schreckenghost, Debra K.

    2001-01-01

    In the past five years, a number of automation applications for control of crew life support systems have been developed and evaluated in the Adjustable Autonomy Testbed at NASA's Johnson Space Center. This paper surveys progress on an adjustable autonomous control architecture for situations where software and human operators work together to manage anomalies and other system problems. When problems occur, the level of control autonomy can be adjusted, so that operators and software agents can work together on diagnosis and recovery. In 1997 adjustable autonomy software was developed to manage gas transfer and storage in a closed life support test. Four crewmembers lived and worked in a chamber for 91 days, with both air and water recycling. CO2 was converted to O2 by gas processing systems and wheat crops. With the automation software, significantly fewer hours were spent monitoring operations. System-level validation testing of the software by interactive hybrid simulation revealed problems both in software requirements and implementation. Since that time, we have been developing multi-agent approaches for automation software and human operators, to cooperatively control systems and manage problems. Each new capability has been tested and demonstrated in realistic dynamic anomaly scenarios, using the hybrid simulation tool.

  4. Seat Adjustment Design of an Intelligent Robotic Wheelchair Based on the Stewart Platform

    Directory of Open Access Journals (Sweden)

    Po Er Hsu

    2013-03-01

    Full Text Available A wheelchair user makes direct contact with the wheelchair seat, which serves as the interface between the user and the wheelchair, for much of any given day. Seat adjustment design is therefore of crucial importance in providing proper seating posture and comfort. This paper presents a multiple-DOF (degrees of freedom) seat adjustment mechanism, which is intended to increase the independence of the wheelchair user while maintaining a concise structure, light weight, and intuitive control interface. This four-axis Stewart platform is capable of heaving, pitching, and swaying to provide seat elevation, tilt-in-space, and sideways movement functions. The geometry and joint types of this mechanism are carefully arranged so that only one actuator needs to be controlled, enabling the wheelchair user to adjust the seat by simply pressing a button. The seat is also equipped with soft pressure-sensing pads that provide pressure management by adjusting the seat mechanism once continuous and concentrated pressure is detected. Finally, compared with a manual wheelchair, the proposed mechanism demonstrated easier and more convenient operation with less effort for transfer assistance.

  5. Supervision of Teachers Based on Adjusted Arithmetic Learning in Special Education

    Science.gov (United States)

    Eriksson, Gota

    2008-01-01

    This article reports on 20 children's learning in arithmetic after teaching was adjusted to their conceptual development. The report covers periods from three months up to three terms in an ongoing intervention study of teachers and children in schools for the intellectually disabled and of remedial teaching in regular schools. The researcher…

  6. A Structural Equation Modeling Approach to the Study of Stress and Psychological Adjustment in Emerging Adults

    Science.gov (United States)

    Asberg, Kia K.; Bowers, Clint; Renk, Kimberly; McKinney, Cliff

    2008-01-01

    Today's society puts constant demands on the time and resources of all individuals, with the resulting stress promoting a decline in psychological adjustment. Emerging adults are not exempt from this experience, with an alarming number reporting excessive levels of stress and stress-related problems. As a result, the present study addresses the…

  7. A Key Challenge in Global HRM: Adding New Insights to Existing Expatriate Spouse Adjustment Models

    Science.gov (United States)

    Gupta, Ritu; Banerjee, Pratyush; Gaur, Jighyasu

    2012-01-01

    This study is an attempt to strengthen the existing knowledge about factors affecting the adjustment process of the trailing expatriate spouse and the subsequent impact of any maladjustment or expatriate failure. We conducted a qualitative enquiry using grounded theory methodology with 26 Indian spouses who had to deal with their partner's…

  8. Glacial isostatic adjustment associated with the Barents Sea ice sheet: A modelling inter-comparison

    Science.gov (United States)

    Auriac, A.; Whitehouse, P. L.; Bentley, M. J.; Patton, H.; Lloyd, J. M.; Hubbard, A.

    2016-09-01

    The 3D geometrical evolution of the Barents Sea Ice Sheet (BSIS), particularly during its late-glacial retreat phase, remains largely ambiguous due to the paucity of direct marine- and terrestrial-based evidence constraining its horizontal and vertical extent and chronology. One way of validating the numerous BSIS reconstructions previously proposed is to collate and apply them under a wide range of Earth models and to compare prognostic (isostatic) output through time with known relative sea-level (RSL) data. Here we compare six contrasting BSIS load scenarios via a spherical Earth system model and derive a best-fit χ2 parameter using RSL data from the four main terrestrial regions within the domain: Svalbard, Franz Josef Land, Novaya Zemlya and northern Norway. Poor χ2 values allow two load scenarios to be dismissed, leaving four that agree well with RSL observations. The remaining four scenarios optimally fit the RSL data when combined with Earth models that have an upper mantle viscosity of 0.2-2 × 10^21 Pa s, while there is less sensitivity to the lithosphere thickness (ranging from 71 to 120 km) and lower mantle viscosity (spanning 1-50 × 10^21 Pa s). GPS observations are also compared with predictions of present-day uplift across the Barents Sea. Key locations where relative sea-level and GPS data would prove critical in constraining future ice-sheet modelling efforts are also identified.

  9. Filling Gaps in the Acculturation Gap-Distress Model: Heritage Cultural Maintenance and Adjustment in Mexican-American Families.

    Science.gov (United States)

    Telzer, Eva H; Yuen, Cynthia; Gonzales, Nancy; Fuligni, Andrew J

    2016-07-01

    The acculturation gap-distress model purports that immigrant children acculturate faster than do their parents, resulting in an acculturation gap that leads to family and youth maladjustment. However, empirical support for the acculturation gap-distress model has been inconclusive. In the current study, 428 Mexican-American adolescents (50.2 % female) and their primary caregivers independently completed questionnaires assessing their levels of American and Mexican cultural orientation, family functioning, and youth adjustment. Contrary to the acculturation gap-distress model, acculturation gaps were not associated with poorer family or youth functioning. Rather, adolescents with higher levels of Mexican cultural orientations showed positive outcomes, regardless of their parents' orientations to either American or Mexican cultures. Findings suggest that youths' heritage cultural maintenance may be most important for their adjustment.

  10. The Nonlinear Dynamic Adjustment of China's Interest Rate Term Structure: A Study Based on MS-VECM Model%中国利率期限结构的非线性动态调整:基于MS—VECM模型的研究途径

    Institute of Scientific and Technical Information of China (English)

    孙皓; 俞来雷

    2012-01-01

    Based on an MS-VECM model, this paper conducts an empirical study of the nonlinear dynamic process of China's term structure of interest rates under the adjustment of the expectation theory. The results indicate that the expectation theory holds for China's term structure of interest rates. The term structure of interest rates in China has the nonlinear dynamic characteristics of two regimes, which can be described as a "strong adjustment regime" and a "weak adjustment regime" according to the intensity of adjustment of the expectation theory. The average interest rate changes and the average risk premium level at different terms change along with the regimes (regime dependence), and transitions between regimes are asymmetric. The nonlinear regime classification of the term structure of interest rates is similar to that of price pressure; price fluctuation is an important reason for the nonlinear dynamic changes of the term structure of interest rates.

  11. How do attachment dimensions affect bereavement adjustment? A mediation model of continuing bonds.

    Science.gov (United States)

    Yu, Wei; He, Li; Xu, Wei; Wang, Jianping; Prigerson, Holly G

    2016-04-30

    The current study aims to examine mechanisms underlying the impact of attachment dimensions on bereavement adjustment. Bereaved mainland Chinese participants (N=247) completed anonymous, retrospective, self-report surveys assessing attachment dimensions, continuing bonds (CB), grief symptoms and posttraumatic growth (PTG). Results demonstrated that attachment anxiety predicted grief symptoms via externalized CB and predicted PTG via internalized CB at the same time, whereas attachment avoidance positively predicted grief symptoms via externalized CB but negatively predicted PTG directly. Findings suggested that individuals with a high level of attachment anxiety could both suffer from grief and obtain posttraumatic growth after loss, but it depended on which kind of CB they used. By contrast, attachment avoidance was associated with a heightened risk of maladaptive bereavement adjustment. Future grief therapy may encourage the bereaved to establish CB with the deceased and gradually shift from externalized CB to internalized CB.
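The mediation structure tested in such models (indirect effect = a × b, with a bootstrap confidence interval) can be sketched on simulated data; the variable roles and effect sizes are assumptions for illustration:

```python
import numpy as np

# Simulated mediation example: X -> M -> Y with a direct X -> Y path.
# Roles are hypothetical stand-ins: X ~ attachment anxiety,
# M ~ externalized continuing bonds, Y ~ grief symptoms.
rng = np.random.default_rng(0)
n = 500
x = rng.normal(size=n)
m = 0.5 * x + rng.normal(size=n)             # a-path ~ 0.5 (assumed)
y = 0.4 * m + 0.1 * x + rng.normal(size=n)   # b-path ~ 0.4 (assumed)

def indirect_effect(x, m, y):
    """Product-of-coefficients estimate a*b via two least-squares fits."""
    a = np.polyfit(x, m, 1)[0]                       # slope of M on X
    design = np.column_stack([m, x, np.ones_like(x)])
    b = np.linalg.lstsq(design, y, rcond=None)[0][0]  # slope of Y on M | X
    return a * b

boot = []
for _ in range(500):                         # nonparametric bootstrap
    idx = rng.integers(0, n, n)
    boot.append(indirect_effect(x[idx], m[idx], y[idx]))
lo, hi = np.percentile(boot, [2.5, 97.5])
print(f"indirect effect ~ {indirect_effect(x, m, y):.2f}, 95% CI [{lo:.2f}, {hi:.2f}]")
```

A CI excluding zero is the usual evidence for mediation; in the study's terms, a positive a × b through externalized CB would carry the effect of attachment anxiety onto grief.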

  12. A Generic Model for Relative Adjustment Between Optical Sensors Using Rigorous Orbit Mechanics

    Directory of Open Access Journals (Sweden)

    B. Islam

    2008-06-01

    Full Text Available Classical calibration, or space resection, is a fundamental task in photogrammetry. A lack of sufficient knowledge of the interior and exterior orientation parameters leads to unreliable results in the photogrammetric process. One of the earliest approaches used in photogrammetry was the plumb-line calibration method. This method is suitable for recovering the radial and decentering lens distortion coefficients, while the remaining interior (focal length and principal point coordinates) and exterior orientation parameters have to be determined by a complementary method. As the lens distortion is very small, it is not considered among the interior orientation parameters in the present rigorous sensor model. There are several other available methods based on the photogrammetric collinearity equations, which consider the determination of exterior orientation parameters, with no mention of the simultaneous determination of interior orientation parameters. Normal space resection methods solve the problem using control points whose coordinates are known in both the image and object reference systems. The non-linearity of the model, the problems of point location in digital images, and identifying enough GPS-measured control points are the main drawbacks of the classical approaches. This paper presents a mathematical model based on the fundamental assumption of collinearity of three points of two along-track stereo imagery sensors and an independent object point. Assuming this condition, it is possible to extract the exterior orientation (EO) parameters for a long strip and a single image together, both without and with control points. Moreover, after extracting the EO parameters, the accuracy of the satellite data products is compared using a single control point and no control points.
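For reference, the photogrammetric collinearity equations the abstract builds on take the standard textbook form (the classical single-image condition, not the paper's extended along-track model):

```latex
% Collinearity equations: image coordinates (x, y) of a ground point
% (X, Y, Z) seen from a perspective centre (X_S, Y_S, Z_S), with rotation
% matrix R = (r_{ij}) and focal length f.
x = -f \,\frac{r_{11}(X - X_S) + r_{12}(Y - Y_S) + r_{13}(Z - Z_S)}
              {r_{31}(X - X_S) + r_{32}(Y - Y_S) + r_{33}(Z - Z_S)},
\qquad
y = -f \,\frac{r_{21}(X - X_S) + r_{22}(Y - Y_S) + r_{23}(Z - Z_S)}
              {r_{31}(X - X_S) + r_{32}(Y - Y_S) + r_{33}(Z - Z_S)}
```

The six EO parameters are the perspective-centre coordinates and the three rotation angles inside R; space resection solves these equations for them from known control points.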

  13. Business cycle effects on portfolio credit risk: A simple FX Adjustment for a factor model

    OpenAIRE

    Sokolov, Yuri

    2010-01-01

    The recent economic crisis on the demand side of the economy affects the trends and volatilities of the exchange rates as well as the operating conditions of borrowers in emerging market economies. But exchange rate depreciation creates both winners and losers. With a weaker exchange rate, exporters and net holders of foreign assets will benefit; conversely, those relying on imports and net debtors in foreign currency will be hurt. This paper presents a simple FX adjustment framewor...

  14. Adjusting eptifibatide doses for renal impairment: a model of dosing agreement among various methods of estimating creatinine clearance.

    Science.gov (United States)

    Healy, Martha F; Speroni, Karen Gabel; Eugenio, Kenneth R; Murphy, Patricia M

    2012-04-01

    Because of the renal elimination and increased risk for bleeding events at supratherapeutic doses of eptifibatide, the manufacturer recommends dosing adjustment in patients with renal dysfunction. Methods commonly used to estimate renal dysfunction in hospital settings may be inconsistent with those studied and recommended by the manufacturer. To compare hypothetical renal dosing adjustments of eptifibatide using both the recommended method and several other commonly used formulas for estimating kidney function. Sex, age, weight, height, serum creatinine, and estimated glomerular filtration rate (eGFR) were obtained retrospectively from the records of patients who received eptifibatide during a 12-month period. Renal dosing decisions were determined for each patient based on creatinine clearance (CrCl) estimates via the Cockcroft-Gault formula (CG) with actual body weight (ABW), ideal body weight (IBW) or adjusted weight (ADJW), and eGFR from the Modification of Diet in Renal Disease formula. Percent agreement and Cohen κ were calculated comparing dosing decisions for each formula to the standard CG-ABW. In this analysis of 179 patients, percent agreement as compared to CG-ABW varied (CG-IBW: 90.50%, CG-ADJW: 95.53%, and eGFR: 93.30%). All κ coefficients were categorized as good. In the 20% of patients receiving an adjusted dose by any of the methods, 68.6% could have received a dose different from that determined using the CG-ABW formula. In the patients with renal impairment (CrCl <50 mL/min) in this study, two thirds would have received an unnecessary 50% dose adjustment discordant from the manufacturer's recommendation. Because failure to adjust eptifibatide doses in patients with renal impairment has led to increased bleeding events, practitioners may be inclined to err on the side of caution. However, studies have shown that suboptimal doses of eptifibatide lead to suboptimal outcomes. Therefore, correct dosing of eptifibatide is important to both patient
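The dosing comparison hinges on the Cockcroft-Gault estimate of creatinine clearance and the label's 50 mL/min threshold. A minimal sketch follows; the infusion rates reflect commonly cited label values and should be treated as illustrative, not as dosing guidance:

```python
# Cockcroft-Gault creatinine clearance (mL/min): weight in kg, serum
# creatinine in mg/dL; result multiplied by 0.85 for female patients.
def cockcroft_gault(age, weight_kg, scr_mg_dl, female):
    crcl = (140 - age) * weight_kg / (72.0 * scr_mg_dl)
    return crcl * 0.85 if female else crcl

# Illustrative eptifibatide maintenance infusion (mcg/kg/min): the label's
# renal adjustment halves the rate below CrCl 50 mL/min.
def eptifibatide_infusion_rate(crcl_ml_min):
    return 1.0 if crcl_ml_min < 50 else 2.0

crcl = cockcroft_gault(age=78, weight_kg=60, scr_mg_dl=1.4, female=True)
print(round(crcl, 1), eptifibatide_infusion_rate(crcl))
```

Substituting ideal or adjusted body weight, or an MDRD eGFR, into the first step is exactly the source of the dosing disagreements the study quantifies.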

  15. Mistral project: identification and parameter adjustment. Theoretical part; Projet Mistral: identification et recalage des modeles. Etude theorique

    Energy Technology Data Exchange (ETDEWEB)

    Faille, D.; Codrons, B.; Gevers, M.

    1996-03-01

    This document belongs to the methodological part of the MISTRAL project, which builds a library of power plant models. The model equations are generally obtained from first principles. The parameters, however, are not always easily calculable (at least accurately) from the dimension data. We are therefore investigating the possibility of automatically adjusting the values of those parameters from experimental data. To do so, we must master optimization algorithms and techniques for analyzing the model structure, such as identifiability theory. (authors). 7 refs., 1 fig., 1 append.

  16. Node Self-Deployment Algorithm Based on an Uneven Cluster with Radius Adjusting for Underwater Sensor Networks

    Directory of Open Access Journals (Sweden)

    Peng Jiang

    2016-01-01

    Full Text Available Existing move-restricted node self-deployment algorithms are based on a fixed node communication radius, evaluate performance in terms of network coverage or connectivity rate, and do not consider the number of nodes near the sink node or the energy consumption distribution of the network topology, thereby degrading network reliability and the energy consumption balance. Therefore, we propose a distributed underwater node self-deployment algorithm. First, each node begins uneven clustering based on distance on the water surface. Each cluster head node selects its next-hop node to synchronously construct a connected path to the sink node. Second, the cluster head node adjusts its depth while maintaining the layout formed by the uneven clustering, and then adjusts the positions of in-cluster nodes. The algorithm innovatively considers network reliability and the energy consumption balance during node deployment, and takes into account the coverage redundancy rate of all positions that a node may reach during position adjustment. Simulation results show that, compared to the connected dominating set (CDS)-based depth computation algorithm, the proposed algorithm can increase the number of nodes near the sink node and improve network reliability while guaranteeing the network connectivity rate. Moreover, it can balance energy consumption during network operation, further improve the network coverage rate, and reduce energy consumption.

  17. Convective moisture adjustment time scale as a key factor in regulating model amplitude of the Madden-Julian Oscillation

    Science.gov (United States)

    Jiang, Xianan; Zhao, Ming; Maloney, Eric D.; Waliser, Duane E.

    2016-10-01

    Despite its pronounced impacts on weather extremes worldwide, the Madden-Julian Oscillation (MJO) remains poorly represented in climate models. Here we present findings that point to some necessary ingredients to produce a strong MJO amplitude in a large set of model simulations from a recent model intercomparison project. While surface flux and radiative heating anomalies are considered important for amplifying the MJO, their strength per unit MJO precipitation anomaly is found to be negatively correlated to MJO amplitude across these multimodel simulations. However, model MJO amplitude is found to be closely tied to a model's convective moisture adjustment time scale, a measure of how rapidly precipitation must increase to remove excess column water vapor, or alternately the efficiency of surface precipitation generation per unit column water vapor anomaly. These findings provide critical insights into key model processes for the MJO and pinpoint a direction for improved model representation of the MJO.
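The convective moisture adjustment time scale can be illustrated as the ratio of the column water vapor anomaly to the precipitation anomaly it drives (an assumed, simplified definition consistent with the description above):

```python
# Sketch of the diagnostic described above (simplified, assumed form): the
# convective moisture adjustment time scale is the MJO-scale column water
# vapor anomaly divided by the precipitation anomaly it induces.
def moisture_adjustment_timescale(cwv_anom_mm, precip_anom_mm_day):
    """Time scale in days; shorter means more efficient rain generation."""
    return cwv_anom_mm / precip_anom_mm_day

# Two hypothetical models with the same moisture anomaly: the one that
# rains harder per unit moisture has the shorter time scale, which the
# study links to stronger simulated MJO amplitude.
for name, precip in (("model A", 4.0), ("model B", 1.0)):
    tau = moisture_adjustment_timescale(2.0, precip)
    print(f"{name}: tau = {tau} days")
```

In the intercomparison described above, models with shorter time scales (more efficient precipitation per unit column water vapor) tended to produce stronger MJO amplitudes.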

  18. Criteria for Selecting and Adjusting Ground-Motion Models for Specific Target Regions: Application to Central Europe and Rock Sites

    Science.gov (United States)

    Cotton, Fabrice; Scherbaum, Frank; Bommer, Julian J.; Bungum, Hilmar

    2006-04-01

    A vital component of any seismic hazard analysis is a model for predicting the expected distribution of ground motions at a site due to possible earthquake scenarios. The limited nature of the datasets from which such models are derived gives rise to epistemic uncertainty in both the median estimates and the associated aleatory variability of these predictive equations. In order to capture this epistemic uncertainty in a seismic hazard analysis, more than one ground-motion prediction equation must be used, and the tool that is currently employed to combine multiple models is the logic tree. Candidate ground-motion models for a logic tree should be selected in order to obtain the smallest possible suite of equations that can capture the expected range of possible ground motions in the target region. This is achieved by starting from a comprehensive list of available equations and then applying criteria for rejecting those considered inappropriate in terms of quality, derivation or applicability. Once the final list of candidate models is established, adjustments must be applied to achieve parameter compatibility. Additional adjustments can also be applied to remove the effect of systematic differences between host and target regions. These procedures are applied to select and adjust ground-motion models for the analysis of seismic hazard at rock sites in West Central Europe. This region is chosen for illustrative purposes particularly because it highlights the issue of using ground-motion models derived from small magnitude earthquakes in the analysis of hazard due to much larger events. Some of the pitfalls of extrapolating ground-motion models from small to large magnitude earthquakes in low seismicity regions are discussed for the selected target region.

  19. Study of dual wavelength composite output of solid state laser based on adjustment of resonator parameters

    Science.gov (United States)

    Wang, Lei; Nie, Jinsong; Wang, Xi; Hu, Yuze

    2016-10-01

    The 1064 nm fundamental wave (FW) and the 532 nm second-harmonic wave (SHW) of the Nd:YAG laser have been widely applied in many fields. In some military applications requiring interference in both the visible and near-infrared spectral ranges, de-identification interference based on the dual-wavelength composite output of FW and SHW offers an effective way to make the device or equipment miniaturized and low cost. In this paper, the application of 1064 nm and 532 nm dual-wavelength composite output technology in military electro-optical countermeasures is studied. A resonator configuration that can achieve composite laser output with high power, high beam quality and high repetition rate is proposed. Considering the thermal lens effect, the stability of this resonator is analyzed using cavity transfer-matrix theory. The analysis shows that as the thermal effect increases, the intracavity fundamental-mode volume decreases, resulting in a peak fluctuation of the cavity stability parameter. To explore the impact of the resonator parameters on the characteristics and output ratio of the composite laser, dual-wavelength composite output models of the solid-state laser in both continuous and pulsed operation are established from the steady-state and rate equations. Through theoretical simulation and analysis, the optimal KTP length and the best FW transmissivity are obtained. An experiment is then carried out to verify the theoretical calculations.

  20. An ice flow modeling perspective on bedrock adjustment patterns of the Greenland ice sheet

    Directory of Open Access Journals (Sweden)

    M. Olaizola

    2012-11-01

    Full Text Available Since the launch in 2002 of the Gravity Recovery and Climate Experiment (GRACE) satellites, several estimates of the mass balance of the Greenland ice sheet (GrIS) have been produced. To obtain ice mass changes, the GRACE data need to be corrected for the effect of deformation changes of the Earth's crust. Recently, a new method has been proposed in which ice mass changes and bedrock changes are solved simultaneously. Results show bedrock subsidence over almost the entirety of Greenland in combination with an ice mass loss that is only half of the currently standing estimates. This subsidence can be an elastic response, but it may also be a delayed response to past changes. In this study we test whether these subsidence patterns are consistent with ice-dynamical modeling results. We use a 3-D ice sheet–bedrock model with a surface mass balance forcing based on a mass balance gradient approach to study the pattern and magnitude of bedrock changes in Greenland. Different mass balance forcings are used. Simulations since the Last Glacial Maximum yield a bedrock delay with respect to the mass balance forcing of nearly 3000 yr and an average present-day uplift of 0.3 mm yr−1. The spatial pattern of bedrock changes shows a small central subsidence as well as more intense uplift in the south. These results are not compatible with the gravity-based reconstructions showing a subsidence with a maximum in central Greenland, thereby questioning whether the claimed halving of the ice mass change is justified.

  1. A Water Hammer Protection Method for Mine Drainage System Based on Velocity Adjustment of Hydraulic Control Valve

    OpenAIRE

    Yanfei Kou; Jieming Yang; Ziming Kou

    2016-01-01

    Water hammer analysis is a fundamental work of pipeline systems design process for water distribution networks. The main characteristics for mine drainage system are the limited space and high cost of equipment and pipeline changing. In order to solve the protection problem of valve-closing water hammer for mine drainage system, a water hammer protection method for mine drainage system based on velocity adjustment of HCV (Hydraulic Control Valve) is proposed in this paper. The mathematic mode...

  2. Rigorous Strip Adjustment of Airborne Laserscanning Data Based on the Icp Algorithm

    Science.gov (United States)

    Glira, P.; Pfeifer, N.; Briese, C.; Ressl, C.

    2015-08-01

    Airborne Laser Scanning (ALS) is an efficient method for the acquisition of dense and accurate point clouds over extended areas. To ensure a gapless coverage of the area, point clouds are collected strip wise with a considerable overlap. The redundant information contained in these overlap areas can be used, together with ground-truth data, to re-calibrate the ALS system and to compensate for systematic measurement errors. This process, usually denoted as strip adjustment, leads to an improved georeferencing of the ALS strips, or in other words, to a higher data quality of the acquired point clouds. We present a fully automatic strip adjustment method that (a) uses the original scanner and trajectory measurements, (b) performs an on-the-job calibration of the entire ALS multisensor system, and (c) corrects the trajectory errors individually for each strip. Like in the Iterative Closest Point (ICP) algorithm, correspondences are established iteratively and directly between points of overlapping ALS strips (avoiding a time-consuming segmentation and/or interpolation of the point clouds). The suitability of the method for large amounts of data is demonstrated on the basis of an ALS block consisting of 103 strips.
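    The ICP-style core of such a strip adjustment, iteratively matching nearest points between overlapping strips and re-estimating a rigid transform, can be sketched as follows. This is a minimal 2-D toy version with invented data; the actual method additionally calibrates the full multisensor system and corrects trajectory errors per strip:

```python
import numpy as np

def best_rigid_transform(src, dst):
    """Least-squares rotation R and translation t mapping src -> dst
    (Kabsch/SVD solution, as used in one ICP iteration)."""
    cs, cd = src.mean(axis=0), dst.mean(axis=0)
    H = (src - cs).T @ (dst - cd)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:        # guard against reflections
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = cd - R @ cs
    return R, t

def icp(src, dst, iters=10):
    """Iteratively match each src point to its nearest dst point,
    then re-estimate and apply the rigid transform."""
    cur = src.copy()
    for _ in range(iters):
        # brute-force nearest neighbours (fine for small examples)
        d2 = ((cur[:, None, :] - dst[None, :, :]) ** 2).sum(-1)
        matches = dst[d2.argmin(axis=1)]
        R, t = best_rigid_transform(cur, matches)
        cur = cur @ R.T + t
    return cur

# Synthetic "strips": dst is src rotated by 2 degrees and shifted
rng = np.random.default_rng(1)
src = rng.uniform(0, 10, size=(100, 2))
a = np.deg2rad(2.0)
Rot = np.array([[np.cos(a), -np.sin(a)], [np.sin(a), np.cos(a)]])
dst = src @ Rot.T + np.array([0.3, -0.2])
aligned = icp(src, dst)
print(round(float(np.abs(aligned - dst).mean()), 4))
```

As in the paper's approach, correspondences are built directly between points of the overlapping point sets, with no segmentation or interpolation step.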

  3. Siblings’ Perceptions of Differential Treatment, Fairness, and Jealousy and Adolescent Adjustment: A Moderated Indirect Effects Model

    Science.gov (United States)

    Loeser, Meghan K.; Whiteman, Shawn D.; McHale, Susan M.

    2016-01-01

    Youth's perceptions of parents' differential treatment (PDT) are associated with maladjustment during adolescence. Although the direct relations between PDT and youth's maladjustment have been well established, the mechanisms underlying these associations remain unclear. We addressed this gap by examining whether sibling jealousy accounted for the links between PDT and youth's depressive symptoms, self-worth, and risky behaviors. Additionally, we examined whether youth's perceptions of fairness regarding their treatment as well as the gender constellation of the dyad moderated these indirect relations (i.e., moderated indirect effects). Participants were first- and second-born adolescent siblings (M = 15.96, SD = .72 years for older siblings, M = 13.48, SD = 1.02 years for younger siblings) and their parents from 197 working- and middle-class European American families. Data were collected via home interviews. A series of conditional process analyses revealed significant indirect effects of PDT through sibling jealousy on all three adjustment outcomes. Furthermore, perceptions of fairness moderated the relations between PDT and jealousy, such that the indirect effects were only significant at low (−1 SD) and average levels of fairness. At high levels of fairness (+1 SD) there was no association between PDT, jealousy, and youth adjustment. Taken together, the results indicate that youth and parents would benefit from clear communication regarding the reasons for differential treatment, likely maximizing youth and parent perceptions of that treatment as fair, and in turn mitigating sibling jealousy and maladjustment. PMID:27867295


  5. Structural Change and Nonlinear Adjustment in the Chinese Inflation Rate Path: Empirical Analysis Based on a TV-STAR Model

    Institute of Scientific and Technical Information of China (English)

    吴吉林

    2012-01-01

    Using a TV-STAR model, this paper finds that structural change and nonlinearity coexist in China's inflation path. The structural change occurred around 1995: before it, the inflation path had both a high and a low equilibrium point; after it, there is a single equilibrium point, but inflation persistence increased substantially. The nonlinear adjustment between inflation and deflation is clearly asymmetric, with a transition threshold of 4.091%. Generalized impulse response functions show that after the structural change, the response of inflation to external shocks decreased in amplitude but increased in speed. In addition, the effects of positive and negative shocks on inflation are markedly nonlinear and asymmetric: in most cases, positive shocks have larger and more persistent effects. In the short run, shock effects under the inflation regime are clearly stronger than under the deflation regime, but in the long run, shock effects under the deflation regime are more persistent. After the structural change, this asymmetric shock effect also becomes more pronounced.

  6. Distributed hydrological models: comparison between TOPKAPI, a physically based model and TETIS, a conceptually based model

    Science.gov (United States)

    Ortiz, E.; Guna, V.

    2009-04-01

    The present work compares two distributed hydrological models, TOPKAPI (Ciarapica and Todini, 1998; Todini and Ciarapica, 2001) and TETIS (Vélez, J. J.; Vélez, J. I. and Francés, F., 2002), computing the hydrological solution for the same storm events. The first model is physically based and the second is conceptually based. The analysis was performed on the 21.4 km2 Goodwin Creek watershed, located in Panola County, Mississippi. This watershed, extensively monitored by the Agricultural Research Service (ARS) National Sediment Laboratory (NSL), was chosen because it offers a complete database compiling precipitation (16 rain gauges), runoff (6 discharge stations) and GIS data. Three storm events were chosen to evaluate the performance of the two models: the first to calibrate the models, and the other two to validate them. Both models produced a satisfactory hydrological response in both calibration and validation events. The TOPKAPI model required no real calibration, performing well with modal parameter values derived from watershed characteristics, whereas the TETIS model required a prior automatic calibration. This calibration was carried out against the observed hydrograph in order to adjust the model's 9 correction factors. Keywords: TETIS, TOPKAPI, distributed models, hydrological response, ungauged basins.
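    Automatic calibration of the kind applied to TETIS typically optimizes a goodness-of-fit objective between observed and simulated hydrographs. One common choice, assumed here purely for illustration (the abstract does not name the objective), is the Nash-Sutcliffe efficiency:

```python
import numpy as np

def nash_sutcliffe(observed, simulated):
    """Nash-Sutcliffe efficiency: 1 is a perfect fit; 0 means the model
    predicts no better than the mean of the observations."""
    observed = np.asarray(observed, float)
    simulated = np.asarray(simulated, float)
    return 1.0 - np.sum((observed - simulated) ** 2) / np.sum(
        (observed - observed.mean()) ** 2)

obs = np.array([1.0, 3.0, 8.0, 5.0, 2.0])   # observed discharge (m^3/s)
sim = np.array([1.2, 2.7, 7.5, 5.4, 2.1])   # simulated discharge
nse = float(nash_sutcliffe(obs, sim))
print(round(nse, 3))  # → 0.982
```

A calibration loop would perturb the model's correction factors and keep the set that maximizes this score on the calibration event.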

  7. Study on Optimization of Electromagnetic Relay's Reaction Torque Characteristics Based on Adjusted Parameters

    Science.gov (United States)

    Zhai, Guofu; Wang, Qiya; Ren, Wanbin

    The cooperative characteristics of electromagnetic relay's attraction torque and reaction torque are the key property to ensure its reliability, and it is important to attain better cooperative characteristics by analyzing and optimizing relay's electromagnetic system and mechanical system. From the standpoint of changing reaction torque of mechanical system, in this paper, adjusted parameters (armature's maximum angular displacement αarm_max, initial return spring's force Finiti_return_spring, normally closed (NC) contacts' force FNC_contacts, contacts' gap δgap, and normally opened (NO) contacts' over travel δNO_contacts) were adopted as design variables, and objective function was provided for with the purpose of increasing breaking velocities of both NC contacts and NO contacts. Finally, genetic algorithm (GA) was used to attain optimization of the objective function. Accuracy of calculation for the relay's dynamic characteristics was verified by experiment.

  8. Dynamic Online Bandwidth Adjustment Scheme Based on Kalai-Smorodinsky Bargaining Solution

    Science.gov (United States)

    Kim, Sungwook

    Virtual Private Network (VPN) is a cost-effective method to provide integrated multimedia services. Heterogeneous multimedia data can usually be categorized into different types according to the required Quality of Service (QoS); therefore, a VPN should support prioritization among different services. In order to support multiple types of services with different QoS requirements, efficient bandwidth management algorithms are an important issue. In this paper, I employ the Kalai-Smorodinsky Bargaining Solution (KSBS) to develop an adaptive bandwidth adjustment algorithm. In addition, to manage bandwidth in VPNs effectively, the proposed control paradigm is realized in a dynamic online approach, which is practical for real network operations. The simulations show that the proposed scheme can significantly improve system performance.
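    For a divisible bandwidth pool, the Kalai-Smorodinsky solution gives every service class the same fraction of the way from its guaranteed floor (the disagreement point) to its ideal demand. A minimal sketch, with service classes and numbers invented for illustration rather than taken from the paper:

```python
def ksbs_allocation(total_bw, minimum, ideal):
    """Kalai-Smorodinsky bargaining solution for dividing total_bw:
    every user receives the same fraction k of the way from its minimum
    (disagreement point) to its ideal demand, so normalized gains are equal."""
    gains = [m - d for d, m in zip(minimum, ideal)]
    k = (total_bw - sum(minimum)) / sum(gains)
    k = max(0.0, min(1.0, k))   # cannot exceed ideal or fall below minimum
    return [d + k * g for d, g in zip(minimum, gains)]

# Three service classes (e.g. voice, video, data) sharing 100 Mb/s
mins = [10, 20, 5]      # guaranteed QoS floors
ideals = [20, 60, 40]   # bandwidth each class could usefully consume
alloc = ksbs_allocation(100, mins, ideals)
print([round(a, 2) for a in alloc])  # → [17.65, 50.59, 31.76]
```

An online scheme would recompute this allocation whenever traffic demands (the `ideals`) change.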

  9. Subwavelength ripples adjustment based on electron dynamics control by using shaped ultrafast laser pulse trains.

    Science.gov (United States)

    Jiang, Lan; Shi, Xuesong; Li, Xin; Yuan, Yanping; Wang, Cong; Lu, Yongfeng

    2012-09-10

    This study reveals that the periods, ablation areas and orientations of periodic surface structures (ripples) in fused silica can be adjusted by using designed femtosecond (fs) laser pulse trains to control transient localized electron dynamics and the corresponding material properties. By increasing the pulse delays from 0 to 100 fs, the ripple periods are changed from ~550 nm to ~255 nm and the orientation is rotated by 90°. The near-wavelength/subwavelength ripple periods are close to the fundamental/second-harmonic wavelengths in fused silica, respectively. The subsequent subpulse of the train significantly impacts the free-electron distributions generated by the previous subpulse(s), which might influence the formation mechanism of the ripples and the surface morphology.

  10. A limit-cycle model of leg movements in cross-country skiing and its adjustments with fatigue.

    Science.gov (United States)

    Cignetti, F; Schena, F; Mottet, D; Rouard, A

    2010-08-01

    Using dynamical modeling tools, the aim of the study was to establish a minimal model reproducing leg movements in cross-country skiing, and to evaluate the eventual adjustments of this model with fatigue. The participants (N=8) skied on a treadmill at 90% of their maximal oxygen consumption, up to exhaustion, using the diagonal stride technique. Qualitative analysis of leg kinematics portrayed in phase planes, Hooke planes, and velocity profiles suggested the inclusion in the model of a linear stiffness and an asymmetric van der Pol-type nonlinear damping. Quantitative analysis revealed that this model reproduced the observed kinematics patterns of the leg with adequacy, accounting for 87% of the variance. A rising influence of the stiffness term and a dropping influence of the damping terms were also evidenced with fatigue. The meaning of these changes was discussed in the framework of motor control.
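    A minimal version of such a limit-cycle model, linear stiffness combined with van der Pol-type nonlinear damping, can be integrated numerically. This sketch uses the symmetric van der Pol form for simplicity (the fitted model's damping is asymmetric) and arbitrary coefficients, not the values estimated in the study:

```python
import numpy as np

def simulate_limit_cycle(k=1.0, c=0.5, dt=0.01, steps=20000):
    """Integrate x'' + c*(x**2 - 1)*x' + k*x = 0 (van der Pol-type damping
    with linear stiffness k) using semi-implicit Euler. Trajectories
    converge to a self-sustained limit cycle, as in rhythmic leg motion."""
    x, v = 0.1, 0.0          # small initial push
    xs = np.empty(steps)
    for i in range(steps):
        a = -c * (x * x - 1.0) * v - k * x
        v += a * dt
        x += v * dt
        xs[i] = x
    return xs

xs = simulate_limit_cycle()
# Late-time amplitude of the van der Pol cycle is close to 2 for moderate c
amp = float(np.abs(xs[-5000:]).max())
print(round(amp, 2))
```

Fatigue effects of the kind reported would correspond to a drift of `k` upward and `c` downward over the course of the trial.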

  11. Chiropractic Adjustment

    Science.gov (United States)


  12. The relationship between effectiveness and costs measured by a risk-adjusted case-mix system: multicentre study of Catalonian population data bases

    Directory of Open Access Journals (Sweden)

    Flor-Serra Ferran

    2009-06-01

    Full Text Available Abstract Background The main objective of this study is to measure the relationship between morbidity, direct health care costs and the degree of clinical effectiveness (resolution) of health centres and health professionals by the retrospective application of Adjusted Clinical Groups in a Spanish population setting. The secondary objectives are to determine the factors behind inadequate correlations and the opinion of health professionals on these instruments. Methods/Design We will carry out a multi-centre, retrospective study using patient records from 15 primary health care centres and population databases. The main measurements will be: general variables (age and sex, centre, service [family medicine, paediatrics] and medical unit), dependent variables (mean number of visits, episodes and direct costs), co-morbidity (Johns Hopkins University Adjusted Clinical Groups Case-Mix System) and effectiveness. The totality of centres/patients will be considered as the standard for comparison. The efficiency index for visits, tests (laboratory, radiology, others), referrals, pharmaceutical prescriptions and the total will be calculated as the ratio: observed variables/variables expected by indirect standardization. The cost/patient/year model will differentiate the fixed/semi-fixed (visit) costs from the variable costs for each patient attended/year (N = 350,000 inhabitants). The mean relative weights of the cost of care will be obtained. Effectiveness will be measured using a set of 50 indicators of process, efficiency and/or health results, and an adjusted synthetic index will be constructed (method: 50th percentile). The correlation between the efficiency (relative weights) and synthetic (by centre and physician) indices will be established using the coefficient of determination. The opinion/degree of acceptance of physicians (N = 1,000) will be measured using a structured questionnaire including various dimensions. Statistical analysis: multiple regression
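    The efficiency index described above, observed resource use divided by the use expected under indirect standardization across morbidity groups, can be sketched as follows. The ACG group labels, reference rates and patient counts are invented for illustration:

```python
def efficiency_index(observed, patients_by_group, reference_rate):
    """Indirectly standardized efficiency index: observed events divided by
    the events expected if each morbidity group (e.g. an ACG category) used
    resources at the whole-population reference rate. Values below 1 mean
    fewer visits/costs than expected for the centre's case mix."""
    expected = sum(n * reference_rate[g] for g, n in patients_by_group.items())
    return observed / expected

# Reference visit rates per patient-year for three illustrative ACG strata
ref = {"low": 2.0, "medium": 5.0, "high": 12.0}
centre_mix = {"low": 300, "medium": 150, "high": 50}   # patients per stratum
observed_visits = 1800
ei = efficiency_index(observed_visits, centre_mix, ref)
print(round(ei, 3))  # → 0.923
```

Here the expected count is 1950 visits, so the centre uses about 92% of the resources expected for its case mix.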

  13. Model Based Definition

    Science.gov (United States)

    Rowe, Sidney E.

    2010-01-01

    In September 2007, the Engineering Directorate at the Marshall Space Flight Center (MSFC) created the Design System Focus Team (DSFT). MSFC was responsible for the in-house design and development of the Ares 1 Upper Stage and the Engineering Directorate was preparing to deploy a new electronic Configuration Management and Data Management System with the Design Data Management System (DDMS) based upon a Commercial Off The Shelf (COTS) Product Data Management (PDM) System. The DSFT was to establish standardized CAD practices and a new data life cycle for design data. Of special interest here, the design teams were to implement Model Based Definition (MBD) in support of the Upper Stage manufacturing contract. It is noted that this MBD does use partially dimensioned drawings for auxiliary information to the model. The design data lifecycle implemented several new release states to be used prior to formal release that allowed the models to move through a flow of progressive maturity. The DSFT identified some 17 Lessons Learned as outcomes of the standards development, pathfinder deployments and initial application to the Upper Stage design completion. Some of the high value examples are reviewed.

  14. Adjustable-Viewing-Angle Endoscopic Tool for Skull Base and Brain Surgery

    Science.gov (United States)

    Bae, Youngsam; Liao, Anna; Manohara, Harish; Shahinian, Hrayr

    2008-01-01

    The term Multi-Angle and Rear Viewing Endoscopic tooL (MARVEL) denotes an auxiliary endoscope, now undergoing development, that a surgeon would use in conjunction with a conventional endoscope to obtain additional perspective. The role of the MARVEL in endoscopic brain surgery would be similar to the role of a mouth mirror in dentistry. Such a tool is potentially useful for in-situ planetary geology applications for the close-up imaging of unexposed rock surfaces in cracks or those not in the direct line of sight. A conventional endoscope provides mostly a frontal view, that is, a view along its longitudinal axis and, hence, along a straight line extending from the opening through which it is inserted. The MARVEL could be inserted through the same opening as the conventional endoscope, but could be adjusted to provide a view from almost any desired angle. The MARVEL camera image would be displayed, on the same monitor as the conventional endoscopic image, as an inset within that image. For example, while viewing a tumor from the front in the conventional endoscopic image, the surgeon could simultaneously view the tumor from the side or the rear in the MARVEL image, and could thereby gain additional visual cues that would aid in precise three-dimensional positioning of surgical tools to excise the tumor. Indeed, a side or rear view through the MARVEL could be essential in a case in which the object of surgical interest was not visible from the front. The conceptual design of the MARVEL exploits the surgeon's familiarity with endoscopic surgical tools. The MARVEL would include a miniature electronic camera and miniature radio transmitter mounted on the tip of a surgical tool derived from an endo-scissor (see figure). The inclusion of the radio transmitter would eliminate the need for wires, which could interfere with manipulation of this and other surgical tools.
The handgrip of the tool would be connected to a linkage similar to

  15. Adjusting to Random Demands of Patient Care: A Predictive Model for Nursing Staff Scheduling at Naval Medical Center San Diego

    Science.gov (United States)

    2008-09-01

    [Tabular residue: Holt-Winters exponential smoothing model summaries (level, trend, season, predicted FTEs) for the Adult ICU, Surgical and Medical models, based on total required FTEs and acuity-adjusted workload.]
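    The Holt-Winters exponential smoothing referenced in the staffing models maintains level, trend and seasonal components. A standard additive-form sketch, with smoothing constants and FTE data invented for illustration rather than taken from the thesis:

```python
def holt_winters_additive(series, period, alpha=0.3, beta=0.1, gamma=0.2):
    """Additive Holt-Winters smoothing: maintains level, trend and seasonal
    components and returns the one-step-ahead forecast after the last
    observation."""
    # Initialize level from the first season, trend from season-to-season
    # change, and seasonals as deviations from the initial level.
    level = sum(series[:period]) / period
    trend = (sum(series[period:2 * period]) - sum(series[:period])) / period ** 2
    season = [x - level for x in series[:period]]
    for i, x in enumerate(series):
        s = season[i % period]
        last_level = level
        level = alpha * (x - s) + (1 - alpha) * (level + trend)
        trend = beta * (level - last_level) + (1 - beta) * trend
        season[i % period] = gamma * (x - level) + (1 - gamma) * s
    return level + trend + season[len(series) % period]

# Two years of quarterly "required FTEs" with a repeating seasonal pattern
ftes = [100, 120, 90, 110, 104, 125, 93, 115]
forecast = holt_winters_additive(ftes, period=4)
print(round(forecast, 1))
```

The level/trend/season columns in the residual tables above correspond to these three smoothed components.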

  16. Mathematical models for adjustment of in vitro gas production at different incubation times and kinetics of corn silages

    Directory of Open Access Journals (Sweden)

    João Pedro Velho

    2014-09-01

    Full Text Available The present work, using whole-plant corn silage at different stages of maturity, aimed to evaluate the Exponential, France, Gompertz and Logistic mathematical models for studying the kinetics of in vitro gas production over 24- and 48-hour incubations. A semi-automated in vitro gas production technique was used with incubation periods of one, three, six, eight, ten, 12, 14, 16, 22, 24, 31, 36, 42 and 48 hours. Model adjustment was evaluated by means of the mean square of error, mean bias, root mean square prediction error and residual error. The Gompertz mathematical model gave the best adjustment for describing the gas production kinetics of maize silages, regardless of incubation period. The France model was not adequate for describing gas kinetics over incubation periods of 48 hours or less. The in vitro gas production technique was efficient in detecting differences in the nutritional value of maize silages from different growth stages. Twenty-four-hour in vitro incubation periods do not mask treatment effects, whilst 48-hour periods are inadequate for measuring silage digestibility.
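    The Gompertz model commonly used for cumulative gas production has the form V(t) = A·exp(−B·exp(−k·t)). A minimal least-squares fit by brute-force grid search over synthetic data sampled at the incubation times above (all parameter values are illustrative, not the paper's estimates):

```python
import math

def gompertz(t, A, B, k):
    """Cumulative gas production (mL) at incubation time t (h):
    A = asymptotic volume, B = displacement, k = fractional rate."""
    return A * math.exp(-B * math.exp(-k * t))

# Synthetic 48-h incubation sampled at the same times as the study
times = [1, 3, 6, 8, 10, 12, 14, 16, 22, 24, 31, 36, 42, 48]
data = [gompertz(t, 42.0, 3.0, 0.12) for t in times]  # "observed" volumes

# Brute-force least squares over a small parameter grid
best = None
for A in [40.0, 42.0, 44.0]:
    for B in [2.5, 3.0, 3.5]:
        for k in [0.10, 0.12, 0.14]:
            sse = sum((gompertz(t, A, B, k) - y) ** 2
                      for t, y in zip(times, data))
            if best is None or sse < best[0]:
                best = (sse, A, B, k)
print(best[1:])  # → (42.0, 3.0, 0.12), the generating parameters
```

In practice a continuous optimizer replaces the grid, and fit quality is compared across models with the error statistics the study lists (mean bias, RMSPE, etc.).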

  17. [Motion control of moving mirror based on fixed-mirror adjustment in FTIR spectrometer].

    Science.gov (United States)

    Li, Zhong-bing; Xu, Xian-ze; Le, Yi; Xu, Feng-qiu; Li, Jun-wei

    2012-08-01

    The uniform motion of the moving mirror, the only constantly moving part in an FTIR spectrometer, and the alignment of the fixed mirror play a key role in the instrument: they affect the interference and the quality of the spectrogram, and may directly limit the precision and resolution of the instrument. The present article focuses on the uniform motion of the moving mirror and the alignment of the fixed mirror. To improve the FTIR spectrometer, a maglev support system was designed for the moving mirror, and phase-detection technology was adopted to adjust the tilt angle between the moving mirror and the fixed mirror. This paper also introduces an improved fuzzy PID control algorithm to achieve an accurate moving-mirror speed, realizing the control strategy in both the hardware design and the algorithm. The results show that the moving-mirror motion control system achieves sufficient accuracy and real-time performance, ensuring the uniform motion of the moving mirror and the alignment of the fixed mirror.

  18. Virtual Environments, Online Racial Discrimination, and Adjustment among a Diverse, School-Based Sample of Adolescents

    Science.gov (United States)

    Tynes, Brendesha M.; Rose, Chad A.; Hiss, Sophia; Umaña-Taylor, Adriana J.; Mitchell, Kimberly; Williams, David

    2015-01-01

    Given the recent rise in online hate activity and the increased amount of time adolescents spend with media, more research is needed on their experiences with racial discrimination in virtual environments. This cross-sectional study examines the association between amount of time spent online, traditional and online racial discrimination and adolescent adjustment, including depressive symptoms, anxiety and externalizing behaviors. The study also explores the role that social identities, including race and gender, play in these associations. Online surveys were administered to 627 sixth through twelfth graders in K-8, middle and high schools. Multiple regression results revealed that discrimination online was associated with all three outcome variables. Additionally, a significant interaction between online discrimination by time online was found for externalizing behaviors indicating that increased time online and higher levels of online discrimination are associated with more problem behavior. This study highlights the need for clinicians, educational professionals and researchers to attend to race-related experiences online as well as in traditional environments. PMID:27134698

  19. The self-adjusting file (SAF) system: An evidence-based update.

    Science.gov (United States)

    Metzger, Zvi

    2014-09-01

    Current rotary file systems are effective tools. Nevertheless, they have two main shortcomings: they are unable to effectively clean and shape oval canals and depend too much on the irrigant to do the cleaning, which is an illusion; and they may jeopardize the long-term survival of the tooth through unnecessary, excessive removal of sound dentin and the creation of micro-cracks in the remaining root dentin. The new Self-Adjusting File (SAF) technology uses a hollow, compressible NiTi file, with no central metal core, through which a continuous flow of irrigant is provided throughout the procedure. The SAF technology allows for effective cleaning of all root canals including oval canals, thus allowing for the effective disinfection and obturation of all canal morphologies. This technology uses a new concept of cleaning and shaping in which a uniform layer of dentin is removed from around the entire perimeter of the root canal, thus avoiding unnecessary excessive removal of sound dentin. Furthermore, its mode of action does not machine all root canals to a circular bore, as all other rotary file systems do, and does not cause micro-cracks in the remaining root dentin. The new SAF technology allows for a new concept in cleaning and shaping root canals: minimally invasive 3D endodontics.

  20. Analysis and Trend Determination of the Evolution of Tourist Accommodation Establishments (Adjusted Data Based Seasonally in the European Union (28 with Analytical Methods

    Directory of Open Access Journals (Sweden)

    Rodica Pripoaie

    2015-05-01

    Full Text Available This work presents a comparative analysis and trend determination of the evolution of tourist accommodation establishments in the European Union (28), based on seasonally adjusted data for the period May 2014 - December 2014, using analytical methods. The principal causes of the evolution of tourist accommodation establishments were the general economic evolution of industries and GDP per capita, relatively low revenue, and weak development of infrastructure. Trend determination with analytical methods relies on the least squares method. Based on the absolute deviations between empirical and theoretical values for the linear, curvilinear and modified-exponential regressions, the best trend equation is chosen as the one with the smallest variation. The best trend model for the evolution of tourist accommodation establishments in the EU (28) is the linear regression equation.
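    The trend-selection procedure described, fitting candidate trend equations by least squares and keeping the one with the smallest deviation between empirical and theoretical values, can be sketched as follows. The monthly series below is invented for illustration, not the Eurostat data:

```python
import numpy as np

def fit_and_score(t, y, degree):
    """Fit a polynomial trend by least squares (normal equations via
    numpy.polyfit) and return the sum of absolute deviations between
    the empirical and theoretical (fitted) values."""
    coeffs = np.polyfit(t, y, degree)
    fitted = np.polyval(coeffs, t)
    return float(np.abs(y - fitted).sum())

t = np.arange(8, dtype=float)                       # May ... Dec 2014
y = np.array([95, 97, 100, 101, 104, 106, 107, 110], float)  # index values

scores = {"linear": fit_and_score(t, y, 1),
          "quadratic": fit_and_score(t, y, 2)}
best = min(scores, key=scores.get)                  # smallest deviation wins
print(best, round(scores[best], 2))
```

A modified-exponential trend would be fitted the same way after a suitable transformation, and the three scores compared on the same deviation measure.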

  1. Calibration and adjustment of center of mass (COM) based on EKF during in-flight phase

    Institute of Scientific and Technical Information of China (English)

    2009-01-01

    The electrostatic accelerometer, assembled on a gravity satellite, serves to measure all non-gravitational accelerations caused by atmospheric drag, solar radiation pressure, etc. The proof-mass center of the accelerometer needs to be precisely positioned at the center of mass of the gravity satellite; otherwise, the offset between them will introduce measurement disturbances due to the angular acceleration of the satellite and the gravity gradient. Because of installation and measurement errors on the ground, fuel consumption during the in-flight phase and other adverse factors, the offset between the proof-mass center and the satellite center of mass is usually large enough to affect the measurement accuracy of the accelerometer, or even exceed its range. Therefore, the offset needs to be measured or estimated, and then controlled within the measurement requirement of the accelerometer by the center of mass (COM) adjustment mechanism during the life of the satellite. An estimation algorithm based on the EKF, which uses the measurements of the accelerometer, gyro and magnetometer, is put forward to estimate the offset, and the COM adjustment mechanism then adjusts the satellite center of mass to make the offset meet the requirement. With a special configuration layout, the COM adjustment mechanism, driven by stepper motors, can separately regulate the X, Y and Z axes. The associated simulation shows that the offset can be controlled to better than 0.03 mm for all axes with the method mentioned above.
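    As a toy illustration of the estimation idea (a scalar, linear stand-in for the EKF: the offset is treated as a constant state observed through noisy measurements; all numbers are hypothetical, not the paper's):

```python
import random

def estimate_offset(measurements, meas_var, init_var=1.0):
    """Minimal scalar Kalman filter estimating a constant offset.
    The full problem uses an EKF over accelerometer/gyro/magnetometer data;
    this sketch keeps only the predict-free measurement-update structure."""
    x, p = 0.0, init_var              # state estimate and its variance
    for z in measurements:
        k = p / (p + meas_var)        # Kalman gain
        x = x + k * (z - x)           # measurement update
        p = (1.0 - k) * p             # variance shrinks with each update
    return x, p

random.seed(0)
true_offset = 0.05                    # mm, hypothetical COM offset
zs = [true_offset + random.gauss(0.0, 0.02) for _ in range(500)]
est, var = estimate_offset(zs, meas_var=0.02 ** 2)
```

    Once the offset estimate converges (its variance `var` falls below a tolerance), the adjustment mechanism would be commanded to move the COM by the estimated amount on each axis.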

  2. The combined geodetic network adjusted on the reference ellipsoid – a comparison of three functional models for GNSS observations

    Directory of Open Access Journals (Sweden)

    Kadaj Roman

    2016-12-01

    Full Text Available The adjustment problem of the so-called combined (hybrid, integrated) network created with GNSS vectors and terrestrial observations has been the subject of many theoretical and applied works. The network adjustment in various mathematical spaces was considered: in the Cartesian geocentric system, on a reference ellipsoid and on a mapping plane. For practical reasons, one often adopts a geodetic coordinate system associated with the reference ellipsoid. In this case, the Cartesian GNSS vectors are converted, for example, into geodesic parameters (azimuth and length) on the ellipsoid, but the simplest form of converted pseudo-observations is the direct differences of the geodetic coordinates. Unfortunately, such an approach may be essentially distorted by a systematic error resulting from the position error of the GNSS vector before its projection onto the ellipsoid surface. In this paper, an analysis of the impact of this error on the determined measures of geometric ellipsoid elements, including the differences of geodetic coordinates or geodesic parameters, is presented. The analysis shows that, assuming the adjustment of a combined network on the ellipsoid, the optimal functional approach in relation to the satellite observations is to create the observational equations directly for the original GNSS Cartesian vector components, writing them directly as a function of the geodetic coordinates (in numerical applications, we use the linearized forms of observational equations with explicitly specified coefficients). While retaining the original character of the Cartesian vector, one avoids any systematic errors that may occur in the conversion of the original GNSS vectors to ellipsoid elements, for example the vector of the geodesic parameters. The problem is theoretically developed and numerically tested. An example of the adjustment of a subnet loaded from the database of reference stations of the ASG-EUPOS system was considered for the preferred functional

  3. The combined geodetic network adjusted on the reference ellipsoid - a comparison of three functional models for GNSS observations

    Science.gov (United States)

    Kadaj, Roman

    2016-12-01

    The adjustment problem of the so-called combined (hybrid, integrated) network created with GNSS vectors and terrestrial observations has been the subject of many theoretical and applied works. The network adjustment in various mathematical spaces was considered: in the Cartesian geocentric system, on a reference ellipsoid and on a mapping plane. For practical reasons, one often adopts a geodetic coordinate system associated with the reference ellipsoid. In this case, the Cartesian GNSS vectors are converted, for example, into geodesic parameters (azimuth and length) on the ellipsoid, but the simplest form of converted pseudo-observations is the direct differences of the geodetic coordinates. Unfortunately, such an approach may be essentially distorted by a systematic error resulting from the position error of the GNSS vector before its projection onto the ellipsoid surface. In this paper, an analysis of the impact of this error on the determined measures of geometric ellipsoid elements, including the differences of geodetic coordinates or geodesic parameters, is presented. The analysis shows that, assuming the adjustment of a combined network on the ellipsoid, the optimal functional approach in relation to the satellite observations is to create the observational equations directly for the original GNSS Cartesian vector components, writing them directly as a function of the geodetic coordinates (in numerical applications, we use the linearized forms of observational equations with explicitly specified coefficients). While retaining the original character of the Cartesian vector, one avoids any systematic errors that may occur in the conversion of the original GNSS vectors to ellipsoid elements, for example the vector of the geodesic parameters. The problem is theoretically developed and numerically tested. An example of the adjustment of a subnet loaded from the database of reference stations of the ASG-EUPOS system was considered for the preferred functional model of the GNSS
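    Writing observation equations directly for Cartesian vector components rests on the standard geodetic-to-ECEF mapping; a minimal sketch (WGS84 constants; the baseline model below is the simple coordinate difference, not the paper's full adjustment with linearized coefficients):

```python
import math

def geodetic_to_ecef(lat, lon, h, a=6378137.0, f=1 / 298.257223563):
    """WGS84 geodetic coordinates (radians, metres) to ECEF Cartesian
    coordinates - the mapping underlying observation equations written
    directly for GNSS vector components as functions of geodetic coordinates."""
    e2 = f * (2.0 - f)                       # first eccentricity squared
    sin_lat = math.sin(lat)
    N = a / math.sqrt(1.0 - e2 * sin_lat ** 2)   # prime vertical radius
    x = (N + h) * math.cos(lat) * math.cos(lon)
    y = (N + h) * math.cos(lat) * math.sin(lon)
    z = (N * (1.0 - e2) + h) * sin_lat
    return x, y, z

def gnss_vector(p_from, p_to):
    """Model a GNSS baseline vector as the difference of ECEF positions,
    so the original Cartesian components are never projected onto the ellipsoid."""
    x1 = geodetic_to_ecef(*p_from)
    x2 = geodetic_to_ecef(*p_to)
    return tuple(b - a for a, b in zip(x1, x2))

dx, dy, dz = gnss_vector((math.radians(52.0), math.radians(21.0), 120.0),
                         (math.radians(52.1), math.radians(21.1), 130.0))
```

    In an actual adjustment, these component equations would be linearized with respect to the geodetic coordinates of both endpoints, as the record describes.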

  4. Dynamic Modeling of Adjustable-Speed Pumped Storage Hydropower Plant: Preprint

    Energy Technology Data Exchange (ETDEWEB)

    Muljadi, E.; Singh, M.; Gevorgian, V.; Mohanpurkar, M.; Havsapian, R.; Koritarov, V.

    2015-04-06

    Hydropower is the largest producer of renewable energy in the U.S. More than 60% of the total renewable generation comes from hydropower. There is also approximately 22 GW of pumped storage hydropower (PSH). Conventional PSH uses a synchronous generator, and thus the rotational speed is constant at synchronous speed. This work details a hydrodynamic model and generator/power converter dynamic model. The optimization of the hydrodynamic model is executed by the hydro-turbine controller, and the electrical output real/reactive power is controlled by the power converter. All essential controllers to perform grid-interface functions and provide ancillary services are included in the model.

  5. Genetic programming-based chaotic time series modeling

    Institute of Scientific and Technical Information of China (English)

    张伟; 吴智铭; 杨根科

    2004-01-01

    This paper proposes a Genetic Programming-Based Modeling (GPM) algorithm on chaotic time series. GP is used here to search for appropriate model structures in function space, and the Particle Swarm Optimization (PSO) algorithm is used for Nonlinear Parameter Estimation (NPE) of dynamic model structures. In addition, GPM integrates the results of Nonlinear Time Series Analysis (NTSA) to adjust the parameters and takes them as the criteria of established models. Experiments showed the effectiveness of such improvements on chaotic time series modeling.

  6. Rational Multi-curve Models with Counterparty-risk Valuation Adjustments

    DEFF Research Database (Denmark)

    Crépey, Stéphane; Macrina, Andrea; Nguyen, Tuyet Mai

    2016-01-01

    We develop a multi-curve term structure set-up in which the modelling ingredients are expressed by rational functionals of Markov processes. We calibrate to London Interbank Offered Rate swaptions data and show that a rational two-factor log-normal multi-curve model is sufficient to match market data...

  7. Evaluation of CMIP5 continental precipitation simulations relative to satellite-based gauge-adjusted observations

    Energy Technology Data Exchange (ETDEWEB)

    Mehran, Ali [Univ. of California, Irvine, CA (United States). Dept. of Civil and Environmental Engineering; AghaKouchak, Amir [Univ. of California, Irvine, CA (United States). Dept. of Civil and Environmental Engineering; Phillips, Thomas J. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)

    2014-02-25

    Numerous studies have emphasized that climate simulations are subject to various biases and uncertainties. The objective of this study is to cross-validate 34 Coupled Model Intercomparison Project Phase 5 (CMIP5) historical simulations of precipitation against the Global Precipitation Climatology Project (GPCP) data, quantifying model pattern discrepancies and biases for both entire data distributions and their upper tails. The results of the Volumetric Hit Index (VHI) analysis of the total monthly precipitation amounts show that most CMIP5 simulations are in good agreement with GPCP patterns in many areas, but that their replication of observed precipitation over arid regions and certain sub-continental regions (e.g., northern Eurasia, eastern Russia, central Australia) is problematical. Overall, the VHI of the multi-model ensemble mean and median also are superior to that of the individual CMIP5 models. However, at high quantiles of reference data (e.g., the 75th and 90th percentiles), all climate models display low skill in simulating precipitation, except over North America, the Amazon, and central Africa. Analyses of total bias (B) in CMIP5 simulations reveal that most models overestimate precipitation over regions of complex topography (e.g. western North and South America and southern Africa and Asia), while underestimating it over arid regions. Also, while most climate model simulations show low biases over Europe, inter-model variations in bias over Australia and Amazonia are considerable. The Quantile Bias (QB) analyses indicate that CMIP5 simulations are even more biased at high quantiles of precipitation. Lastly, we found that a simple mean-field bias removal improves the overall B and VHI values, but does not make a significant improvement in these model performance metrics at high quantiles of precipitation.
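    The bias metrics can be illustrated with common ratio-based forms (a sketch with made-up numbers; the study's precise definitions of B, QB and the Volumetric Hit Index follow its cited methodology and may differ in detail):

```python
def total_bias(sim, obs):
    """Total bias B as the ratio of simulated to observed totals
    (a common ratio form; assumed here, not quoted from the study)."""
    return sum(sim) / sum(obs)

def quantile_bias(sim, obs, q=0.75):
    """Bias restricted to months whose observations exceed the q-th
    percentile of the reference data, probing the upper tail."""
    thresh = sorted(obs)[int(q * (len(obs) - 1))]
    pairs = [(s, o) for s, o in zip(sim, obs) if o >= thresh]
    return sum(s for s, _ in pairs) / sum(o for _, o in pairs)

obs = [10.0, 20.0, 30.0, 40.0, 100.0]   # toy monthly precipitation, observed
sim = [12.0, 18.0, 33.0, 36.0, 130.0]   # toy model output
b = total_bias(sim, obs)
qb = quantile_bias(sim, obs)
```

    In this toy series the upper-tail bias exceeds the total bias, mirroring the record's finding that CMIP5 simulations are even more biased at high quantiles.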

  8. Study of Offset Collisions and Beam Adjustment in the LHC Using a Strong-Strong Simulation Model

    CERN Document Server

    Muratori, B

    2002-01-01

    The bunches of the two opposing beams in the LHC do not always collide head-on. The beam-beam effects cause a small, unavoidable separation under nominal operational conditions. During the beam adjustment and when the beams are brought into collision the beams are separated by a significant fraction of the beam size. A result of small beam separation can be the excitation of coherent dipole oscillations or an emittance increase. These two effects are studied using a strong-strong multi particle simulation model. The aim is to identify possible limitations and to find procedures which minimise possible detrimental effects.

  9. LC Filter Design for Wide Band Gap Device Based Adjustable Speed Drives

    DEFF Research Database (Denmark)

    Vadstrup, Casper; Wang, Xiongfei; Blaabjerg, Frede

    2014-01-01

    the LC filter with a higher cut off frequency and without damping resistors. The selection of inductance and capacitance is chosen based on capacitor voltage ripple and current ripple. The filter adds a base load to the inverter, which increases the inverter losses. It is shown how the modulation index...

  10. Femtosecond laser-induced periodic structure adjustments based on electron dynamics control: from subwavelength ripples to double-grating structures.

    Science.gov (United States)

    Shi, Xuesong; Jiang, Lan; Li, Xin; Wang, Sumei; Yuan, Yanping; Lu, Yongfeng

    2013-10-01

    This study proposes a method for adjusting subwavelength ripple periods and the corresponding double-grating structures formed on fused silica by designing femtosecond laser pulse trains based on localized transient electron density control. Four near-constant period ranges of 190-490 nm of ripples perpendicular to the polarization are obtained by designing pulse trains to excite and modulate the surface plasmon waves. In the period range of 350-490 nm, the double-grating structure is fabricated in one step, which is probably attributable to the grating-assisted enhanced energy deposition and subsequent thermal effects.

  11. Research on Gear-box Fault Diagnosis Method Based on Adjusting-learning-rate PSO Neural Network

    Institute of Scientific and Technical Information of China (English)

    PAN Hong-xia; MA Qing-feng

    2006-01-01

    Based on research into Particle Swarm Optimization (PSO) learning rates, two learning rates are changed linearly as the velocity formula evolves in order to adjust the proportion of the social part and the cognitive part; the methods are then applied to BP neural network training, the convergence rate is greatly accelerated, and local optimal solutions are avoided. Using actual data from a two-level compound gearbox in a vibration lab, signals are analyzed and their characteristic values are extracted. By applying the trained BP neural networks to compound-box fault diagnosis, it is indicated that the methods are effective.
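    The linearly varying learning rates can be sketched in a generic PSO loop minimizing a test function (illustrative tuning values; the paper's exact coefficient schedule and its coupling to BP-network training are not reproduced here):

```python
import random

def pso_tvac(f, dim, n=20, iters=100, lo=-5.0, hi=5.0, seed=1):
    """PSO in which the cognitive (c1) and social (c2) learning rates vary
    linearly over the run, shifting emphasis from exploration to the swarm."""
    rng = random.Random(seed)
    pos = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(n)]
    vel = [[0.0] * dim for _ in range(n)]
    pbest = [p[:] for p in pos]
    pval = [f(p) for p in pos]
    g = min(range(n), key=lambda i: pval[i])
    gbest, gval = pbest[g][:], pval[g]
    for t in range(iters):
        frac = t / (iters - 1)
        c1 = 2.5 - 2.0 * frac       # cognitive rate decreases linearly
        c2 = 0.5 + 2.0 * frac       # social rate increases linearly
        for i in range(n):
            for d in range(dim):
                vel[i][d] = (0.7 * vel[i][d]
                             + c1 * rng.random() * (pbest[i][d] - pos[i][d])
                             + c2 * rng.random() * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            v = f(pos[i])
            if v < pval[i]:
                pbest[i], pval[i] = pos[i][:], v
                if v < gval:
                    gbest, gval = pos[i][:], v
    return gbest, gval

sphere = lambda x: sum(xi * xi for xi in x)   # stand-in for the training loss
best, val = pso_tvac(sphere, dim=3)
```

    In the fault-diagnosis setting, `f` would instead be the BP network's training error as a function of its weights.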

  12. The timing of the Black Sea flood event: Insights from modeling of glacial isostatic adjustment

    Science.gov (United States)

    Goldberg, Samuel L.; Lau, Harriet C. P.; Mitrovica, Jerry X.; Latychev, Konstantin

    2016-10-01

    We present a suite of gravitationally self-consistent predictions of sea-level change since Last Glacial Maximum (LGM) in the vicinity of the Bosphorus and Dardanelles straits that combine signals associated with glacial isostatic adjustment (GIA) and the flooding of the Black Sea. Our predictions are tuned to fit a relative sea level (RSL) record at the island of Samothrace in the north Aegean Sea and they include realistic 3-D variations in viscoelastic structure, including lateral variations in mantle viscosity and the elastic thickness of the lithosphere, as well as weak plate boundary zones. We demonstrate that 3-D Earth structure and the magnitude of the flood event (which depends on the pre-flood level of the lake) both have significant impact on the predicted RSL change at the location of the Bosphorus sill, and therefore on the inferred timing of the marine incursion. We summarize our results in a plot showing the predicted RSL change at the Bosphorus sill as a function of the timing of the flood event for different flood magnitudes up to 100 m. These results suggest, for example, that a flood event at 9 ka implies that the elevation of the sill was lowered through erosion by ∼14-21 m during, and after, the flood. In contrast, a flood event at 7 ka suggests erosion of ∼24-31 m at the sill since the flood. More generally, our results will be useful for future research aimed at constraining the details of this controversial, and widely debated geological event.

  13. Real time adjustment of slow changing flow components in distributed urban runoff models

    DEFF Research Database (Denmark)

    Borup, Morten; Grum, M.; Mikkelsen, Peter Steen

    2011-01-01

    In many urban runoff systems infiltrating water contributes a substantial part of the total inflow and therefore most urban runoff modelling packages include hydrological models for simulating the infiltrating inflow. This paper presents a method for deterministic updating of the hydrological model ...; this information is then used to update the states of the hydrological model. The method is demonstrated on the 20 km2 Danish urban catchment of Ballerup, which has a substantial amount of infiltration inflow after succeeding rain events, for a very rainy period of 17 days in August 2010. The results show big ...

  14. Analysis of error-prone survival data under additive hazards models: measurement error effects and adjustments.

    Science.gov (United States)

    Yan, Ying; Yi, Grace Y

    2016-07-01

    Covariate measurement error occurs commonly in survival analysis. Under the proportional hazards model, measurement error effects have been well studied, and various inference methods have been developed to correct for error effects under such a model. In contrast, error-contaminated survival data under the additive hazards model have received relatively less attention. In this paper, we investigate this problem by exploring measurement error effects on parameter estimation and the change of the hazard function. New insights of measurement error effects are revealed, as opposed to well-documented results for the Cox proportional hazards model. We propose a class of bias correction estimators that embraces certain existing estimators as special cases. In addition, we exploit the regression calibration method to reduce measurement error effects. Theoretical results for the developed methods are established, and numerical assessments are conducted to illustrate the finite sample performance of our methods.

  15. Study on Modeling of Aggregated Charging Load of Electric Vehicles and Control Strategy by Adjusting Capacity Boundaries Based on Demand Response

    Institute of Scientific and Technical Information of China (English)

    孙强; 许方园; 唐佳; 王丹; 罗凤章

    2016-01-01

    Reasonable and orderly control of electric vehicle charging is the key to realizing large-scale electric vehicle access to the power grid. Firstly, a layered and partitioned control framework for electric vehicle clusters is presented. Secondly, a charging model of aggregated electric vehicles is established. On this basis, a control strategy of aggregated electric vehicles for demand response is proposed, which adjusts the EVs' adjustable charging capacity boundary to realize the desired control. Finally, through a case study, the influence of the model parameters on the aggregated charging load is examined, and the proposed control model and strategy are verified. Simulation results show that, while satisfying users' charging demand, the established control model and strategy can effectively realize the demand response function of the electric vehicle cluster.

  16. Evaluation of CMIP5 continental precipitation simulations relative to satellite-based gauge-adjusted observations

    Science.gov (United States)

    Mehran, A.; AghaKouchak, A.; Phillips, T. J.

    2014-02-01

    The objective of this study is to cross-validate 34 Coupled Model Intercomparison Project Phase 5 (CMIP5) historical simulations of precipitation against the Global Precipitation Climatology Project (GPCP) data, quantifying model pattern discrepancies, and biases for both entire distributions and their upper tails. The results of the volumetric hit index (VHI) analysis of the total monthly precipitation amounts show that most CMIP5 simulations are in good agreement with GPCP patterns in many areas but that their replication of observed precipitation over arid regions and certain subcontinental regions (e.g., northern Eurasia, eastern Russia, and central Australia) is problematical. Overall, the VHI of the multimodel ensemble mean and median also are superior to that of the individual CMIP5 models. However, at high quantiles of reference data (75th and 90th percentiles), all climate models display low skill in simulating precipitation, except over North America, the Amazon, and Central Africa. Analyses of total bias (B) in CMIP5 simulations reveal that most models overestimate precipitation over regions of complex topography (e.g., western North and South America and southern Africa and Asia), while underestimating it over arid regions. Also, while most climate model simulations show low biases over Europe, intermodel variations in bias over Australia and Amazonia are considerable. The quantile bias analyses indicate that CMIP5 simulations are even more biased at high quantiles of precipitation. It is found that a simple mean field bias removal improves the overall B and VHI values but does not make a significant improvement at high quantiles of precipitation.

  17. Adjusting particle-size distributions to account for aggregation in tephra-deposit model forecasts

    Science.gov (United States)

    Mastin, Larry G.; Van Eaton, Alexa; Durant, A.J.

    2016-01-01

    Volcanic ash transport and dispersion (VATD) models are used to forecast tephra deposition during volcanic eruptions. Model accuracy is limited by the fact that fine-ash aggregates (clumps into clusters), thus altering patterns of deposition. In most models this is accounted for by ad hoc changes to model input, representing fine ash as aggregates with density ρagg, and a log-normal size distribution with median μagg and standard deviation σagg. Optimal values may vary between eruptions. To test the variance, we used the Ash3d tephra model to simulate four deposits: 18 May 1980 Mount St. Helens; 16–17 September 1992 Crater Peak (Mount Spurr); 17 June 1996 Ruapehu; and 23 March 2009 Mount Redoubt. In 192 simulations, we systematically varied μagg and σagg, holding ρagg constant at 600 kg m−3. We evaluated the fit using three indices that compare modeled versus measured (1) mass load at sample locations; (2) mass load versus distance along the dispersal axis; and (3) isomass area. For all deposits, under these inputs, the best-fit value of μagg ranged narrowly between  ∼  2.3 and 2.7φ (0.20–0.15 mm), despite large variations in erupted mass (0.25–50 Tg), plume height (8.5–25 km), mass fraction of fine ( operational model forecasts. Further research may indicate whether this narrow range also reflects physical constraints on processes in the evolving cloud.

  18. Adjusting particle-size distributions to account for aggregation in tephra-deposit model forecasts

    Science.gov (United States)

    Mastin, Larry G.; Van Eaton, Alexa R.; Durant, Adam J.

    2016-07-01

    Volcanic ash transport and dispersion (VATD) models are used to forecast tephra deposition during volcanic eruptions. Model accuracy is limited by the fact that fine-ash aggregates (clumps into clusters), thus altering patterns of deposition. In most models this is accounted for by ad hoc changes to model input, representing fine ash as aggregates with density ρagg, and a log-normal size distribution with median μagg and standard deviation σagg. Optimal values may vary between eruptions. To test the variance, we used the Ash3d tephra model to simulate four deposits: 18 May 1980 Mount St. Helens; 16-17 September 1992 Crater Peak (Mount Spurr); 17 June 1996 Ruapehu; and 23 March 2009 Mount Redoubt. In 192 simulations, we systematically varied μagg and σagg, holding ρagg constant at 600 kg m-3. We evaluated the fit using three indices that compare modeled versus measured (1) mass load at sample locations; (2) mass load versus distance along the dispersal axis; and (3) isomass area. For all deposits, under these inputs, the best-fit value of μagg ranged narrowly between ˜ 2.3 and 2.7φ (0.20-0.15 mm), despite large variations in erupted mass (0.25-50 Tg), plume height (8.5-25 km), mass fraction of fine ( water content between these eruptions. This close agreement suggests that aggregation may be treated as a discrete process that is insensitive to eruptive style or magnitude. This result offers the potential for a simple, computationally efficient parameterization scheme for use in operational model forecasts. Further research may indicate whether this narrow range also reflects physical constraints on processes in the evolving cloud.
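    The 192-run parameter sweep can be sketched as a simple grid search over the aggregate-size parameters with the density held fixed (the misfit function below is a made-up stand-in for the study's three fit indices):

```python
import itertools

def grid_search(fit_index, mu_values, sigma_values):
    """Sweep aggregate-size parameters (mu_agg, sigma_agg), holding rho_agg
    fixed as in the record, and return the pair minimizing a misfit index."""
    return min(itertools.product(mu_values, sigma_values),
               key=lambda p: fit_index(*p))

# toy misfit with a known optimum near mu = 2.5 phi, sigma = 0.2 (illustrative
# only; the real study compares modeled vs. measured mass loads and isomass area)
misfit = lambda mu, sigma: (mu - 2.5) ** 2 + (sigma - 0.2) ** 2
mus = [2.0 + 0.1 * i for i in range(11)]    # 2.0 ... 3.0 phi
sigmas = [0.1 * i for i in range(1, 6)]     # 0.1 ... 0.5
best_mu, best_sigma = grid_search(misfit, mus, sigmas)
```

    In the study, each grid point costs a full Ash3d deposit simulation, so the narrow best-fit range of μagg across very different eruptions is what makes a fixed parameterization attractive.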

  19. A new adjustable gains for second order sliding mode control of saturated DFIG-based wind turbine

    Science.gov (United States)

    Bounadja, E.; Djahbar, A.; Taleb, R.; Boudjema, Z.

    2017-02-01

    The control of the Doubly-Fed Induction Generator (DFIG), used in wind energy conversion, has been given a great deal of interest. Frequently, this control has been designed ignoring the magnetic saturation effect in the DFIG model. The aim of the present work is twofold: firstly, the magnetic saturation effect is accounted for in the control design model; secondly, a new second order sliding mode control scheme using adjustable gains (AG-SOSMC) is proposed to control the DFIG via its rotor side converter. This scheme allows the independent control of the generated active and reactive power. Conventionally, the second order sliding mode control (SOSMC) applied to the DFIG utilizes the super-twisting algorithm with fixed gains. In the proposed AG-SOSMC, a simple means by which the controller can adjust its behavior is used: a linear function represents the variation in gain as a function of the absolute value of the discrepancy between the reference rotor current and its measured value. The transient DFIG speed response using the aforementioned characteristic is compared with the one obtained using the conventional SOSMC controller with fixed gains. Simulation results show that accurate dynamic performance, quicker transient response and more accurate control are achieved for different operating conditions.
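    The adjustable-gain super-twisting idea can be sketched on a toy first-order sliding variable (the gain offset, slope, gain ratio and disturbance are illustrative assumptions, not the paper's DFIG design):

```python
import math

def adjustable_gain(error, k_min=5.0, slope=20.0):
    """Gain varying linearly with the absolute error, as the record describes
    (k_min and slope are hypothetical tuning values)."""
    return k_min + slope * abs(error)

def super_twisting_step(s, v, dt):
    """One Euler step of the super-twisting algorithm, with both gains
    scheduled on the sliding variable s itself."""
    k1 = adjustable_gain(s)
    k2 = 0.5 * k1
    sgn = (s > 0) - (s < 0)
    u = -k1 * math.sqrt(abs(s)) * sgn + v   # continuous part
    v = v - k2 * sgn * dt                   # integral (discontinuous) part
    return u, v

# drive a first-order sliding variable with a slow bounded disturbance to zero
s, v, dt = 1.0, 0.0, 1e-3
for k in range(5000):
    u, v = super_twisting_step(s, v, dt)
    s += (u + 0.2 * math.sin(0.001 * k)) * dt
```

    A large initial error raises the gains for a quick transient; as the error shrinks, the gains fall back toward `k_min`, limiting chattering near the sliding surface.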

  20. Adjustment and Characterization of an Original Model of Chronic Ischemic Heart Failure in Pig

    Directory of Open Access Journals (Sweden)

    Laurent Barandon

    2010-01-01

    Full Text Available We present and characterize an original experimental model to create a chronic ischemic heart failure in pig. Two ameroid constrictors were placed around the LAD and the circumflex artery. Two months after surgery, pigs presented a poor LV function associated with a severe mitral valve insufficiency. Echocardiography analysis showed substantial anomalies in radial and circumferential deformations, both on the anterior and lateral surface of the heart. These anomalies in function were coupled with anomalies of perfusion observed in echocardiography after injection of contrast medium. No demonstration of myocardial infarction was observed with histological analysis. Our findings suggest that we were able to create and to stabilize a chronic ischemic heart failure model in the pig. This model represents a useful tool for the development of new medical or surgical treatment in this field.

  1. Investigating the prostate specific antigen, body mass index and age relationship: is an age-BMI-adjusted PSA model clinically useful?

    Science.gov (United States)

    Harrison, Sean; Tilling, Kate; Turner, Emma L; Lane, J Athene; Simpkin, Andrew; Davis, Michael; Donovan, Jenny; Hamdy, Freddie C; Neal, David E; Martin, Richard M

    2016-12-01

    Previous studies indicate a possible inverse relationship between prostate-specific antigen (PSA) and body mass index (BMI), and a positive relationship between PSA and age. We investigated the associations between age, BMI, PSA, and screen-detected prostate cancer to determine whether an age-BMI-adjusted PSA model would be clinically useful for detecting prostate cancer. Cross-sectional analysis nested within the UK ProtecT trial of treatments for localized cancer. Of 18,238 men aged 50-69 years, 9,457 men without screen-detected prostate cancer (controls) and 1,836 men with prostate cancer (cases) met inclusion criteria: no history of prostate cancer or diabetes; PSA BMI between 15 and 50 kg/m(2). Multivariable linear regression models were used to investigate the relationship between log-PSA, age, and BMI in all men, controlling for prostate cancer status. In the 11,293 included men, the median PSA was 1.2 ng/ml (IQR: 0.7-2.6); mean age 61.7 years (SD 4.9); and mean BMI 26.8 kg/m(2) (SD 3.7). There were a 5.1% decrease in PSA per 5 kg/m(2) increase in BMI (95% CI 3.4-6.8) and a 13.6% increase in PSA per 5-year increase in age (95% CI 12.0-15.1). Interaction tests showed no evidence for different associations between age, BMI, and PSA in men above and below 3.0 ng/ml (all p for interaction >0.2). The age-BMI-adjusted PSA model performed as well as an age-adjusted model based on National Institute for Health and Care Excellence (NICE) guidelines at detecting prostate cancer. Age and BMI were associated with small changes in PSA. An age-BMI-adjusted PSA model is no more clinically useful for detecting prostate cancer than current NICE guidelines. Future studies looking at the effect of different variables on PSA, independent of their effect on prostate cancer, may improve the discrimination of PSA for prostate cancer.
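    Applied multiplicatively (consistent with a linear model on log-PSA), the reported effect sizes imply an adjustment factor such as the following (cohort means are taken from the record; the function itself is an illustration, not the study's published model):

```python
def psa_adjustment_factor(age, bmi, ref_age=61.7, ref_bmi=26.8):
    """Multiplicative PSA adjustment implied by the reported effect sizes:
    -5.1% per +5 kg/m^2 of BMI and +13.6% per +5 years of age, relative to
    the cohort means (an illustrative use of the published coefficients)."""
    age_factor = 1.136 ** ((age - ref_age) / 5.0)
    bmi_factor = 0.949 ** ((bmi - ref_bmi) / 5.0)
    return age_factor * bmi_factor

# a man five years older and 5 kg/m^2 heavier than the cohort means
factor = psa_adjustment_factor(66.7, 31.8)
```

    The small magnitude of such factors is consistent with the record's conclusion that the age-BMI-adjusted model adds little over age-adjusted NICE thresholds.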

  2. Users Guide to SAMINT: A Code for Nuclear Data Adjustment with SAMMY Based on Integral Experiments

    Energy Technology Data Exchange (ETDEWEB)

    Sobes, Vladimir [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Leal, Luiz C. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Arbanas, Goran [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States)

    2014-10-01

    The purpose of this project is to couple differential and integral data evaluation in a continuous-energy framework. More specifically, the goal is to use the Generalized Linear Least Squares methodology employed in TSURFER to update the parameters of a resolved resonance region evaluation directly. Recognizing that the GLLS methodology in TSURFER is identical to the mathematical description of the simple Bayesian updating carried out in SAMMY, the computer code SAMINT was created to help use the mathematical machinery of SAMMY to update resolved resonance parameters based on integral data. Minimal modifications of SAMMY are required when used with SAMINT to make resonance parameter updates based on integral experimental data.
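    The GLLS/Bayesian update that SAMINT leverages can be sketched in matrix form (a generic textbook form of the update with toy numbers; not SAMMY's or TSURFER's implementation):

```python
import numpy as np

def glls_update(p, M, G, d, f_p, V):
    """One Generalized Linear Least Squares (Bayesian) update of parameters p
    with prior covariance M, sensitivities G = df/dp, integral measurements d
    with covariance V, and current predictions f_p."""
    S = G @ M @ G.T + V                  # innovation covariance
    K = M @ G.T @ np.linalg.inv(S)       # gain
    p_new = p + K @ (d - f_p)            # updated (resonance) parameters
    M_new = M - K @ G @ M                # updated parameter covariance
    return p_new, M_new

# toy 2-parameter, 1-measurement example (all numbers illustrative)
p = np.array([1.0, 2.0])                 # prior parameter values
M = np.diag([0.1, 0.1])                  # prior covariance
G = np.array([[1.0, 1.0]])               # sensitivity of the integral response
d = np.array([3.5])                      # measured integral response
f_p = np.array([3.0])                    # computed response at prior p
p_new, M_new = glls_update(p, M, G, d, f_p, np.array([[0.05]]))
```

    The update pulls the predicted response toward the integral measurement while shrinking the parameter covariance, which is the sense in which integral data "adjust" the resolved resonance parameters.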

  3. Using an adjusted Serfling regression model to improve the early warning at the arrival of peak timing of influenza in Beijing.

    Directory of Open Access Journals (Sweden)

    Xiaoli Wang

    Full Text Available Serfling-type periodic regression models have been widely used to identify and analyse epidemics of influenza. In these approaches, the baseline is traditionally determined using cleaned historical non-epidemic data. However, we found that the previous exclusion of epidemic seasons was empirical, since year-to-year variations in the seasonal pattern of activity had been ignored. Therefore, excluding fixed 'epidemic' months did not seem reasonable. We made some adjustments to the rule of epidemic-period removal to avoid a potentially subjective definition of the start and end of epidemic periods. We fitted the baseline iteratively. Firstly, we established a Serfling regression model based on the actual observations without any removals. After that, instead of manually excluding a predefined 'epidemic' period (the traditional method), we excluded observations which exceeded a calculated boundary. We then established the Serfling regression once more using the cleaned data and again excluded observations which exceeded a calculated boundary. We repeated this process until the R2 value stopped increasing. In addition, the definitions of the onset of an influenza epidemic were heterogeneous, which might make it impossible to accurately evaluate the performance of alternative approaches. We therefore used this modified model to detect the peak timing of influenza instead of the onset of the epidemic and compared this model with traditional Serfling models using observed weekly case counts of influenza-like illness (ILIs), in terms of sensitivity, specificity and lead time. A better performance was observed. In summary, we provide an adjusted Serfling model which may have improved performance over traditional models in early warning at the arrival of the peak timing of influenza.
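    The iterative baseline-fitting procedure described in this record can be sketched as follows (a minimal illustration on synthetic data; the 1.96·σ exclusion boundary and the single-harmonic design matrix are assumptions, not the authors' exact specification):

```python
import numpy as np

def iterative_serfling(counts, period=52, z=1.96, max_iter=20):
    """Fit a Serfling-type harmonic regression, drop observations above the
    fitted upper boundary, and refit until R^2 stops increasing."""
    t = np.arange(len(counts), dtype=float)
    X = np.column_stack([np.ones_like(t), t,                  # level + trend
                         np.sin(2 * np.pi * t / period),      # seasonal terms
                         np.cos(2 * np.pi * t / period)])
    y = np.asarray(counts, dtype=float)
    keep = np.ones(len(y), dtype=bool)
    best_r2 = -np.inf
    for _ in range(max_iter):
        beta, *_ = np.linalg.lstsq(X[keep], y[keep], rcond=None)
        fitted = X @ beta
        resid = y[keep] - fitted[keep]
        r2 = 1.0 - np.sum(resid ** 2) / np.sum((y[keep] - y[keep].mean()) ** 2)
        if r2 <= best_r2:
            break
        best_r2 = r2
        keep = y <= fitted + z * resid.std()   # exclude epidemic excesses
    return beta, best_r2, keep

# synthetic weekly ILI counts: seasonal baseline plus an epidemic spike
t = np.arange(156)
y = 100.0 + 20.0 * np.sin(2 * np.pi * t / 52)
y[20:26] += 150.0                              # six-week epidemic excess
beta, r2, keep = iterative_serfling(y)
```

    Because the exclusion boundary is recomputed from each refit rather than from a fixed calendar window, year-to-year shifts in the epidemic period are handled automatically, which is the point of the adjustment.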

  4. Using an adjusted Serfling regression model to improve the early warning at the arrival of peak timing of influenza in Beijing.

    Science.gov (United States)

    Wang, Xiaoli; Wu, Shuangsheng; MacIntyre, C Raina; Zhang, Hongbin; Shi, Weixian; Peng, Xiaomin; Duan, Wei; Yang, Peng; Zhang, Yi; Wang, Quanyi

    2015-01-01

    Serfling-type periodic regression models have been widely used to identify and analyse influenza epidemics. In these approaches, the baseline is traditionally determined using cleaned historical non-epidemic data. However, we found that the previous exclusion of epidemic seasons was empirical, since year-to-year variations in the seasonal pattern of activity had been ignored. Therefore, excluding fixed 'epidemic' months did not seem reasonable. We made some adjustments to the rule of epidemic-period removal to avoid a potentially subjective definition of the start and end of epidemic periods, and fitted the baseline iteratively. First, we established a Serfling regression model based on the actual observations without any removals. Then, instead of manually excluding a predefined 'epidemic' period (the traditional method), we excluded observations which exceeded a calculated boundary. We re-established the Serfling regression using the cleaned data, again excluded observations which exceeded a calculated boundary, and repeated this process until the R2 value stopped increasing. In addition, the definitions of the onset of an influenza epidemic were heterogeneous, which might make it impossible to accurately evaluate the performance of alternative approaches. We therefore used this modified model to detect the peak timing of influenza instead of the onset of an epidemic, and compared it with traditional Serfling models using observed weekly case counts of influenza-like illness (ILIs), in terms of sensitivity, specificity and lead time. A better performance was observed. In summary, we provide an adjusted Serfling model which may have improved performance over traditional models in early warning at the arrival of the peak timing of influenza.

  5. A bidirection-adjustable ionic current rectification system based on a biconical micro-channel.

    Science.gov (United States)

    Chang, Fengxia; Chen, Cheng; Xie, Xia; Chen, Lisha; Li, Meixian; Zhu, Zhiwei

    2015-10-25

    We developed a simple, inexpensive, bidirectionally adjustable ionic current rectification system based on the integration of a biconical micro-channel with a working electrode and a reference electrode. This system may prove valuable for studying two-way ionic transport across the cell membrane.

  6. Evaluating the Investment Benefit of Multinational Enterprises' International Projects Based on Risk Adjustment: Evidence from China

    Science.gov (United States)

    Chen, Chong

    2016-01-01

    This study examines the international risks faced by multinational enterprises to understand their impact on the evaluation of investment projects. Moreover, it establishes a 'three-dimensional' theoretical framework of risk identification to analyse the composition of international risk indicators of multinational enterprises based on the theory…

  7. Performance Evaluation of Electronic Inductor-Based Adjustable Speed Drives with Respect to Line Current Interharmonics

    DEFF Research Database (Denmark)

    Soltani, Hamid; Davari, Pooya; Zare, Firuz;

    2017-01-01

    attractive due to its improved harmonic performance compared to a conventional ASD. In this digest, the input currents of the EI-based ASD are investigated and compared with the conventional ASDs with respect to interharmonics, which is an emerging power quality topic. First, the main causes...

  9. Enhanced TDMA Based Anti-Collision Algorithm with a Dynamic Frame Size Adjustment Strategy for Mobile RFID Readers

    Science.gov (United States)

    Shin, Kwang Cheol; Park, Seung Bo; Jo, Geun Sik

    2009-01-01

    In the fields of production, manufacturing and supply chain management, Radio Frequency Identification (RFID) is regarded as one of the most important technologies. Nowadays, Mobile RFID, which is often installed in carts or forklift trucks, is increasingly being applied to the search for and checkout of items in warehouses, supermarkets, libraries and other industrial fields. In using Mobile RFID, since the readers are continuously moving, they can interfere with each other when they attempt to read the tags. In this study, we suggest a Time Division Multiple Access (TDMA) based anti-collision algorithm for Mobile RFID readers. Our algorithm automatically adjusts the frame size of each reader without using manual parameters by adopting the dynamic frame size adjustment strategy when collisions occur at a reader. Through experiments on a simulated environment for Mobile RFID readers, we show that the proposed method improves the number of successful transmissions by about 228% on average, compared with Colorwave, a representative TDMA based anti-collision algorithm. PMID:22399942

  10. Enhanced TDMA Based Anti-Collision Algorithm with a Dynamic Frame Size Adjustment Strategy for Mobile RFID Readers

    Directory of Open Access Journals (Sweden)

    Kwang Cheol Shin

    2009-02-01

    Full Text Available In the fields of production, manufacturing and supply chain management, Radio Frequency Identification (RFID) is regarded as one of the most important technologies. Nowadays, Mobile RFID, which is often installed in carts or forklift trucks, is increasingly being applied to the search for and checkout of items in warehouses, supermarkets, libraries and other industrial fields. In using Mobile RFID, since the readers are continuously moving, they can interfere with each other when they attempt to read the tags. In this study, we suggest a Time Division Multiple Access (TDMA)-based anti-collision algorithm for Mobile RFID readers. Our algorithm automatically adjusts the frame size of each reader without using manual parameters by adopting the dynamic frame size adjustment strategy when collisions occur at a reader. Through experiments on a simulated environment for Mobile RFID readers, we show that the proposed method improves the number of successful transmissions by about 228% on average, compared with Colorwave, a representative TDMA-based anti-collision algorithm.
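The collision-driven frame-size idea can be illustrated with a toy framed-TDMA simulation. The doubling/halving policy below is a common illustrative choice for dynamic frame adjustment, not necessarily the exact strategy of the paper:

```python
import random

def run_frame(n_readers, frame_size, rng):
    """Each reader transmits in one randomly chosen slot of a TDMA frame."""
    slots = [0] * frame_size
    for _ in range(n_readers):
        slots[rng.randrange(frame_size)] += 1
    successes = sum(1 for s in slots if s == 1)
    collisions = sum(1 for s in slots if s > 1)
    idles = frame_size - successes - collisions
    return successes, collisions, idles

def adjust_frame_size(frame_size, collisions, idles, lo=2, hi=1024):
    # grow the frame when collisions dominate, shrink it when mostly idle
    if collisions > idles:
        return min(frame_size * 2, hi)
    if idles > collisions:
        return max(frame_size // 2, lo)
    return frame_size
```

Running `run_frame` and `adjust_frame_size` in a loop drives the frame size toward the reader population, which is the behaviour such strategies aim for without any manually tuned parameters.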

  11. Citizens' Perceptions of Flood Hazard Adjustments: An Application of the Protective Action Decision Model

    Science.gov (United States)

    Terpstra, Teun; Lindell, Michael K.

    2013-01-01

    Although research indicates that adoption of flood preparations among Europeans is low, only a few studies have attempted to explain citizens' preparedness behavior. This article applies the Protective Action Decision Model (PADM) to explain flood preparedness intentions in the Netherlands. Survey data ("N" = 1,115) showed that…

  12. Adjustment of Homeless Adolescents to a Crisis Shelter: Application of a Stress and Coping Model.

    Science.gov (United States)

    Dalton, Melanie M.; Pakenham, Kenneth I.

    2002-01-01

    Examined the usefulness of a stress and coping model of adaptation to a homeless shelter among 78 homeless adolescents who were interviewed and completed measures at shelter entrance and discharge. After controlling for relevant background variables, measures of coping resources, appraisal, and coping strategies showed relations with measures of…

  14. An improved canopy wind model for predicting wind adjustment factors and wildland fire behavior

    Science.gov (United States)

    W. J. Massman; J. M. Forthofer; M. A. Finney

    2017-01-01

    The ability to rapidly estimate wind speed beneath a forest canopy or near the ground surface in any vegetation is critical to practical wildland fire behavior models. The common metric of this wind speed is the "mid-flame" wind speed, U_MF. However, the existing approach for estimating U_MF has some significant shortcomings. These include the assumptions that...

  15. The linear quadratic adjustment cost model and the demand for labour

    DEFF Research Database (Denmark)

    Engsted, Tom; Haldrup, Niels

    1994-01-01

    A new method is developed for estimating and testing the linear quadratic adjustment cost model when the underlying time series are non-stationary, and the method is applied to modelling labour demand in Danish industrial sectors....

  16. Self-Tuning Insulin Adjustment Algorithm for Type 1 Diabetic Patients based on Multi-Doses Regime

    Directory of Open Access Journals (Sweden)

    D. U. Campos-Delgado

    2005-01-01

    Full Text Available A self-tuning algorithm is presented for on-line insulin dosage adjustment in type 1 diabetic patients (chronic stage). The suggested algorithm needs no information about the patient's insulin–glucose dynamics (it is model-free). Three doses are programmed daily, where a combination of two types of insulin, rapid/short-acting and intermediate/long-acting, is injected into the patient through a subcutaneous route. The dose adaptation is performed by reducing the error of the blood glucose level from euglycemia. In this way, a total of five doses are tuned per day, three rapid/short and two intermediate/long, with a large penalty to avoid hypoglycemic scenarios. Closed-loop simulation results are illustrated using a detailed nonlinear model of the subcutaneous insulin–glucose dynamics in a type 1 diabetic patient with meal intake.
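The error-driven, asymmetrically penalised adaptation can be sketched as follows. All numbers (target, gain, hypoglycemia threshold, penalty factor) are hypothetical placeholders, not the paper's tuning rule:

```python
def update_doses(doses, glucose, target=100.0, gain=0.02,
                 hypo=70.0, penalty=4.0):
    """Model-free dose adaptation sketch: nudge each insulin dose in
    proportion to the glucose error (mg/dL) observed at its associated
    measurement time; hypoglycemia triggers a much larger correction."""
    new_doses = []
    for dose, g in zip(doses, glucose):
        step = gain * (g - target)
        if g < hypo:            # large penalty against hypoglycemia:
            step *= penalty     # cut the dose far more aggressively
        new_doses.append(max(0.0, dose + step))
    return new_doses
```

A reading of 150 mg/dL nudges the corresponding dose up slightly, while a reading of 60 mg/dL (below the hypoglycemia threshold) cuts it several times harder, reflecting the asymmetric penalty described in the abstract.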

  17. Emergy-Based Adjustment of the Agricultural Structure in a Low-Carbon Economy in Manas County of China

    Directory of Open Access Journals (Sweden)

    Sergio Ulgiati

    2011-09-01

    Full Text Available The emergy concept, integrated with a multi-objective linear programming method, was used to model the agricultural structure of Xinjiang Uygur Autonomous Region with the need to develop a low-carbon economy taken into account. The emergy indices before and after the structural optimization were evaluated. In the reconstructed model, the proportions of agriculture, forestry and artificial grassland should be adjusted from 19:2:1 to 5.2:1:2.5; the Emergy Yield Ratio (1.48) was higher than the average local (0.49) and national (0.27) levels; the Emergy Investment Ratio (11.1) was higher than that of the current structure (4.93) and that obtained from the 2003 data (0.055) in Xinjiang Uygur Autonomous Region; and the Water Emergy Cost (0.055) should be reduced compared with that before the adjustment (0.088). The measurement of all the parameters validated the positive impact of the modeled agricultural structure. The self-sufficiency ratio of the system increased from the original level of 0.106 to 0.432, which indicated a better coupling effect among the subsystems within the whole system. The comparative advantage index between the two systems before and after optimization was approximately 2:1. When the mountain ecosystem service value was considered, excessive animal husbandry led to a 1.41 × 10^10 RMB·a^-1 indirect economic loss, which was 4.15 times the GDP during the same time period. The functional improvement of the modeled structure supports the plan to “construct a central oasis and protect the surrounding mountains and deserts” to develop a sustainable agricultural system. Conserved natural grassland can make a large contribution to carbon storage and is therefore a wise alternative that promotes a low-carbon economic development strategy.

  18. First Year Student Adjustment, Success, and Retention: Structural Models of Student Persistence Using Electronic Portfolios

    Science.gov (United States)

    Sandler, Martin E.

    2010-01-01

    This study explores the deployment of electronic portfolios to a university-wide cohort of freshman undergraduates that included a subgroup of at-risk and lower academically prepared learners. Five evaluative dimensions based on persistence and engagement theory were included in the development of four assessment rubrics exploring goal clarity,…

  19. JOINT ALIGNMENT OF UNDERWATER AND ABOVE-THE-WATER PHOTOGRAMMETRIC 3D MODELS BY INDEPENDENT MODELS ADJUSTMENT

    Directory of Open Access Journals (Sweden)

    F. Menna

    2015-04-01

    Full Text Available The surveying and 3D modelling of objects that extend both below and above the water level, such as ships, harbour structures and offshore platforms, is still an open issue. Commonly, a combined and simultaneous survey is the adopted solution, with acoustic/optical sensors respectively underwater and in air (the most common choice) or optical/optical sensors both below and above the water level. In both cases the system must be calibrated, and a ship has to be used and properly equipped, also with a navigation system for the alignment of sequential 3D point clouds. Such a system is usually highly expensive and has been proven to work with stationary structures; for free-floating objects, on the other hand, it does not provide a very practical solution. In this contribution, a flexible, low-cost alternative for surveying floating objects is presented. The method is essentially based on photogrammetry, employed for surveying and modelling both the emerged and submerged parts of the object. Special targets, named Orientation Devices, are specifically designed and adopted for the successive alignment of the two photogrammetric models (underwater and in air). A typical scenario where the proposed procedure can be particularly suitable and effective is the case of a ship whose damaged part is underwater after an accident and needs to be measured (Figure 1). The details of the mathematical procedure are provided in the paper, together with a critical explanation of the results obtained from the adoption of the method for the survey of a small pleasure boat in floating condition.

  20. Pervasive Computing Location-aware Model Based on Ontology

    Institute of Scientific and Technical Information of China (English)

    PU Fang; CAI Hai-bin; CAO Qi-ying; SUN Dao-qing; LI Tong

    2008-01-01

    In order to integrate heterogeneous location-aware systems into a pervasive computing environment, a novel pervasive computing location-aware model based on ontology is presented, and a location-aware model ontology (LMO) is constructed. The location-aware model has the capabilities of sharing knowledge, reasoning and dynamically adjusting the usage policies of services in a unified semantic location manner. Finally, the working process of the proposed location-aware model is illustrated by an application scenario.

  1. Research on Modeling of Hydropneumatic Suspension Based on Fractional Order

    OpenAIRE

    Junwei Zhang; Sizhong Chen; Yuzhuang Zhao; Jianbo Feng; Chang Liu; Ying Fan

    2015-01-01

    With such excellent properties as nonlinear stiffness, adjustable vehicle height, and good vibration resistance, hydropneumatic suspension (HS) has been more and more widely applied to heavy vehicles and engineering vehicles. Traditional modeling methods are still confined to simple models that do not take many factors into consideration. A hydropneumatic suspension model based on fractional order (HSM-FO) is built, exploiting the advantage of fractional order (FO) in viscoelastic material modeling considerin...

  2. Modeling of multivariate longitudinal phenotypes in family genetic studies with Bayesian multiplicity adjustment

    OpenAIRE

    Ding, Lili; Kurowski, Brad G; He, Hua; Alexander, Eileen S.; Mersha, Tesfaye B.; Fardo, David W.; Zhang, Xue; Pilipenko, Valentina V; Kottyan, Leah; Martin, Lisa J.

    2014-01-01

    Genetic studies often collect data on multiple traits. Most genetic association analyses, however, consider traits separately and ignore potential correlation among traits, partially because of difficulties in statistical modeling of multivariate outcomes. When multiple traits are measured in a pedigree longitudinally, additional challenges arise because in addition to correlation between traits, a trait is often correlated with its own measures over time and with measurements of other family...

  3. Adjustment of carbon fluxes to light conditions regulates the daily turnover of starch in plants: a computational model.

    Science.gov (United States)

    Pokhilko, Alexandra; Flis, Anna; Sulpice, Ronan; Stitt, Mark; Ebenhöh, Oliver

    2014-03-04

    In the light, photosynthesis provides carbon for metabolism and growth. In the dark, plant growth depends on carbon reserves that were accumulated during previous light periods. Many plants accumulate part of their newly fixed carbon as starch in their leaves during the day and remobilise it to support metabolism and growth at night. The daily rhythms of starch accumulation and degradation are dynamically adjusted to the changing light conditions such that starch is almost, but not totally, exhausted at dawn. This requires the allocation of a larger proportion of the newly fixed carbon to starch under low-carbon conditions, and the use of information about the carbon status at the end of the light period and the length of the night to pace the rate of starch degradation. This regulation occurs in a circadian clock-dependent manner, through unknown mechanisms. We use mathematical modelling to explore possible diurnal mechanisms regulating the starch level. Our model combines the main reactions of carbon fixation, starch and sucrose synthesis, starch degradation and consumption of carbon by sink tissues. To describe the dynamic adjustment of starch to daily conditions, we introduce diurnal regulators of carbon fluxes, which modulate the activities of the key steps of starch metabolism. The sensing of the diurnal conditions is mediated in our model by the timer α and the "dark sensor" β, which integrate daily information about the light conditions and the time of day through the circadian clock. Our data identify the β subunit of SnRK1 kinase as a good candidate for the role of the dark-accumulated component β of our model. This novel approach to understanding starch kinetics through diurnal metabolic and circadian sensors allowed us to explain starch time courses in plants and to predict the kinetics of the proposed diurnal regulators under various genetic and environmental perturbations.
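The pacing principle, dividing the remaining starch by the time left until dawn, can be sketched numerically. This is a simplified reading of clock-paced degradation under arbitrary units, not the authors' full ODE model:

```python
def starch_overnight(s_dusk, night_len, dt=0.1):
    """Degrade starch at rate = remaining starch / time left until dawn,
    so that reserves run out almost exactly at dawn (linear decay)."""
    s, t, trajectory = s_dusk, 0.0, [s_dusk]
    while t < night_len - 1e-9:
        rate = s / (night_len - t)      # clock-paced division
        s = max(0.0, s - rate * dt)
        t += dt
        trajectory.append(s)
    return trajectory
```

Because the rate is recomputed from the current starch level and the remaining night length, the decay is linear and self-correcting: a shorter anticipated night simply yields a proportionally faster degradation rate.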

  4. A data-driven model for constraint of present-day glacial isostatic adjustment in North America

    Science.gov (United States)

    Simon, K. M.; Riva, R. E. M.; Kleinherenbrink, M.; Tangdamrongsub, N.

    2017-09-01

    Geodetic measurements of vertical land motion and gravity change are incorporated into an a priori model of present-day glacial isostatic adjustment (GIA) in North America via least-squares adjustment. The result is an updated GIA model wherein the final predicted signal is informed by both observational data, and prior knowledge (or intuition) of GIA inferred from models. The data-driven method allows calculation of the uncertainties of predicted GIA fields, and thus offers a significant advantage over predictions from purely forward GIA models. In order to assess the influence each dataset has on the final GIA prediction, the vertical land motion and GRACE-measured gravity data are incorporated into the model first independently (i.e., one dataset only), then simultaneously. The relative weighting of the datasets and the prior input is iteratively determined by variance component estimation in order to achieve the most statistically appropriate fit to the data. The best-fit model is obtained when both datasets are inverted and gives respective RMS misfits to the GPS and GRACE data of 1.3 mm/yr and 0.8 mm/yr equivalent water layer change. Non-GIA signals (e.g., hydrology) are removed from the datasets prior to inversion. The post-fit residuals between the model predictions and the vertical motion and gravity datasets, however, suggest particular regions where significant non-GIA signals may still be present in the data, including unmodeled hydrological changes in the central Prairies west of Lake Winnipeg. Outside of these regions of misfit, the posterior uncertainty of the predicted model provides a measure of the formal uncertainty associated with the GIA process; results indicate that this quantity is sensitive to the uncertainty and spatial distribution of the input data as well as that of the prior model information. In the study area, the predicted uncertainty of the present-day GIA signal ranges from ∼0.2-1.2 mm/yr for rates of vertical land motion, and
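At its core, the data-driven update described above fuses a prior model prediction with a geodetic observation of the same quantity. A minimal per-point sketch with inverse-variance weighting, ignoring spatial covariance and the iterative variance component estimation of the full method:

```python
def fuse_prior_and_observation(x_prior, var_prior, y_obs, var_obs):
    """Inverse-variance weighting of a prior model value and an
    observation of the same quantity (e.g. an uplift rate in mm/yr)."""
    w_prior, w_obs = 1.0 / var_prior, 1.0 / var_obs
    x_post = (w_prior * x_prior + w_obs * y_obs) / (w_prior + w_obs)
    var_post = 1.0 / (w_prior + w_obs)   # always below either input
    return x_post, var_post
```

The posterior variance is always smaller than both inputs, which is the formal sense in which the predicted GIA field "is informed by both observational data and prior knowledge" and carries a calculable uncertainty.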

  5. A Model of Appropriate Self-Adjustment of Farmers who Grow Para Rubber (Hevea brasiliensis in Northeast Thailand

    Directory of Open Access Journals (Sweden)

    Montri Srirajlao

    2010-01-01

    Full Text Available Problem statement: Para Rubber is an economic tree crop grown in Northeast Thailand, playing an economic and social role. The objectives of this research were to study: (1) the economic, social and cultural lifestyle and (2) the appropriate adjustment model of agriculturists or farmers growing Para Rubber in Northeast Thailand. Approach: The research area covered 6 provinces: Mahasarakam, Roi-ed, Khon Kaen, Nongkai, Udontani and Loei. The samples were selected by purposive sampling, including 90 experts, 60 practitioners and 60 general people. The instruments used for collecting data were: (1) the Interview Form, (2) the Observation Form, (3) Focus Group Discussion and (4) Workshop, investigated by triangulation. Data were analyzed according to the specified objectives and presented in descriptive analysis. Results: The farmers' lifestyle in the traditional period of Northeast Thailand was to earn their living by producing for themselves and sharing resources with each other, including rice farming, farm rice growing, vegetable garden growing and searching for natural food, without wasting one's capital. In the period of change, when the price of traditional industrial crops was lowered, the agriculturists began to grow Para Rubber instead, following promotion by the governmental industrial sector. Regarding the economic, social and cultural changes, it was found that the agriculturists growing Para Rubber plantations had more revenue, but the stability of the market price and selling mechanism was tied to the political situation. As for the pattern of adjustment of the agriculturists growing Para Rubber plantations in Northeast Thailand, it was found that there was an adjustment at the individual level towards developing self-study, by applying a body of knowledge learned from the experience of successful people and from being employed in cutting Para Rubber in the South of Thailand, as well as academic support and selling to serve the needs of farmers. Conclusion/Recommendations: Para Rubber

  6. Assessment of regression models for adjustment of iron status biomarkers for inflammation in children with moderate acute malnutrition in Burkina Faso

    DEFF Research Database (Denmark)

    Cichon, Bernardette; Ritz, Christian; Fabiansen, Christian

    2017-01-01

    measured in serum. Generalized additive, quadratic, and linear models were used to model the relation between SF and sTfR as outcomes and CRP and AGP as categorical variables (model 1; equivalent to the CF approach), CRP and AGP as continuous variables (model 2), or CRP and AGP as continuous variables......: Crossvalidation revealed no advantage to using generalized additive or quadratic models over linear models in terms of the RMSE. Linear model 3 performed better than models 2 and 1. Furthermore, we found no difference in CFs for adjusting SF and those from a previous meta-analysis. Adjustment of SF and s...... of inflammation into account. In clinical settings, the CF approach may be more practical. There is no benefit from adjusting sTfR. This trial was registered at www.controlled-trials.com as ISRCTN42569496....

  7. Salary adjustments

    CERN Multimedia

    HR Department

    2008-01-01

    In accordance with decisions taken by the Finance Committee and Council in December 2007, salaries are adjusted with effect from 1 January 2008. Scale of basic salaries and scale of stipends paid to fellows (Annex R A 5 and R A 6 respectively): increased by 0.71% with effect from 1 January 2008. As a result of the stability of the Geneva consumer price index, the following elements do not increase: a) Family Allowance, Child Allowance and Infant Allowance (Annex R A 3); b) Reimbursement of education fees: maximum amounts of reimbursement (Annex R A 4.01) for the academic year 2007/2008. Related adjustments will be applied, wherever applicable, to Paid Associates and Students. As in the past, the actual percentage increase of each salary position may vary, due to the application of a constant step value and rounding effects. Human Resources Department Tel. 73566

  8. Salary adjustments

    CERN Multimedia

    HR Department

    2008-01-01

    In accordance with decisions taken by the Finance Committee and Council in December 2007, salaries are adjusted with effect from 1 January 2008. Scale of basic salaries and scale of stipends paid to fellows (Annex R A 5 and R A 6 respectively): increased by 0.71% with effect from 1 January 2008. As a result of the stability of the Geneva consumer price index, the following elements do not increase: a) Family Allowance, Child Allowance and Infant Allowance (Annex R A 3); b) Reimbursement of education fees: maximum amounts of reimbursement (Annex R A 4.01) for the academic year 2007/2008. Related adjustments will be implemented, wherever applicable, for Paid Associates and Students. As in the past, the actual percentage increase of each salary position may vary, due to the application of a constant step value and rounding effects. Human Resources Department Tel. 73566

  9. Models of Quality-Adjusted Life Years when Health varies over Time: Survey and Analysis

    DEFF Research Database (Denmark)

    Hansen, Kristian Schultz; Østerdal, Lars Peter

    2006-01-01

    time trade-off (TTO) and standard gamble (SG) scores. We investigate deterministic and probabilistic models and consider five different families of discounting functions in all. This discussion includes questioning the SG method as the gold standard of the health state index, re-examining the role...... of the constant-proportional trade-off condition, revisiting the problem of double discounting of QALYs, and suggesting that it is not a matter of choosing between TTO and SG procedures as the combination of these two can be used to disentangle risk aversion from discounting. We find that caution must be taken...
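The basic object under discussion, a discounted QALY total, can be written as V = Σ_t d(t)·q_t for yearly quality weights q_t. A minimal sketch with exponential discounting, one of the several discounting families the survey considers (the 3% default rate is a conventional illustration, not the paper's recommendation):

```python
def qaly_total(qualities, discount_rate=0.03):
    """Discounted QALY total V = sum_t q_t / (1 + r)^t for yearly
    quality weights q_t in [0, 1] (exponential discounting family)."""
    return sum(q / (1.0 + discount_rate) ** t
               for t, q in enumerate(qualities))
```

With a zero rate this reduces to a plain sum of quality weights; raising the rate shrinks the contribution of later health states, which is exactly the "double discounting" hazard the paper revisits when discounted weights are combined with discounted aggregation.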

  10. Models of quality-adjusted life years when health varies over time

    DEFF Research Database (Denmark)

    Hansen, Kristian Schultz; Østerdal, Lars Peter Raahave

    2006-01-01

    time tradeoff (TTO) and standard gamble (SG) scores. We investigate deterministic and probabilistic models and consider five different families of discounting functions in all. The second part of the paper discusses four issues recurrently debated in the literature. This discussion includes questioning...... the SG method as the gold standard for estimation of the health state index, reexamining the role of the constantproportional tradeoff condition, revisiting the problem of double discounting of QALYs, and suggesting that it is not a matter of choosing between TTO and SG procedures as the combination...

  11. Autonomous Parameter Adjustment for SSVEP-Based BCIs with a Novel BCI Wizard.

    Science.gov (United States)

    Gembler, Felix; Stawicki, Piotr; Volosyak, Ivan

    2015-01-01

    Brain-Computer Interfaces (BCIs) transfer human brain activities into computer commands and enable a communication channel without requiring movement. Among other BCI approaches, steady-state visual evoked potential (SSVEP)-based BCIs have the potential to become accurate, assistive technologies for persons with severe disabilities. Those systems require customization of different kinds of parameters (e.g., stimulation frequencies). Calibration usually requires selecting predefined parameters by experienced/trained personnel, though in real-life scenarios an interface allowing people with no experience in programming to set up the BCI would be desirable. Another occurring problem regarding BCI performance is BCI illiteracy (also called BCI deficiency). Many articles reported that BCI control could not be achieved by a non-negligible number of users. In order to bypass those problems we developed a SSVEP-BCI wizard, a system that automatically determines user-dependent key-parameters to customize SSVEP-based BCI systems. This wizard was tested and evaluated with 61 healthy subjects. All subjects were asked to spell the phrase "RHINE WAAL UNIVERSITY" with a spelling application after key parameters were determined by the wizard. Results show that all subjects were able to control the spelling application. A mean (SD) accuracy of 97.14 (3.73)% was reached (all subjects reached an accuracy above 85% and 25 subjects even reached 100% accuracy).

  12. Autonomous parameter adjustment for SSVEP-based BCIs with a novel BCI Wizard

    Directory of Open Access Journals (Sweden)

    Felix eGembler

    2015-12-01

    Full Text Available Brain-computer interfaces (BCIs) transfer human brain activities into computer commands and enable a communication channel without requiring movement. Among other BCI approaches, steady-state visual evoked potential (SSVEP)-based BCIs have the potential to become accurate, assistive technologies for persons with severe disabilities. Those systems require customization of different kinds of parameters (e.g., stimulation frequencies). Calibration usually requires selecting predefined parameters by experienced/trained personnel, though in real-life scenarios an interface allowing people with no experience in programming to set up the BCI would be desirable. Another occurring problem regarding BCI performance is BCI illiteracy (also called BCI deficiency). Many articles reported that BCI control could not be achieved by a non-negligible number of users. In order to bypass those problems we developed a SSVEP-BCI wizard, a system that automatically determines user-dependent key parameters to customize SSVEP-based BCI systems. This wizard was tested and evaluated with 61 healthy subjects. All subjects were asked to spell the phrase "RHINE WAAL UNIVERSITY" with a spelling application after key parameters were determined by the wizard. Results show that all subjects were able to control the spelling application. A mean (SD) accuracy of 97.14 (3.73)% was reached (all subjects reached an accuracy above 85% and 25 subjects even reached 100% accuracy).

  13. Adjustable Two-Tier Cache for IPTV Based on Segmented Streaming

    Directory of Open Access Journals (Sweden)

    Kai-Chun Liang

    2012-01-01

    Full Text Available Internet protocol TV (IPTV) is a promising Internet killer application, which integrates video, voice, and data onto a single IP network, and offers viewers an innovative set of choices and control over their TV content. To provide high-quality IPTV services, an effective strategy is based on caching. This work proposes a segment-based two-tier caching approach, which divides each video into multiple segments to be cached. This approach also partitions the cache space into two layers, where the first layer mainly caches to-be-played segments and the second layer saves possibly played segments. As segment accesses become frequent, the proposed approach enlarges the first layer and reduces the second layer, and vice versa. Because requested segments may not be accessed frequently, this work further designs an admission control mechanism to determine whether an incoming segment should be cached or not. The cache architecture takes forward/stop playback into account and may replace unused segments under interrupted playback. Finally, we conduct comprehensive simulation experiments to evaluate the performance of the proposed approach. The results show that our approach can yield a higher hit ratio than previous work under various environmental parameters.
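The two-tier layout with an adaptive split can be sketched as below. The promotion and resize rules here are illustrative stand-ins; the paper's actual admission control and replacement policy are more elaborate:

```python
from collections import OrderedDict

class TwoTierSegmentCache:
    """Sketch of a segment cache split into a hot tier (to-be-played
    segments) and a cold tier (possibly-played segments)."""

    def __init__(self, capacity, tier1_frac=0.5):
        self.capacity = capacity
        self.t1_cap = max(1, int(capacity * tier1_frac))
        self.t1 = OrderedDict()   # tier 1: to-be-played (hot)
        self.t2 = OrderedDict()   # tier 2: possibly played (cold)

    def _evict(self):
        while len(self.t1) > self.t1_cap:
            seg, val = self.t1.popitem(last=False)   # demote oldest hot
            self.t2[seg] = val
        while len(self.t1) + len(self.t2) > self.capacity:
            self.t2.popitem(last=False)              # drop coldest

    def get(self, seg):
        if seg in self.t1:
            self.t1.move_to_end(seg)
            return True
        if seg in self.t2:
            self.t1[seg] = self.t2.pop(seg)          # promote on access
            # frequent segment access: grow tier 1 at tier 2's expense
            self.t1_cap = min(self.capacity - 1, self.t1_cap + 1)
            self._evict()
            return True
        return False

    def put(self, seg):
        if not self.get(seg):
            self.t1[seg] = True
            self._evict()
```

Segments enter the hot tier, age into the cold tier, and are promoted back on re-access; the promotion also nudges the tier boundary, mimicking the adaptive enlargement of the first layer under frequent segment access.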

  14. Devising a model brand loyalty in tires industry: the adjustment role of customer perceived value

    Directory of Open Access Journals (Sweden)

    Davoud Feiz

    2015-06-01

    Full Text Available Today, brands are a major focus for companies and market agents, and factors such as customers' brand loyalty affect the brand as well as sales and profit. The present paper studies the impact of brand experience, trust and satisfaction on loyalty to the Barez Tire brand in the city of Kerman, and provides a model for this case. The research population consists of all Barez Tire consumers in Kerman. The sample size was 171, drawn by simple random sampling. The data collection tool was a standard questionnaire, and Cronbach's alpha was used to measure its reliability. The research is applied in terms of purpose and descriptive-correlational in terms of data acquisition. To analyze the data, confirmatory factor analysis (CFA) and structural equation modeling (SEM) were carried out in SPSS and LISREL. The findings indicate that brand experience, brand trust, and brand satisfaction significantly affect loyalty to the Barez Tire brand in Kerman. Notably, the impact of these factors is higher when the moderating role of perceived value is considered.

  15. Wavelet-Based Color Pathological Image Watermark through Dynamically Adjusting the Embedding Intensity

    Directory of Open Access Journals (Sweden)

    Guoyan Liu

    2012-01-01

    Full Text Available This paper proposes a new dynamic and robust blind watermarking scheme for color pathological images based on the discrete wavelet transform (DWT). The binary watermark image is preprocessed before embedding: it is first scrambled by the Arnold cat map and then encrypted with a pseudorandom sequence generated by a robust chaotic map. The host image is divided into n×n blocks, and the encrypted watermark is embedded into the higher frequency domain of the blue component. The mean and variance of the subbands are calculated to dynamically modify the wavelet coefficients of a block according to the embedded 0 or 1, and thereby generate the detection threshold. We investigate the relationship between embedding intensity and threshold, and give the effective range of the threshold for extracting the watermark. Experimental results show that the scheme resists common distortions, performing particularly well against JPEG compression, additive noise, brightening, rotation, and cropping.
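
    The preprocessing stage (Arnold cat map scrambling followed by chaotic-map encryption) can be sketched in a few lines. The abstract does not name its chaotic map, so the logistic map and all parameter values below are illustrative stand-ins rather than the authors' choices.

    ```python
    # Sketch of binary-watermark preprocessing: Arnold cat map scrambling,
    # then XOR encryption with a chaotic keystream (here: logistic map).

    def arnold_cat(bits, iterations=1):
        """Scramble an N x N binary matrix with the Arnold cat map
        (x, y) -> ((x + y) mod N, (x + 2y) mod N), a bijection for any N."""
        n = len(bits)
        for _ in range(iterations):
            out = [[0] * n for _ in range(n)]
            for x in range(n):
                for y in range(n):
                    out[(x + y) % n][(x + 2 * y) % n] = bits[x][y]
            bits = out
        return bits

    def logistic_keystream(length, x0=0.7, r=3.99):
        """Pseudorandom bit sequence from the logistic map (an assumed,
        commonly used chaotic map; the paper's map may differ)."""
        bits, x = [], x0
        for _ in range(length):
            x = r * x * (1.0 - x)
            bits.append(1 if x > 0.5 else 0)
        return bits

    def preprocess(watermark, iterations=3):
        """Scramble, then encrypt by XOR with the chaotic keystream."""
        scrambled = arnold_cat(watermark, iterations)
        n = len(scrambled)
        key = logistic_keystream(n * n)
        return [[scrambled[i][j] ^ key[i * n + j] for j in range(n)]
                for i in range(n)]
    ```

    XOR-ing the encrypted matrix with the same keystream undoes the encryption, and the cat map is invertible, so the extractor can reverse both steps.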

  16. Wavelet-based color pathological image watermark through dynamically adjusting the embedding intensity.

    Science.gov (United States)

    Liu, Guoyan; Liu, Hongjun; Kadir, Abdurahman

    2012-01-01

    This paper proposes a new dynamic and robust blind watermarking scheme for color pathological images based on the discrete wavelet transform (DWT). The binary watermark image is preprocessed before embedding: it is first scrambled by the Arnold cat map and then encrypted with a pseudorandom sequence generated by a robust chaotic map. The host image is divided into n × n blocks, and the encrypted watermark is embedded into the higher frequency domain of the blue component. The mean and variance of the subbands are calculated to dynamically modify the wavelet coefficients of a block according to the embedded 0 or 1, and thereby generate the detection threshold. We investigate the relationship between embedding intensity and threshold, and give the effective range of the threshold for extracting the watermark. Experimental results show that the scheme resists common distortions, performing particularly well against JPEG compression, additive noise, brightening, rotation, and cropping.

  17. Tunable fluorescence enhancement based on bandgap-adjustable 3D Fe3O4 nanoparticles

    Science.gov (United States)

    Hu, Fei; Gao, Suning; Zhu, Lili; Liao, Fan; Yang, Lulu; Shao, Mingwang

    2016-06-01

    Great progress has been made in recent years in fluorescence-based detection using solid-state enhancement substrates. However, it is still difficult to achieve reliable substrates with tunable enhancement factors. The present work demonstrates liquid fluorescence-enhancement substrates consisting of suspensions of Fe3O4 nanoparticles (NPs), which assemble into 3D photonic crystals under an external magnetic field. The photonic bandgap, induced by the equilibrium of attractive magnetic force and repulsive electrostatic force between adjacent Fe3O4 NPs, is used to enhance the fluorescence intensity of dye molecules (including R6G, RB, Cy5, DMTPS-DCV) in a reversible and controllable manner. The results show that a maximum 12.3-fold fluorescence enhancement is realized in the 3D Fe3O4 NP substrates without the use of metal particles for PCs/DMTPS-DCV (1.0 × 10-7 M, water fraction (fw) = 90%).

  18. Sensitivity of palaeotidal models of the northwest European shelf seas to glacial isostatic adjustment since the Last Glacial Maximum

    Science.gov (United States)

    Ward, Sophie L.; Neill, Simon P.; Scourse, James D.; Bradley, Sarah L.; Uehara, Katsuto

    2016-11-01

    The spatial and temporal distribution of relative sea-level change over the northwest European shelf seas has varied considerably since the Last Glacial Maximum, due to eustatic sea-level rise and a complex isostatic response to deglaciation of both near- and far-field ice sheets. Because of the complex pattern of relative sea level changes, the region is an ideal focus for modelling the impact of significant sea-level change on shelf sea tidal dynamics. Changes in tidal dynamics influence tidal range, the location of tidal mixing fronts, dissipation of tidal energy, shelf sea biogeochemistry and sediment transport pathways. Significant advancements in glacial isostatic adjustment (GIA) modelling of the region have been made in recent years, and earlier palaeotidal models of the northwest European shelf seas were developed using output from less well-constrained GIA models as input to generate palaeobathymetric grids. We use the most up-to-date and well-constrained GIA model for the region as palaeotopographic input for a new high resolution, three-dimensional tidal model (ROMS) of the northwest European shelf seas. With focus on model output for 1 ka time slices from the Last Glacial Maximum (taken as being 21 ka BP) to present day, we demonstrate that spatial and temporal changes in simulated tidal dynamics are very sensitive to relative sea-level distribution. The new high resolution palaeotidal model is considered a significant improvement on previous depth-averaged palaeotidal models, in particular where the outputs are to be used in sediment transport studies, where consideration of the near-bed stress is critical, and for constraining sea level index points.

  19. Characterization of the CALIBAN Critical Assembly Neutron Spectra using Several Adjustment Methods Based on Activation Foils Measurement

    Science.gov (United States)

    Casoli, Pierre; Grégoire, Gilles; Rousseau, Guillaume; Jacquet, Xavier; Authier, Nicolas

    2016-02-01

    CALIBAN is a metallic critical assembly managed by the Criticality, Neutron Science and Measurement Department located at the French CEA Center of Valduc. The reactor is extensively used for benchmark experiments dedicated to the evaluation of nuclear data, for electronic hardening, and to study the effect of neutrons on various materials. Therefore, the CALIBAN irradiation characteristics, and especially its central cavity neutron spectrum, have to be evaluated very accurately. In order to strengthen our knowledge of this spectrum, several adjustment methods based on activation foil measurements have been studied in the laboratory over the past few years. First, two codes included in the UMG package were tested and compared: MAXED and GRAVEL. More recently, the CALIBAN cavity spectrum has been studied using CALMAR, a new adjustment tool currently under development at the CEA Center of Cadarache. The article discusses and compares the results and the quality of spectrum rebuilding obtained with the UMG codes and with the CALMAR software, from a set of activation measurements carried out in the CALIBAN irradiation cavity.

  20. Characterization of the CALIBAN Critical Assembly Neutron Spectra using Several Adjustment Methods Based on Activation Foils Measurement

    Directory of Open Access Journals (Sweden)

    Casoli Pierre

    2016-01-01

    Full Text Available CALIBAN is a metallic critical assembly managed by the Criticality, Neutron Science and Measurement Department located at the French CEA Center of Valduc. The reactor is extensively used for benchmark experiments dedicated to the evaluation of nuclear data, for electronic hardening, and to study the effect of neutrons on various materials. Therefore, the CALIBAN irradiation characteristics, and especially its central cavity neutron spectrum, have to be evaluated very accurately. In order to strengthen our knowledge of this spectrum, several adjustment methods based on activation foil measurements have been studied in the laboratory over the past few years. First, two codes included in the UMG package were tested and compared: MAXED and GRAVEL. More recently, the CALIBAN cavity spectrum has been studied using CALMAR, a new adjustment tool currently under development at the CEA Center of Cadarache. The article discusses and compares the results and the quality of spectrum rebuilding obtained with the UMG codes and with the CALMAR software, from a set of activation measurements carried out in the CALIBAN irradiation cavity.

  1. Adjustment of mathematical models and quality of soybean grains in the drying with high temperatures

    Directory of Open Access Journals (Sweden)

    Paulo C. Coradi

    2016-04-01

    Full Text Available ABSTRACT The aim of this study was to evaluate the influence of the initial moisture content of soybeans and the drying air temperature on drying kinetics and grain quality, and to find the mathematical model that best fits the experimental drying data, the effective diffusivity and the isosteric heat of desorption. The experimental design was completely randomized (CRD), in a 4 × 2 factorial scheme: four drying temperatures (75, 90, 105 and 120 °C) and two initial moisture contents (25 and 19% d.b.), with three replicates. The initial moisture content of the product affects the drying time. The Wang and Singh model proved the most suitable to describe the drying of soybeans for drying air temperatures of 75, 90, 105 and 120 °C and initial moisture contents of 19 and 25% (d.b.). The effective diffusivity obtained from the drying of soybeans was highest (2.5 × 10-11 m2 s-1) at a temperature of 120 °C and a moisture content of 25% (d.b.). Drying soybeans at higher temperatures (above 105 °C) and higher initial moisture content (25% d.b.) also increases the energy required for the process, i.e., the isosteric heat of desorption (3894.57 kJ kg-1). Drying air temperature and initial moisture content affected soybean quality over the drying time (electrical conductivity of 540.35 µS cm-1 g-1); however, they did not affect the final yield of oil extracted from the soybean grains (15.69%).

  2. Fast cloud adjustment to increasing CO2 in a superparameterized climate model

    Science.gov (United States)

    Wyant, Matthew C.; Bretherton, Christopher S.; Blossey, Peter N.; Khairoutdinov, Marat

    2012-05-01

    Two-year simulation experiments with a superparameterized climate model, SP-CAM, are performed to understand the fast tropical (30S-30N) cloud response to an instantaneous quadrupling of CO2 concentration with SST held fixed at present-day values. The greenhouse effect of the CO2 perturbation quickly warms the tropical land surfaces by an average of 0.5 K. This shifts rising motion, surface precipitation, and cloud cover at all levels from the ocean to the land, with only small net tropical-mean cloud changes. There is a widespread average reduction of about 80 m in the depth of the trade inversion capping the marine boundary layer (MBL) over the cooler subtropical oceans. One apparent contributing factor is CO2-enhanced downwelling longwave radiation, which reduces boundary-layer radiative cooling, a primary driver of turbulent entrainment through the trade inversion. A second contributor is a slight CO2-induced heating of the free troposphere above the MBL, which strengthens the trade inversion and also inhibits entrainment. There is a corresponding downward displacement of MBL clouds with a very slight decrease in mean cloud cover and albedo. Two-dimensional cloud-resolving model (CRM) simulations of this MBL response are run to steady state using composite SP-CAM simulated thermodynamic and wind profiles from a representative cool subtropical ocean regime, for the control and 4xCO2 cases. Simulations with a CRM grid resolution equal to that of SP-CAM are compared with much finer resolution simulations. The coarse-resolution simulations maintain a cloud fraction and albedo comparable to SP-CAM, but the fine-resolution simulations have a much smaller cloud fraction. Nevertheless, both CRM configurations simulate a reduction in inversion height comparable to SP-CAM. The changes in low cloud cover and albedo in the CRM simulations are small, but both simulations predict a slight reduction in low cloud albedo as in SP-CAM.

  3. From skin to bulk: An adjustment technique for assimilation of satellite-derived temperature observations in numerical models of small inland water bodies

    Science.gov (United States)

    Javaheri, Amir; Babbar-Sebens, Meghna; Miller, Robert N.

    2016-06-01

    Data Assimilation (DA) has been proposed for multiple water resources studies that require rapid employment of incoming observations to update and improve the accuracy of operational prediction models. The usefulness of DA approaches in assimilating water temperature observations from different types of monitoring technologies (e.g., remote sensing and in-situ sensors) into numerical models of inland water bodies (e.g., lakes and reservoirs) has, however, received limited attention. In contrast to in-situ temperature sensors, remote sensing technologies (e.g., satellites) provide the benefit of collecting measurements with better X-Y spatial coverage. However, assimilating water temperature measurements from satellites can introduce biases into the updated numerical model of a water body, because the physical region represented by these measurements does not directly correspond with the numerical model's representation of the water column. This study proposes a novel approach to address this representation challenge by coupling a skin temperature adjustment technique, based on available air and in-situ water temperature observations, with an ensemble Kalman filter based data assimilation technique. Additionally, the proposed approach used in this study for four-dimensional analysis of a reservoir provides reasonably accurate surface layer and water column temperature forecasts, in spite of the use of a fairly small ensemble. Application of the methodology on a test site - Eagle Creek Reservoir - in Central Indiana demonstrated that assimilation of remotely sensed skin temperature data using the proposed approach improved the overall root mean square difference between modeled surface layer temperatures and the adjusted remotely sensed skin temperature observations from 5.6°C to 0.51°C (i.e., 91% improvement). In addition, the overall error in the water column temperature predictions when compared with in-situ observations also decreased from 1.95°C (before assimilation
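
    The analysis step of such an assimilation can be illustrated with a minimal scalar ensemble Kalman filter update, where the bias-adjusted skin temperature plays the role of the observation. This is a toy sketch under stated assumptions, not the study's implementation, and all numbers are illustrative.

    ```python
    # Minimal stochastic EnKF analysis step for a scalar state
    # (e.g. surface-layer water temperature), assimilating one observation
    # (e.g. an adjusted satellite skin temperature).
    import random

    def enkf_update(ensemble, observation, obs_error_sd, rng=random):
        n = len(ensemble)
        mean = sum(ensemble) / n
        var = sum((m - mean) ** 2 for m in ensemble) / (n - 1)
        gain = var / (var + obs_error_sd ** 2)  # scalar Kalman gain
        # Perturb the observation for each member (stochastic EnKF).
        return [m + gain * (observation + rng.gauss(0.0, obs_error_sd) - m)
                for m in ensemble]
    ```

    With a small prior spread and an accurate observation, the analysis ensemble mean moves most of the way from the model forecast toward the adjusted skin temperature.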

  4. Cluster Based Text Classification Model

    DEFF Research Database (Denmark)

    2011-01-01

    We propose a cluster based classification model for suspicious email detection and other text classification tasks. The text classification tasks comprise many training examples that require a complex classification model. Using clusters for classification makes the model simpler and increases th...... datasets. Our model also outperforms A Decision Cluster Classification (ADCC) and the Decision Cluster Forest Classification (DCFC) models on the Reuters-21578 dataset....

  5. Two models to compute an adjusted Green Vegetation Fraction taking into account the spatial variability of soil NDVI

    Science.gov (United States)

    Montandon, L. M.; Small, E.

    2008-12-01

    The green vegetation fraction (Fg) is an important climate and hydrologic model parameter. The commonly-used Fg model is a simple linear mixing of two NDVI end-members: bare soil NDVI (NDVIo) and full vegetation NDVI (NDVI∞). NDVI∞ is generally set as a percentile of the historical maximum NDVI for each land cover. This approach works well for areas where Fg reaches full cover (100%). Because many biomes do not reach Fg=0, however, NDVIo is often determined as a single invariant value for all land cover types. In general, it is selected among the lowest NDVI observed over bare or desert areas, yielding NDVIo close to zero. There are two issues with this approach: large-scale variability of soil NDVI is ignored, and observations on a wide range of soils show that soil NDVI is often larger. Here we introduce and test two new approaches to compute Fg that take into account the spatial variability of soil NDVI. The first approach uses a global soil NDVI database and time series of MODIS NDVI data over the conterminous United States to constrain possible soil NDVI values over each pixel. Fg is computed using a subset of the soils database that respects the linear mixing model condition NDVIo≤NDVIh, where NDVIh is the pixel historical minimum. The second approach uses an empirical soil NDVI model that combines information on soil organic matter content and texture to infer soil NDVI. The U.S. General Soil Map (STATSGO2) database is used as input for spatial soil properties. Using in situ measurements of soil NDVI from sites that span a range of land cover types, we test both models and compare their performance to the standard Fg model. We show that our models adjust the temporal Fg estimates by 40-90% depending on the land cover type and amplitude of the seasonal NDVI signal. Using MODIS NDVI and soil maps over the conterminous U.S., we also study the spatial distribution of Fg adjustments in February and June 2008. We show that the standard Fg method
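
    The linear two-end-member mixing model that both approaches build on can be written in a few lines; the per-pixel soil NDVI is exactly the term the abstract proposes to refine. The NDVI values below are illustrative, not the authors' data.

    ```python
    def green_vegetation_fraction(ndvi, ndvi_soil, ndvi_full):
        """Linear mixing of two NDVI end-members:
        Fg = (NDVI - NDVIo) / (NDVIinf - NDVIo), clipped to [0, 1]."""
        fg = (ndvi - ndvi_soil) / (ndvi_full - ndvi_soil)
        return min(1.0, max(0.0, fg))

    # Standard approach: one invariant, near-zero soil NDVI everywhere.
    fg_standard = green_vegetation_fraction(0.45, 0.05, 0.85)
    # Spatially variable approach: a per-pixel soil NDVI, which field
    # observations show is often larger than zero.
    fg_adjusted = green_vegetation_fraction(0.45, 0.20, 0.85)
    ```

    For the same observed NDVI, raising the soil end-member lowers the inferred Fg, which is why ignoring soil NDVI variability can bias Fg substantially.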

  6. Fast Cloud Adjustment to Increasing CO2 in a Superparameterized Climate Model

    Directory of Open Access Journals (Sweden)

    Marat Khairoutdinov

    2012-05-01

    Full Text Available Two-year simulation experiments with a superparameterized climate model, SP-CAM, are performed to understand the fast tropical (30S-30N) cloud response to an instantaneous quadrupling of CO2 concentration with SST held fixed at present-day values. The greenhouse effect of the CO2 perturbation quickly warms the tropical land surfaces by an average of 0.5 K. This shifts rising motion, surface precipitation, and cloud cover at all levels from the ocean to the land, with only small net tropical-mean cloud changes. There is a widespread average reduction of about 80 m in the depth of the trade inversion capping the marine boundary layer (MBL) over the cooler subtropical oceans. One apparent contributing factor is CO2-enhanced downwelling longwave radiation, which reduces boundary-layer radiative cooling, a primary driver of turbulent entrainment through the trade inversion. A second contributor is a slight CO2-induced heating of the free troposphere above the MBL, which strengthens the trade inversion and also inhibits entrainment. There is a corresponding downward displacement of MBL clouds with a very slight decrease in mean cloud cover and albedo. Two-dimensional cloud-resolving model (CRM) simulations of this MBL response are run to steady state using composite SP-CAM simulated thermodynamic and wind profiles from a representative cool subtropical ocean regime, for the control and 4xCO2 cases. Simulations with a CRM grid resolution equal to that of SP-CAM are compared with much finer resolution simulations. The coarse-resolution simulations maintain a cloud fraction and albedo comparable to SP-CAM, but the fine-resolution simulations have a much smaller cloud fraction. Nevertheless, both CRM configurations simulate a reduction in inversion height comparable to SP-CAM. The changes in low cloud cover and albedo in the CRM simulations are small, but both simulations predict a slight reduction in low cloud albedo as in SP-CAM.

  7. Signal Amplification in Field Effect-Based Sandwich Enzyme-Linked Immunosensing by Tuned Buffer Concentration with Ionic Strength Adjuster.

    Science.gov (United States)

    Kumar, Satyendra; Kumar, Narendra; Panda, Siddhartha

    2016-04-01

    Miniaturization of the sandwich enzyme-based immunosensor has several advantages but could result in lower signal strength due to lower enzyme loading. Hence, technologies for amplification of the signal are needed. Signal amplification in a field effect-based electrochemical immunosensor utilizing chip-based ELISA is presented in this work. First, the molarities of phosphate buffer saline (PBS) and concentrations of KCl as ionic strength adjuster were optimized to maximize the GOx glucose-based enzymatic reactions in a beaker for signal amplification measured by change in the voltage shift with an EIS device (using 20 μl of solution) and validated with a commercial pH meter (using 3 ml of solution). The PBS molarity of 100 μM with 25 mM KCl provided the maximum voltage shift. These optimized buffer conditions were further verified for GOx immobilized on silicon chips, and similar trends with decreased PBS molarity were obtained; however, the voltage shift values obtained on chip reaction were lower as compared to the reactions occurring in the beaker. The decreased voltage shift with immobilized enzyme on chip could be attributed to the increased Km (Michaelis-Menten constant) values in the immobilized GOx. Finally, a more than sixfold signal enhancement (from 8 to 47 mV) for the chip-based sandwich immunoassay was obtained by altering the PBS molarity from 10 to 100 μM with 25 mM KCl.

  8. Modeling neuroendocrine stress reactivity in salivary cortisol: adjusting for peak latency variability.

    Science.gov (United States)

    Lopez-Duran, Nestor L; Mayer, Stefanie E; Abelson, James L

    2014-07-01

    In this report, we present growth curve modeling (GCM) with landmark registration as an alternative statistical approach for the analysis of time series cortisol data. This approach addresses an often-ignored but critical source of variability in salivary cortisol analyses: individual and group differences in the time latency of post-stress peak concentrations. It allows for the simultaneous examination of cortisol changes before and after the peak while controlling for timing differences, and thus provides additional information that can help elucidate group differences in the underlying biological processes (e.g., intensity of response, regulatory capacity). We tested whether GCM with landmark registration is more sensitive than traditional statistical approaches (e.g., repeated measures ANOVA--rANOVA) in identifying sex differences in salivary cortisol responses to a psychosocial stressor (Trier Social Stress Test--TSST) in healthy adults (mean age 23). We used plasma ACTH measures as our "standard" and show that the new approach confirms in salivary cortisol the ACTH finding that males had longer peak latencies, higher post-stress peaks but a more intense post-peak decline. This finding would have been missed if only saliva cortisol was available and only more traditional analytic methods were used. This new approach may provide neuroendocrine researchers with a highly sensitive complementary tool to examine the dynamics of the cortisol response in a way that reduces risk of false negative findings when blood samples are not feasible.
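
    The core idea of landmark registration, aligning each subject's curve at its post-stress peak before modeling pre- and post-peak dynamics, can be illustrated with a toy example. The sampling grid and cortisol values below are hypothetical.

    ```python
    def landmark_register(series, times, target_peak_time):
        """Shift one subject's time axis so the peak concentration lands at
        a common landmark time; a minimal stand-in for curve registration."""
        peak_idx = max(range(len(series)), key=series.__getitem__)
        shift = target_peak_time - times[peak_idx]
        return [t + shift for t in times]

    # Two hypothetical subjects sampled every 10 min around a stressor;
    # their peaks occur at 20 and 40 min (a latency difference that would
    # smear a simple time-locked group average).
    t = [0, 10, 20, 30, 40, 50]
    subj_a = [5, 9, 14, 10, 7, 6]    # peak at 20 min
    subj_b = [5, 6, 8, 11, 13, 9]    # peak at 40 min
    t_a = landmark_register(subj_a, t, 30)
    t_b = landmark_register(subj_b, t, 30)
    ```

    After registration both peaks sit at the common landmark (30 min here), so growth curves fit before and after the peak compare response intensity and recovery rather than timing differences.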

  9. On the Accuracy of Glacial Isostatic Adjustment Models for Geodetic Observations to Estimate Arctic Ocean Sea-Level Change

    Directory of Open Access Journals (Sweden)

    Zhenwei Huang

    2013-01-01

    Full Text Available Arctic Ocean sea-level change is an important indicator of climate change. Contemporary geodetic observations, including data from tide gages, satellite altimetry and the Gravity Recovery and Climate Experiment (GRACE), are sensitive to the effect of the ongoing glacial isostatic adjustment (GIA) process. To fully exploit these geodetic observations to study climate-related sea-level change, this GIA effect has to be removed. However, significant uncertainty exists with regard to the GIA model, and using different GIA models can lead to different results. In this study we use an ensemble of 14 contemporary GIA models to investigate their differences when they are applied to the above-mentioned geodetic observations to estimate sea-level change in the Arctic Ocean. We find that over the Arctic Ocean a large range of differences exists among GIA models when they are used to remove the GIA effect from tide gage and GRACE observations, with a relatively smaller range for satellite altimetry observations. In addition, we compare the sea-level trends derived from observations after applying different GIA models in the study regions; the sea-level trend estimated from long-term tide gage data shows good agreement with the altimetry result over the same data span. However, the mass component of sea-level change obtained from GRACE data does not agree well with the result derived from steric-corrected altimeter observations, due primarily to the large uncertainty of GIA models, errors in the Arctic Ocean altimetry or steric measurements, inadequate data span, or all of the above. We conclude that GIA correction is critical for studying sea-level change over the Arctic Ocean and further improvement in GIA modelling is needed to reduce the current discrepancies among models.

  10. Experimental and modelling characterisation of adjustable hollow Micro-needle delivery systems.

    Science.gov (United States)

    Liu, Ting-Ting; Chen, Kai; Pan, Min

    2017-09-06

    Hollow micro-needles have seen declining practical use because infusion into the skin is limited by the tissue's resistance to flow, and the relationship between infusion flow rate and tissue resistance pressure is not clear. A custom-made, hollow micro-needle system was used in this study. The driving force and infusion flow rate were measured using a force transducer attached to an infusion pump. Evans blue dye was injected into air, polyacrylamide gel and in-vivo mouse skin at different flow rates. Two different micro-needle lengths were used for in-vivo infusion into the mouse. A model was derived to calculate the driving force of micro-needle infusion into air, and the results were compared to experimental data. The calculated driving forces match the experimental results across the infusion flow rates tested. The pressure loss throughout the micro-needle delivery system was found to be two orders of magnitude smaller than the resistance pressure inside the gel and mouse skin, and the resistance pressure increased with increasing flow rate. Some liquid backflow was observed at relatively large flow rates, associated with a sudden, larger increase in resistance pressure. The current micro-needle delivery system can administer liquid into mouse skin at flow rates of up to 0.15 ml/min without causing significant backflow on the surface. The resistance pressure increases with increasing flow rate, restricting infusion at higher flow rates. Copyright © 2017 IPEM. Published by Elsevier Ltd. All rights reserved.

  11. Adjustment of Dysregulated Ceramide Metabolism in a Murine Model of Sepsis-Induced Cardiac Dysfunction.

    Science.gov (United States)

    Chung, Ha-Yeun; Kollmey, Anna S; Schrepper, Andrea; Kohl, Matthias; Bläss, Markus F; Stehr, Sebastian N; Lupp, Amelie; Gräler, Markus H; Claus, Ralf A

    2017-04-15

    Cardiac dysfunction, in particular of the left ventricle, is a common and early event in sepsis, and is strongly associated with an increase in patients' mortality. Acid sphingomyelinase (SMPD1)-the principal regulator for rapid and transient generation of the lipid mediator ceramide-is involved in both the regulation of host response in sepsis as well as in the pathogenesis of chronic heart failure. This study determined the degree to which SMPD1 and its modulation affect sepsis-induced cardiomyopathy, using both genetically deficient and pharmacologically treated animals in a polymicrobial sepsis model. As surrogate parameters of sepsis-induced cardiomyopathy, cardiac function, markers of oxidative stress as well as troponin I levels were found to be improved in desipramine-treated animals, desipramine being an inhibitor of ceramide formation. Additionally, ceramide formation in cardiac tissue was dysregulated in SMPD1(+/+) as well as SMPD1(-/-) animals, whereas desipramine pretreatment resulted in stable, but increased ceramide content during host response, a result of elevated de novo synthesis. Strikingly, desipramine treatment led to significantly improved levels of surrogate markers. Furthermore, similar results in desipramine-pretreated SMPD1(-/-) littermates suggest an SMPD1-independent pathway. Finally, a pattern of differentially expressed transcripts important for the regulation of apoptosis as well as antioxidative and cytokine response supports the concept that desipramine modulates ceramide formation, resulting in beneficial myocardial effects. We describe a novel, protective role of desipramine during sepsis-induced cardiac dysfunction that controls ceramide content. In addition, it may be possible to modulate cardiac function during host response by pre-conditioning with the Food and Drug Administration (FDA)-approved drug desipramine.

  12. Method for gesture based modeling

    DEFF Research Database (Denmark)

    2006-01-01

    A computer program based method is described for creating models using gestures. On an input device, such as an electronic whiteboard, a user draws a gesture which is recognized by a computer program and interpreted relative to a predetermined meta-model. Based on the interpretation, an algorithm...... is assigned to the gesture drawn by the user. The executed algorithm may, for example, consist in creating a new model element, modifying an existing model element, or deleting an existing model element....

  13. Reaction-contingency based bipartite Boolean modelling

    Science.gov (United States)

    2013-01-01

    Background Intracellular signalling systems are highly complex, rendering mathematical modelling of large signalling networks infeasible or impractical. Boolean modelling provides one feasible approach to whole-network modelling, but at the cost of dequantification and decontextualisation of activation. That is, these models cannot distinguish between different downstream roles played by the same component activated in different contexts. Results Here, we address this with a bipartite Boolean modelling approach. Briefly, we use a state oriented approach with separate update rules based on reactions and contingencies. This approach retains contextual activation information and distinguishes distinct signals passing through a single component. Furthermore, we integrate this approach in the rxncon framework to support automatic model generation and iterative model definition and validation. We benchmark this method with the previously mapped MAP kinase network in yeast, showing that minor adjustments suffice to produce a functional network description. Conclusions Taken together, we (i) present a bipartite Boolean modelling approach that retains contextual activation information, (ii) provide software support for automatic model generation, visualisation and simulation, and (iii) demonstrate its use for iterative model generation and validation. PMID:23835289

  14. Scalability of a Methodology for Generating Technical Trading Rules with GAPs Based on Risk-Return Adjustment and Incremental Training

    Science.gov (United States)

    de La Cal, E. A.; Fernández, E. M.; Quiroga, R.; Villar, J. R.; Sedano, J.

    In previous works a methodology was defined, based on the design of a genetic algorithm GAP and an incremental training technique adapted to the learning of series of stock market values. The GAP technique consists of a fusion of GP and GA. The GAP algorithm implements an automatic search for crisp trading rules, taking as training objectives both the optimization of the obtained return and the minimization of the assumed risk. Applying the proposed methodology, rules have been obtained for an eight-year period of the S&P500 index. The achieved adjustment of the return-risk relation has generated rules whose returns in the testing period are far superior to those obtained with habitual methodologies, and even clearly superior to Buy&Hold. This work proves that the proposed methodology is valid for different assets in a different market than in previous work.

  15. Initial pre-stress finding procedure and structural performance research for Levy cable dome based on linear adjustment theory

    Institute of Scientific and Technical Information of China (English)

    2007-01-01

    The cable-strut structural system is statically and kinematically indeterminate. The initial pre-stress is a key factor in determining the shape and load-carrying capacity. A new numerical algorithm is presented herein for the initial pre-stress finding procedure of a complete cable-strut assembly. This method is based on linear adjustment theory and does not take the material behavior into account. By using this method, the initial pre-stress of the multiple self-stress modes can be found easily, and the calculation process is simple and efficient. Finally, the initial pre-stress and structural performance of a particular Levy cable dome are analyzed comprehensively. The algorithm has proven to be efficient and correct, and the numerical results are valuable for the practical design of Levy cable domes.
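Independent self-stress (initial pre-stress) modes of a statically indeterminate assembly can be illustrated as the nullspace of an equilibrium matrix. The toy 2x3 matrix below is an assumption for demonstration, not the Levy dome's geometry or the paper's linear-adjustment algorithm.

```python
import numpy as np

# Self-stress modes of a pin-jointed cable-strut assembly span the nullspace
# of the equilibrium matrix A (nodal equilibrium: A @ t = 0 for member
# forces t). Toy equilibrium matrix, for illustration only.
A = np.array([[1.0, -1.0, 0.0],
              [0.0,  1.0, -1.0]])

# SVD-based nullspace: right singular vectors with (near-)zero singular values.
_, s, Vt = np.linalg.svd(A)
tol = max(A.shape) * np.finfo(float).eps * s[0]
rank = int((s > tol).sum())
self_stress_modes = Vt[rank:].T  # columns span the nullspace

print(self_stress_modes.shape)                 # (3, 1): one independent mode
print(np.allclose(A @ self_stress_modes, 0))   # True: equilibrium preserved
```

Each column is one independent self-stress mode, defined only up to scaling; a pre-stress finding procedure then chooses the combination and magnitude.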

  16. APTZ Adjustment Based on LFPL Under Constant Illumination Condition

    Institute of Scientific and Technical Information of China (English)

    苏洁; 印桂生; 魏振华; 刘亚辉

    2011-01-01

    To improve the initiative and stability of Pan-Tilt-Zoom (PTZ) adjustment in a visual system under constant illumination, an Active Pan-Tilt-Zoom (APTZ) adjustment method based on Local particle Filtering Pre-Location (LFPL) under constant illumination is proposed. A target pre-location method based on a local particle filter is used to realize automatic estimation and calibration for a moving target, which improves the initiative of PTZ adjustment and solves the nonlinear tracking problem. Dynamically selecting illumination invariants as filtering particles overcomes the effects of illumination variation and noise on the target pre-location algorithm and improves the robustness of the visual system. A fuzzy control method is used to control pan and tilt, improving the stability of the visual tracking system. The correctness and accuracy of the method are verified by experiments. Long-distance tracking of a variable-speed moving target with this method is more stable than with the traditional method, and moving-target tracking under varying illumination and noise conditions also yields good results.

  17. Dynamic FE Model of Sitting Man Adjustable to Body Height, Body Mass and Posture Used for Calculating Internal Forces in the Lumbar Vertebral Disks

    Science.gov (United States)

    Pankoke, S.; Buck, B.; Woelfel, H. P.

    1998-08-01

    Long-term whole-body vibration can cause degeneration of the lumbar spine. Therefore existing degeneration has to be assessed, as well as industrial working places, to prevent further damage. Hence, the mechanical stress in the lumbar spine, especially in the three lower vertebrae, has to be known. This stress can be expressed as internal forces. These internal forces cannot be evaluated experimentally, because force transducers cannot be implanted in the force lines for ethical reasons. Thus it is necessary to calculate the internal forces with a dynamic mathematical model of sitting man. A two-dimensional dynamic finite element model of sitting man is presented which allows calculation of these unknown internal forces. The model is based on an anatomic representation of the lower lumbar spine (L3-L5). This lumbar spine model is incorporated into a dynamic model of the upper torso with neck, head and arms, as well as a model of the body caudal to the lumbar spine with pelvis and legs. Additionally, a simple dynamic representation of the viscera is used. All these parts are modelled as rigid bodies connected by linear stiffnesses. Energy dissipation is modelled by assigning modal damping ratios to the calculated undamped eigenvalues. Geometry and inertial properties of the model are determined according to human anatomy. Stiffnesses of the spine model are derived from static in-vitro experiments in references [1] and [2]. The remaining stiffness and energy-dissipation parameters are determined by parameter identification to fit the measurements in reference [3]. The model, which is available in 3 different postures, allows one to adjust its parameters for body height and body mass to the values of the person for whom internal forces have to be calculated.

  18. Model Construct Based Enterprise Model Architecture and Its Modeling Approach

    Institute of Scientific and Technical Information of China (English)

    2002-01-01

    In order to support enterprise integration, a model-construct-based enterprise model architecture and its modeling approach are studied in this paper. First, the structural makeup and internal relationships of the enterprise model architecture are discussed. Then, the concept of the reusable model construct (MC), which belongs to the control view and can help to derive other views, is proposed. The modeling approach based on model constructs consists of three steps: reference model architecture synthesis, enterprise model customization, and system design and implementation. Following the MC-based modeling approach, a case study with the background of one-kind-product machinery manufacturing enterprises is illustrated. It is shown that the proposed model-construct-based enterprise model architecture and modeling approach are practical and efficient.

  19. HMM-based Trust Model

    DEFF Research Database (Denmark)

    ElSalamouny, Ehab; Nielsen, Mogens; Sassone, Vladimiro

    2010-01-01

    with their dynamic behaviour. Using Hidden Markov Models (HMMs) for both modelling and approximating the behaviours of principals, we introduce the HMM-based trust model as a new approach to evaluating trust in systems exhibiting dynamic behaviour. This model avoids the fixed behaviour assumption which is considered...... the major limitation of existing Beta trust model. We show the consistency of the HMM-based trust model and contrast it against the well known Beta trust model with the decay principle in terms of the estimation precision....

  20. Glacial isostatic adjustment at the Laurentide ice sheet margin: Models and observations in the Great Lakes region

    Science.gov (United States)

    Braun, Alexander; Kuo, Chung-Yen; Shum, C. K.; Wu, Patrick; van der Wal, Wouter; Fotopoulos, Georgia

    2008-10-01

    Glacial Isostatic Adjustment (GIA) modelling in North America relies on relative sea level information which is primarily obtained from areas far away from the uplift region. The lack of accurate geodetic observations in the Great Lakes region, which is located in the transition zone between uplift and subsidence due to the deglaciation of the Laurentide ice sheet, has prevented more detailed studies of this former margin of the ice sheet. Recently, observations of vertical crustal motion from improved GPS network solutions and combined tide gauge and satellite altimetry solutions have become available. This study compares these vertical motion observations with predictions obtained from 70 different GIA models. The ice sheet margin is distinct from the centre and far field of the uplift because the sensitivity of the GIA process towards Earth parameters such as mantle viscosity is very different. Specifically, the margin area is most sensitive to the uppermost mantle viscosity and allows for better constraints of this parameter. The 70 GIA models compared herein have different ice loading histories (ICE-3/4/5G) and Earth parameters including lateral heterogeneities. The root-mean-square differences between the 6 best models and the two sets of observations (tide gauge/altimetry and GPS) are 0.66 and 1.57 mm/yr, respectively. Both sets of independent observations are highly correlated and show a very similar fit to the models, which indicates their consistent quality. Therefore, both data sets can be considered as a means for constraining and assessing the quality of GIA models in the Great Lakes region and the former margin of the Laurentide ice sheet.
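The model-observation misfits quoted above (0.66 and 1.57 mm/yr) are root-mean-square differences between predicted and observed vertical crustal motion rates over stations; a minimal sketch, with made-up rates standing in for the GIA predictions and GPS/altimetry observations:

```python
import numpy as np

# Illustrative RMS misfit between model and observation; values are made up.
predicted = np.array([1.2, -0.5, 0.8, 2.1])   # GIA model uplift rates, mm/yr
observed  = np.array([1.0, -0.3, 1.1, 1.9])   # observed vertical rates, mm/yr

rms = np.sqrt(np.mean((predicted - observed) ** 2))
print(round(float(rms), 3))  # 0.229 mm/yr for these made-up values
```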

  1. Model-Based Reasoning

    Science.gov (United States)

    Ifenthaler, Dirk; Seel, Norbert M.

    2013-01-01

    In this paper, there will be a particular focus on mental models and their application to inductive reasoning within the realm of instruction. A basic assumption of this study is the observation that the construction of mental models and related reasoning is a slowly developing capability of cognitive systems that emerges effectively with proper…

  2. Construction project investment control model based on instant information

    Institute of Scientific and Technical Information of China (English)

    WANG Xue-tong

    2006-01-01

    Changes of construction conditions always influence project investment by causing the loss of construction work time and extending the duration. To resolve the difficulty of dynamic control in a construction work plan, this article presents a concept of instant optimization that adjusts the operation time of each working procedure so as to minimize the investment change. Based on this concept, a mathematical model is established and a strict mathematical justification is performed. The instant optimization model takes advantage of instant information in the construction process to adjust the construction plan in due time, thus maximizing the cost efficiency of the project investment.

  3. Model-based Software Engineering

    DEFF Research Database (Denmark)

    2010-01-01

    The vision of model-based software engineering is to make models the main focus of software development and to automatically generate software from these models. Part of that idea works already today. But, there are still difficulties when it comes to behaviour. Actually, there is no lack in models...

  4. Model-based Software Engineering

    DEFF Research Database (Denmark)

    2010-01-01

    The vision of model-based software engineering is to make models the main focus of software development and to automatically generate software from these models. Part of that idea works already today. But, there are still difficulties when it comes to behaviour. Actually, there is no lack in models...

  5. Frequency response function-based model updating using Kriging model

    Science.gov (United States)

    Wang, J. T.; Wang, C. J.; Zhao, J. P.

    2017-03-01

    An acceleration frequency response function (FRF) based model updating method is presented in this paper, which introduces a Kriging model as a metamodel into the optimization process instead of iterating the finite element analysis directly. The Kriging model serves as a fast running model that reduces solving time and facilitates the application of intelligent algorithms in model updating. The training samples for the Kriging model are generated by design of experiment (DOE), whose responses correspond to the differences between the experimental acceleration FRFs and their counterparts from the finite element model (FEM) at selected frequency points. The boundary condition is taken into account, and a two-step DOE method is proposed to reduce the number of training samples. The first step is to select the design variables from the boundary condition; the selected variables are then passed to the second step for generating the training samples. The optimization results of the design variables are taken as the updated values of the design variables to calibrate the FEM, so that the analytical FRFs tend to coincide with the experimental FRFs. The proposed method is performed successfully on a honeycomb sandwich beam composite structure: after model updating, the analytical acceleration FRFs match the experimental data significantly better, especially when the damping ratios are adjusted.
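The surrogate idea above, a cheap interpolator standing in for repeated FE runs during optimization, can be sketched with a zero-mean simple-kriging (Gaussian process) model in plain numpy. The Gaussian kernel, its length scale, and the 1-D toy "FRF residual" objective are all illustrative assumptions, not the paper's actual setup.

```python
import numpy as np

# Minimal simple-kriging surrogate with a Gaussian kernel. The surrogate is
# trained on a few DOE samples of an expensive objective and then searched
# cheaply; all parameters here are illustrative.
def kernel(X1, X2, length=0.5):
    d = X1[:, None] - X2[None, :]
    return np.exp(-(d ** 2) / (2 * length ** 2))

X_train = np.array([0.0, 0.5, 1.0, 1.5, 2.0])   # DOE sample points
y_train = (X_train - 1.2) ** 2                  # toy "FRF residual" response

K = kernel(X_train, X_train) + 1e-10 * np.eye(len(X_train))
alpha = np.linalg.solve(K, y_train)

def predict(x):
    """Fast surrogate prediction replacing a full FE analysis."""
    return kernel(np.atleast_1d(x), X_train) @ alpha

# Optimize on the surrogate instead of the expensive model:
x_grid = np.linspace(0, 2, 201)
x_best = float(x_grid[np.argmin(predict(x_grid))])
print(round(x_best, 2))  # close to the true minimizer 1.2
```

The surrogate interpolates the training samples exactly (up to the small jitter), so the optimizer found on it approximates the minimizer of the expensive objective.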

  6. Principles of models based engineering

    Energy Technology Data Exchange (ETDEWEB)

    Dolin, R.M.; Hefele, J.

    1996-11-01

    This report describes a Models Based Engineering (MBE) philosophy and implementation strategy that has been developed at Los Alamos National Laboratory's Center for Advanced Engineering Technology. A major theme in this discussion is that models based engineering is an information management technology enabling the development of information driven engineering. Unlike other information management technologies, models based engineering encompasses the breadth of engineering information, from design intent through product definition to consumer application.

  7. Element-Based Computational Model

    Directory of Open Access Journals (Sweden)

    Conrad Mueller

    2012-02-01

    Full Text Available A variation on the data-flow model is proposed for developing parallel architectures. While the model is data driven, it differs significantly from the data-flow model. The proposed model has an evaluation cycle of processing elements (encapsulated data) that is similar to the instruction cycle of the von Neumann model. The elements contain the information required to process them. The model is inherently parallel. An emulation of the model has been implemented. The objective of this paper is to motivate support for taking the research further. Using matrix multiplication as a case study, the element/data-flow based model is compared with the instruction-based model. This is done using complexity analysis followed by empirical testing to verify the analysis. The positive results are given as motivation for the research to be taken to the next stage, that is, implementing the model using FPGAs.

  8. Sliding Mode Robustness Control Strategy for Shearer Height Adjusting System

    Directory of Open Access Journals (Sweden)

    Xiuping Su

    2013-09-01

    Full Text Available This paper first establishes a mathematical model of the height-adjusting hydro-cylinder of the shearer, as well as the state-space equation of the shearer height-adjusting system. Second, we design an automatic shearer height-adjusting controller adopting a sliding mode robustness control strategy. The height-adjusting controller includes a sliding mode surface switching function based on the Ackermann formula, as well as a sliding mode control function with an improved Butterworth filter. Simulation of the height-adjustment controller shows that the sliding mode robustness control eliminates the chattering of a typical controller and achieves automatic control of the rolling drum of the shearer.
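A sliding-mode position controller of the kind described above can be sketched on a toy double-integrator "height adjusting" actuator. The plant, the gains, and the tanh boundary layer (a common chattering remedy, standing in here for the paper's filtered control function) are illustrative assumptions.

```python
import numpy as np

# Toy sliding-mode controller: drive a double integrator (x'' = u) to a
# target height. s = c*e + e_dot is the sliding surface; a smoothed sign()
# suppresses the chattering that a hard switch would cause.
dt, T = 0.001, 5.0
c, k = 2.0, 5.0              # sliding-surface slope and switching gain
x, v = 0.0, 0.0              # position, velocity
target = 1.0

for _ in range(int(T / dt)):
    e, de = x - target, v
    s = c * e + de                # sliding surface
    u = -k * np.tanh(s / 0.05)   # boundary-layer (smoothed) switching control
    v += u * dt                  # double-integrator dynamics, Euler step
    x += v * dt

print(round(x, 2))  # settles near the 1.0 target
```

Once the state reaches the surface s = 0, the error decays at the rate set by c regardless of matched disturbances, which is the robustness property the abstract refers to.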

  9. Hard and Soft Adjusting of a Parameter With Its Known Boundaries by the Value Based on the Experts’ Estimations Limited to the Parameter

    Directory of Open Access Journals (Sweden)

    Romanuke Vadim V.

    2016-07-01

    Full Text Available Adjustment of an unknown parameter of a multistage expert procedure is considered. The lower and upper boundaries of the parameter are assumed to be known. A key condition showing that the experts' estimations are satisfactory in the current procedure is an inequality in which the value based on the estimations is not greater than the parameter. Algorithms of hard and soft adjusting are developed. If the inequality is true and both of its terms are too close for a long sequence of expert procedures, the adjusting can be stopped early. The algorithms are reversible, implying inversion to the reverse inequality and sliding up off the lower boundary.

  10. Exploration of Loggerhead Shrike Habitats in Grassland National Park of Canada Based on in Situ Measurements and Satellite-Derived Adjusted Transformed Soil-Adjusted Vegetation Index (ATSAVI

    Directory of Open Access Journals (Sweden)

    Li Shen

    2013-01-01

    Full Text Available The population of loggerhead shrike (Lanius ludovicianus excubutirudes) in Grassland National Park of Canada (GNPC) has undergone a severe decline due to habitat loss and limitation. Shrike habitat availability is highly impacted by the biophysical characteristics of grassland landscapes. This study was conducted in the west block of GNPC. The overall purpose was to extract important biophysical and topographical variables from both SPOT satellite imagery and in situ measurements. Statistical analyses including Analysis of Variance (ANOVA), Coefficient of Variation (CV) measures, and regression analysis were applied to the variables obtained from both the imagery and the in situ measurements. Vegetation spatial variation and heterogeneity among active, inactive and control nesting sites at 20 m × 20 m, 60 m × 60 m and 100 m × 100 m scales were investigated. Results indicated that shrikes prefer to nest in open areas with scattered shrubs, particularly thick or thorny species of smaller size, to discourage mammalian predators. The most important topographical characteristic is that active sites are located far away from roads at higher elevation. The vegetation index was identified as a good indicator of vegetation characteristics for shrike habitats due to its significant relation to most relevant biophysical factors. Spatial variation analysis showed that at all spatial scales, active sites have the lowest vegetation abundance and the highest heterogeneity among the three types of nesting sites. For all shrike habitat types, vegetation abundance decreases with increasing spatial scale while habitat heterogeneity increases with increasing spatial scale. This research also indicated that suitable shrike habitat for GNPC can be mapped using a logistic model with ATSAVI and dead material in shrub canopy as the independent variables.

  11. Graph Model Based Indoor Tracking

    DEFF Research Database (Denmark)

    Jensen, Christian Søndergaard; Lu, Hua; Yang, Bin

    2009-01-01

    infrastructure for different symbolic positioning technologies, e.g., Bluetooth and RFID. More specifically, the paper proposes a model of indoor space that comprises a base graph and mappings that represent the topology of indoor space at different levels. The resulting model can be used for one or several...... indoor positioning technologies. Focusing on RFID-based positioning, an RFID specific reader deployment graph model is built from the base graph model. This model is then used in several algorithms for constructing and refining trajectories from raw RFID readings. Empirical studies with implementations...

  12. The network adjustment aimed for the campaigned gravity survey using a Bayesian approach: methodology and model test

    Science.gov (United States)

    Chen, Shi; Liao, Xu; Ma, Hongsheng; Zhou, Longquan; Wang, Xingzhou; Zhuang, Jiancang

    2017-04-01

    The relative gravimeter, which generally uses a zero-length spring as the gravity sensor, is still the first choice for terrestrial gravity measurement because of its efficiency and low cost. Because the drift rate of an instrument can change with time and from meter to meter, estimating the drift rate requires returning to base stations or stations with known gravity values for repeated measurements at regular intervals of hours during a practical survey. However, in a campaigned gravity survey over a large region, where stations are several to tens of kilometers apart, frequently returning for repeated measurements greatly reduces survey efficiency and is extremely time-consuming. In this paper, we propose a new gravity data adjustment method for estimating meter drift by means of Bayesian statistical inference. In our approach, we assume the change of the drift rate is a smooth function of the time lapse. Trade-off parameters are used to control the fitting residuals, and we employ Akaike's Bayesian Information Criterion (ABIC) to estimate these trade-off parameters. Comparison and analysis of simulated data between the classical and Bayesian adjustments show that our method is robust and self-adaptive in the face of irregular, non-linear meter drift. Finally, we used this novel approach to process realistic campaigned gravity data from North China. Our adjustment method is suitable for recovering the time-varying drift rate function of each meter, and also for detecting abnormal meter drift during the survey. We also define an alternative error estimate for the inverted gravity value at each station on the basis of marginal distribution theory. Acknowledgment: This research is supported by the Science Foundation of the Institute of Geophysics, CEA, from the Ministry of Science and Technology of China (Nos. DQJB16A05; DQJB16B07), China National Special Fund for Earthquake
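The classical (non-Bayesian) baseline that the abstract compares against amounts to fitting a drift rate from repeated readings at a known-gravity station and subtracting it. A minimal least-squares sketch, with a synthetic constant drift of 0.05 mGal/day plus noise as an illustrative assumption; the paper's method additionally allows a smoothly time-varying drift with an ABIC-chosen trade-off parameter:

```python
import numpy as np

# Repeated readings at a base station of known gravity: fit
# reading = g0 + drift * t by ordinary least squares, then correct.
rng = np.random.default_rng(0)
t = np.array([0.0, 0.5, 1.0, 1.5, 2.0])                # days since first visit
readings = 980.0 + 0.05 * t + rng.normal(0, 0.002, t.size)  # synthetic, mGal

# Design matrix for [g0, drift]:
A = np.column_stack([np.ones_like(t), t])
g0, drift = np.linalg.lstsq(A, readings, rcond=None)[0]

corrected = readings - drift * t    # drift-corrected readings
print(round(float(drift), 3))       # close to the true 0.05 mGal/day
```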

  13. Hard and Soft Adjusting of a Parameter With Its Known Boundaries by the Value Based on the Experts’ Estimations Limited to the Parameter

    OpenAIRE

    Romanuke Vadim V.

    2016-01-01

    Adjustment of an unknown parameter of a multistage expert procedure is considered. The lower and upper boundaries of the parameter are assumed to be known. A key condition showing that the experts' estimations are satisfactory in the current procedure is an inequality in which the value based on the estimations is not greater than the parameter. The algorithms of hard and soft adjusting are developed. If the inequality is true and both of its terms are too close for a long sequence of expert proc...

  14. An investigation of the impact of young children's self-knowledge of trustworthiness on school adjustment: a test of the realistic self-knowledge and positive illusion models.

    Science.gov (United States)

    Betts, Lucy R; Rotenberg, Ken J; Trueman, Mark

    2009-06-01

    The study aimed to examine the relationship between self-knowledge of trustworthiness and young children's school adjustment. One hundred and seventy-three (84 male and 89 female) children from school years 1 and 2 in the United Kingdom (mean age 6 years 2 months) were tested twice over 1 year. Children's trustworthiness was assessed using: (a) self-reports at Time 1 and Time 2; (b) peer-reports at Time 1 and Time 2; and (c) teacher-reports at Time 2. School adjustment was assessed by child-rated school-liking and the Short-Form Teacher Rating Scale of School Adjustment (Short-Form TRSSA). Longitudinal quadratic relationships were found between school adjustment and children's self-knowledge, using peer-reported trustworthiness as a reference: more accurate self-knowledge of trustworthiness predicted increases in school adjustment. Comparable concurrent quadratic relationships were found between teacher-rated school adjustment and children's self-knowledge, using teacher-reported trustworthiness as a reference, at Time 2. The findings support the conclusion that young children's psychosocial adjustment is best accounted for by the realistic self-knowledge model (Colvin & Block, 1994).
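A quadratic relationship of the kind reported above is typically tested by regressing the outcome on the self-report discrepancy and its square. A sketch on synthetic data with an inverted-U peak at zero discrepancy; the data-generating values are illustrative assumptions, not the study's estimates.

```python
import numpy as np

# Synthetic test of a curvilinear (quadratic) relationship: adjustment peaks
# when self-report matches the reference report (discrepancy = 0).
rng = np.random.default_rng(2)
discrepancy = rng.normal(size=500)           # self-report minus peer-report
adjustment = 5.0 - 1.0 * discrepancy ** 2 + rng.normal(0, 0.5, 500)

# Fit adjustment = b0 + b1*d + b2*d^2 by least squares:
X = np.column_stack([np.ones(500), discrepancy, discrepancy ** 2])
b0, b1, b2 = np.linalg.lstsq(X, adjustment, rcond=None)[0]

print(round(float(b2), 1))  # negative quadratic term: inverted-U shape,
                            # consistent with the realistic self-knowledge model
```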

  15. Model-based consensus

    NARCIS (Netherlands)

    Boumans, Marcel

    2014-01-01

    The aim of the rational-consensus method is to produce “rational consensus”, that is, “mathematical aggregation”, by weighing the performance of each expert on the basis of his or her knowledge and ability to judge relevant uncertainties. The measurement of the performance of the experts is based on

  16. Model-based consensus

    NARCIS (Netherlands)

    M. Boumans

    2014-01-01

    The aim of the rational-consensus method is to produce "rational consensus", that is, "mathematical aggregation", by weighing the performance of each expert on the basis of his or her knowledge and ability to judge relevant uncertainties. The measurement of the performance of the experts is based on

  17. Applying the Transactional Stress and Coping Model to Sickle Cell Disorder and Insulin-Dependent Diabetes Mellitus: Identifying Psychosocial Variables Related to Adjustment and Intervention

    Science.gov (United States)

    Hocking, Matthew C.; Lochman, John E.

    2005-01-01

    This review paper examines the literature on psychosocial factors associated with adjustment to sickle cell disease and insulin-dependent diabetes mellitus in children through the framework of the transactional stress and coping (TSC) model. The transactional stress and coping model views adaptation to a childhood chronic illness as mediated by…

  18. Graph Model Based Indoor Tracking

    DEFF Research Database (Denmark)

    Jensen, Christian Søndergaard; Lu, Hua; Yang, Bin

    2009-01-01

    The tracking of the locations of moving objects in large indoor spaces is important, as it enables a range of applications related to, e.g., security and indoor navigation and guidance. This paper presents a graph model based approach to indoor tracking that offers a uniform data management...... infrastructure for different symbolic positioning technologies, e.g., Bluetooth and RFID. More specifically, the paper proposes a model of indoor space that comprises a base graph and mappings that represent the topology of indoor space at different levels. The resulting model can be used for one or several...... indoor positioning technologies. Focusing on RFID-based positioning, an RFID specific reader deployment graph model is built from the base graph model. This model is then used in several algorithms for constructing and refining trajectories from raw RFID readings. Empirical studies with implementations...

  19. Efficient Model-Based Exploration

    NARCIS (Netherlands)

    Wiering, M.A.; Schmidhuber, J.

    1998-01-01

    Model-Based Reinforcement Learning (MBRL) can greatly profit from using world models for estimating the consequences of selecting particular actions: an animat can construct such a model from its experiences and use it for computing rewarding behavior. We study the problem of collecting useful exper

  20. A class frequency mixture model that adjusts for site-specific amino acid frequencies and improves inference of protein phylogeny

    Directory of Open Access Journals (Sweden)

    Li Karen

    2008-12-01

    Full Text Available Abstract Background Widely used substitution models for proteins, such as the Jones-Taylor-Thornton (JTT) or Whelan and Goldman (WAG) models, are based on empirical amino acid interchange matrices estimated from databases of protein alignments that incorporate the average amino acid frequencies of the data set under examination (e.g. JTT + F). Variation in the evolutionary process between sites is typically modelled by a rates-across-sites distribution such as the gamma (Γ) distribution. However, sites in proteins also vary in the kinds of amino acid interchanges that are favoured, a feature that is ignored by standard empirical substitution matrices. Here we examine the degree to which the pattern of evolution at sites differs from that expected based on empirical amino acid substitution models and evaluate the impact of these deviations on phylogenetic estimation. Results We analyzed 21 large protein alignments with two statistical tests designed to detect deviation of site-specific amino acid distributions from data simulated under the standard empirical substitution model: JTT + F + Γ. We found that the number of states at a given site is, on average, smaller and the frequencies of these states are less uniform than expected based on a JTT + F + Γ substitution model. With a four-taxon example, we show that phylogenetic estimation under the JTT + F + Γ model is seriously biased by a long-branch attraction artefact if the data are simulated under a model utilizing the observed site-specific amino acid frequencies from an alignment. Principal components analyses indicate the existence of at least four major site-specific frequency classes in these 21 protein alignments. Using a mixture model with these four separate classes of site-specific state frequencies plus a fifth class of global frequencies (the JTT + cF + Γ model), significant improvements in model fit for real data sets can be achieved. This simple mixture model also reduces the long

  1. A Measurement Study of BLE iBeacon and Geometric Adjustment Scheme for Indoor Location-Based Mobile Applications

    Directory of Open Access Journals (Sweden)

    Jeongyeup Paek

    2016-01-01

    Full Text Available Bluetooth Low Energy (BLE) and iBeacons have recently gained large interest for enabling various proximity-based application services. Given the ubiquitously deployed nature of Bluetooth devices, including mobile smartphones, BLE and iBeacon technologies seemed to be a promising way forward. This work started off with the belief that this was true: iBeacons could provide the accuracy in proximity and distance estimation needed to enable and simplify the development of many previously difficult applications. However, our empirical studies with three different iBeacon devices from various vendors and two types of smartphone platforms prove that this is not the case. Signal strength readings vary significantly over different iBeacon vendors, mobile platforms, environmental or deployment factors, and usage scenarios. This variability in signal strength naturally complicates the process of extracting an accurate location/proximity estimate in real environments. Our lessons on the limitations of the iBeacon technique led us to design a simple class-attendance-checking application that performs a simple form of geometric adjustment to compensate for the natural variations in beacon signal strength readings. We believe that the negative observations made in this work can provide future researchers with a reference on what performance to expect from iBeacon devices as they enter their system design phases.
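The proximity estimation being critiqued above typically inverts the log-distance path loss model, rssi = tx_power - 10·n·log10(d). The calibration power tx_power (RSSI at 1 m) and the environment exponent n below are illustrative assumptions; the study's point is precisely that both vary widely across vendors, platforms, and environments.

```python
# Hedged sketch of RSSI-to-distance inversion under the log-distance path
# loss model. tx_power and n are assumed, not vendor-calibrated values.
def estimate_distance(rssi, tx_power=-59.0, n=2.0):
    """Invert rssi = tx_power - 10*n*log10(d) for distance d in metres."""
    return 10 ** ((tx_power - rssi) / (10.0 * n))

print(estimate_distance(-59.0))             # 1.0 m at the calibration point
print(round(estimate_distance(-79.0), 1))   # 10.0 m with n = 2
```

Because the estimate is exponential in the RSSI error, a few dB of vendor- or environment-dependent variation translates into large distance errors, which motivates the geometric adjustment used in the application.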

  2. Natural macromolecule based carboxymethyl cellulose as a gel polymer electrolyte with adjustable porosity for lithium ion batteries

    Science.gov (United States)

    Zhu, Y. S.; Xiao, S. Y.; Li, M. X.; Chang, Z.; Wang, F. X.; Gao, J.; Wu, Y. P.

    2015-08-01

    A porous membrane of carboxymethyl cellulose (CMC), a natural macromolecule, as the host of a gel polymer electrolyte for lithium ion batteries is reported. It is prepared, for the first time, by a simple non-solvent evaporation method, and its porous structure is fine-tuned by varying the composition ratio of the solvent and non-solvent mixture. The electrolyte uptake of the CMC-based porous membrane is 75.9%. The ionic conductivity of the as-prepared gel membrane saturated with 1 mol L-1 LiPF6 electrolyte at room temperature can be up to 0.48 mS cm-1. Moreover, the lithium ion transference number of the gel membrane at room temperature is as high as 0.46, much higher than the 0.27 of the commercial separator Celgard 2730. When evaluated with a LiFePO4 cathode, the prepared gel membrane exhibits very good electrochemical performance, including higher reversible capacity, better rate capability and good cycling behaviour. These results suggest that this porous polymer membrane is highly attractive for lithium ion batteries requiring high safety, low cost and environmental friendliness.

  3. The relationship between the c-statistic of a risk-adjustment model and the accuracy of hospital report cards: A Monte Carlo study

    Science.gov (United States)

    Austin, Peter C.; Reeves, Mathew J.

    2015-01-01

    Background Hospital report cards, in which outcomes following the provision of medical or surgical care are compared across health care providers, are being published with increasing frequency. Essential to the production of these reports is risk-adjustment, which allows investigators to account for differences in the distribution of patient illness severity across different hospitals. Logistic regression models are frequently used for risk-adjustment in hospital report cards. Many applied researchers use the c-statistic (equivalent to the area under the receiver operating characteristic curve) of the logistic regression model as a measure of the credibility and accuracy of hospital report cards. Objectives To determine the relationship between the c-statistic of a risk-adjustment model and the accuracy of hospital report cards. Research Design Monte Carlo simulations were used to examine this issue. We examined the influence of three factors on the accuracy of hospital report cards: the c-statistic of the logistic regression model used for risk-adjustment, the number of hospitals, and the number of patients treated at each hospital. The parameters used to generate the simulated datasets came from analyses of patients hospitalized with a diagnosis of acute myocardial infarction in Ontario, Canada. Results The c-statistic of the risk-adjustment model had, at most, a very modest impact on the accuracy of hospital report cards, whereas the number of patients treated at each hospital had a much greater impact. Conclusions The c-statistic of a risk-adjustment model should not be used to assess the accuracy of a hospital report card. PMID:23295579
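
    For readers less familiar with the metric under study: the c-statistic is the probability that a randomly chosen case receives a higher predicted risk than a randomly chosen non-case (equivalently, the area under the ROC curve). A minimal sketch with made-up predicted risks, not data from the study:

    ```python
    def c_statistic(scores, outcomes):
        """Concordance (c) statistic: the probability that a randomly chosen
        case (outcome = 1) has a higher predicted risk than a randomly chosen
        non-case (outcome = 0); ties count one half."""
        cases = [s for s, y in zip(scores, outcomes) if y == 1]
        noncases = [s for s, y in zip(scores, outcomes) if y == 0]
        concordant = sum((c > n) + 0.5 * (c == n) for c in cases for n in noncases)
        return concordant / (len(cases) * len(noncases))

    # Hypothetical predicted risks from a risk-adjustment model.
    scores = [0.9, 0.8, 0.7, 0.6, 0.55, 0.5]
    outcomes = [1, 1, 0, 1, 0, 0]
    print(round(c_statistic(scores, outcomes), 3))  # 0.889
    ```

    The study's point is that this discrimination measure, however computed, says little about whether between-hospital outcome differences are estimated accurately.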

  4. Scenario analysis of carbon emissions' anti-driving effect on Qingdao's energy structure adjustment with an optimization model, Part II: Energy system planning and management.

    Science.gov (United States)

    Wu, C B; Huang, G H; Liu, Z P; Zhen, J L; Yin, J G

    2017-03-01

    In this study, an inexact multistage stochastic mixed-integer programming (IMSMP) method was developed for supporting regional-scale energy system planning (ESP) associated with multiple uncertainties presented as discrete intervals, probability distributions and their combinations. An IMSMP-based energy system planning (IMSMP-ESP) model was formulated for Qingdao to demonstrate its applicability. Solutions which can provide optimal patterns of energy resources generation, conversion, transmission, allocation and facility capacity expansion schemes have been obtained. The results can help local decision makers generate cost-effective energy system management schemes and achieve a comprehensive tradeoff between economic objectives and environmental requirements. Moreover, taking the CO2 emissions scenarios mentioned in Part I into consideration, the anti-driving effect of carbon emissions on energy structure adjustment was studied based on the developed model and scenario analysis. Several suggestions can be drawn from the results: (a) to ensure the smooth realization of low-carbon and sustainable development, appropriate price controls and fiscal subsidies on high-cost energy resources should be considered by the decision-makers; (b) compared with coal, natural gas utilization should be strongly encouraged in order to ensure that Qingdao can reach its peak carbon emissions in 2020; (c) to guarantee Qingdao's power supply security in the future, the construction of new power plants should be emphasised instead of enhancing the transmission capacity of grid infrastructure. Copyright © 2016 Elsevier Ltd. All rights reserved.

  5. Adjustable relevance search in Web-based parts library

    Institute of Scientific and Technical Information of China (English)

    顾复; 张树有

    2011-01-01

    To address the data heterogeneity and sheer volume of information in Web-based parts libraries, Adjustable Relevance Search (ARS) oriented to Web-based parts libraries is proposed. Resource Description Framework Schema (RDFS) is used to construct ontology models of parts information resources, which serve as nodes. The nodes are connected to each other by various semantic relationships to form a semantic web, enabling extended relevance search. Based on this semantic-web model, the working process and algorithm of ARS are presented and explained. A programming example for injection molding machine parts demonstrates the feasibility and practicality of ARS in a semantic-network-based Web parts library.

  6. Event-Based Conceptual Modeling

    DEFF Research Database (Denmark)

    Bækgaard, Lars

    The paper demonstrates that a wide variety of event-based modeling approaches are based on special cases of the same general event concept, and that the general event concept can be used to unify the otherwise unrelated fields of information modeling and process modeling. A set of event......-based modeling approaches are analyzed and the results are used to formulate a general event concept that can be used for unifying the seemingly unrelated event concepts. Events are characterized as short-duration processes that have participants, consequences, and properties, and that may be modeled in terms...... of information structures. The general event concept can be used to guide systems analysis and design and to improve modeling approaches....

  7. Optimism, pessimism, and positive and negative affectivity in middle-aged adults: a test of a cognitive-affective model of psychological adjustment.

    Science.gov (United States)

    Chang, E C; Sanna, L J

    2001-09-01

    This study attempted to address limitations in the understanding of optimism and pessimism among middle-aged adults. Specifically, a model of affectivity as a mediator of the link between outcome expectancies and psychological adjustment (life satisfaction and depressive symptoms) was presented and examined in a sample of 237 middle-aged adults. Consistent with a mediation model, results of path analyses indicated that optimism and pessimism (particularly the former) had significant direct and indirect links (by means of positive and negative affectivity) with depressive symptoms and life satisfaction. These results add to the small but growing literature identifying optimism and pessimism as important concomitants of psychological adjustment in more mature adults.

  8. Concept Tree Based Information Retrieval Model

    Directory of Open Access Journals (Sweden)

    Chunyan Yuan

    2014-05-01

    Full Text Available This paper proposes a novel concept-based query expansion technique named the Markov concept tree model (MCTM), which discovers term relationships through a concept tree deduced from a term Markov network. We address two important issues in query expansion: the selection and the weighting of expansion terms. In contrast to earlier methods, queries are expanded by adding those terms that are most similar to the concept of the whole query, rather than terms that are similar to a single query term. Using a Markov network constructed from term co-occurrence information in the collection, the method generates a concept tree for each original query term, removes redundant and irrelevant nodes from the concept tree, and then adjusts the weights of the original query terms and the expansion terms with a pruning algorithm. We use this model for query expansion and evaluate its effectiveness by examining the accuracy and robustness of the expansion methods. Compared with the baseline model, experiments on a standard dataset show that this method achieves better query quality.
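
    The raw material of the term Markov network is co-occurrence statistics. A simplified sketch of whole-query expansion over a co-occurrence graph (this is a toy stand-in for the paper's concept-tree machinery, with made-up documents; it only illustrates ranking candidates against the whole query rather than a single term):

    ```python
    from collections import Counter
    from itertools import combinations

    docs = [
        ["lattice", "boltzmann", "multiphase", "model"],
        ["lattice", "boltzmann", "contact", "angle"],
        ["contact", "angle", "surface", "tension"],
    ]

    # Symmetric term co-occurrence counts over the collection.
    cooc = Counter()
    for doc in docs:
        for a, b in combinations(sorted(set(doc)), 2):
            cooc[(a, b)] += 1
            cooc[(b, a)] += 1

    def expand(query_terms, k=2):
        """Rank candidate expansion terms by total co-occurrence with the
        whole query, not with any single query term."""
        candidates = Counter()
        for (a, b), n in cooc.items():
            if a in query_terms and b not in query_terms:
                candidates[b] += n
        return [t for t, _ in candidates.most_common(k)]

    print(expand({"contact", "angle"}))
    ```

    In the full model, the pruning algorithm would additionally drop candidates whose concept-tree paths make them redundant or off-topic before reweighting.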

  9. Gradient-based adaptation of continuous dynamic model structures

    Science.gov (United States)

    La Cava, William G.; Danai, Kourosh

    2016-01-01

    A gradient-based method of symbolic adaptation is introduced for a class of continuous dynamic models. The proposed model structure adaptation method starts with the first-principles model of the system and adapts its structure after adjusting its individual components in symbolic form. A key contribution of this work is its introduction of the model's parameter sensitivity as the measure of symbolic changes to the model. This measure, which is essential to defining the structural sensitivity of the model, not only accommodates algebraic evaluation of candidate models in lieu of more computationally expensive simulation-based evaluation, but also makes possible the implementation of gradient-based optimisation in symbolic adaptation. The proposed method is applied to models of several virtual and real-world systems that demonstrate its potential utility.

  10. The national burden of cerebrovascular diseases in Spain: a population-based study using disability-adjusted life years.

    Science.gov (United States)

    Catalá-López, Ferrán; Fernández de Larrea-Baz, Nerea; Morant-Ginestar, Consuelo; Álvarez-Martín, Elena; Díaz-Guzmán, Jaime; Gènova-Maleras, Ricard

    2015-04-20

    The aim of the present study was to determine the national burden of cerebrovascular diseases in the adult population of Spain. Cross-sectional, descriptive population-based study. We calculated the disability-adjusted life years (DALY) metric using country-specific data from national statistics and epidemiological studies to obtain representative outcomes for the Spanish population. DALYs were divided into years of life lost due to premature mortality (YLLs) and years of life lived with disability (YLDs). DALYs were estimated for the year 2008 by applying demographic structure by sex and age-groups, cause-specific mortality, morbidity data and new disability weights proposed in the recent Global Burden of Disease study. In the base case, neither YLLs nor YLDs were discounted or age-weighted. Uncertainty around DALYs was tested using sensitivity analyses. In Spain, cerebrovascular diseases generated 418,052 DALYs, comprising 337,000 (80.6%) YLLs and 81,052 (19.4%) YLDs. This accounts for 1,113 DALYs per 100,000 population (men: 1,197 and women: 1,033) and 3,912 per 100,000 in those over the age of 65 years (men: 4,427 and women: 2,033). Depending on the standard life table and choice of social values used for calculation, total DALYs varied by 15.3% and 59.9% below the main estimate. Estimates provided here represent a comprehensive analysis of the burden of cerebrovascular diseases at a national level. Prevention and control programmes aimed at reducing the disease burden merit further priority in Spain. Copyright © 2013 Elsevier España, S.L.U. All rights reserved.
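
    The DALY bookkeeping behind these figures is simple arithmetic, which makes the headline numbers easy to check:

    ```python
    # DALY = YLL + YLD, using the totals reported in the abstract.
    ylls, ylds = 337_000, 81_052
    dalys = ylls + ylds
    print(dalys)                          # 418052 total DALYs
    print(round(100 * ylls / dalys, 1))   # 80.6 (% of burden from premature death)
    print(round(100 * ylds / dalys, 1))   # 19.4 (% of burden from disability)
    # The quoted rate of 1,113 DALYs per 100,000 population implies, by
    # back-calculation (ours, not the paper's), an adult population of
    # roughly dalys / 1113 * 100,000, i.e. about 37.6 million.
    ```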

  11. Genetic programming-based chaotic time series modeling

    Institute of Scientific and Technical Information of China (English)

    张伟; 吴智铭; 杨根科

    2004-01-01

    This paper proposes a Genetic Programming-Based Modeling (GPM) algorithm for chaotic time series. GP is used here to search for appropriate model structures in function space, and the Particle Swarm Optimization (PSO) algorithm is used for Nonlinear Parameter Estimation (NPE) of dynamic model structures. In addition, GPM integrates the results of Nonlinear Time Series Analysis (NTSA) to adjust the parameters and takes them as the criteria of established models. Experiments showed the effectiveness of such improvements on chaotic time series modeling.
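
    The NPE step pairs naturally with PSO because the model structure is fixed while its parameters are searched. A minimal one-dimensional PSO sketch; the model structure y = a·x stands in for a GP-proposed structure, and all names and parameter values are illustrative, not from the paper:

    ```python
    import random

    random.seed(0)

    def pso_fit(loss, lo, hi, n_particles=20, iters=60, w=0.7, c1=1.5, c2=1.5):
        """Minimal 1-D particle swarm: each particle tracks its personal best,
        the swarm shares a global best, and velocities blend inertia with
        attraction toward both bests."""
        xs = [random.uniform(lo, hi) for _ in range(n_particles)]
        vs = [0.0] * n_particles
        pbest = xs[:]
        gbest = min(xs, key=loss)
        for _ in range(iters):
            for i in range(n_particles):
                vs[i] = (w * vs[i]
                         + c1 * random.random() * (pbest[i] - xs[i])
                         + c2 * random.random() * (gbest - xs[i]))
                xs[i] += vs[i]
                if loss(xs[i]) < loss(pbest[i]):
                    pbest[i] = xs[i]
                if loss(xs[i]) < loss(gbest):
                    gbest = xs[i]
        return gbest

    # Toy NPE problem: structure y = a*x is fixed (as if proposed by GP);
    # PSO estimates a from observations generated with a = 2.5.
    xs_data = [1.0, 2.0, 3.0, 4.0, 5.0]
    ys_data = [2.5 * x for x in xs_data]
    loss = lambda a: sum((a * x - y) ** 2 for x, y in zip(xs_data, ys_data))
    fit = pso_fit(loss, 0.0, 5.0)
    print(round(fit, 2))
    ```

    On chaotic series the loss surface is far more rugged than this convex toy, which is precisely why the paper leans on NTSA results to guide and vet the fitted parameters.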

  12. Empirically Based, Agent-based models

    Directory of Open Access Journals (Sweden)

    Elinor Ostrom

    2006-12-01

    Full Text Available There is an increasing drive to combine agent-based models with empirical methods. An overview is provided of the various empirical methods that are used for different kinds of questions. Four categories of empirical approaches are identified in which agent-based models have been empirically tested: case studies, stylized facts, role-playing games, and laboratory experiments. We discuss how these different types of empirical studies can be combined. The various ways empirical techniques are used illustrate the main challenges of contemporary social sciences: (1) how to develop models that are generalizable and still applicable in specific cases, and (2) how to scale up the processes of interactions of a few agents to interactions among many agents.

  13. First-Year Village: Experimenting with an African Model for First-Year Adjustment and Support in South Africa

    Science.gov (United States)

    Speckman, McGlory

    2016-01-01

    Predicated on the principles of success and contextuality, this chapter shares an African perspective on a first-year adjustment programme, known as First-Year Village, including its potential and challenges in establishing it.

  14. Holocene sea-level changes along the North Carolina Coastline and their implications for glacial isostatic adjustment models

    Science.gov (United States)

    Horton, B.P.; Peltier, W.R.; Culver, S.J.; Drummond, R.; Engelhart, S.E.; Kemp, A.C.; Mallinson, D.; Thieler, E.R.; Riggs, S.R.; Ames, D.V.; Thomson, K.H.

    2009-01-01

    We have synthesized new and existing relative sea-level (RSL) data to produce a quality-controlled, spatially comprehensive database from the North Carolina coastline. The RSL database consists of 54 sea-level index points that are quantitatively related to an appropriate tide level and assigned an error estimate, and a further 33 limiting dates that confine the maximum and minimum elevations of RSL. The temporal distribution of the index points is very uneven with only five index points older than 4000 cal a BP, but the form of the Holocene sea-level trend is constrained by both terrestrial and marine limiting dates. The data illustrate RSL rapidly rising during the early and mid Holocene from an observed elevation of -35.7 ± 1.1 m MSL at 11062-10576 cal a BP to -4.2 ± 0.4 m MSL at 4240-3592 cal a BP. We restricted comparisons between observations and predictions from the ICE-5G(VM2) with rotational feedback Glacial Isostatic Adjustment (GIA) model to the Late Holocene RSL (last 4000 cal a BP) because of the wealth of sea-level data during this time interval. The ICE-5G(VM2) model predicts significant spatial variations in RSL across North Carolina, thus we subdivided the observations into two regions. The model forecasts an increase in the rate of sea-level rise in Region 1 (Albemarle, Currituck, Roanoke, Croatan, and northern Pamlico sounds) compared to Region 2 (southern Pamlico, Core and Bogue sounds, and farther south to Wilmington). The observations show Late Holocene sea-level rising at 1.14 ± 0.03 mm year-1 and 0.82 ± 0.02 mm year-1 in Regions 1 and 2, respectively. The ICE-5G(VM2) predictions capture the general temporal trend of the observations, although there is an apparent misfit for index points older than 2000 cal a BP. It is presently unknown whether these misfits are caused by possible tectonic uplift associated with the mid-Carolina Platform High or a flaw in the GIA model. A comparison of local tide gauge data with the Late Holocene RSL

  15. Weather Radar Adjustment Using Runoff from Urban Surfaces

    DEFF Research Database (Denmark)

    Ahm, Malte; Rasmussen, Michael Robdrup

    2017-01-01

    Weather radar data used for urban drainage applications are traditionally adjusted to point ground references, e.g., rain gauges. However, the available rain gauge density for the adjustment is often low, which may lead to significant representativeness errors. Yet, in many urban catchments......, rainfall is often measured indirectly through runoff sensors. This paper presents a method for weather radar adjustment on the basis of runoff observations (Z-Q adjustment) as an alternative to the traditional Z-R adjustment on the basis of rain gauges. Data from a new monitoring station in Aalborg......, Denmark, were used to evaluate the flow-based weather radar adjustment method against the traditional rain-gauge adjustment. The evaluation was performed by comparing radar-modeled runoff to observed runoff. The methodology was both tested on an events basis and multiple events combined. The results...
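
    The abstract does not spell out the Z-Q procedure, but both it and the traditional Z-R gauge adjustment rest on the same two ingredients: converting reflectivity to rain rate through a Z-R power law, and scaling radar rainfall by a bias factor computed against a ground reference. A hedged sketch with illustrative numbers (the Marshall-Palmer coefficients a = 200, b = 1.6 are a common default, not values from this paper):

    ```python
    def radar_rain_rate(z_dbz, a=200.0, b=1.6):
        """Invert a Z-R power law Z = a * R**b (Marshall-Palmer defaults)."""
        z_lin = 10 ** (z_dbz / 10.0)      # dBZ -> linear reflectivity (mm^6/m^3)
        return (z_lin / a) ** (1.0 / b)   # rain rate in mm/h

    # Mean-field bias: scale radar totals so the event sum matches the ground
    # reference -- rain gauges in Z-R adjustment, or rainfall back-calculated
    # from runoff observations in a Z-Q adjustment. Totals are hypothetical.
    radar_totals = [4.1, 6.3, 2.2]        # radar event totals, mm
    reference_totals = [5.0, 7.1, 2.9]    # ground-reference totals, mm
    bias = sum(reference_totals) / sum(radar_totals)
    adjusted = [bias * r for r in radar_totals]
    print(round(bias, 2))                 # 1.19
    ```

    The paper's contribution is the choice of reference: runoff sensors integrate rainfall over the whole catchment, sidestepping the representativeness error of sparse point gauges.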

  16. Parental Perceptions of Family Adjustment in Childhood Developmental Disabilities

    Science.gov (United States)

    Thompson, Sandra; Hiebert-Murphy, Diane; Trute, Barry

    2013-01-01

    Based on the adjustment phase of the double ABC-X model of family stress (McCubbin and Patterson, 1983) this study examined the impact of parenting stress, positive appraisal of the impact of child disability on the family, and parental self-esteem on parental perceptions of family adjustment in families of children with disabilities. For mothers,…

  18. Base Flow Model Validation Project

    Data.gov (United States)

    National Aeronautics and Space Administration — The program focuses on turbulence modeling enhancements for predicting high-speed rocket base flows. A key component of the effort is the collection of high-fidelity...

  19. Fault diagnosis based on continuous simulation models

    Science.gov (United States)

    Feyock, Stefan

    1987-01-01

    The results are described of an investigation of techniques for using continuous simulation models as basis for reasoning about physical systems, with emphasis on the diagnosis of system faults. It is assumed that a continuous simulation model of the properly operating system is available. Malfunctions are diagnosed by posing the question: how can we make the model behave like that. The adjustments that must be made to the model to produce the observed behavior usually provide definitive clues to the nature of the malfunction. A novel application of Dijkstra's weakest precondition predicate transformer is used to derive the preconditions for producing the required model behavior. To minimize the size of the search space, an envisionment generator based on interval mathematics was developed. In addition to its intended application, the ability to generate qualitative state spaces automatically from quantitative simulations proved to be a fruitful avenue of investigation in its own right. Implementations of the Dijkstra transform and the envisionment generator are reproduced in the Appendix.

  20. The Impact of Paradoxical Comorbidities on Risk-Adjusted...

    Data.gov (United States)

    U.S. Department of Health & Human Services — Persistent uncertainty remains regarding assessments of patient comorbidity based on administrative data for mortality risk adjustment. Some models include comorbid...

  1. Risk adjustment models for interhospital comparison of CS rates using Robson's ten group classification system and other socio-demographic and clinical variables.

    Science.gov (United States)

    Colais, Paola; Fantini, Maria P; Fusco, Danilo; Carretta, Elisa; Stivanello, Elisa; Lenzi, Jacopo; Pieri, Giulia; Perucci, Carlo A

    2012-06-21

    Caesarean section (CS) rate is a quality of health care indicator frequently used at national and international level. The aim of this study was to assess whether adjustment for Robson's Ten Group Classification System (TGCS), and clinical and socio-demographic variables of the mother and the fetus is necessary for inter-hospital comparisons of CS rates. The study population includes 64,423 deliveries in Emilia-Romagna between January 1, 2003 and December 31, 2004, classified according to the TGCS. Poisson regression was used to estimate crude and adjusted hospital relative risks of CS compared to a reference category. Analyses were carried out in the overall population and separately according to the Robson groups (groups I, II, III, IV and V-X combined). Adjusted relative risks (RR) of CS were estimated using two risk-adjustment models; the first (M1) including the TGCS group as the only adjustment factor; the second (M2) including in addition demographic and clinical confounders identified using a stepwise selection procedure. Percentage variations between crude and adjusted RRs by hospital were calculated to evaluate the confounding effect of covariates. The percentage variations from crude to adjusted RR proved to be similar in the M1 and M2 models. However, stratified analyses by Robson's classification groups showed that residual confounding for clinical and demographic variables was present in groups I (nulliparous, single, cephalic, ≥37 weeks, spontaneous labour) and III (multiparous, excluding previous CS, single, cephalic, ≥37 weeks, spontaneous labour) and IV (multiparous, excluding previous CS, single, cephalic, ≥37 weeks, induced or CS before labour) and to a minor extent in groups II (nulliparous, single, cephalic, ≥37 weeks, induced or CS before labour) and IV (multiparous, excluding previous CS, single, cephalic, ≥37 weeks, induced or CS before labour). The TGCS classification is useful for inter-hospital comparison of CS rates, but
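
    The spirit of model M1 can be approximated, for illustration, by indirect standardization over Robson groups: compare a hospital's observed CS count with the count expected if each group had the reference CS rate. The paper itself fits Poisson regression; this is a simplified stand-in with made-up numbers:

    ```python
    # Reference CS rates per Robson group (hypothetical), and one hospital's
    # (CS deliveries, total deliveries) per group.
    ref_rates = {"I": 0.15, "II": 0.35, "III": 0.05, "IV": 0.20}
    hospital = {"I": (30, 150), "II": (50, 120), "III": (8, 100), "IV": (22, 80)}

    observed = sum(cs for cs, _ in hospital.values())
    expected = sum(n * ref_rates[g] for g, (_, n) in hospital.items())
    adjusted_rr = observed / expected   # case-mix-adjusted relative risk

    crude_rate = observed / sum(n for _, n in hospital.values())
    print(round(crude_rate, 3), round(adjusted_rr, 3))  # 0.244 1.287
    ```

    Residual confounding, the paper's finding, corresponds to this adjusted ratio still shifting once within-group clinical and demographic covariates are added.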

  2. Adjusting Growth

    Institute of Scientific and Technical Information of China (English)

    2006-01-01

    How does China's approach to economic growth differ from that of the United States? In the context of economic globalization, how can China and the United States establish win-win relations? Focusing on these questions, People's Daily Online Washington-based correspondent Yong Tang recently interviewed Paul A Samuelson, professor of economics at the Massachusetts Institute of Technology, and Robert Mundell, professor of international economics at Columbia University, both of whom are Nobel Prize Laureate...

  3. Event-Based Conceptual Modeling

    DEFF Research Database (Denmark)

    Bækgaard, Lars

    2009-01-01

    The purpose of the paper is to obtain insight into and provide practical advice for event-based conceptual modeling. We analyze a set of event concepts and use the results to formulate a conceptual event model that is used to identify guidelines for creation of dynamic process models and static...... information models. We characterize events as short-duration processes that have participants, consequences, and properties, and that may be modeled in terms of information structures. The conceptual event model is used to characterize a variety of event concepts and it is used to illustrate how events can...... be used to integrate dynamic modeling of processes and static modeling of information structures. The results are unique in the sense that no other general event concept has been used to unify a similar broad variety of seemingly incompatible event concepts. The general event concept can be used...

  4. CLRFU: An Improved LRFU Algorithm Based on CAR Dynamic Adjustment

    Institute of Scientific and Technical Information of China (English)

    王小林; 还璋武

    2016-01-01

    The existing LRFU (Least Recently Frequently Used) policy combines recency and frequency of access to optimize caching, but it adapts poorly to complex workloads such as operating systems, storage systems, and web applications. To address LRFU's inability to adjust λ dynamically, and the failure of existing adaptive algorithms to accommodate multiple access patterns, this paper proposes CLRFU, an improved LRFU algorithm based on a CAR (Clock with Adaptive Replacement) dynamic adjustment strategy. Combined with a quantitative locality analysis model, CLRFU adjusts λ dynamically under different access patterns. Experimental results show that CLRFU adapts well to linear, probabilistic, and strongly local access patterns, improving the overall cache hit ratio.
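
    The λ that CLRFU tunes is the decay rate in LRFU's Combined Recency and Frequency (CRF) score: each past access contributes (1/2) raised to λ times its age. A minimal sketch of the scoring (the dynamic λ adjustment itself, CLRFU's contribution, is not reproduced here):

    ```python
    def crf(access_times, now, lam):
        """LRFU's Combined Recency and Frequency value:
        CRF(now) = sum over past accesses of (1/2) ** (lam * age).
        lam -> 0 degenerates to LFU (pure frequency);
        large lam degenerates to LRU (only the latest access matters)."""
        return sum(0.5 ** (lam * (now - t)) for t in access_times)

    # Block A: accessed often but long ago; Block B: accessed once, just now.
    a, b = [1, 2, 3], [10]
    print(crf(a, 10, lam=0.01) > crf(b, 10, lam=0.01))  # True: frequency wins
    print(crf(a, 10, lam=2.0) > crf(b, 10, lam=2.0))    # False: recency wins
    ```

    Because the two regimes favor different blocks, no single fixed λ serves all access patterns, which is the gap the CAR-based dynamic adjustment targets.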

  5. Event-Based Activity Modeling

    DEFF Research Database (Denmark)

    Bækgaard, Lars

    2004-01-01

    We present and discuss a modeling approach that supports event-based modeling of information and activity in information systems. Interacting human actors and IT-actors may carry out such activity. We use events to create meaningful relations between information structures and the related activit...

  6. Energetics of geostrophic adjustment in rotating flow

    Science.gov (United States)

    Juan, Fang; Rongsheng, Wu

    2002-09-01

    Energetics of geostrophic adjustment in rotating flow is examined in detail with a linear shallow water model. The initial unbalanced flow considered first falls under two classes. The first is similar to that adopted by Gill and is here referred to as a mass imbalance model, for the flow is initially motionless but with a sea surface displacement. The other is the same as that considered by Rossby and is referred to as a momentum imbalance model since there is only a velocity perturbation in the initial field. The significant feature of the energetics of geostrophic adjustment for the above two extreme models is that although the energy conversion ratio has a large case-to-case variability for different initial conditions, its value is bounded below by 0 and above by 1/2. Based on the discussion of the above extreme models, the energetics of adjustment for an arbitrary initial condition is investigated. It is found that the characteristics of the energetics of geostrophic adjustment mentioned above are also applicable to adjustment of the general unbalanced flow under the condition that the energy conversion ratio is redefined as the conversion ratio between the change of kinetic energy and potential energy of the deviational fields.
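
    The quantities involved can be made concrete with the standard linear shallow-water energy integrals. A sketch, assuming mean depth $H$, surface displacement $\eta$, velocity $(u,v)$, and reference density $\rho$ (the notation is ours, not necessarily the paper's):

    ```latex
    \mathrm{KE} = \frac{\rho H}{2}\int_A \left(u^{2}+v^{2}\right)\mathrm{d}A,
    \qquad
    \mathrm{PE} = \frac{\rho g}{2}\int_A \eta^{2}\,\mathrm{d}A,
    \qquad
    r \equiv \frac{\Delta E_{\text{gained}}}{-\,\Delta E_{\text{released}}},
    \qquad
    0 \le r \le \tfrac{1}{2},
    ```

    where the released form is PE in the Gill (mass imbalance) case and KE in the Rossby (momentum imbalance) case; the remainder of the released energy is radiated away by inertia-gravity waves during adjustment.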

  7. Cost Effectiveness of Childhood Cochlear Implantation and Deaf Education in Nicaragua: A Disability Adjusted Life Year Model.

    Science.gov (United States)

    Saunders, James E; Barrs, David M; Gong, Wenfeng; Wilson, Blake S; Mojica, Karen; Tucci, Debara L

    2015-09-01

    Cochlear implantation (CI) is a common intervention for severe-to-profound hearing loss in high-income countries, but is not commonly available to children in low resource environments. Owing in part to the device costs, CI has been assumed to be less economical than deaf education for low resource countries. The purpose of this study is to compare the cost effectiveness of the two interventions for children with severe-to-profound sensorineural hearing loss (SNHL) in a model using disability adjusted life years (DALYs). Cost estimates were derived from published data, expert opinion, and known costs of services in Nicaragua. Individual costs and lifetime DALY estimates with a 3% discounting rate were applied to both interventions. Sensitivity analysis was implemented to evaluate the effect on the discounted cost of five key components: implant cost, audiology salary, speech therapy salary, number of children implanted per year, and device failure probability. The costs per DALY averted are $5,898 and $5,529 for CI and deaf education, respectively. Using standards set by the WHO, both interventions are cost effective. Sensitivity analysis shows that even with all costs set to maximum estimates, CI is still cost effective. Using a conservative DALY analysis, both CI and deaf education are cost-effective treatment alternatives for severe-to-profound SNHL. CI intervention costs are influenced not only by the initial surgery and device costs but also by rehabilitation costs and the lifetime maintenance, device replacement, and battery costs. The major CI cost differences in this low resource setting were increased initial training and infrastructure costs, but lower medical personnel and surgery costs.

  8. UML modeling of a university teacher course-adjustment system

    Institute of Scientific and Technical Information of China (English)

    阎琦

    2013-01-01

    The university teacher course-adjustment system is network application software used for teachers' course rescheduling in institutions of higher education. During requirements analysis, the whole system is divided into six modules, including a teacher course-adjustment module, a teaching secretary module, an academic administration course-adjustment module, and a course information module. The Unified Modeling Language (UML) is used for object-oriented analysis and modeling, completing both the static and dynamic models of the system. In the database design, an E-R diagram establishes the conceptual model of the database.

  9. BREATH: Web-Based Self-Management for Psychological Adjustment After Primary Breast Cancer--Results of a Multicenter Randomized Controlled Trial

    NARCIS (Netherlands)

    Berg, S.W. van den; Gielissen, M.F.M.; Custers, J.A.E.; Graaf, W.T.A. van der; Ottevanger, P.B.; Prins, J.B.

    2015-01-01

    PURPOSE: Early breast cancer survivors (BCSs) report high unmet care needs, and easily accessible care is not routinely available for this growing population. The Breast Cancer E-Health (BREATH) trial is a Web-based self-management intervention to support the psychological adjustment of women after

  10. Modelling Gesture Based Ubiquitous Applications

    CERN Document Server

    Zacharia, Kurien; Varghese, Surekha Mariam

    2011-01-01

    A cost-effective, gesture-based modelling technique called Virtual Interactive Prototyping (VIP) is described in this paper. Prototyping is implemented by projecting a virtual model of the equipment to be prototyped. Users can interact with the virtual model like the original working equipment. For capturing and tracking the user's interactions with the model, image and sound processing techniques are used. VIP is a flexible and interactive prototyping method that has many applications in ubiquitous computing environments. Different commercial as well as socio-economic applications, and an extension of VIP to interactive advertising, are also discussed.

  11. Comparative Research on Capital Structure Adjustment Speeds of Different Adjustment Approaches: An Empirical Study Based on Data from Chinese Listed Companies

    Institute of Scientific and Technical Information of China (English)

    白明; 任若恩

    2011-01-01

    Based on a partial adjustment model, this paper develops a method for calculating capital structure adjustment speeds along different adjustment approaches, and compares these speeds for Chinese listed companies across five approaches: commercial credit, short-term debt, long-term debt, equity, and internal retained earnings. The main findings are as follows. External funds are the main source of capital structure adjustment for Chinese listed companies, which appears to contradict the pecking order theory. The most effective adjustment approach is equity and the least effective is long-term debt. The distribution of the five adjustment speeds differs across industries because of the operating characteristics and ownership of the leading enterprises in each industry. Companies with a financial surplus rely mainly on the equity adjustment approach, whereas those with a financial deficit prefer debt adjustment.
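
    The partial adjustment model underlying the paper takes the form Lev(t) − Lev(t−1) = δ · (Lev* − Lev(t−1)), where δ is the adjustment speed toward the target leverage Lev*. A minimal sketch recovering δ by least squares from synthetic data (the target is taken as known here, whereas the paper must estimate it; all numbers are illustrative):

    ```python
    def estimate_speed(lev, target):
        """Least-squares estimate of delta in the partial adjustment model
        lev[t] - lev[t-1] = delta * (target - lev[t-1])."""
        gaps = [target - lev[t - 1] for t in range(1, len(lev))]
        changes = [lev[t] - lev[t - 1] for t in range(1, len(lev))]
        return sum(g * c for g, c in zip(gaps, changes)) / sum(g * g for g in gaps)

    # Synthetic firm: leverage converges to a 40% target at true speed 0.3.
    target, delta, lev = 0.40, 0.3, [0.10]
    for _ in range(10):
        lev.append(lev[-1] + delta * (target - lev[-1]))

    print(round(estimate_speed(lev, target), 3))  # 0.3
    ```

    Splitting the observed change by financing channel (equity issues, debt changes, retained earnings) yields one such δ per adjustment approach, which is the comparison the paper carries out.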

  12. Adjusting the specificity of an engine map based on the sensitivity of an engine control parameter relative to a performance variable

    Science.gov (United States)

    Jiang, Li; Lee, Donghoon; Yilmaz, Hakan; Stefanopoulou, Anna

    2014-10-28

    Methods and systems for engine control optimization are provided. A first and a second operating condition of a vehicle engine are detected. An initial value is identified for a first and a second engine control parameter corresponding to a combination of the detected operating conditions according to a first and a second engine map look-up table. The initial values for the engine control parameters are adjusted based on a detected engine performance variable to cause the engine performance variable to approach a target value. A first and a second sensitivity of the engine performance variable are determined in response to changes in the engine control parameters. The first engine map look-up table is adjusted when the first sensitivity is greater than a threshold, and the second engine map look-up table is adjusted when the second sensitivity is greater than a threshold.
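A toy version of this sensitivity-gated update might look as follows; the performance function, map contents and threshold are all invented for illustration:

```python
# Toy sketch of the sensitivity-gated map update: only the look-up table
# whose control parameter strongly moves the performance variable is
# adjusted. The performance function, maps and threshold are invented.

def perf(p1, p2):
    return 2.0 * p1 + 0.1 * p2          # parameter 1 dominates by construction

def sensitivity(f, p1, p2, which, h=1e-3):
    """Central finite-difference sensitivity w.r.t. one control parameter."""
    if which == 1:
        return (f(p1 + h, p2) - f(p1 - h, p2)) / (2 * h)
    return (f(p1, p2 + h) - f(p1, p2 - h)) / (2 * h)

key = ("cond_a", "cond_b")               # one detected operating-condition pair
map1 = {key: 1.0}                        # engine map look-up tables (one cell)
map2 = {key: 5.0}

threshold = 1.0
s1 = sensitivity(perf, map1[key], map2[key], 1)
s2 = sensitivity(perf, map1[key], map2[key], 2)

correction = 0.05                        # step toward the target value
if s1 > threshold:                       # only the sensitive map is updated
    map1[key] += correction
if s2 > threshold:
    map2[key] += correction
```

Here only `map1` is adjusted, since the performance variable is far more sensitive to the first parameter than to the second.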

  13. A software tool for estimation of burden of infectious diseases in Europe using incidence-based disability adjusted life years

    NARCIS (Netherlands)

    Colzani, E. (Edoardo); A. Cassini (Alessandro); D. Lewandowski (Daniel); M.J.J. Mangen; Plass, D. (Dietrich); S.A. McDonald (Scott); R.A.W. Van Lier (Rene A. W.); J.A. Haagsma (Juanita); Maringhini, G. (Guido); Pini, A. (Alessandro); P Kramarz (Piotr); M.E.E. Kretzschmar (Mirjam)

    2017-01-01

    The burden of disease framework facilitates the assessment of the health impact of diseases through the use of summary measures of population health such as Disability-Adjusted Life Years (DALYs). However, calculating, interpreting and communicating the results of studies using this met

  14. A Software Tool for Estimation of Burden of Infectious Diseases in Europe Using Incidence-Based Disability Adjusted Life Years

    NARCIS (Netherlands)

    Colzani, Edoardo; Cassini, Alessandro; Lewandowski, Daniel; Mangen, Marie-Josee J; Plass, Dietrich; McDonald, Scott A; van Lier, Alies; Haagsma, Juanita A; Maringhini, Guido; Pini, Alessandro; Kramarz, Piotr; Kretzschmar, Mirjam EE

    2017-01-01

    The burden of disease framework facilitates the assessment of the health impact of diseases through the use of summary measures of population health such as Disability-Adjusted Life Years (DALYs). However, calculating, interpreting and communicating the results of studies using this methodology pose
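The incidence-based DALY arithmetic at the core of such tools reduces to a few lines. Every number below is invented for illustration, and no discounting or age weighting is applied:

```python
# DALY = YLL + YLD, the core arithmetic of an incidence-based burden study.

deaths = 120
life_expectancy_at_death = 30.0            # remaining life years per death
yll = deaths * life_expectancy_at_death    # years of life lost

incidence = 10_000                         # new cases per year
duration_years = 0.05                      # average illness duration per case
disability_weight = 0.2                    # 0 = full health, 1 = death
yld = incidence * duration_years * disability_weight  # years lived with disability

daly = yll + yld                           # 3600 + 100 = 3700 DALYs
```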

  17. Couples' patterns of adjustment to colon cancer.

    Science.gov (United States)

    Northouse, L L; Mood, D; Templin, T; Mellon, S; George, T

    2000-01-01

    The objectives for this longitudinal study were to: (a) compare colon cancer patients' and their spouses' appraisal of illness, resources, concurrent stress, and adjustment during the first year following surgery; (b) examine the influence of gender (male vs female) and role (patient vs spouse caregiver) on study variables; (c) assess the degree of correlation between patients' and spouses' adjustments; and (d) identify factors that affect adjustment to the illness. Fifty-six couples were interviewed at one week post diagnosis, and at 60 days and one year post surgery. Based on a cognitive-appraisal model of stress, the Smilkstein Stress Scale was used to measure concurrent stress; the Family APGAR, Social Support Questionnaire, and Dyadic Adjustment Scale were used to measure social resources; the Beck Hopelessness Scale and Mishel Uncertainty in Illness Scales were used to measure appraisal of illness; and the Brief Symptom Inventory and Psychosocial Adjustment to Illness Scale were used to measure psychosocial adjustment. Repeated Measures Analysis of Variance indicated that spouses reported significantly more emotional distress and less social support than patients. Gender differences were found, with women reporting more distress, more role problems, and less marital satisfaction, regardless of whether they were patient or spouse. Both patients and spouses reported decreases in their family functioning and social support, but also decreases in emotional distress over time. Moderately high autocorrelations and modest intercorrelations were found among and between patients' and spouses' adjustment scores over time. The strongest predictors of patients' role adjustment problems were hopelessness and spouses' role problems. The strongest predictors of spouses' role problems were spouses' own baseline role problems and level of marital satisfaction. Interventions need to start early in the course of illness, be family-focused, and identify the couples at risk of

  18. Agent Based Multiviews Requirements Model

    Institute of Scientific and Technical Information of China (English)

    2006-01-01

    Based on current research in viewpoint-oriented requirements engineering and intelligent agents, we present the concept of a viewpoint agent and its abstract model, based on a meta-language for multi-view requirements engineering. The model provides a basis for consistency checking and integration of requirements from different viewpoints; at the same time, this checking and integration can be carried out automatically by virtue of an intelligent agent's autonomy, proactiveness and social ability. Finally, we illustrate the practical application of the model with a case study of data flow diagrams.

  19. An error predictive control model of mechanical processing based on a double-hidden-layer BP neural network with L-M algorithm adjustment

    Institute of Scientific and Technical Information of China (English)

    龚立雄; 万勇; 侯智; 黄敏; 姜建华

    2013-01-01

    This paper analyses error transmission and formation and, given the highly nonlinear, multi-input multi-output character of the machining process, constructs a predictive control model based on a double-hidden-layer BP neural network trained with the L-M algorithm. The model uses the rigidity of the process system, the workpiece hardness, and the radial error before and after processing to predict and control the total radial feed and the first and second radial feeds of the tool. Experimental and simulation results show that the model can guide production, optimize the machining process and improve product quality. Finally, an error predictive control software system was developed with LabVIEW and MATLAB, making the predictive control visual.

  20. A complete generalized adjustment criterion

    NARCIS (Netherlands)

    Perković, Emilija; Textor, Johannes; Kalisch, Markus; Maathuis, Marloes H.

    2015-01-01

    Covariate adjustment is a widely used approach to estimate total causal effects from observational data. Several graphical criteria have been developed in recent years to identify valid covariates for adjustment from graphical causal models. These criteria can handle multiple causes, latent confound

  1. A Computational Tool for Testing Dose-related Trend Using an Age-adjusted Bootstrap-based Poly-k Test

    Directory of Open Access Journals (Sweden)

    Hojin Moon

    2006-08-01

    Full Text Available A computational tool for testing for a dose-related trend and/or a pairwise difference in the incidence of an occult tumor, via an age-adjusted bootstrap-based poly-k test and the original poly-k test, is presented in this paper. The poly-k test (Bailer and Portier 1988) is a survival-adjusted Cochran-Armitage test, which achieves robustness to the effects of differential mortality across dose groups. The original poly-k test is asymptotically standard normal under the null hypothesis. However, the asymptotic normality is not valid if there is a deviation from the tumor onset distribution that is assumed in this test. Our age-adjusted bootstrap-based poly-k test assesses the significance of assumed asymptotic normal tests and investigates an empirical distribution of the original poly-k test statistic using an age-adjusted bootstrap method. A tumor of interest is an occult tumor, for which the time to onset is not directly observable. Since most animal carcinogenicity studies are designed with a single terminal sacrifice, the present tool is applicable to rodent tumorigenicity assays of that design. The tool takes its input from a user screen and reports the test results back to the screen through a user interface. It is implemented in C/C++ and is applied to analyze a real data set as an example. Our tool enables the FDA and the pharmaceutical industry to implement a statistical analysis of tumorigenicity data from animal bioassays via our age-adjusted bootstrap-based poly-k test and the original poly-k test, which has been adopted by the National Toxicology Program as its standard statistical test.
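The survival adjustment at the heart of the poly-k test is simple to state: an animal that dies tumor-free at time t contributes only (t/T)^k to the effective group size. A minimal sketch with invented data, showing only the weighting step rather than the full trend statistic:

```python
# Poly-k survival adjustment (here k = 3): an animal that dies tumour-free
# at time t contributes weight (t / T)**k to the effective group size,
# while animals found with a tumour count fully. Data are invented.

T = 104.0   # study length in weeks (single terminal sacrifice)
k = 3

# (death week, has_tumour) for one hypothetical dose group
animals = [(104, True), (104, False), (52, False), (78, True), (30, False)]

weights = [1.0 if tumour else (t / T) ** k for t, tumour in animals]
effective_n = sum(weights)                       # survival-adjusted group size
tumour_rate = sum(t for _, t in animals) / effective_n
```

These survival-adjusted group sizes then replace the raw group sizes in a Cochran-Armitage-style trend test across dose groups.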

  2. Hybrid modeling method for adjusting the distribution of locomotive secondary spring loads

    Institute of Scientific and Technical Information of China (English)

    韩锟; 潘迪夫

    2012-01-01

    A hybrid modeling method for adjusting the distribution of locomotive secondary spring loads was proposed, combining mechanism modeling with neural network modeling. In this method, a mechanism model based on the rigid-body assumption was established by mechanical and mathematical methods and serves as the master-rule model for spring load adjustment. An error compensation model built with a BP neural network compensates for the error of the mechanism model. The hybrid model is the parallel connection of these two models, and its output is the sum of their outputs. The results show that the hybrid modeling method further improves the accuracy and efficiency of spring load adjustment in this continuous multi-dimensional modeling problem: compared with the mechanism model alone, the maximum deviation of the secondary spring loads is reduced by 8%-15% and the average adjusting time is reduced by more than 25%.
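The parallel hybrid structure is easy to sketch: a mechanism model carries the main trend, and a data-driven corrector learns only its residual. In the sketch below a polynomial least-squares fit stands in for the paper's BP network, and the "true" process is invented:

```python
import numpy as np

x = np.linspace(0.0, 1.0, 50)
truth = 2.0 * x + 0.3 * np.sin(6.0 * x)   # stand-in for the real process

def mechanism(x):
    """Rigid-body-style first-principles approximation (captures the trend)."""
    return 2.0 * x

# Error-compensation model: a least-squares polynomial standing in for the
# paper's BP network, trained only on the mechanism model's residual.
residual = truth - mechanism(x)
coeffs = np.polyfit(x, residual, deg=5)

def hybrid(x):
    """Parallel hybrid: mechanism output plus learned residual correction."""
    return mechanism(x) + np.polyval(coeffs, x)

mech_err = np.max(np.abs(truth - mechanism(x)))
hyb_err = np.max(np.abs(truth - hybrid(x)))
```

Because the corrector only has to model the (small, smooth) residual rather than the whole input-output map, the hybrid's maximum error is far below that of the mechanism model alone.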

  3. Model-Based Security Testing

    CERN Document Server

    Schieferdecker, Ina; Schneider, Martin; 10.4204/EPTCS.80.1

    2012-01-01

    Security testing aims at validating software system requirements related to security properties like confidentiality, integrity, authentication, authorization, availability, and non-repudiation. Although security testing techniques have been available for many years, there have been few approaches that allow for the specification of test cases at a higher level of abstraction, for enabling guidance on test identification and specification, as well as for automated test generation. Model-based security testing (MBST) is a relatively new field, especially dedicated to the systematic and efficient specification and documentation of security test objectives, security test cases and test suites, as well as to their automated or semi-automated generation. In particular, the combination of security modelling and test generation approaches is still a challenge in research and of high interest for industrial applications. MBST includes e.g. security functional testing, model-based fuzzing, risk- and threat-oriented testing,...

  4. Adjustment computations spatial data analysis

    CERN Document Server

    Ghilani, Charles D

    2011-01-01

    The complete guide to adjusting for measurement error, expanded and updated. No measurement is ever exact. Adjustment Computations updates a classic, definitive text on surveying with the latest methodologies and tools for analyzing and adjusting errors, with a focus on least squares adjustment, the most rigorous methodology available and the one on which accuracy standards for surveys are based. This extensively updated Fifth Edition shares new information on advances in modern software and GNSS-acquired data. Expanded sections offer a greater number of computable problems and their worked solutions.
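A least squares adjustment in the book's spirit reconciles redundant observations by minimising the sum of squared residuals. A tiny leveling-network example with invented observations:

```python
import numpy as np

# Three height differences observed between two unknown benchmarks A and B
# and a fixed datum (height 0). The three observations over-determine the
# two unknowns, so they are reconciled by least squares: minimise ||Ax - l||.

# Observations: A - datum = 10.02, B - A = 5.01, B - datum = 15.00
A = np.array([[ 1.0, 0.0],
              [-1.0, 1.0],
              [ 0.0, 1.0]])
l = np.array([10.02, 5.01, 15.00])

x, *_ = np.linalg.lstsq(A, l, rcond=None)   # adjusted heights of A and B
v = A @ x - l                                # residuals after adjustment
```

The adjusted heights come out as A = 10.01 and B = 15.01, spreading the 0.03 of inconsistency evenly as residuals of ±0.01 on each observation.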

  5. Heat transfer simulation and retort program adjustment for thermal processing of wheat based Haleem in semi-rigid aluminum containers.

    Science.gov (United States)

    Vatankhah, Hamed; Zamindar, Nafiseh; Shahedi Baghekhandan, Mohammad

    2015-10-01

    A mixed computational strategy was used to simulate and optimize the thermal processing of Haleem, an ancient eastern food, in semi-rigid aluminum containers. Average temperature values from the experiments showed no significant difference (α = 0.05) from the predicted temperatures at the same positions. According to the model, the slowest heating zone was located at the geometric center of the container, where F0 was estimated to be 23.8 min. A 19 min decrease in the holding time of the treatment was estimated to optimize the heating operation, since the preferred F0 of starch- or meat-based fluid foods is about 4.8-7.5 min.

  6. Modeling and identification for the self-adjusting control of generation processes

    Energy Technology Data Exchange (ETDEWEB)

    Ricano Castillo, Juan Manuel; Palomares Gonzalez, Daniel [Instituto de Investigaciones Electricas, Cuernavaca (Mexico)

    1989-12-31

    The recursive least squares method is employed to obtain a multivariable autoregressive moving-average model, needed for the design of a self-adjusting multivariable controller. This article describes the technique employed and the results obtained in characterizing the model structure and performing the parametric estimation. Curves showing the speed of convergence towards the numerical values of the parameters are presented.

  7. Digital constant current source based on an adjustable voltage regulator

    Institute of Scientific and Technical Information of China (English)

    苗新法

    2011-01-01

    This paper discusses the design of digitally controlled DC constant current sources, reviews existing implementations and analyses their shortcomings, and then presents an efficient digitally controlled constant current source based on an adjustable voltage regulator chip. The schematic diagram of the circuit is given, and the working principle of the complete system and the calculation of the relevant parameters are described in detail. The core module and the feedback module are introduced from the point of view of the system's composition. The basic principle is to use the feedback pin of the core chip and change the parallel feedback into series feedback, thereby achieving constant-current behaviour. The system offers presettable output current, low output current ripple and other features. The constant current circuit designed here has been adopted in an instrument developed by the author and has found wide application.

  8. Kernel model-based diagnosis

    Institute of Scientific and Technical Information of China (English)

    2002-01-01

    The methods for computing the kernel consistency-based diagnoses and the kernel abductive diagnoses are only suited to situations where some of the fault behavioral modes of the components are known. A characterization of kernel model-based diagnosis based on the general causal theory is proposed, which overcomes the limitation of the above methods when all behavioral modes of each component are known. Using this method, when the logically deduced observation subsets are assigned to the empty set or to the whole observation set respectively, the kernel consistency-based diagnoses and the kernel abductive diagnoses can handle all situations. The direct relationship between this diagnostic procedure and the prime implicants/implicates is proved, thus linking the theoretical result with its implementation.

  9. An Evaluation of the Adjusted DeLone and McLean Model of Information Systems Success; the case of financial information system in Ferdowsi University of Mashhad

    OpenAIRE

    Mohammad Lagzian; Shamsoddin Nazemi; Fatemeh Dadmand

    2012-01-01

    Assessing the success of information systems within organizations has been identified as one of the most critical subjects of information system management in both public and private organizations. It is therefore important to measure the success of information systems from the user's perspective. The purpose of the current study was to evaluate the degree of information system success using the adjusted DeLone and McLean model in the field of financial information systems (FIS) in an Iranian Univ...

  10. The common sense model of self-regulation and psychological adjustment to predictive genetic testing: a prospective study.

    Science.gov (United States)

    van Oostrom, Iris; Meijers-Heijboer, Hanne; Duivenvoorden, Hugo J; Bröcker-Vriends, Annette H J T; van Asperen, Christi J; Sijmons, Rolf H; Seynaeve, Caroline; Van Gool, Arthur R; Klijn, Jan G M; Tibben, Aad

    2007-12-01

    This prospective study explored the contribution of illness representations and coping to cancer-related distress in unaffected individuals undergoing predictive genetic testing for an identified mutation in BRCA1/2 (BReast CAncer) or an HNPCC (Hereditary Nonpolyposis Colorectal Cancer)-related gene, based on the common sense model of self-regulation. Coping with hereditary cancer (UCL), illness representations (IPQ-R) and risk perception were assessed in 235 unaffected applicants for genetic testing before test result disclosure. Hereditary cancer distress (IES) and cancer worry (CWS) were assessed before, 2 weeks after and 6 months after result disclosure. Timeline (r = 0.30), consequences (r = 0.25), illness coherence (r = 0.21) and risk perception (r = 0.20) were significantly correlated with passive coping. Passive coping predicted hereditary cancer distress and cancer worry from pre-test (beta = 0.46 and 0.42, respectively) up to 6 months after result disclosure (beta = 0.32 and 0.19, respectively). Illness coherence also predicted hereditary cancer distress up to 6 months after result disclosure (beta = 0.24). The self-regulatory model may be useful to predict the cognitive and emotional reactions to genetic cancer susceptibility testing. Identifying unhelpful representations and cognitive restructuring may be appropriate interventions to help distressed individuals undergoing genetic susceptibility testing for a BRCA1/2 or HNPCC-related mutation.

  11. Model-based tomographic reconstruction

    Science.gov (United States)

    Chambers, David H; Lehman, Sean K; Goodman, Dennis M

    2012-06-26

    A model-based approach to estimating wall positions for a building is developed and tested using simulated data. It borrows two techniques from geophysical inversion problems, layer stripping and stacking, and combines them with a model-based estimation algorithm that minimizes the mean-square error between the predicted signal and the data. The technique is designed to process multiple looks from an ultra wideband radar array. The processed signal is time-gated and each section processed to detect the presence of a wall and estimate its position, thickness, and material parameters. The floor plan of a building is determined by moving the array around the outside of the building. In this paper we describe how the stacking and layer stripping algorithms are combined and show the results from a simple numerical example of three parallel walls.

  12. MODELING OF THE HEAT PUMP STATION ADJUSTABLE LOOP OF AN INTERMEDIATE HEAT-TRANSFER AGENT (Part I)

    Directory of Open Access Journals (Sweden)

    Sit B.

    2009-08-01

    Full Text Available This paper examines the equations of dynamics and statics of the adjustable intermediate loop of a carbon dioxide heat pump station. The heat pump station is part of a combined heat supply system. Control of the thermal capacity transferred from the low-potential heat source is realized by changing the speed of circulation of the liquid in the loop and by changing the area of the heat-transfer surface, both in the evaporator and in the intermediate heat exchanger, depending on an operating parameter such as external air temperature or wind speed.

  13. Human physiologically based pharmacokinetic model for propofol

    Directory of Open Access Journals (Sweden)

    Schnider Thomas W

    2005-04-01

    Full Text Available Abstract Background Propofol is widely used for both short-term anesthesia and long-term sedation. It has unusual pharmacokinetics because of its high lipid solubility. The standard approach to describing the pharmacokinetics is by a multi-compartmental model. This paper presents the first detailed human physiologically based pharmacokinetic (PBPK) model for propofol. Methods PKQuest, a freely distributed software routine (http://www.pkquest.com), was used for all the calculations. The "standard human" PBPK parameters developed in previous applications are used. It is assumed that blood and tissue binding is determined by simple partition into the tissue lipid, which is characterized by two previously determined sets of parameters: (1) the value of the propofol oil/water partition coefficient; (2) the lipid fraction in the blood and tissues. The model was fit to the individual experimental data of Schnider et al. (Anesthesiology 1998; 88:1170), in which an initial bolus dose was followed 60 minutes later by a one-hour constant infusion. Results The PBPK model provides a good description of the experimental data over a large range of input dosage, subject age and fat fraction. Only one adjustable parameter (the liver clearance) is required to describe the constant infusion phase for each individual subject. In order to fit the bolus injection phase, for 10 of the 24 subjects it was necessary to assume that a fraction of the bolus dose was sequestered and then slowly released from the lungs (characterized by two additional parameters). The average weighted residual error (WRE) of the PBPK model fit to both the bolus and infusion phases was 15%; similar to the WRE for just the constant infusion phase obtained by Schnider et al. using a 6-parameter NONMEM compartmental model. Conclusion A PBPK model using standard human parameters and a simple description of tissue binding provides a good description of human propofol kinetics. The major advantage of a

  14. Model-Based Security Testing

    Directory of Open Access Journals (Sweden)

    Ina Schieferdecker

    2012-02-01

    Full Text Available Security testing aims at validating software system requirements related to security properties like confidentiality, integrity, authentication, authorization, availability, and non-repudiation. Although security testing techniques have been available for many years, there have been few approaches that allow for the specification of test cases at a higher level of abstraction, for enabling guidance on test identification and specification, as well as for automated test generation. Model-based security testing (MBST) is a relatively new field, especially dedicated to the systematic and efficient specification and documentation of security test objectives, security test cases and test suites, as well as to their automated or semi-automated generation. In particular, the combination of security modelling and test generation approaches is still a challenge in research and of high interest for industrial applications. MBST includes e.g. security functional testing, model-based fuzzing, risk- and threat-oriented testing, and the usage of security test patterns. This paper provides a survey on MBST techniques and the related models as well as samples of new methods and tools that are under development in the European ITEA2 project DIAMONDS.

  15. Does being empathic pay off? Associations between performance-based measures of empathy and social adjustment in younger and older women.

    Science.gov (United States)

    Blanke, Elisabeth S; Rauers, Antje; Riediger, Michaela

    2016-08-01

    Cognitive empathy (the ability to infer another person's thoughts and feelings) and emotional empathy (the ability to emotionally resonate with another person's feelings) have been associated with social adjustment. Traditionally, these skills are assessed with self-report measures. However, these may not adequately reflect people's actual empathic abilities. There is little and inconsistent empirical evidence on associations between performance-based empathy and positive social adjustment. In the study presented here, we gathered further evidence for such an association. Using a realistic interaction task in which unfamiliar women were paired into dyads and talked about positive and negative events in their lives, we assessed empathic accuracy (an indicator of cognitive empathy) and emotional congruence (an indicator of emotional empathy). Additionally, we obtained 2 indicators of social adjustment: participants' self-rated satisfaction regarding the communication with their partner in the interaction task, and their self-rated satisfaction with social relationships in general. We furthermore explored the role of potential moderators, which may help to explain discrepant past findings. To test for contextual and interindividual differences, we distinguished between positive and negative emotional valence in the empathy task and investigated 2 adult age groups (102 younger women: 20-31 years; 106 older women: 69-80 years). For almost all analyses, only empathic skills for positive (not negative) affect were predictive of social adjustment, and the associations were comparable for younger and older women. These results underline the role of valence in associations between empathic skills and social adjustment across the life span.

  16. Burden of typhoid fever in low-income and middle-income countries: a systematic, literature-based update with risk-factor adjustment.

    Science.gov (United States)

    Mogasale, Vittal; Maskery, Brian; Ochiai, R Leon; Lee, Jung Seok; Mogasale, Vijayalaxmi V; Ramani, Enusa; Kim, Young Eun; Park, Jin Kyung; Wierzba, Thomas F

    2014-10-01

    Lack of access to safe water is an important risk factor for typhoid fever, yet risk-level heterogeneity is unaccounted for in previous global burden estimates. Since WHO has recommended risk-based use of the typhoid polysaccharide vaccine, we revisited the burden of typhoid fever in low-income and middle-income countries (LMICs) after adjusting for water-related risk. We estimated the typhoid disease burden from studies done in LMICs based on blood-culture-confirmed incidence rates applied to the 2010 population, after correcting for operational issues related to surveillance, limitations of diagnostic tests, and water-related risk. We derived incidence estimates, correction factors, and mortality estimates from systematic literature reviews. We did scenario analyses for risk factors, diagnostic sensitivity, and case fatality rates, accounting for the uncertainty in these estimates, and compared them with previous disease burden estimates. The estimated number of typhoid fever cases in LMICs in 2010 after adjusting for water-related risk was 11·9 million (95% CI 9·9-14·7) cases with 129 000 (75 000-208 000) deaths. By comparison, the estimated risk-unadjusted burden was 20·6 million (17·5-24·2) cases and 223 000 (131 000-344 000) deaths. Scenario analyses indicated that the risk-factor adjustment and the updated diagnostic test correction factor derived from systematic literature reviews were the drivers of the differences between the current estimate and past estimates. The risk-adjusted typhoid fever burden estimate was more conservative than previous estimates. However, by distinguishing the risk differences, it will allow assessment of the effect at the population level and will facilitate cost-effectiveness calculations for risk-based vaccination strategies for a future typhoid conjugate vaccine. Copyright © 2014 Mogasale et al. Open Access article distributed under the terms of CC BY-NC-SA.
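The structure of such a risk-adjusted estimate is a chain of multiplicative corrections. A back-of-envelope version follows; every number is invented for illustration, whereas the paper derives its correction factors and risk fractions from systematic literature reviews:

```python
# Risk-adjusted burden arithmetic: crude blood-culture-confirmed incidence
# is scaled up for surveillance and diagnostic under-ascertainment, then
# scaled down to the water-related at-risk share of the population.

population = 50_000_000
crude_incidence = 0.0004        # confirmed cases per person-year

surveillance_cf = 1.25          # corrects surveillance under-ascertainment
diagnostic_cf = 1.6             # corrects imperfect blood-culture sensitivity
at_risk_fraction = 0.6          # share of population without safe water

adjusted_cases = (population * crude_incidence
                  * surveillance_cf * diagnostic_cf * at_risk_fraction)

case_fatality = 0.01
deaths = adjusted_cases * case_fatality
```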

  17. Positioning and number of nutritional levels in dose-response trials to estimate the optimal-level and the adjustment of the models

    Directory of Open Access Journals (Sweden)

    Fernando Augusto de Souza

    2014-07-01

    Full Text Available The aim of this research was to evaluate the influence of the number and position of the nutrient levels used in dose-response trials on the estimation of the optimal level (OL) and on the goodness of fit of the models: quadratic polynomial (QP), exponential (EXP), linear response plateau (LRP) and quadratic response plateau (QRP). Data from dose-response trials carried out at FCAV-Unesp Jaboticabal were used, assuming homogeneity of variances and normal distribution. The fit of the models was evaluated with the following statistics: the adjusted coefficient of determination (R²adj), the coefficient of variation (CV) and the sum of squared deviations (SSD). For the QP and EXP models, small changes in the placement and distribution of the levels caused large changes in the estimation of the OL. The LRP model was strongly influenced by the absence or presence of a level between the response and stabilization phases (the change from the straight line to the plateau). The QRP model needed more levels in the response phase, and the last level in the stabilization phase, to estimate the plateau correctly. It was concluded that the OL and the fit of the models depend on the positioning and number of the levels and on the specific characteristics of each model; levels placed near the true requirement and not too widely spaced are better for estimating the OL.
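The LRP model's sensitivity to a level near the break point is easy to see once the fitting procedure is written out. A minimal sketch with invented data, using a grid search over candidate break points (given a break, the intercept and slope are linear and solvable by least squares):

```python
import numpy as np

def fit_lrp(doses, resp, candidates):
    """Fit a linear-response-plateau model by grid search over the break."""
    best = (np.inf, None)
    for brk in candidates:
        x_eff = np.minimum(doses, brk)            # dose capped at the break
        A = np.column_stack([np.ones_like(x_eff), x_eff])
        coef, *_ = np.linalg.lstsq(A, resp, rcond=None)
        sse = np.sum((A @ coef - resp) ** 2)
        if sse < best[0]:
            best = (sse, (brk, coef[0], coef[1]))
    return best[1]

doses = np.array([0.0, 0.2, 0.4, 0.6, 0.8, 1.0])
resp = np.array([1.0, 2.0, 3.0, 4.0, 4.0, 4.0])   # toy data, true break at 0.6
brk, a, b = fit_lrp(doses, resp, np.linspace(0.1, 0.9, 81))
```

With no dose level between the rising phase and the plateau, many candidate breaks fit almost equally well, which is exactly the instability in the OL estimate the paper reports for the LRP model.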

  18. Normative weight-adjusted models for the median levels of first trimester serum biomarkers for trisomy 21 screening in a specific ethnicity.

    Science.gov (United States)

    Kor-Anantakul, Ounjai; Suntharasaj, Thitima; Suwanrath, Chitkasaem; Hanprasertpong, Tharangrut; Pranpanus, Savitree; Pruksanusak, Ninlapa; Janwadee, Suthiraporn; Geater, Alan

    2017-01-01

    To establish normative weight-adjusted models for the median levels of first trimester serum biomarkers for trisomy 21 screening in southern Thai women, and to compare these reference levels with Caucasian-specific and northern Thai models. A cross-sectional study was conducted in 1,150 normal singleton pregnancies to determine serum pregnancy-associated plasma protein-A (PAPP-A) and free β-human chorionic gonadotropin (β-hCG) concentrations in women from southern Thailand. The predicted median values were compared with published equations for Caucasians and northern Thai women. The best-fitting regression equations for the expected median serum levels of PAPP-A (mIU/L) and free β-hCG (ng/mL) according to maternal weight (Wt in kg) and gestational age (GA in days) were: [Formula: see text] and [Formula: see text] Both equations were selected with a statistically significant contribution (p < 0.05). Compared with the Caucasian model, the median values of PAPP-A were higher and the median values of free β-hCG were lower in the southern Thai women. Compared with the northern Thai models, the median values of both biomarkers were lower in southern Thai women. The study successfully developed maternal-weight- and gestational-age-adjusted normative median models to convert PAPP-A and free β-hCG levels into their Multiple of the Median (MoM) equivalents in southern Thai women. These models confirmed ethnic differences.
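    Once a median regression like the ones above has been fitted, screening converts each observed biomarker level into a Multiple of the Median by dividing by the expected median for the mother's weight and gestational age. The paper's fitted formulas are elided in the abstract, so the coefficients below are purely hypothetical placeholders illustrating the mechanics.

```python
import math

def expected_median_papp_a(weight_kg, ga_days):
    """Hypothetical log-linear median model. The coefficients are invented
    for illustration; the paper's actual equations are not reproduced here."""
    return math.exp(0.5 + 0.035 * ga_days - 0.008 * weight_kg)

def mom(observed, weight_kg, ga_days):
    # Multiple of the Median: observed level / expected median for this woman
    return observed / expected_median_papp_a(weight_kg, ga_days)

# Sanity check: a level equal to the model's expected median gives MoM = 1.0
w, ga = 60.0, 80.0
level = expected_median_papp_a(w, ga)
print(mom(level, w, ga))  # → 1.0
```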

  19. Adjusting of Wind Input Source Term in WAVEWATCH III Model for the Middle-Sized Water Body on the Basis of the Field Experiment

    Directory of Open Access Journals (Sweden)

    Alexandra Kuznetsova

    2016-01-01

    Full Text Available The adjustment of the wind input source term in the numerical model WAVEWATCH III for a middle-sized water body is reported. For this purpose, a field experiment on the Gorky Reservoir was carried out. Surface waves were measured along with the parameters of the airflow, including wind speed measurements in close proximity to the water surface. On the basis of the experimental results, a parameterization of the drag coefficient depending on the 10 m wind speed is proposed. This parameterization is used in WAVEWATCH III to adjust the wind input source term within the WAM 3 and Tolman and Chalikov parameterizations. The simulation of surface wind waves in WAVEWATCH III, tuned to the conditions of the middle-sized water body, is performed using three built-in parameterizations (WAM 3, Tolman and Chalikov, and WAM 4) as well as the adjusted wind input source term parameterizations. The applicability of the model to the middle-sized reservoir is verified by comparing the simulated data with the results of the field experiment. It is shown that the use of the proposed parameterization CD(U10) improves the agreement in the significant wave height HS between the field experiment and the numerical simulation.
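    The abstract does not reproduce the fitted CD(U10) curve, so the sketch below uses a classical open-ocean-style linear bulk formula (Wu-type coefficients) purely to illustrate how such a parameterization feeds into the wind stress that drives the wave model's input source term.

```python
def drag_coefficient(u10):
    """Illustrative bulk parameterization C_D(U10). These are classical
    open-ocean coefficients, NOT the values fitted on Gorky Reservoir data."""
    return 1e-3 * (0.8 + 0.065 * u10)

def wind_stress(u10, rho_air=1.225):
    # Wind stress from the drag law: tau = rho_air * C_D * U10^2 (N/m^2)
    return rho_air * drag_coefficient(u10) * u10 ** 2

print(f"C_D at 10 m/s: {drag_coefficient(10.0):.2e}")
print(f"wind stress at 10 m/s: {wind_stress(10.0):.3f} N/m^2")
```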

  20. Reliability adjustment: a necessity for trauma center ranking and benchmarking.

    Science.gov (United States)

    Hashmi, Zain G; Dimick, Justin B; Efron, David T; Haut, Elliott R; Schneider, Eric B; Zafar, Syed Nabeel; Schwartz, Diane; Cornwell, Edward E; Haider, Adil H

    2013-07-01

    Currently, trauma center quality benchmarking is based on risk-adjusted observed-to-expected (O/E) mortality ratios. However, failure to account for the number of patients has recently been shown to produce unreliable mortality estimates, especially for low-volume centers. This study explores the effect of reliability adjustment (RA), a statistical technique developed to eliminate the bias introduced by low volume, on risk-adjusted trauma center benchmarking. An analysis of the National Trauma Data Bank 2010 was performed. Patients 16 years or older with blunt or penetrating trauma and an Injury Severity Score (ISS) of 9 or greater were included. Based on the statistically accepted standards of the Trauma Quality Improvement Program methodology, risk-adjusted mortality rates were generated for each center and used to rank them accordingly. Hierarchical logistic regression modeling was then performed to adjust these rates for reliability using an empirical Bayes approach. The impact of RA was examined by (1) recalculating interfacility variations in adjusted mortality rates and (2) comparing adjusted hospital mortality quintile rankings before and after RA. A total of 557 facilities (with 278,558 patients) were included. RA significantly reduced the variation in risk-adjusted mortality rates between centers, from 14-fold (0.7-9.8%) to only 2-fold (4.4-9.6%). This reduction in variation was most profound for smaller centers. A total of 68 "best" hospitals and 18 "worst" hospitals based on current risk adjustment methods were reclassified after performing RA. Reliability adjustment dramatically reduces variations in risk-adjusted mortality arising from statistical noise, especially for lower volume centers. Moreover, the absence of RA had a profound impact on hospital performance assessment, suggesting that nearly one of every six hospitals in the National Trauma Data Bank would have been inappropriately placed among the very best or very worst quintile of rankings. RA should be incorporated into trauma center ranking and benchmarking.
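    The core of empirical Bayes reliability adjustment is shrinkage: each center's risk-adjusted rate is pulled toward the overall mean in proportion to how noisy its estimate is, so low-volume centers move the most. The sketch below shows that shrinkage mechanism with made-up numbers; it is not the hierarchical logistic model used in the study.

```python
import numpy as np

def reliability_adjust(rates, volumes, sigma2_between):
    """Shrink observed center rates toward the volume-weighted grand mean.
    reliability = between-center variance / (between + within/n); centers with
    small n have low reliability and are pulled further toward the mean."""
    rates = np.asarray(rates, float)
    volumes = np.asarray(volumes, float)
    grand_mean = np.average(rates, weights=volumes)
    within = grand_mean * (1 - grand_mean) / volumes   # binomial sampling variance
    reliability = sigma2_between / (sigma2_between + within)
    return grand_mean + reliability * (rates - grand_mean)

# A tiny center with an extreme rate is shrunk far more than large centers
rates = [0.20, 0.06, 0.05]
volumes = [25, 2500, 4000]
adj = reliability_adjust(rates, volumes, sigma2_between=1e-4)
print(adj)  # the 14-fold spread collapses toward the grand mean
```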

  1. Initial Pre-stress Finding and Structural Behaviors Analysis of Cable Net Based on Linear Adjustment Theory

    Institute of Scientific and Technical Information of China (English)

    REN Tao; CHEN Wu-jun; FU Gong-yi

    2008-01-01

    The tensile cable-strut structure is a self-equilibrated pre-stressed system, and the initial pre-stress calculation is fundamental to its structural analysis. A new numerical procedure was developed: the force density method is the cornerstone of the analytical formulation and is introduced into linear adjustment theory, from which the least-squares minimum-norm solution, the optimized initial pre-stress, is obtained. The initial pre-stress and structural performance of a particular single-layer saddle-shaped cable-net structure were analyzed with the developed method, which is shown to be efficient and correct. Modal analyses were performed with respect to various pre-stress levels. Finally, the structural performance was investigated comprehensively.
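    The least-squares minimum-norm solution of an underdetermined linear system, of the kind that arises when solving for force densities in an equilibrium formulation, can be computed with the Moore-Penrose pseudoinverse. The small system below is a generic illustration, not the paper's cable-net model.

```python
import numpy as np

# Illustrative underdetermined equilibrium system A q = p, where A stands in
# for a (hypothetical) force-density equilibrium matrix, q for force densities,
# and p for nodal loads. More unknowns than equations: many exact solutions.
A = np.array([[1.0, 1.0, 0.0],
              [0.0, 1.0, 1.0]])
p = np.array([2.0, 3.0])

# Moore-Penrose pseudoinverse gives the least-squares minimum-norm solution:
# among all q satisfying A q = p, the one with the smallest Euclidean norm.
q = np.linalg.pinv(A) @ p

print(q)                      # minimum-norm force densities
print(np.allclose(A @ q, p))  # → True: equilibrium is satisfied exactly
```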

  2. The rise and fall of divorce - a sociological adjustment of Becker's model of the marriage market

    DEFF Research Database (Denmark)

    Andersen, Signe Hald; Hansen, Lars Gårn

    Despite the strong and persistent influence of Gary Becker’s marriage model, the model does not completely explain the observed correlation between married women’s labor market participation and overall divorce rates. In this paper we show how a simple sociologically inspired extension of the model...... this aspect into Becker’s model, the model provides predictions of divorce rates and causes that fit more closely with empirical observations. (JEL: J1)...

  3. Stimulating Scientific Reasoning with Drawing-Based Modeling

    Science.gov (United States)

    Heijnes, Dewi; van Joolingen, Wouter; Leenaars, Frank

    2017-07-01

    We investigate the way students' reasoning about evolution can be supported by drawing-based modeling. We modified the drawing-based modeling tool SimSketch to allow for modeling evolutionary processes. In three iterations of development and testing, students in lower secondary education worked on creating an evolutionary model. After each iteration, the user interface and instructions were adjusted based on students' remarks and the teacher's observations. Students' conversations were analyzed for reasoning complexity as a measure of the efficacy of the modeling tool and the instructions. These findings were also used to compose a set of recommendations for teachers and curriculum designers for using and constructing models in the classroom. Our findings suggest that to stimulate scientific reasoning in students working with a drawing-based modeling tool, instruction about the tool and the domain should be integrated. In creating models, a sufficient level of scaffolding is necessary: without appropriate scaffolds, students are not able to create the model, whereas with too much scaffolding, students may show reasoning that incorrectly assigns external causes to behavior in the model.

  4. Empirical comparison of maximal voxel and non-isotropic adjusted cluster extent results in a voxel-based morphometry study of comorbid learning disability with schizophrenia.

    Science.gov (United States)

    Moorhead, T William J; Job, Dominic E; Spencer, Michael D; Whalley, Heather C; Johnstone, Eve C; Lawrie, Stephen M

    2005-11-15

    We present an empirical comparison of cluster extent and maximal voxel results in a voxel-based morphometry (VBM) study of brain structure. The cluster extents are adjusted for underlying deviation from uniform smoothness. We implement this comparison on a four-group cohort that has previously shown evidence of a neuro-developmental component in schizophrenia (Moorhead, T.W.J., Job, D.E., Whalley, H.C., Sanderson, T.L., Johnstone, E.C., and Lawrie, S.M., 2004. Voxel-based morphometry of comorbid schizophrenia and learning disability: analyses in normalized and native spaces using parametric and nonparametric statistical methods. NeuroImage 22: 188-202). We find that adjusted cluster extent results provide information on the nature of the deficits that occur in the schizophrenia-affected groups, and these important structural differences are not all shown in the maximal voxel results. The maximal voxel and cluster extent results are corrected for multiple comparisons using Random Fields (RF) methods. In order to apply the cluster extent measures, we propose a post hoc method for determining the primary threshold in the analysis. Unadjusted cluster extent results are also reported; for these, no allowance is made for non-isotropic smoothness, and comparison with the adjusted extent results shows that the unadjusted results can be either conservative or anti-conservative depending on the underlying tissue distributions.

  5. The Effects of School-Based Maum Meditation Program on the Self-Esteem and School Adjustment in Primary School Students

    Science.gov (United States)

    Yoo, Yang Gyeong; Lee, In Soo

    2013-01-01

    Self-esteem and school adjustment of children in the lower grades of primary school, the beginning stage of school life, are closely related to children's personality development, mental health, and character. Therefore, the present study aimed to verify the effect of a school-based Maum Meditation program, as a personality education program, on children in the lower grades of primary school. The results showed that the experimental group, to which the Maum Meditation program was applied, had significant improvements in self-esteem and school adjustment compared to the control group. In conclusion, since the study provides significant evidence that the Maum Meditation program had positive effects on the self-esteem and school adjustment of children in the early stage of primary school, it is suggested that Maum Meditation be actively employed as a school-based meditation program for the promotion of mental health in children of early school age, the stage at which personalities and habits are formed. PMID:23777717

  6. The effects of school-based Maum meditation program on the self-esteem and school adjustment in primary school students.

    Science.gov (United States)

    Yoo, Yang Gyeong; Lee, In Soo

    2013-03-10

    Self-esteem and school adjustment of children in the lower grades of primary school, the beginning stage of school life, are closely related to children's personality development, mental health, and character. Therefore, the present study aimed to verify the effect of a school-based Maum Meditation program, as a personality education program, on children in the lower grades of primary school. The results showed that the experimental group, to which the Maum Meditation program was applied, had significant improvements in self-esteem and school adjustment compared to the control group. In conclusion, since the study provides significant evidence that the Maum Meditation program had positive effects on the self-esteem and school adjustment of children in the early stage of primary school, it is suggested that Maum Meditation be actively employed as a school-based meditation program for the promotion of mental health in children of early school age, the stage at which personalities and habits are formed.

  7. Crowdsourcing Based 3d Modeling

    Science.gov (United States)

    Somogyi, A.; Barsi, A.; Molnar, B.; Lovas, T.

    2016-06-01

    Web-based photo albums that support organizing and viewing the users' images are widely used. These services provide a convenient solution for storing, editing and sharing images. In many cases, the users attach geotags to the images in order to enable using them, e.g., in location-based applications on social networks. Our paper discusses a procedure that collects open-access images from a site frequently visited by tourists. Geotagged pictures showing a sight or tourist attraction are selected and processed in photogrammetric processing software that produces a 3D model of the captured object. For this investigation we selected three attractions in Budapest. To assess the geometric accuracy, we used laser scanning as well as DSLR and smartphone photography to derive reference values against which the spatial model obtained from the web-album images could be verified. The investigation shows how detailed and accurate models can be derived with photogrammetric processing software simply by using images from the community, without visiting the site.

  8. An Agent Based Classification Model

    CERN Document Server

    Gu, Feng; Greensmith, Julie

    2009-01-01

    The major function of this model is to access the UCI Wisconsin Breast Cancer data-set[1] and classify the data items into two categories, normal and anomalous. This kind of classification can be referred to as anomaly detection, which discriminates anomalous behaviour from normal behaviour in computer systems. One popular solution for anomaly detection is Artificial Immune Systems (AIS). AIS are adaptive systems inspired by theoretical immunology and observed immune functions, principles and models, which are applied to problem solving. The Dendritic Cell Algorithm (DCA)[2] is an AIS algorithm that was developed specifically for anomaly detection. It has been successfully applied to intrusion detection in computer security. It is believed that agent-based modelling is an ideal approach for implementing AIS, as intelligent agents could be the perfect representations of immune entities in AIS. This model evaluates the feasibility of re-implementing the DCA in an agent-based simulation environment…

  9. Bayes linear covariance matrix adjustment

    CERN Document Server

    Wilkinson, Darren J

    1995-01-01

    In this thesis, a Bayes linear methodology for the adjustment of covariance matrices is presented and discussed. A geometric framework for quantifying uncertainties about covariance matrices is set up, and an inner-product for spaces of random matrices is motivated and constructed. The inner-product on this space captures aspects of our beliefs about the relationship between covariance matrices of interest to us, providing a structure rich enough for us to adjust beliefs about unknown matrices in the light of data such as sample covariance matrices, exploiting second-order exchangeability and related specifications to obtain representations allowing analysis. Adjustment is associated with orthogonal projection, and illustrated with examples of adjustments for some common problems. The problem of adjusting the covariance matrices underlying exchangeable random vectors is tackled and discussed. Learning about the covariance matrices associated with multivariate time series dynamic linear models is shown to be a...

  10. A Novel Design for Adjustable Stiffness Artificial Tendon for the Ankle Joint of a Bipedal Robot: Modeling & Simulation

    Directory of Open Access Journals (Sweden)

    Aiman Omer

    2015-12-01

    Full Text Available Bipedal humanoid robots are expected to play a major role in the future. Performing bipedal locomotion requires high energy due to the high torque that needs to be provided by the leg joints. The WABIAN-2R, for example, uses harmonic gears in its joints to increase torque. However, such a mechanism increases the weight of the legs and therefore increases energy consumption. Therefore, the idea of developing a mechanism with adjustable stiffness to be connected to the leg joint is introduced here. The proposed mechanism would have the ability to provide passive and active motion and would be attached to the ankle pitch joint as an artificial tendon. Using computer simulations, the dynamic performance of the mechanism is analytically evaluated.

  11. Recursive three-dimensional model reconstruction based on Kalman filtering.

    Science.gov (United States)

    Yu, Ying Kin; Wong, Kin Hong; Chang, Michael Ming Yuen

    2005-06-01

    A recursive two-step method to recover structure and motion from image sequences based on Kalman filtering is described in this paper. The algorithm consists of two major steps. The first step is an extended Kalman filter (EKF) for the estimation of the object's pose. The second step is a set of EKFs, one for each model point, for the refinement of the positions of the model features in the three-dimensional (3-D) space. These two steps alternate from frame to frame. The initial model converges to the final structure as the image sequence is scanned sequentially. The performance of the algorithm is demonstrated with both synthetic data and real-world objects. Analytical and empirical comparisons are made among our approach, the interleaved bundle adjustment method, and the Kalman filtering-based recursive algorithm by Azarbayejani and Pentland. Our approach outperformed the other two algorithms in terms of computation speed without loss in the quality of model reconstruction.
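    The recursive predict/update cycle that underlies any Kalman-filter-based reconstruction can be illustrated with a minimal scalar filter. This is only the generic cycle; the paper's method uses extended Kalman filters that linearize a nonlinear camera model over pose and per-point structure.

```python
# Minimal scalar Kalman filter: recursively estimate a constant state
# from noisy measurements. x is the state estimate, p its variance,
# q the process noise, r the measurement noise.

def kalman_1d(measurements, q=1e-5, r=0.1, x0=0.0, p0=1.0):
    x, p = x0, p0
    estimates = []
    for z in measurements:
        p = p + q               # predict: state assumed constant, noise grows p
        k = p / (p + r)         # Kalman gain: trust in the new measurement
        x = x + k * (z - x)     # update with the measurement residual
        p = (1 - k) * p         # posterior variance shrinks after the update
        estimates.append(x)
    return estimates

zs = [1.1, 0.9, 1.05, 0.98, 1.02, 1.0]
est = kalman_1d(zs)
print(est[-1])  # converges toward the underlying value, ~1.0
```

    In the paper's two-step scheme, one EKF of this kind tracks the camera pose while a bank of per-point filters refines the 3-D structure, alternating frame by frame.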

  12. [Effects in the adherence treatment and psychological adjustment after the disclosure of HIV/AIDS diagnosis with the "DIRE" clinical model in Colombian children under 17].

    Science.gov (United States)

    Trejos, Ana María; Reyes, Lizeth; Bahamon, Marly Johana; Alarcón, Yolima; Gaviria, Gladys

    2015-08-01

    A study in five Colombian cities in 2006 confirmed the findings of other international studies: the majority of HIV-positive children do not know their diagnosis, and caregivers are reluctant to give this information because they believe the news will cause emotional distress to the child. The primary purpose of this study was therefore to validate a disclosure model. We implemented a clinical model, referred to as "DIRE", hypothesized to have normalizing effects on psychological adjustment and adherence to antiretroviral treatment in HIV-seropositive children, using a quasi-experimental design. Tests were administered (a questionnaire for health professionals and participating caregivers assessing patterns of disclosure and non-disclosure of the HIV/AIDS diagnosis to children; the Family APGAR; the EuroQol EQ-5D; the MOS Social Support Survey; a questionnaire on HIV/AIDS treatment information; and the Child Behavior Checklist CBCL/6-18 adapted for Latinos) before and after implementation of the model to 31 children (n=31), 30 caregivers (n=30), and 41 health professionals. Data processing was performed using the Statistical Package for the Social Sciences version 21, applying nonparametric (Friedman) and parametric (Student's t) tests. No significant differences were found in adherence to treatment (p=0.392); in psychological adjustment, significant positive differences were found at follow-up compared to baseline at 2 weeks (p=0.001), 3 months (p=0.000), and 6 months (p=0.000). The clinical model demonstrated effectiveness in normalizing psychological adjustment and maintaining treatment compliance. The process also generated confidence in caregivers and health professionals in this difficult task.

  13. The FiR 1 photon beam model adjustment according to in-air spectrum measurements with the Mg(Ar) ionization chamber.

    Science.gov (United States)

    Koivunoro, H; Schmitz, T; Hippeläinen, E; Liu, Y-H; Serén, T; Kotiluoto, P; Auterinen, I; Savolainen, S

    2014-06-01

    The mixed neutron-photon beam of the FiR 1 reactor is used for boron neutron capture therapy (BNCT) in Finland. A beam model has been defined for patient treatment planning and dosimetric calculations. The neutron beam model has been validated with activation foil measurements. The photon beam model has not been thoroughly validated against measurements, because the beam photon dose rate is low, at most only 2% of the total weighted patient dose at FiR 1. However, improvement of the photon dose detection accuracy is worthwhile, since the beam photon dose is of concern in beam dosimetry. In this study, we performed ionization chamber measurements with multiple build-up caps of different thicknesses to adjust the calculated photon spectrum of the FiR 1 beam model.

  14. Theory of Work Adjustment Personality Constructs.

    Science.gov (United States)

    Lawson, Loralie

    1993-01-01

    To measure Theory of Work Adjustment personality and adjustment style dimensions, content-based scales were analyzed for homogeneity and successively reanalyzed for reliability improvement. Three sound scales were developed: inflexibility, activeness, and reactiveness. (SK)

  15. A comparison of two sleep spindle detection methods based on all night averages: individually adjusted versus fixed frequencies

    Directory of Open Access Journals (Sweden)

    Péter Przemyslaw Ujma

    2015-02-01

    Full Text Available Sleep spindles are frequently studied for their relationship with state and trait cognitive variables, and they are thought to play an important role in sleep-related memory consolidation. Due to their frequent occurrence in NREM sleep, the detection of sleep spindles is only feasible using automatic algorithms, of which a large number are available. We compared subject averages of the spindle parameters computed by a fixed-frequency (11-13 Hz for slow spindles, 13-15 Hz for fast spindles) automatic detection algorithm and by the individual adjustment method (IAM), which uses individual frequency bands for sleep spindle detection. Fast spindle duration and amplitude are strongly correlated between the two algorithms, but there is little overlap in fast spindle density and in slow spindle parameters in general. The agreement between fixed and individually determined sleep spindle frequencies is limited, especially for slow spindles. This is the most likely reason for the poor agreement between the two detection methods on slow spindle parameters. Our results suggest that while various algorithms may reliably detect fast spindles, a more sophisticated algorithm primed to individual spindle frequencies is necessary for the detection of slow spindles, as well as of individual variations in the number of spindles in general.
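    A fixed-band detector of the kind compared above reduces, in essence, to band-pass filtering in the fixed spindle band followed by amplitude thresholding of the envelope. The sketch below applies that idea to a synthetic signal; the band edges, threshold, and signal are all invented for illustration (the IAM would instead individualize the band limits first).

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt, hilbert

fs = 250                       # sampling rate in Hz
t = np.arange(0, 10, 1 / fs)

# Synthetic EEG: white noise plus a 14 Hz "fast spindle" burst between 4 and 5 s
rng = np.random.default_rng(0)
eeg = 5 * rng.standard_normal(t.size)
burst = (t > 4) & (t < 5)
eeg[burst] += 30 * np.sin(2 * np.pi * 14 * t[burst])

# Fixed-band detection: band-pass 13-15 Hz, Hilbert envelope, threshold
sos = butter(4, [13, 15], btype="band", fs=fs, output="sos")
envelope = np.abs(hilbert(sosfiltfilt(sos, eeg)))
detected = envelope > 15       # amplitude criterion, arbitrary for this sketch

print(f"fraction of burst samples detected: {detected[burst].mean():.2f}")
```

    A spindle whose true frequency falls outside the fixed 13-15 Hz band would be attenuated by the filter and missed, which is exactly the failure mode that motivates individually adjusted bands.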

  16. Preparation of Composite Phase Change Material Based on Sol-Gel Method and Its Temperature-Adjustable Textile

    Institute of Scientific and Technical Information of China (English)

    YI Shi-xiong; MA Xiao-guang; ZHANG Ying; LI Hua

    2009-01-01

    In this study, the sol-gel method was introduced to prepare a composite phase change material (CPCM). The CPCM was added to fabric with coating techniques, and the thermal activity of the modified fabric was studied. In addition, the thermal properties and microstructure of the CPCM were discussed in detail by means of polarization microscopy and differential scanning calorimetry, respectively. Based on an analysis of the main factors influencing the properties of the CPCM, the optimal preparation technique was determined. It was shown that the CPCM exhibits good thermal properties during the phase transformation process, and that a better appearance of the fabric modified with CPCM could be obtained because, in a warm environment, the liquid-state phase change material remains firmly enwrapped and embedded in the three-dimensional network throughout the phase transformation. Moreover, the fabric treated with CPCM had a high phase-transition enthalpy and an appropriate phase-transition temperature. As a result, a desirable temperature-adjusting function was achieved.

  17. Validation of the internalization of the Model Minority Myth Measure (IM-4) and its link to academic performance and psychological adjustment among Asian American adolescents.

    Science.gov (United States)

    Yoo, Hyung Chol; Miller, Matthew J; Yip, Pansy

    2015-04-01

    There is limited research examining psychological correlates of the uniquely racialized experience of the model minority stereotype faced by Asian Americans. The present study examined the factor structure and fit of the only published measure of the internalization of the model minority myth, the Internalization of the Model Minority Myth Measure (IM-4; Yoo et al., 2010), with a sample of 155 Asian American high school adolescents. We also examined the link between types of internalization of the model minority myth (i.e., the myth associated with achievement and the myth associated with unrestricted mobility) and psychological adjustment (i.e., affective distress, somatic distress, performance difficulty, academic expectations stress), and the potential moderating effect of academic performance (cumulative grade point average). Confirmatory factor analyses suggested that the 2-factor model of the IM-4 had an acceptable fit to the data and supported the factor structure. Internalizing the model minority myth of achievement related positively to academic expectations stress; however, internalizing the model minority myth of unrestricted mobility related negatively to academic expectations stress, both controlling for gender and academic performance. Finally, academic performance moderated both the link between the model minority myth associated with unrestricted mobility and affective distress, and the link between the model minority myth associated with achievement and performance difficulty. These findings highlight the complex ways in which the model minority myth relates to psychological outcomes.

  18. Differences among skeletal muscle mass indices derived from height-, weight-, and body mass index-adjusted models in assessing sarcopenia.

    Science.gov (United States)

    Kim, Kyoung Min; Jang, Hak Chul; Lim, Soo

    2016-07-01

    Aging processes are inevitably accompanied by structural and functional changes in vital organs. Skeletal muscle, which accounts for 40% of total body weight, deteriorates quantitatively and qualitatively with aging. Skeletal muscle is known to play diverse crucial physical and metabolic roles in humans. Sarcopenia is a condition characterized by significant loss of muscle mass and strength. It is related to subsequent frailty and instability in the elderly population. Because muscle tissue is involved in multiple functions, sarcopenia is closely related to various adverse health outcomes. Along with increasing recognition of the clinical importance of sarcopenia, several international study groups have recently released their consensus on the definition and diagnosis of sarcopenia. In practical terms, various skeletal muscle mass indices have been suggested for assessing sarcopenia: appendicular skeletal muscle mass adjusted for height squared, weight, or body mass index. A different prevalence and different clinical implications of sarcopenia are highlighted by each definition. The discordances among these indices have emerged as an issue in defining sarcopenia, and a unifying definition for sarcopenia has not yet been attained. This review aims to compare these three operational definitions and to introduce an optimal skeletal muscle mass index that reflects the clinical implications of sarcopenia from a metabolic perspective.

  19. Research on BOM based composable modeling method

    NARCIS (Netherlands)

    Zhang, M.; He, Q.; Gong, J.

    2013-01-01

    Composable modeling has been a research hotspot in the area of Modeling and Simulation for a long time. In order to increase the reuse and interoperability of BOM-based models, this paper puts forward a composable modeling method based on BOM and studies the basic theory of composable modeling m

  20. Psychosocial adjustment to ALS: a longitudinal study

    Directory of Open Access Journals (Sweden)

    Tamara eMatuz

    2015-09-01

    Full Text Available For the current study, the Lazarian stress-coping theory and the associated model of psychosocial adjustment to chronic illness and disability (Pakenham, 1999) formed the foundation for identifying determinants of adjustment to ALS. We aimed to investigate the evolution of psychosocial adjustment to ALS and to determine its long-term predictors. A longitudinal study design with four measurement time points was therefore used to assess patients' quality of life, depression, and aspects related to the stress-coping model, such as illness characteristics, social support, cognitive appraisals, and coping strategies, over a period of two years. Regression analyses revealed that 55% of the variance in severity of depressive symptoms and 47% of the variance in quality of life at T2 were accounted for by all the T1 predictor variables taken together. At the level of individual contributions, protective buffering and appraisal of one's own coping potential accounted for a significant percentage of the variance in severity of depressive symptoms, whereas problem-management coping strategies explained variance in quality of life scores. Illness characteristics at T2 did not explain any variance in either adjustment outcome. Overall, the pattern of the longitudinal results indicated stable depressive symptoms and quality of life indices, reflecting successful adjustment to the disease across the four measurement time points. Empirical evidence is provided for the predictive value of social support, cognitive appraisals, and coping strategies, but not of illness parameters such as severity and duration, for adaptation to ALS. The current study contributes to a better conceptualization of adjustment, allowing us to provide evidence-based support beyond medical and physical intervention for people with ALS.

  1. Frequency-Adaptive Modified Comb-Filter-Based Phase-Locked Loop for a Doubly-Fed Adjustable-Speed Pumped-Storage Hydropower Plant under Distorted Grid Conditions

    Directory of Open Access Journals (Sweden)

    Wei Luo

    2017-05-01

    Full Text Available The control system of a doubly-fed adjustable-speed pumped-storage hydropower plant needs phase-locked loops (PLLs) to obtain the phase angle of the grid voltage. The main drawback of a comb-filter-based phase-locked loop (CF-PLL) is its slow dynamic response. This paper presents a modified comb-filter-based phase-locked loop (MCF-PLL) that improves the pole-zero pattern of the comb filter, and gives a method for setting the controller parameters based on the discrete model of the MCF-PLL. To improve the disturbance rejection of the MCF-PLL when the power grid's frequency changes, this paper also proposes a frequency-adaptive modified comb-filter-based phase-locked loop (FAMCF-PLL) and its digital implementation scheme. Experimental results show that the FAMCF-PLL has good steady-state and dynamic performance under distorted grid conditions. Furthermore, the FAMCF-PLL can lock onto the phase angle of the grid voltage when applied to a doubly-fed adjustable-speed pumped-storage hydropower experimental platform.
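    The comb filter at the heart of a CF-PLL places notches at harmonics of the grid frequency so that only the component of interest survives. The moving-average comb below is a generic illustration of that notching principle (with made-up sampling parameters), not the modified pole-zero design proposed in the paper.

```python
import numpy as np

fs, N = 1000, 20     # sample rate and comb length: notches at k*fs/N = k*50 Hz
t = np.arange(0, 1, 1 / fs)
x = 1.0 + 0.5 * np.sin(2 * np.pi * 50 * t)  # wanted DC term + 50 Hz disturbance

# Moving-average comb filter: H(z) = (1/N) * (1 - z^-N) / (1 - z^-1),
# which passes DC and nulls every multiple of fs/N = 50 Hz.
y = np.convolve(x, np.ones(N) / N, mode="valid")

# The 50 Hz disturbance falls exactly in a comb notch and is removed
print(np.ptp(y))  # ≈ 0: the output is the flat DC component alone
```

    The slow dynamic response criticized in the abstract is visible here too: the filter needs a full window of N samples before its output is valid, which is the latency the MCF-PLL's pole-zero modification targets.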

  2. Zebrafish collective behaviour in heterogeneous environment modeled by a stochastic model based on visual perception

    CERN Document Server

    Collignon, Bertrand; Halloy, José

    2015-01-01

    Collective motion is one of the most ubiquitous behaviours displayed by social organisms and has led to the development of numerous models. Recent advances in the understanding of sensory systems and information processing by animals compel a revision of the classical assumptions made in decisional algorithms. In this context, we present a new model describing the three-dimensional visual sensory system of fish that adjust their trajectory according to their perception field. Furthermore, we introduce a new stochastic process based on a probability distribution function to move in targeted directions, rather than on a summation of influential vectors as is classically assumed by most models. We show that this model can spontaneously transition from consensus to choice. In parallel, we present experimental results of zebrafish (alone or in groups of 10) swimming in both homogeneous and heterogeneous environments. We use these experimental data to set the parameter values of our model and show that this perception-based a...
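The key departure described here, sampling a movement direction from a probability distribution instead of summing influence vectors, can be sketched as follows. The von-Mises-shaped weighting and its concentration parameter are assumptions made for illustration, not the paper's actual distribution:

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_heading(neighbor_bearings, kappa=4.0, n_grid=360):
    """Draw a heading from a probability distribution over directions,
    instead of summing influence vectors. Each perceived stimulus
    contributes a von-Mises-shaped bump (kappa is an assumed value)."""
    thetas = np.linspace(-np.pi, np.pi, n_grid, endpoint=False)
    pdf = np.zeros(n_grid)
    for b in neighbor_bearings:
        pdf += np.exp(kappa * np.cos(thetas - b))  # unnormalised von Mises
    pdf /= pdf.sum()
    return rng.choice(thetas, p=pdf)

# two attractive stimuli at +30 and -30 degrees: sampled headings
# cluster around each stimulus instead of always averaging to 0
samples = np.array([sample_heading([np.pi / 6, -np.pi / 6])
                    for _ in range(2000)])
```

Because each individual samples rather than averages, a group facing two symmetric targets can commit to one of them (choice) instead of always heading down the middle, which is how such models can pass from consensus to choice.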

  3. Multivariate Models of Parent-Late Adolescent Gender Dyads: The Importance of Parenting Processes in Predicting Adjustment

    Science.gov (United States)

    McKinney, Cliff; Renk, Kimberly

    2008-01-01

    Although parent-adolescent interactions have been examined, relevant variables have not been integrated into a multivariate model. As a result, this study examined a multivariate model of parent-late adolescent gender dyads in an attempt to capture important predictors in late adolescents' important and unique transition to adulthood. The sample…

  4. Intelligent model-based OPC

    Science.gov (United States)

    Huang, W. C.; Lai, C. M.; Luo, B.; Tsai, C. K.; Chih, M. H.; Lai, C. W.; Kuo, C. C.; Liu, R. G.; Lin, H. T.

    2006-03-01

    Optical proximity correction is the technique of pre-distorting mask layouts so that the printed patterns are as close to the desired shapes as possible. For model-based optical proximity correction, a lithographic model is needed to predict the edge position (contour) of patterns on the wafer after lithographic processing. Generally, segmentation of edges is performed prior to the correction. Pattern edges are dissected into several small segments with corresponding target points. During the correction, the edges are moved back and forth from the initial drawn position, assisted by the lithographic model, to finally settle on the proper positions. When the correction converges, the intensity predicted by the model at every target point hits the model-specific threshold value. Several iterations are required to achieve convergence, and the computation time increases with the number of iterations required. An artificial neural network is an information-processing paradigm inspired by biological nervous systems, such as the way the brain processes information. It is composed of a large number of highly interconnected processing elements (neurons) working in unison to solve specific problems. A neural network can be a powerful data-modeling tool that is able to capture and represent complex input/output relationships. The network can accurately predict the behavior of a system via a learning procedure. A radial basis function network, a variant of the artificial neural network, is an efficient function approximator. In this paper, a radial basis function network was used to build a mapping from the segment characteristics to the edge shift from the drawn position. This network can provide a good initial guess for each segment before OPC is carried out. The good initial guess reduces the required iterations. Consequently, cycle time can be shortened effectively. The optimization of the radial basis function network for this system was performed by a genetic algorithm.
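A minimal sketch of the core idea, a radial basis function network mapping segment features to an edge shift that serves as the OPC initial guess, might look like this. The two segment features and the synthetic target function are invented for illustration:

```python
import numpy as np

def rbf_fit(X, y, gamma):
    """Fit a Gaussian RBF network with one basis centred on each
    training point; a tiny ridge term keeps the solve stable."""
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    Phi = np.exp(-gamma * d2)
    return np.linalg.solve(Phi + 1e-6 * np.eye(len(X)), y)

def rbf_predict(X_train, w, X_new, gamma):
    """Evaluate the fitted network at new feature vectors."""
    d2 = ((X_new[:, None, :] - X_train[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2) @ w

# toy training set: two segment features (e.g. local pattern density and
# distance to the nearest neighbour -- purely illustrative names)
# mapped to a synthetic "edge shift" target
rng = np.random.default_rng(1)
X = rng.uniform(0.0, 1.0, size=(40, 2))
y = 0.5 * X[:, 0] - 0.3 * X[:, 1]
w = rbf_fit(X, y, gamma=10.0)
pred = rbf_predict(X, w, X, gamma=10.0)
```

In the paper's setting the prediction would seed each segment's initial displacement, so the iterative OPC loop starts close to its converged answer.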

  5. The rise and fall of divorce - a sociological adjustment of becker’s model of the marriage market

    DEFF Research Database (Denmark)

    Andersen, Signe Hald; Hansen, Lars Gårn

    Despite the strong and persistent influence of Gary Becker’s marriage model, the model does not completely explain the observed correlation between married women’s labor market participation and overall divorce rates. In this paper we show how a simple sociologically inspired extension of the model realigns the model’s predictions with the observed trends. The extension builds on Becker’s own claim that partners match on preference for partner specialization and, as a novelty, on additional sociological theory claiming that preference coordination tends to happen subconsciously. When we incorporate this aspect into Becker’s model, the model provides predictions of divorce rates and causes that fit more closely with empirical observations. (JEL: J1)

  6. Sliding Adjustment for 3D Video Representation

    Directory of Open Access Journals (Sweden)

    Galpin Franck

    2002-01-01

    Full Text Available This paper deals with video coding of static scenes viewed by a moving camera. We propose an automatic way to encode such video sequences using several 3D models. Contrary to prior art in model-based coding, where 3D models have to be known, the 3D models are automatically computed from the original video sequence. We show that several independent 3D models provide the same functionalities as one single 3D model, and avoid some drawbacks of the previous approaches. To achieve this goal we propose a novel algorithm of sliding adjustment, which ensures consistency of successive 3D models. The paper presents a method to automatically extract the set of 3D models and associated camera positions. The obtained representation can be used for reconstructing the original sequence, or virtual ones. It also enables 3D functionalities such as synthetic object insertion, lighting modification, or stereoscopic visualization. Results on real video sequences are presented.

  7. Density-dependent sex ratio adjustment and the allee effect: a model and a test using a sex-changing fish.

    Science.gov (United States)

    Walker, Stefan P W; Thibaut, Loïc; McCormick, Mark I

    2010-09-01

    Positive density dependence (i.e., the Allee effect; AE) often has important implications for the dynamics and conservation of populations. Here, we show that density-dependent sex ratio adjustment in response to sexual selection may be a common AE mechanism. Specifically, using an analytical model we show that an AE is expected whenever one sex is more fecund than the other and sex ratio bias toward the less fecund sex increases with density. We illustrate the robustness of this pattern, using Monte Carlo simulations, against a range of body size-fecundity relationships and sex-allocation strategies. Finally, we test the model using the sex-changing polygynous reef fish Parapercis cylindrica; positive density dependence in the strength of sexual selection for male size is evidenced as the causal mechanism driving local sex ratio adjustment, hence the AE. Model application may extend to invertebrates, reptiles, birds, and mammals, in addition to over 70 reef fishes. We suggest that protected areas may often outperform harvest quotas as a conservation tool since the latter promotes population fragmentation, reduced polygyny, a balancing of the sex ratio, and hence up to a 50% decline in per capita fecundity, while the former maximizes polygyny and source-sink potential.
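The core mechanism, per capita fecundity rising with density because sex ratio bias toward the more fecund sex strengthens with density, can be captured in a toy function. All parameter values and the saturating bias curve are illustrative, not taken from the paper's analytical model:

```python
import numpy as np

def per_capita_fecundity(density, f_female=100.0, f_male=0.0,
                         bias_max=0.9, k=1.0):
    """Toy density-dependent sex-ratio model: the proportion of the
    more fecund sex (here, females) rises from 0.5 toward bias_max as
    density grows, i.e. polygyny strengthens with density."""
    p_female = 0.5 + (bias_max - 0.5) * (1.0 - np.exp(-k * density))
    return p_female * f_female + (1.0 - p_female) * f_male

# per capita fecundity increases with density: an Allee effect,
# since sparse populations are stuck near a balanced sex ratio
low = per_capita_fecundity(0.1)
high = per_capita_fecundity(5.0)
```

The sketch makes the conservation argument concrete: harvesting that fragments a population pushes local densities down the curve toward the balanced-ratio regime, cutting per capita fecundity by up to half.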

  8. A ROBUST ADAPTIVE VIDEO ENCODER BASED ON HUMAN VISUAL MODEL

    Institute of Scientific and Technical Information of China (English)

    Yin Hao; Zhang Jiangshan; Zhu Yaoting; Zhu Guangxi

    2003-01-01

    A Robust Adaptive Video Encoder (RAVE) based on a human visual model is proposed. The encoder combines the best features of Fine Granularity Scalable (FGS) coding, frame-dropping coding, video redundancy coding, and the human visual model. According to packet loss and the available bandwidth of the network, the encoder adjusts the output bit rate by jointly adapting the quantization step-size instructed by the human visual model, rate shaping, and periodically inserting key frames. The proposed encoder is implemented based on an MPEG-4 encoder and is compared with the case of a conventional FGS algorithm. It is shown that RAVE is a very efficient robust video encoder that provides improved visual quality for the receiver and consumes equal or fewer network resources. Results are confirmed by subjective tests and simulation tests.

  10. Close antiplatelet therapy monitoring and adjustment based upon thrombelastography may reduce late-onset bleeding in HeartMate II recipients.

    Science.gov (United States)

    Karimi, Ashkan; Beaver, Thomas M; Hess, Philip J; Martin, Tomas D; Staples, Edward D; Schofield, Richard S; Hill, James A; Aranda, Juan M; Klodell, Charles T

    2014-04-01

    Bleeding is the most common complication of the HeartMate II and is partially attributable to platelet dysfunction; however, antiplatelet therapy is arbitrary in most centres. We investigated how antiplatelet therapy adjustment with thrombelastography affects late-onset bleeding. Thrombelastography was used to adjust antiplatelet therapy in 57 HeartMate II recipients. Kaplan-Meier survival curves and a Cox proportional hazards model were used to identify predictors of late-onset bleeding in univariate and multivariate analysis. Finally, the late-onset bleeding rate in our study was compared with the rates reported in other studies in the literature, none of which used any test to monitor or adjust antiplatelet therapy. Mean follow-up was 347 days. Eighteen late-onset bleeding events occurred in 12 patients, a late-onset bleeding rate of 12/57 (21%) or 0.21 events/patient-year. The Kaplan-Meier survival curves demonstrated that late-onset bleeding was more common in the destination therapy cohort (P = 0.02), in patients older than 60 years (P = 0.04) and in females (P = 0.01), none of which was significant in multivariate analysis at a significance level of 0.05. To further investigate the higher bleeding rate in elderly patients, thrombelastography parameters were compared between younger and older patients at the age cut-off of 60 years, which demonstrated a prothrombotic change the day after device implantation in younger patients that was absent in the elderly. There was also a trend towards a higher requirement for antiplatelet therapy in younger patients while on device support, but the difference did not reach statistical significance. The average late-onset or gastrointestinal bleeding rate among seven comparable studies in the literature that did not use any monitoring test to adjust antiplatelet therapy was 0.49 events/patient-year. Our study suggests that antiplatelet therapy adjustment with thrombelastography may reduce the late-onset bleeding rate in HeartMate II recipients.

  11. Adjustment of Sonar and Laser Acquisition Data for Building the 3D Reference Model of a Canal Tunnel †

    Science.gov (United States)

    Moisan, Emmanuel; Charbonnier, Pierre; Foucher, Philippe; Grussenmeyer, Pierre; Guillemin, Samuel; Koehl, Mathieu

    2015-01-01

    In this paper, we focus on the construction of a full 3D model of a canal tunnel by combining terrestrial laser (for its above-water part) and sonar (for its underwater part) scans collected from static acquisitions. The modeling of such a structure is challenging because the sonar device is used in a narrow environment that induces many artifacts. Moreover, the location and the orientation of the sonar device are unknown. In our approach, sonar data are first simultaneously denoised and meshed. Then, above- and under-water point clouds are co-registered to generate directly the full 3D model of the canal tunnel. Faced with the lack of overlap between both models, we introduce a robust algorithm that relies on geometrical entities and partially-immersed targets, which are visible in both the laser and sonar point clouds. A full 3D model, visually promising, of the entrance of a canal tunnel is obtained. The analysis of the method raises several improvement directions that will help with obtaining more accurate models, in a more automated way, in the limits of the involved technology. PMID:26690444

  12. Adjustment of Sonar and Laser Acquisition Data for Building the 3D Reference Model of a Canal Tunnel

    Directory of Open Access Journals (Sweden)

    Emmanuel Moisan

    2015-12-01

    Full Text Available In this paper, we focus on the construction of a full 3D model of a canal tunnel by combining terrestrial laser (for its above-water part) and sonar (for its underwater part) scans collected from static acquisitions. The modeling of such a structure is challenging because the sonar device is used in a narrow environment that induces many artifacts. Moreover, the location and the orientation of the sonar device are unknown. In our approach, sonar data are first simultaneously denoised and meshed. Then, above- and under-water point clouds are co-registered to generate directly the full 3D model of the canal tunnel. Faced with the lack of overlap between both models, we introduce a robust algorithm that relies on geometrical entities and partially-immersed targets, which are visible in both the laser and sonar point clouds. A full 3D model, visually promising, of the entrance of a canal tunnel is obtained. The analysis of the method raises several improvement directions that will help with obtaining more accurate models, in a more automated way, in the limits of the involved technology.
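Once matched entities such as partially-immersed targets are identified in both the laser and sonar clouds, co-registration reduces to estimating a rigid transform between the matched points. A standard least-squares solution (the Kabsch/SVD method) is sketched below; this is a generic sketch, not the paper's robust algorithm:

```python
import numpy as np

def rigid_align(P, Q):
    """Least-squares rigid transform (Kabsch/SVD): find R, t such that
    R @ p + t best matches q over the matched point sets P and Q."""
    cp, cq = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cp).T @ (Q - cq)                 # cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T   # guard against reflections
    t = cq - R @ cp
    return R, t

# synthetic check: recover a known rotation about z and a translation
rng = np.random.default_rng(2)
P = rng.normal(size=(10, 3))
c, s = np.cos(0.3), np.sin(0.3)
Rz = np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])
Q = P @ Rz.T + np.array([1.0, -2.0, 0.5])
R, t = rigid_align(P, Q)
err = float(np.max(np.abs(P @ R.T + t - Q)))
```

In practice the paper's method also has to cope with sparse, noisy correspondences and geometric primitives rather than clean point pairs, which is where its robustness comes in.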

  13. Dynamic Stiffness Identification of Adjustable Stiffness Joint Based on Decoupling and Linearization

    Institute of Scientific and Technical Information of China (English)

    尹鹏; 李满天; 查富生; 王鹏飞; 孙立宁

    2015-01-01

    In order to take advantage of an adjustable stiffness joint to adjust a robot's dynamic characteristics, it is necessary to effectively identify and control the dynamic stiffness of the joint. Firstly, a simplified model is derived based on the structural features of the robotic adjustable stiffness joint, and an assumption about the form of the stiffness output is made. Then the torque-related parameters in the model are decoupled to eliminate the effect of the joint stiffness adjusting parameter on the torque, and thus a unified torque expression for stiffness identification is acquired. Linearization of the unified torque expression is then carried out by means of a Taylor expansion, and a Kalman filter is applied to optimize the coefficients of the expansion. On this basis, identification of the joint's dynamic stiffness is achieved. In simulation, this online dynamic stiffness identification method keeps the identification error within ±2%. Based on the result of dynamic stiffness identification, a feedforward-based closed-loop joint stiffness control method is then studied. Simulation experiments show that the method is effective for closed-loop control of robotic joint stiffness.
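The identification step, estimating the coefficients of a linearised torque expression with a Kalman filter, can be sketched on a toy one-parameter example. The linear model, the noise levels, and the true coefficients below are all assumptions for illustration; the paper's actual expansion is more involved:

```python
import numpy as np

# Toy linearised torque model tau ~ a*q + b, with the coefficients [a, b]
# estimated by a Kalman filter that treats them as a (nearly) constant state.
rng = np.random.default_rng(3)
a_true, b_true = 2.5, -0.7

x = np.zeros(2)            # state estimate [a, b]
P = np.eye(2) * 10.0       # state covariance (vague prior)
Qn = np.eye(2) * 1e-6      # small process noise: slowly varying coefficients
r = 0.05 ** 2              # measurement noise variance

for _ in range(500):
    q = rng.uniform(-1.0, 1.0)                      # joint deflection sample
    tau = a_true * q + b_true + rng.normal(0.0, 0.05)
    H = np.array([q, 1.0])                          # measurement row
    P = P + Qn                                      # predict step
    S = H @ P @ H + r                               # innovation variance
    K = P @ H / S                                   # Kalman gain
    x = x + K * (tau - H @ x)                       # update step
    P = P - np.outer(K, H) @ P

a_hat, b_hat = x
```

With the coefficients tracked online, the slope term plays the role of the identified stiffness, and its estimate can feed a feedforward stiffness controller of the kind the paper studies.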

  14. A longitudinal examination of the Adaptation to Poverty-Related Stress Model: predicting child and adolescent adjustment over time.

    Science.gov (United States)

    Wadsworth, Martha E; Rindlaub, Laura; Hurwich-Reiss, Eliana; Rienks, Shauna; Bianco, Hannah; Markman, Howard J

    2013-01-01

    This study tests key tenets of the Adaptation to Poverty-related Stress Model. This model (Wadsworth, Raviv, Santiago, & Etter, 2011) builds on Conger and Elder's family stress model by proposing that primary control coping and secondary control coping can help reduce the negative effects of economic strain on parental behaviors central to the family stress model, namely, parental depressive symptoms and parent-child interactions, which together can decrease child internalizing and externalizing problems. Two hundred seventy-five co-parenting couples with children between the ages of 1 and 18 participated in an evaluation of a brief family-strengthening intervention aimed at preventing economic strain's negative cascade of influence on parents, and ultimately their children. The longitudinal path model, analyzed at the couple dyad level with mothers and fathers nested within couple, showed very good fit, and was not moderated by child gender or ethnicity. Analyses revealed direct positive effects of primary control coping and secondary control coping on mothers' and fathers' depressive symptoms. Decreased economic strain predicted more positive father-child interactions, whereas increased secondary control coping predicted less negative mother-child interactions. Positive parent-child interactions, along with decreased parent depression and economic strain, predicted child internalizing and externalizing problems over the course of 18 months. Multiple-group models analyzed separately by parent gender revealed, however, that child age moderated father effects. Findings provide support for the Adaptation to Poverty-related Stress Model and suggest that prevention and clinical interventions for families affected by poverty-related stress may be strengthened by including modules that address economic strain and efficacious strategies for coping with strain.

  15. Optimal pricing decision model based on activity-based costing

    Institute of Scientific and Technical Information of China (English)

    王福胜; 常庆芳

    2003-01-01

    In order to find out the applicability of the optimal pricing decision model based on the conventional cost behavior model, after activity-based costing has given a strong shock to the conventional cost behavior model and its assumptions, detailed analyses have been made using the activity-based cost behavior and cost-volume-profit analysis model. It is concluded from these analyses that the theory behind the construction of the optimal pricing decision model is still tenable under activity-based costing, but the conventional optimal pricing decision model must be modified as appropriate to the activity-based-costing cost behavior model and cost-volume-profit analysis model. An optimal pricing decision model is, in essence, a product pricing decision model constructed by following the economic principle of maximizing profit.
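The economic principle invoked, choosing the price that maximizes profit given an ABC-derived cost build-up, can be made concrete with a toy linear-demand example. All figures and cost-pool names are invented for illustration:

```python
import math

def abc_unit_cost(units, batch_size=100, cost_per_unit=4.0,
                  cost_per_batch=50.0, facility_cost=2000.0):
    """ABC build-up: trace unit-, batch- and facility-level activity
    costs to the product, then spread them over the units produced."""
    batches = math.ceil(units / batch_size)
    total = cost_per_unit * units + cost_per_batch * batches + facility_cost
    return total / units

def optimal_price(a, b, c):
    """With linear demand q = a - b*p and constant marginal cost c,
    profit (p - c) * (a - b*p) is maximised at p* = (a + b*c) / (2*b)."""
    return (a + b * c) / (2.0 * b)

# only the unit-level (marginal) cost enters the pricing first-order
# condition; batch- and facility-level costs affect whether the product
# is profitable at all, not the profit-maximising price
p_star = optimal_price(a=1000.0, b=50.0, c=4.5)
```

This is exactly the kind of modification the abstract describes: ABC changes which cost components count as marginal, so the classical profit-maximization rule survives but must be fed the ABC cost behavior.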

  16. Differential geometry based multiscale models.

    Science.gov (United States)

    Wei, Guo-Wei

    2010-08-01

    Large chemical and biological systems such as fuel cells, ion channels, molecular motors, and viruses are of great importance to the scientific community and public health. Typically, these complex systems in conjunction with their aquatic environment pose a fabulous challenge to theoretical description, simulation, and prediction. In this work, we propose a differential geometry based multiscale paradigm to model complex macromolecular systems, and to put macroscopic and microscopic descriptions on an equal footing. In our approach, the differential geometry theory of surfaces and geometric measure theory are employed as a natural means to couple the macroscopic continuum mechanical description of the aquatic environment with the microscopic discrete atomistic description of the macromolecule. Multiscale free energy functionals, or multiscale action functionals are constructed as a unified framework to derive the governing equations for the dynamics of different scales and different descriptions. Two types of aqueous macromolecular complexes, ones that are near equilibrium and others that are far from equilibrium, are considered in our formulations. We show that generalized Navier-Stokes equations for the fluid dynamics, generalized Poisson equations or generalized Poisson-Boltzmann equations for electrostatic interactions, and Newton's equation for the molecular dynamics can be derived by the least action principle. These equations are coupled through the continuum-discrete interface whose dynamics is governed by potential driven geometric flows. Comparison is given to classical descriptions of the fluid and electrostatic interactions without geometric flow based micro-macro interfaces. The detailed balance of forces is emphasized in the present work. We further extend the proposed multiscale paradigm to micro-macro analysis of electrohydrodynamics, electrophoresis, fuel cells, and ion channels. We derive generalized Poisson-Nernst-Planck equations that are

  17. Annual Adjustment Factors

    Data.gov (United States)

    Department of Housing and Urban Development — The Department of Housing and Urban Development establishes the rent adjustment factors - called Annual Adjustment Factors (AAFs) - on the basis of Consumer Price...

  18. Investing in Lead-Time Variability Reduction in a Quality-Adjusted Inventory Model with Finite-Range Stochastic Lead-Time

    Directory of Open Access Journals (Sweden)

    John Affisco

    2008-04-01

    Full Text Available We study the impact of the efforts aimed at reducing the lead-time variability in a quality-adjusted stochastic inventory model. We assume that each lot contains a random number of defective units. More specifically, a logarithmic investment function is used that allows investment to be made to reduce lead-time variability. Explicit results for the optimal values of decision variables as well as the optimal value of the variance of lead-time are obtained. A series of numerical exercises is presented to demonstrate the use of the models developed in this paper. Initially the lead-time variance reduction model (LTVR) is compared to the quality-adjusted model (QA) for different values of initial lead-time over uniformly distributed lead-time intervals from one to seven weeks. In all cases where investment is warranted, investment in lead-time reduction results in reduced lot sizes, variances, and total inventory costs. Further, both the reduction in lot-size and lead-time variance increase as the lead-time interval increases. Similar results are obtained when lead-time follows a truncated normal distribution. The impact of the proportion of defective items was also examined for the uniform case, resulting in the finding that the total inventory related costs of investing in lead-time variance reduction decrease significantly as the proportion defective decreases. Finally, the results of sensitivity analysis relating to proportion defective, interest rate, and setup cost show the lead-time variance reduction model to be quite robust and representative of practice.
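The trade-off studied, paying a logarithmic investment to shrink lead-time variability against the inventory cost that variability causes, can be sketched with a stylized cost function. The cost terms and every parameter value below are invented, not the paper's model:

```python
import math

def total_cost(sigma, sigma0=4.0, A=200.0, i=0.1, k=30.0):
    """k*sigma: inventory cost growing with the lead-time standard
    deviation sigma; i*A*ln(sigma0/sigma): amortised logarithmic
    investment needed to push sigma below its uninvested level sigma0."""
    invest = A * math.log(sigma0 / sigma) if sigma < sigma0 else 0.0
    return k * sigma + i * invest

def optimal_sigma(sigma0=4.0, A=200.0, i=0.1, k=30.0):
    """First-order condition d/dsigma [k*sigma - i*A*ln(sigma)] = 0
    gives sigma* = i*A/k, capped at sigma0 (invest nothing if the
    uninvested variability is already that low)."""
    return min(sigma0, i * A / k)

s = optimal_sigma()
```

The logarithmic form matters: each further unit of variance reduction costs proportionally more, which is why an interior optimum exists and why investment stops being worthwhile once the marginal holding-cost saving falls below the marginal investment cost.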

  19. A Neuron Model Based Ultralow Current Sensor System for Bioapplications

    Directory of Open Access Journals (Sweden)

    A. K. M. Arifuzzman

    2016-01-01

    Full Text Available An ultralow current sensor system based on the Izhikevich neuron model is presented in this paper. The Izhikevich neuron model has been used for its superior computational efficiency and greater biological plausibility over other well-known neuron spiking models. Of the many biological neuron spiking features, regular spiking, chattering, and neostriatal spiny projection spiking have been reproduced by adjusting the parameters associated with the model at hand. This paper also presents a modified interpretation of the regular spiking feature, in which the firing pattern is similar to that of regular spiking but with an improved dynamic range. The sensor current ranges between 2 pA and 8 nA and exhibits linearity in the range of 0.9665 to 0.9989 for different spiking features. The efficacy of the sensor system in detecting low amounts of current, along with its high linearity, makes it very suitable for biomedical applications.
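The Izhikevich model underlying the sensor is compact enough to state in full. A minimal Euler simulation reproducing two of the spiking regimes mentioned, regular spiking and chattering, using the canonical parameter sets from Izhikevich's 2003 formulation, is sketched below (input current, duration, and step size are assumed values):

```python
def izhikevich(a, b, c, d, I=10.0, T=500.0, dt=0.25):
    """Euler simulation of the Izhikevich neuron model:
        v' = 0.04*v^2 + 5*v + 140 - u + I
        u' = a*(b*v - u)
        spike: when v >= 30 mV, reset v <- c and u <- u + d.
    Returns the number of spikes over T milliseconds."""
    v, u = -65.0, b * -65.0          # resting membrane potential and recovery
    spikes = 0
    for _ in range(int(T / dt)):
        if v >= 30.0:                # spike detected: apply the reset rule
            v, u = c, u + d
            spikes += 1
        v += dt * (0.04 * v * v + 5.0 * v + 140.0 - u + I)
        u += dt * a * (b * v - u)
    return spikes

# canonical parameter sets: regular spiking and chattering
rs = izhikevich(0.02, 0.2, -65.0, 8.0)
ch = izhikevich(0.02, 0.2, -50.0, 2.0)
```

In the sensor described above, the input current I would come from the measured bio-signal, so the spiking rate encodes the current amplitude; the different (a, b, c, d) sets give the different spiking features the paper reproduces.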

  20. An Evaluation of the Adjusted DeLone and McLean Model of Information Systems Success; the case of financial information system in Ferdowsi University of Mashhad

    Directory of Open Access Journals (Sweden)

    Mohammad Lagzian

    2012-07-01

    Full Text Available Assessing the success of information systems within organizations has been identified as one of the most critical subjects of information system management in both public and private organizations. It is therefore important to measure the success of information systems from the user's perspective. The purpose of the current study was to evaluate the degree of information system success using the adjusted DeLone and McLean model, in the case of the financial information system (FIS) at an Iranian university. The relationships among the dimensions in an extended systems success measurement framework were tested. Data were collected by questionnaire from end-users of the financial information system at Ferdowsi University of Mashhad. The adjusted DeLone and McLean model contained five variables: system quality, information quality, system use, user satisfaction, and individual impact. The results revealed that system quality was a significant predictor of system use, user satisfaction, and individual impact. Information quality was also a significant predictor of user satisfaction and individual impact, but not of system use. System use and user satisfaction were positively related to individual impact. The influence of user satisfaction on system use was insignificant.