WorldWideScience

Sample records for adjustment model based

  1. Dynamics of expert adjustment to model-based forecast

    Ph.H.B.F. Franses (Philip Hans); R. Legerstee (Rianne)

    2007-01-01

    Experts often add domain knowledge to model-based forecasts while aiming to reduce forecast errors. Indeed, there is some empirical evidence that expert-adjusted forecasts improve forecast quality. However, surprisingly little is known about what experts actually do. Based on a large and ...

  2. Adjustment Criterion and Algorithm in Adjustment Model with Uncertain

    SONG Yingchun

    2015-02-01

    Uncertainty often exists in the process of obtaining measurement data, which affects the reliability of parameter estimation. This paper establishes a new adjustment model in which uncertainty is incorporated into the function model as a parameter. A new adjustment criterion and its iterative algorithm are given, based on the uncertainty propagation law for the residual error, in which the maximum possible uncertainty is minimized. The paper also analyzes, with examples, the different adjustment criteria and the features of the optimal solutions of least-squares adjustment, uncertainty adjustment, and total least-squares adjustment. Existing error theory is extended with a new method for processing observational data with uncertainty.
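
    The paper's minimax uncertainty criterion is its own contribution and is not reproduced here; the minimal Python sketch below only contrasts the two baseline estimators the abstract compares, ordinary least squares and total least squares, on invented data.

      # Sketch: ordinary vs. total least squares on a toy line fit (illustrative
      # only; the paper's uncertainty-adjustment criterion is not reproduced).
      import numpy as np

      rng = np.random.default_rng(0)
      x = np.linspace(0, 10, 50)
      y = 2.0 * x + 1.0 + rng.normal(0, 0.5, x.size)
      x_obs = x + rng.normal(0, 0.3, x.size)      # errors in the design matrix too

      A = np.column_stack([x_obs, np.ones_like(x_obs)])

      # Ordinary least squares: errors assumed only in y.
      beta_ols, *_ = np.linalg.lstsq(A, y, rcond=None)

      # Total least squares: errors in both A and y, via the SVD of [A | y].
      _, _, Vt = np.linalg.svd(np.column_stack([A, y]))
      v = Vt[-1]                                  # singular vector of smallest singular value
      beta_tls = -v[:2] / v[2]

      print("OLS:", beta_ols)
      print("TLS:", beta_tls)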

  3. Adjusting Lidar-Derived Digital Terrain Models in Coastal Marshes Based on Estimated Aboveground Biomass Density

    Stephen Medeiros

    2015-03-01

    Digital elevation models (DEMs) derived from airborne lidar are traditionally unreliable in coastal salt marshes due to the inability of the laser to penetrate the dense grasses and reach the underlying soil. To that end, we present a novel processing methodology that uses ASTER Band 2 (visible red), an interferometric SAR (IfSAR) digital surface model, and lidar-derived canopy height to classify biomass density using both a three-class scheme (high, medium, and low) and a two-class scheme (high and low). Elevation adjustments associated with these classes, using both median and quartile approaches, were applied to move lidar-derived elevation values closer to the true bare-earth elevation. The performance of the method was tested on 229 elevation points in the lower Apalachicola River marsh. The two-class, quartile-based adjusted DEM produced the best results, reducing the RMS error in elevation from 0.65 m to 0.40 m, a 38% improvement. The raw mean errors for the lidar DEM and the adjusted DEM were 0.61 ± 0.24 m and 0.32 ± 0.24 m, respectively, thereby reducing the high bias by approximately 49%.
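
    A minimal sketch of the class-based offset idea, with hypothetical elevations and biomass classes; the paper's actual offsets come from its Apalachicola survey data, and in practice would be derived from training points separate from the test points.

      # Sketch: subtract a per-class offset (here the class median of
      # lidar-minus-survey error) from the lidar DEM. All values are toy data.
      import numpy as np

      lidar_z  = np.array([1.10, 1.25, 0.70, 0.95, 0.60, 1.40])   # lidar elevations (m)
      survey_z = np.array([0.45, 0.55, 0.40, 0.35, 0.30, 0.70])   # surveyed bare earth (m)
      density  = np.array(["high", "high", "low", "high", "low", "high"])  # biomass class

      adjusted = lidar_z.copy()
      for cls in np.unique(density):
          m = density == cls
          adjusted[m] -= np.median(lidar_z[m] - survey_z[m])      # class-specific offset

      rmse = lambda a, b: np.sqrt(np.mean((a - b) ** 2))
      print(f"RMSE raw: {rmse(lidar_z, survey_z):.2f} m,"
            f" adjusted: {rmse(adjusted, survey_z):.2f} m")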

  4. Economic analysis of coal price-electricity price adjustment in China based on the CGE model

    In recent years, the coal price has risen rapidly, which has also brought a sharp increase in the expenditures of thermal power plants in China. Meanwhile, the power production price and power retail price have not been adjusted accordingly, and a large number of thermal power plants have incurred losses. The power industry is a key industry in the national economy. As such, a thorough analysis and evaluation of the economic influence of the electricity price should be conducted before electricity price adjustment is carried out. This paper analyses the influence of coal price adjustment on the electric power industry, and the influence of electricity price adjustment on the macroeconomy in China, based on computable general equilibrium models. The conclusions are as follows: (1) a coal price increase causes a rise in the cost of the electric power industry, but the influence gradually diminishes as the coal price increases; and (2) an electricity price increase has an adverse influence on total output, Gross Domestic Product (GDP), and the Consumer Price Index (CPI). Electricity price increases have a contractionary effect on economic development; consequently, electricity price policy making must consider all factors to minimize their adverse influence.

  5. Economic analysis of coal price. Electricity price adjustment in China based on the CGE model

    In recent years, the coal price has risen rapidly, which has also brought a sharp increase in the expenditures of thermal power plants in China. Meanwhile, the power production price and power retail price have not been adjusted accordingly, and a large number of thermal power plants have incurred losses. The power industry is a key industry in the national economy. As such, a thorough analysis and evaluation of the economic influence of the electricity price should be conducted before electricity price adjustment is carried out. This paper analyses the influence of coal price adjustment on the electric power industry, and the influence of electricity price adjustment on the macroeconomy in China, based on computable general equilibrium models. The conclusions are as follows: (1) a coal price increase causes a rise in the cost of the electric power industry, but the influence gradually diminishes as the coal price increases; and (2) an electricity price increase has an adverse influence on total output, Gross Domestic Product (GDP), and the Consumer Price Index (CPI). Electricity price increases have a contractionary effect on economic development; consequently, electricity price policy making must consider all factors to minimize their adverse influence. (author)

  6. Economic analysis of coal price-electricity price adjustment in China based on the CGE model

    Y.X. He; S.L. Zhang; L.Y. Yang; Y.J. Wang; J. Wang [North China Electric Power University, Beijing (China). School of Business Administration

    2010-11-15

    In recent years, the coal price has risen rapidly, which has also brought a sharp increase in the expenditures of thermal power plants in China. Meanwhile, the power production price and power retail price have not been adjusted accordingly, and a large number of thermal power plants have incurred losses. The power industry is a key industry in the national economy. As such, a thorough analysis and evaluation of the economic influence of the electricity price should be conducted before electricity price adjustment is carried out. This paper analyses the influence of coal price adjustment on the electric power industry, and the influence of electricity price adjustment on the macroeconomy in China, based on computable general equilibrium models. The conclusions are as follows: (1) a coal price increase causes a rise in the cost of the electric power industry, but the influence gradually diminishes as the coal price increases; and (2) an electricity price increase has an adverse influence on total output, Gross Domestic Product (GDP), and the Consumer Price Index (CPI). Electricity price increases have a contractionary effect on economic development; consequently, electricity price policy making must consider all factors to minimize their adverse influence. 16 refs., 3 figs., 7 tabs.

  7. Economic analysis of coal price. Electricity price adjustment in China based on the CGE model

    He, Y.X.; Yang, L.Y.; Wang, Y.J.; Wang, J. [School of Business Administration, North China Electric Power University, Zhu Xin Zhuang, Bei Nong Lu No. 2, Changping District, Beijing (China); Zhang, S.L. [Finance Department, Nanning Power Supply Bureau, Xingguang Street No. 43, Nanning, Guangxi Autonomous Region (China)

    2010-11-15

    In recent years, the coal price has risen rapidly, which has also brought a sharp increase in the expenditures of thermal power plants in China. Meanwhile, the power production price and power retail price have not been adjusted accordingly, and a large number of thermal power plants have incurred losses. The power industry is a key industry in the national economy. As such, a thorough analysis and evaluation of the economic influence of the electricity price should be conducted before electricity price adjustment is carried out. This paper analyses the influence of coal price adjustment on the electric power industry, and the influence of electricity price adjustment on the macroeconomy in China, based on computable general equilibrium models. The conclusions are as follows: (1) a coal price increase causes a rise in the cost of the electric power industry, but the influence gradually diminishes as the coal price increases; and (2) an electricity price increase has an adverse influence on total output, Gross Domestic Product (GDP), and the Consumer Price Index (CPI). Electricity price increases have a contractionary effect on economic development; consequently, electricity price policy making must consider all factors to minimize their adverse influence. (author)

  8. Multi-Period Model of Portfolio Investment and Adjustment Based on Hybrid Genetic Algorithm

    RONG Ximin; LU Meiping; DENG Lin

    2009-01-01

    This paper proposes a multi-period portfolio investment model with class constraints, transaction costs, and indivisible securities. When an investor joins the securities market for the first time, he should decide on a portfolio investment based on the practical conditions of the securities market. In addition, investors should adjust the portfolio according to market changes, whether or not the category of risky securities changes. The Markowitz mean-variance approach is applied to the multi-period portfolio selection problem. Because the sub-models are mixed integer programs, whose objective functions are not unimodal and whose feasible sets have a particular structure, traditional optimization methods usually fail to find a globally optimal solution, so this paper employs a hybrid genetic algorithm to solve the problem. Investment policies that accord with the finance market and are easy for investors to operate are put forward with an illustration of their application.
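
    For reference, the single-period Markowitz problem that underlies the paper's multi-period formulation has a closed-form minimum-variance solution; it is the integer and class constraints of the multi-period version that motivate the hybrid genetic algorithm. A sketch with a toy covariance matrix:

      # Single-period minimum-variance weights: w = (Sigma^-1 1) / (1' Sigma^-1 1).
      import numpy as np

      sigma = np.array([[0.040, 0.006, 0.004],
                        [0.006, 0.025, 0.005],
                        [0.004, 0.005, 0.010]])   # toy covariance of three assets
      ones = np.ones(3)
      w = np.linalg.solve(sigma, ones)
      w /= ones @ w                                # normalize so weights sum to 1
      print("min-variance weights:", np.round(w, 3))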

  9. ADJUSTMENT OF MORPHOMETRIC PARAMETERS OF WATER BASINS BASED ON DIGITAL TERRAIN MODELS

    Krasil'nikov Vitaliy Mikhaylovich

    2012-10-01

    The authors argue that effective use of water resources requires accurate morphometric characteristics of water basins. Accurate parameters are needed to analyze their condition and to assure their appropriate control and operation. Today, multiple water basins need their morphometric characteristics to be adjusted and properly stored. The procedure employed so far is based on plane geometric horizontals depicted on topographic maps. It is described in the procedural guidelines issued in respect of the «Application of water resource regulations governing the operation of waterworks facilities of power plants». The technology described there is obsolete, given the availability of specialized software. The computer technique is based on a digital terrain model. The authors provide an overview of the technique as implemented at the Rybinsk and Gorkiy water basins. The digital terrain model generated on the basis of field data is used at the Gorkiy water basin, while the model based on maps and charts is applied at the Rybinsk water basin. The authors believe that the software technique can be applied to any other water basin, on the basis of the analysis and comparison of the morphometric characteristics of these two water basins.

  10. Optimal Scheme Selection of Agricultural Production Structure Adjustment - Based on DEA Model; Punjab (Pakistan)

    Zeeshan Ahmad; Meng Jun; Muhammad Abdullah; Mazhar Nadeem Ishaq; Majid Lateef; Imran Khan

    2015-01-01

    This paper used the modern evaluation method of DEA (Data Envelopment Analysis) to assess the comparative efficiency of multiple schemes and, on that basis, to choose the optimal scheme of agricultural production structure adjustment. Based on the results of the DEA model, we dissected the scale advantages of each discretionary scheme and probed the underlying reasons why some schemes were not DEA-efficient, which clarified the approach and methodology for enhancing these discretionary plans. Finally, another method was proposed to rank the schemes and select the optimal one. The research is useful for guiding practice when adjustment of the agricultural production industrial structure is carried out.
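
    A sketch of the standard input-oriented CCR DEA efficiency score, solved as a linear program; the scheme data are hypothetical and the paper's ranking step is not reproduced.

      # CCR DEA: for each scheme j0, minimize theta subject to a composite of the
      # other schemes using no more than theta * inputs(j0) and at least outputs(j0).
      import numpy as np
      from scipy.optimize import linprog

      X = np.array([[2.0, 3.0, 4.0],      # inputs:  rows = input types, cols = schemes
                    [1.0, 2.0, 1.5]])
      Y = np.array([[3.0, 5.0, 4.0]])     # outputs: rows = output types, cols = schemes
      n = X.shape[1]

      def ccr_efficiency(j0):
          c = np.r_[1.0, np.zeros(n)]                         # minimize theta
          A_in  = np.hstack([-X[:, [j0]], X])                 # sum_j lam_j * x_ij <= theta * x_ij0
          A_out = np.hstack([np.zeros((Y.shape[0], 1)), -Y])  # sum_j lam_j * y_rj >= y_rj0
          A_ub = np.vstack([A_in, A_out])
          b_ub = np.r_[np.zeros(X.shape[0]), -Y[:, j0]]
          res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None)] * (n + 1))
          return res.fun

      for j in range(n):
          print(f"scheme {j}: efficiency = {ccr_efficiency(j):.3f}")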

  11. ADJUSTMENT OF MORPHOMETRIC PARAMETERS OF WATER BASINS BASED ON DIGITAL TERRAIN MODELS

    Krasil'nikov Vitaliy Mikhaylovich; Sobol' Il'ya Stanislavovich

    2012-01-01

    The authors argue that effective use of water resources requires accurate morphometric characteristics of water basins. Accurate parameters are needed to analyze their condition, and to assure their appropriate control and operation. Today multiple water basins need their morphometric characteristics to be adjusted and properly stored. The procedure employed so far is based on plane geometric horizontals depicted onto topographic maps. It is described in the procedural guidelines issued i...

  12. Controlling fractional order chaotic systems based on Takagi-Sugeno fuzzy model and adaptive adjustment mechanism

    In this Letter, a novel model, called the generalized Takagi-Sugeno (T-S) fuzzy model, is first developed by extending the conventional T-S fuzzy model. Then, a simple but efficient method to control fractional-order chaotic systems is proposed using the generalized T-S fuzzy model and an adaptive adjustment mechanism (AAM). Sufficient conditions to guarantee chaos control are derived from the stability criterion of linear fractional-order systems. The proposed approach offers a systematic design procedure for stabilizing a large class of fractional-order chaotic systems from the chaos research literature. The effectiveness of the approach is tested on the fractional-order Rössler and Lorenz systems.
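
    The abstract invokes the stability criterion of linear fractional-order systems; a common form is the Matignon-type condition, sketched below with an illustrative matrix (not one from the Letter).

      # Matignon-type check: D^alpha x = A x (0 < alpha < 1) is asymptotically
      # stable when every eigenvalue satisfies |arg(lambda)| > alpha * pi / 2.
      import numpy as np

      alpha = 0.9
      A = np.array([[-1.0,  1.0],
                    [-5.0, -0.5]])
      eigs = np.linalg.eigvals(A)
      stable = all(abs(np.angle(lam)) > alpha * np.pi / 2 for lam in eigs)
      print("eigenvalues:", np.round(eigs, 3), "stable:", stable)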

  13. An explanatory model of adjustment to type I diabetes based on attachment, coping, and self-regulation theories.

    Bazzazian, S; Besharat, M A

    2012-01-01

    The aim of this study was to develop and test a model of adjustment to type I diabetes. Three hundred young adults (172 females and 128 males) with type I diabetes were asked to complete the Adult Attachment Inventory (AAI), the Brief Illness Perception Questionnaire (Brief IPQ), the task-oriented subscale of the Coping Inventory for Stressful Situations (CISS), the D-39, and the well-being subscale of the Mental Health Inventory (MHI). HbA1c was obtained from laboratory examination. Results from structural equation analysis partly supported the hypothesized model. Secure and avoidant attachment styles were found to have effects on illness perception; the ambivalent attachment style did not have a significant effect on illness perception. All three attachment styles had significant effects on the task-oriented coping strategy, and avoidant attachment also had a negative direct effect on adjustment. The regression effects of illness perception and the task-oriented coping strategy on adjustment were positive: positive illness perception and greater use of the task-oriented coping strategy predict better adjustment to diabetes. The results thus confirm the theoretical bases and empirical evidence for the role of attachment styles in adjustment to chronic disease, and can be helpful in devising preventive policies, identifying high-risk maladjusted patients, and planning special psychological treatment. PMID:21678193

  14. Experts' adjustment to model-based forecasts: Does the forecast horizon matter?

    Ph.H.B.F. Franses (Philip Hans); R. Legerstee (Rianne)

    2007-01-01

    Experts may have domain-specific knowledge that is not included in a statistical model and that can improve forecasts. While one-step-ahead forecasts address the conditional mean of the variable, model-based forecasts for longer horizons have a tendency to converge to the unconditional mean ...

  15. Does experts' adjustment to model-based forecasts contribute to forecast quality?

    Ph.H.B.F. Franses (Philip Hans); R. Legerstee (Rianne)

    2007-01-01

    We perform a large-scale empirical analysis of the question whether model-based forecasts can be improved by adding expert knowledge. We consider a huge database of a pharmaceutical company where the head office uses a statistical model to generate monthly sales forecasts at various horizons ...

  16. Adjustment or updating of models

    D J Ewins

    2000-06-01

    In this paper, a review of the terminology used in model adjustment or updating is first presented. This is followed by an outline of the major updating algorithms currently available, together with a discussion of the advantages and disadvantages of each, and of the current state of the art of this important application area of optimum design technology.

  17. Bulk Density Adjustment of Resin-Based Equivalent Material for Geomechanical Model Test

    Pengxian Fan; Haozhe Xing; Linjian Ma; Kaifeng Jiang; Mingyang Wang; Zechen Yan; Xiang Fang

    2015-01-01

    An equivalent material is of significance to the simulation of prototype rock in geomechanical model tests. Researchers attempt to ensure that the bulk density of the equivalent material is equal to that of the prototype rock. In this work, barite sand was used to increase the bulk density of a resin-based equivalent material. The variation law of the bulk density was revealed for the simulation of prototype rocks of different bulk densities. Over 300 specimens were made for uniaxial compression tests ...

  18. Overpaying morbidity adjusters in risk equalization models.

    van Kleef, R C; van Vliet, R C J A; van de Ven, W P M M

    2016-09-01

    Most competitive social health insurance markets include risk equalization to compensate insurers for predictable variation in healthcare expenses. The empirical literature shows that even the most sophisticated risk equalization models, with advanced morbidity adjusters, substantially undercompensate insurers for selected groups of high-risk individuals. In the presence of premium regulation, these undercompensations confront consumers and insurers with incentives for risk selection. An important reason for the undercompensations is that not all information with predictive value for healthcare expenses is appropriate for use as a morbidity adjuster. To reduce incentives for selection regarding specific groups, we propose overpaying morbidity adjusters that are already included in the risk equalization model. This paper illustrates the idea of overpaying by merging data on morbidity adjusters and healthcare expenses with health survey information, and derives three preconditions for meaningful application. Given these preconditions, we think overpaying may be particularly useful for pharmacy-based cost groups. PMID:26420555

  19. Anchoring Adjusted Capital Asset Pricing Model

    Hammad, Siddiqi

    2015-01-01

    An anchoring-adjusted Capital Asset Pricing Model (ACAPM) is developed in which the payoff volatilities of well-established stocks are used as starting points that are adjusted to form volatility judgments about other stocks. The anchoring heuristic implies that such adjustments are typically insufficient. ACAPM converges to CAPM with correct adjustment, so CAPM is a special case of ACAPM. The model provides a unified explanation for the size, value, and momentum effects in the stock market. A ke...
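
    For orientation, here is the baseline CAPM computation that ACAPM reduces to under correct adjustment; the anchoring adjustment itself is the paper's contribution and is not reproduced. All numbers are invented.

      # Baseline CAPM: estimate beta from toy return series, then E[R] = rf + beta * premium.
      import numpy as np

      rf, market_premium = 0.03, 0.06
      r_stock  = np.array([0.10, -0.04, 0.07, 0.02, 0.12])
      r_market = np.array([0.08, -0.02, 0.05, 0.01, 0.09])

      beta = np.cov(r_stock, r_market, ddof=1)[0, 1] / np.var(r_market, ddof=1)
      expected_return = rf + beta * market_premium
      print(f"beta = {beta:.2f}, E[R] = {expected_return:.3f}")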

  20. Modeling techniques and processes control application based on Neural Networks with on-line adjustment using Genetic Algorithms

    R. F. Marcolla

    2009-03-01

    In this work, a strategy is presented for temperature control of the batch suspension polymerization of styrene. A three-layer feedforward Artificial Neural Network was trained off-line on a set of patterns drawn from the experimental system and applied in recurrent form (RNN) within a Nonlinear Model Predictive Controller (NMPC). This controller presented results far superior to those of a classic PID controller in maintaining the temperature. Furthermore, to improve the performance of the model used by the NMPC (the RNN can deviate from the system due to the dead time involved in the control actions, the nonlinear characteristics of the system, and its variable dynamics), an on-line methodology for adjusting the parameters of the output layer of the network is implemented, which yields superior results and satisfactorily handles the difficulties of temperature control. All the presented results are obtained for a real system.

  1. Effect of Flux Adjustments on Temperature Variability in Climate Models

    It has been suggested that "flux adjustments" in climate models suppress simulated temperature variability. If true, this might invalidate the conclusion that at least some of the observed temperature increase since 1860 is anthropogenic, since this conclusion is based in part on estimates of natural temperature variability derived from flux-adjusted models. We assess the variability of surface air temperatures in 17 simulations of internal temperature variability submitted to the Coupled Model Intercomparison Project. By comparing variability in flux-adjusted and non-flux-adjusted simulations, we find no evidence that flux adjustments suppress temperature variability in climate models; other, largely unknown, factors are much more important in determining simulated temperature variability. Therefore, the conclusion that at least some of the observed temperature increase is anthropogenic cannot be questioned on the grounds that it is based in part on results of flux-adjusted models. Also, reducing or eliminating flux adjustments would probably do little to improve simulations of temperature variability.

  2. Dynamic gauge adjustment of high-resolution X-band radar data for convective rain storms: Model-based evaluation against measured combined sewer overflow

    Borup, Morten; Grum, Morten; Linde, Jens Jørgen; Mikkelsen, Peter Steen

    2016-08-01

    Numerous studies have shown that radar rainfall estimates need to be adjusted against rain gauge measurements in order to be useful for hydrological modelling. In the current study we investigate whether adjustment can improve radar rainfall estimates to the point where they can be used for modelling overflows from urban drainage systems, and we furthermore investigate the importance of the aggregation period of the adjustment scheme. This is done by continuously adjusting X-band radar data based on the previous 5-30 min of rain data recorded by multiple rain gauges and propagating the rainfall estimates through a hydraulic urban drainage model. The model is built entirely from physical data, without any calibration, to avoid bias towards any specific type of rainfall estimate. The performance is assessed by comparing measured and modelled water levels at a weir downstream of a highly impermeable, well-defined, 64 ha urban catchment, for nine overflow-generating rain events. The dynamically adjusted radar data perform best when the aggregation period is as small as 10-20 min, in which case they perform much better than statically adjusted radar data and data from rain gauges situated 2-3 km away.
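
    A minimal sketch of a mean-field-bias form of dynamic gauge adjustment over a trailing window; the window length and data are invented, and the study's exact scheme may differ.

      # Scale the radar estimate by the gauge/radar ratio accumulated over the
      # trailing aggregation window (here 3 steps of 5 min = 15 min).
      import numpy as np

      gauge = np.array([0.0, 0.2, 0.5, 0.8, 0.6, 0.4])   # mm per 5 min, gauge average
      radar = np.array([0.0, 0.1, 0.3, 0.5, 0.45, 0.3])  # mm per 5 min, radar over gauges
      window = 3

      adjusted = radar.copy()
      for t in range(len(radar)):
          g = gauge[max(0, t - window + 1): t + 1].sum()
          r = radar[max(0, t - window + 1): t + 1].sum()
          if r > 0:
              adjusted[t] = radar[t] * g / r              # apply current bias factor
      print(np.round(adjusted, 2))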

  3. Assessment and adjustment of sea surface salinity products from Aquarius in the southeast Indian Ocean based on in situ measurement and MyOcean modeled data

    XIA Shenzhen; KE Changqing; ZHOU Xiaobing; ZHANG Jie

    2016-01-01

    The in situ sea surface salinity (SSS) measurements from a scientific cruise to the western zone of the southeast Indian Ocean, covering 30°-60°S, 80°-120°E, are used to assess the SSS retrieved from Aquarius (Aquarius SSS). Wind speed and sea surface temperature (SST) affect SSS estimates based on passive microwave radiation within the mid- to low-latitude southeast Indian Ocean. The relationships among the in situ SSS, the Aquarius SSS, and the wind-SST corrections are used to adjust the Aquarius SSS. The adjusted Aquarius SSS is compared with the SSS data from the MyOcean model. Results show that: (1) before adjustment, compared with MyOcean SSS, the Aquarius SSS is higher in most of the sea areas, but lower in the low-temperature areas south of 55°S and west of 98°E; the Aquarius SSS is higher by 0.42 on average for the southeast Indian Ocean. (2) After adjustment, the impact of high wind speeds is largely counteracted and the overall accuracy of the retrieved salinity is improved (the mean absolute error of the zonal mean is improved by 0.06, and the mean error is -0.05 compared with MyOcean SSS). Near latitude 42°S, the adjusted SSS is consistent with MyOcean, the difference being approximately 0.004.

  4. Dynamic gauge adjustment of high-resolution X-band radar data for convective rain storms: Model-based evaluation against measured combined sewer overflow

    Borup, Morten; Grum, Morten; Linde, Jens Jørgen; Mikkelsen, Peter Steen

    2016-01-01

    Numerous studies have shown that radar rainfall estimates need to be adjusted against rain gauge measurements in order to be useful for hydrological modelling. In the current study we investigate if adjustment can improve radar rainfall estimates to the point where they can be used for modelling ... well defined, 64 ha urban catchment, for nine overflow generating rain events. The dynamically adjusted radar data perform best when the aggregation period is as small as 10–20 min, in which case it performs much better than static adjusted radar data and data from rain gauges situated 2–3 km away.

  5. BUNDLE ADJUSTMENTS CCD CAMERA CALIBRATION BASED ON COLLINEARITY EQUATION

    Liu Changying; Yu Zhijing; Che Rensheng; Ye Dong; Huang Qingcheng; Yang Dingning

    2004-01-01

    A solid-template CCD camera calibration method using bundle adjustment based on the collinearity equation is presented, considering the characteristics of large-dimension on-line measurement in space. In the method, a more comprehensive camera model is adopted, based on the pinhole model extended with distortion corrections. In the calibration process, calibration precision is improved by imaging at different locations in the whole measurement space, by multiple imaging at the same location, and by bundle adjustment optimization. The calibration experiment proves that the method is able to fulfill the calibration requirements of CCD cameras applied to vision measurement.

  6. Adjustment method of parameters intended for first-principle models

    P. Czop

    2012-12-01

    Purpose: This paper demonstrates a process for estimating the phenomenological parameters of a first-principle nonlinear model, based on a hydraulic damper system. Design/methodology/approach: First-principle (FP) models are formulated using a system of continuous ordinary differential equations capturing usually nonlinear relations among the variables of the model. The model considered here uses three categories of parameters: geometrical, physical, and phenomenological. Geometrical and physical parameters are deduced from construction or operational documentation. The phenomenological parameters are the adjustable ones, which are estimated or adjusted based on their roughly known values, e.g. friction/damping coefficients. Findings: A phenomenological parameter, the friction coefficient, was successfully estimated based on the experimental data. The error between the model response and the experimental data is not greater than 10%. Research limitations/implications: Adjusting a model to data is, in most cases, a non-convex optimization problem, and the criterion function may have several local minima, particularly when multiple parameters are estimated simultaneously. Practical implications: First-principle models are fundamental tools for understanding, optimizing, designing, and diagnosing technical systems, since they are updatable using operational measurements. Originality/value: First-principle models are frequently adjusted by trial and error, which can lead to non-optimal results. To avoid the deficiencies of the trial-and-error approach, a formalized mathematical method using optimization techniques to minimize the error criterion and find optimal values of the tunable model parameters is proposed and demonstrated in this work.
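
    A minimal sketch of the formalized approach the paper advocates: adjust a phenomenological parameter by minimizing an error criterion against measurements. A toy damped oscillator stands in for the paper's hydraulic damper model, and the "measurements" are synthetic.

      # Fit the damping coefficient c of x'' = -c*x' - k*x to noisy observations.
      import numpy as np
      from scipy.integrate import solve_ivp
      from scipy.optimize import minimize_scalar

      k, x0 = 4.0, 1.0
      t_obs = np.linspace(0, 5, 40)

      def simulate(c):
          f = lambda t, s: [s[1], -c * s[1] - k * s[0]]
          return solve_ivp(f, (0, 5), [x0, 0.0], t_eval=t_obs).y[0]

      rng = np.random.default_rng(1)
      x_obs = simulate(0.7) + rng.normal(0, 0.02, t_obs.size)  # synthetic data, true c = 0.7

      res = minimize_scalar(lambda c: np.sum((simulate(c) - x_obs) ** 2),
                            bounds=(0.01, 5.0), method="bounded")
      print(f"fitted damping coefficient: {res.x:.3f}")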

  7. OPEC model: adjustment or new model

    Since the early eighties, the international oil industry has gone through major changes: new financial markets, reintegration, opening of the upstream, liberalization of investments, privatization. This article provides answers to two major questions: what are the reasons for these changes, and do these changes announce the replacement of the OPEC model by a new model in which state intervention is weaker and national companies are more autonomous? This would imply a profound change in the political and institutional systems of oil-producing countries. (Author)

  8. R.M. Solow Adjusted Model of Economic Growth

    Ion Gh. Rosca

    2007-05-01

    Besides the models of M. Keynes, R.F. Harrod, E. Domar, D. Romer, Ramsey-Cass-Koopmans, etc., the R.M. Solow model belongs to the category that characterizes economic growth. The paper proposes a study of the R.M. Solow adjusted model of economic growth, the adjustment consisting in adapting the model to the characteristics of the Romanian economy. The article is the first in a three-paper series dedicated to macroeconomic modelling using the R.M. Solow model, together with “Measurement of the economic growth and extensions of the R.M. Solow adjusted model” and “Evolution scenarios at the Romanian economy level using the R.M. Solow adjusted model”. The analysis part of the model is based on the study of the equilibrium in the continuous case, with some interpretations of the discrete one, using the state diagram. An optimization problem at the economy level is also used; it is built up of a specified number of representative consumers and firms in order to reveal the interaction between these elements.
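
    For reference, a sketch of the textbook Solow dynamics the article builds on; the parameter values below are illustrative, not the Romanian calibration.

      # Capital per worker follows k' = s*k^a - (n + d)*k, with analytic steady
      # state k* = (s/(n+d))^(1/(1-a)). Simple Euler steps to equilibrium.
      s, a, n, d = 0.25, 0.33, 0.01, 0.05

      k = 1.0
      for _ in range(500):
          k += s * k**a - (n + d) * k

      k_star = (s / (n + d)) ** (1 / (1 - a))
      print(f"simulated k = {k:.3f}, analytic steady state k* = {k_star:.3f}")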

  9. Research on the adjustment model of ventilation characteristic parameters based on integrated method of circuit and path

    Ting-gui JIA; Shu-gang WANG; Guo-na QU; Jian LIU

    2013-01-01

    Ventilation characteristic parameters are the basis of the ventilation network solution; however, they are apt to be affected by operating errors, reading errors, airflow stability, and other factors, and it is difficult to obtain accurate results. In order to check the ventilation characteristic parameters of mines more accurately, an integrated method of circuit and path is adopted to overcome the drawbacks of the traditional path method or circuit method in the digital debugging process of a ventilation system, namely the large local errors, or the inconsistency between the computed airflow direction and the actual situation, caused by inaccurate ventilation characteristic parameters or by checking in the ventilation network solution. The results show that this method can effectively reduce the local error and prevent the pseudo-airflow reversal phenomenon; in addition, the solution results are consistent with the actual situation of the mines, and the effect is obvious.

  10. An adjustment cost model of distributional dynamics.

    Getachew, Yoseph; Basu, Parantap

    2012-01-01

    We analyze the distributional effects of adjustment costs in an environment with an incomplete capital market. We find that a higher adjustment cost for human capital acquisition slows down intergenerational mobility and results in persistent inequality across generations. A low depreciation rate of human capital contributes to a longer life of the capital, which could elevate this adjustment cost and hence contribute to this persistence of inequality. A lower total factor productivity could hurt...

  11. Modeling of Turbulent Boundary Layer Surface Pressure Fluctuation Auto and Cross Spectra - Verification and Adjustments Based on TU-144LL Data

    Rackl, Robert; Weston, Adam

    2005-01-01

    The literature on turbulent boundary layer pressure fluctuations provides several empirical models, which were compared to the measured TU-144 data. The Efimtsov model showed the best agreement. Adjustments were made to improve its agreement further, consisting of the addition of a broad-band peak in the mid frequencies and a minor modification to the high-frequency rolloff. The adjusted Efimtsov predictions and measured results are compared for both subsonic and supersonic flight conditions. Measurements in the forward and middle portions of the fuselage have better agreement with the model than those from the aft portion. For High Speed Civil Transport supersonic cruise, interior levels predicted with this model are expected to increase by 1-3 dB due to the adjustments to the Efimtsov model. The space-time cross-correlations and cross-spectra of the fluctuating surface pressure were also investigated. This analysis is an important ingredient in structural-acoustic models of aircraft interior noise. Once again, the measured data were compared to the levels predicted by the Efimtsov model.

  12. Capital Asset Pricing Model Adjusted for Anchoring

    Hammad, Siddiqi

    2015-01-01

    I show that adjusting CAPM for anchoring provides a unified explanation for the size, value, and momentum effects. The anchoring-adjusted CAPM (ACAPM) predicts that stock splits are associated with positive abnormal returns and an increase in return volatility, whereas reverse stock splits are associated with negative abnormal returns and a fall in return volatility. Existing empirical evidence strongly supports these predictions. Anchoring has the effect of pushing up the equity premium, a ...

  13. Economic Adjustment of Romania’s GDP Using an Econometric Model of the System: Budget Expenditure - GDP

    Nadia Stoicuţa; Ana Maria Giurgiulescu; Olimpiu Stoicuţa

    2009-01-01

    The paper presents a model for the economic adjustment of Romania’s GDP using an econometric model of the system whose input is budget expenditure and whose output is Romania’s GDP. The adjustment is based on a discrete-time linear quadratic regulator.
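
    A sketch of the discrete-time linear quadratic regulator named in the abstract: iterate the Riccati difference equation to convergence and form the feedback gain. The scalar system below is illustrative, not the paper's budget-expenditure/GDP estimates.

      # x[t+1] = A x[t] + B u[t]; minimize sum of x'Qx + u'Ru via u = -K x.
      import numpy as np

      A, B = np.array([[1.02]]), np.array([[0.5]])
      Q, R = np.array([[1.0]]), np.array([[0.1]])

      P = Q.copy()
      for _ in range(1000):
          K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)   # gain at current P
          P = Q + A.T @ P @ (A - B @ K)                       # Riccati update
      print("feedback gain K:", K)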

  14. A Glacial Isostatic Adjustment Model for the Central and Northern Laurentide Ice Sheet based on Relative Sea-level and GPS Measurements

    Simon, K. M.; James, T. S.; Henton, J. A.; Dyke, A. S.

    2016-03-01

    The thickness and equivalent global sea-level contribution of an improved model of the central and northern Laurentide Ice Sheet is constrained by 24 relative sea-level histories and 18 present-day GPS-measured vertical land motion rates. The final model, termed Laur16, is derived from the ICE-5G model by holding the timing history constant and iteratively adjusting the thickness history, in four regions of northern Canada. In the final model, the last glacial maximum (LGM) thickness of the Laurentide Ice Sheet west of Hudson Bay was ˜3.4-3.6 km. Conversely, east of Hudson Bay, peak ice thicknesses reached ˜4 km. The ice model thicknesses inferred for these two regions represent, respectively, a ˜30% decrease and an average ˜20-25% increase to the load thickness relative to the ICE-5G reconstruction, which is generally consistent with other recent studies that have focussed on Laurentide Ice Sheet history. The final model also features peak ice thicknesses of 1.2-1.3 km in the Baffin Island region, a modest reduction relative to ICE-5G, and unchanged thicknesses for a region in the central Canadian Arctic Archipelago west of Baffin Island. Vertical land motion predictions of the final model fit observed crustal uplift rates well, after an adjustment is made for the elastic crustal response to present-day ice mass changes of regional ice cover. The new Laur16 model provides more than a factor of two improvement of the fit to the RSL data (χ2 measure of misfit) and a factor of nine improvement to the fit of the GPS data (mean squared error measure of fit), compared to the ICE-5G starting model. Laur16 also fits the regional RSL data better by a factor of two and gives a slightly better fit to GPS uplift rates than the recent ICE-6G model. The volume history of the Laur16 reconstruction corresponds to an up to 8 m reduction in global sea-level equivalent compared to ICE-5G at LGM.
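
    The two misfit measures quoted in the abstract, in code form with toy numbers (the study's exact weighting and data are not reproduced): chi-squared for the RSL histories and mean squared error for the GPS uplift rates.

      import numpy as np

      rsl_obs, rsl_pred = np.array([12.0, 8.5]), np.array([11.0, 9.0])
      rsl_sigma = np.array([1.0, 0.5])                       # observational uncertainties
      chi2 = np.sum(((rsl_obs - rsl_pred) / rsl_sigma) ** 2)

      gps_obs, gps_pred = np.array([9.1, 4.2]), np.array([8.8, 4.5])   # mm/yr
      mse = np.mean((gps_obs - gps_pred) ** 2)
      print(f"chi2 = {chi2:.2f}, MSE = {mse:.3f}")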

  15. A glacial isostatic adjustment model for the central and northern Laurentide Ice Sheet based on relative sea level and GPS measurements

    Simon, K. M.; James, T. S.; Henton, J. A.; Dyke, A. S.

    2016-06-01

    The thickness and equivalent global sea level contribution of an improved model of the central and northern Laurentide Ice Sheet is constrained by 24 relative sea level histories and 18 present-day GPS-measured vertical land motion rates. The final model, termed Laur16, is derived from the ICE-5G model by holding the timing history constant and iteratively adjusting the thickness history, in four regions of northern Canada. In the final model, the last glacial maximum (LGM) thickness of the Laurentide Ice Sheet west of Hudson Bay was ˜3.4-3.6 km. Conversely, east of Hudson Bay, peak ice thicknesses reached ˜4 km. The ice model thicknesses inferred for these two regions represent, respectively, a ˜30 per cent decrease and an average ˜20-25 per cent increase to the load thickness relative to the ICE-5G reconstruction, which is generally consistent with other recent studies that have focussed on Laurentide Ice Sheet history. The final model also features peak ice thicknesses of 1.2-1.3 km in the Baffin Island region, a modest reduction relative to ICE-5G and unchanged thicknesses for a region in the central Canadian Arctic Archipelago west of Baffin Island. Vertical land motion predictions of the final model fit observed crustal uplift rates well, after an adjustment is made for the elastic crustal response to present-day ice mass changes of regional ice cover. The new Laur16 model provides more than a factor of two improvement of the fit to the RSL data (χ2 measure of misfit) and a factor of nine improvement to the fit of the GPS data (mean squared error measure of fit), compared to the ICE-5G starting model. Laur16 also fits the regional RSL data better by a factor of two and gives a slightly better fit to GPS uplift rates than the recent ICE-6G model. The volume history of the Laur16 reconstruction corresponds to an up to 8 m reduction in global sea level equivalent compared to ICE-5G at LGM.

  16. The high-density lipoprotein-adjusted SCORE model worsens SCORE-based risk classification in a contemporary population of 30 824 Europeans

    Mortensen, Martin B; Afzal, Shoaib; Nordestgaard, Børge G;

    2015-01-01

    AIMS: Recent European guidelines recommend including high-density lipoprotein (HDL) cholesterol in risk assessment for primary prevention of cardiovascular disease (CVD), using a SCORE-based risk model (SCORE-HDL). We compared the predictive performance of SCORE-HDL with SCORE in an independent ... with SCORE, but deteriorated risk classification based on NRI. Future guidelines should consider lower decision thresholds and prioritize CVD morbidity and people above age 65 ...

  17. Bayes linear covariance matrix adjustment for multivariate dynamic linear models

    Wilkinson, Darren J

    2008-01-01

    A methodology is developed for the adjustment of the covariance matrices underlying a multivariate constant time series dynamic linear model. The covariance matrices are embedded in a distribution-free inner-product space of matrix objects, which facilitates such adjustment. This approach helps to make the analysis simple, tractable, and robust. To illustrate the methods, a simple model is developed for a time series representing sales of certain brands of a product from a cash-and-carry depot. The covariance structure underlying the model is revised, and the benefits of this revision for first-order inferences are then examined.
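
    For context, the core Bayes linear adjustment in its standard form (the paper's extension to matrix objects is not reproduced): beliefs about a quantity B are adjusted by observed data D. All numbers below are invented.

      # Adjusted expectation: E_D(B) = E(B) + Cov(B,D) Var(D)^-1 (d - E(D))
      # Adjusted variance:    Var_D(B) = Var(B) - Cov(B,D) Var(D)^-1 Cov(D,B)
      import numpy as np

      E_B, E_D = np.array([1.0, 0.5]), np.array([2.0])
      Var_B = np.array([[0.5, 0.1], [0.1, 0.3]])
      Var_D = np.array([[0.4]])
      Cov_BD = np.array([[0.2], [0.05]])
      d = np.array([2.6])                        # observed value of D

      S = Cov_BD @ np.linalg.inv(Var_D)
      E_B_adj = E_B + S @ (d - E_D)
      Var_B_adj = Var_B - S @ Cov_BD.T
      print(E_B_adj, "\n", Var_B_adj)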

  18. Constructing stochastic models from deterministic process equations by propensity adjustment

    Wu Jialiang

    2011-11-01

    Background: Gillespie's stochastic simulation algorithm (SSA) for chemical reactions admits three kinds of elementary processes, namely, mass action reactions of 0th, 1st, or 2nd order. All other types of reaction processes, for instance those containing non-integer kinetic orders or following other types of kinetic laws, are assumed to be convertible to one of the three elementary kinds, so that SSA can validly be applied. However, the conversion to elementary reactions is often difficult, if not impossible. Within deterministic contexts, a strategy of model reduction is often used. Such a reduction simplifies the actual system of reactions by merging or approximating intermediate steps and omitting reactants such as transient complexes. It would be valuable to adopt a similar reduction strategy for stochastic modelling. Indeed, efforts have been devoted to manipulating the chemical master equation (CME) in order to achieve a proper propensity function for a reduced stochastic system. However, manipulations of the CME are almost always complicated, and successes have been limited to relatively simple cases. Results: We propose a rather general strategy for converting a deterministic process model into a corresponding stochastic model and characterize the mathematical connections between the two. The deterministic framework is assumed to be a generalized mass action system and the stochastic analogue is in the format of the chemical master equation. The analysis identifies situations where a direct conversion is valid; where internal noise affecting the system needs to be taken into account; and where the propensity function must be mathematically adjusted. The conversion from deterministic to stochastic models is illustrated with several representative examples, including reversible reactions with feedback controls, Michaelis-Menten enzyme kinetics, a genetic regulatory motif, and stochastic focusing. Conclusions: The construction of a stochastic ...
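
    A minimal Gillespie SSA for a birth-death process (0th- and 1st-order mass action), the elementary setting the abstract starts from; rate constants are illustrative.

      import numpy as np

      rng = np.random.default_rng(0)
      k_birth, k_death = 5.0, 0.1
      x, t, t_end = 0, 0.0, 100.0

      while t < t_end:
          a = np.array([k_birth, k_death * x])   # propensities of the two reactions
          a0 = a.sum()
          if a0 == 0:
              break
          t += rng.exponential(1.0 / a0)         # waiting time to next reaction
          x += 1 if rng.random() < a[0] / a0 else -1
      print(f"copy number at t = {t:.1f}: {x} (stationary mean {k_birth / k_death:.0f})")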

  19. Storm Water Management Model Climate Adjustment Tool (SWMM-CAT)

    The US EPA’s newest tool, the Stormwater Management Model (SWMM) – Climate Adjustment Tool (CAT), is meant to help municipal stormwater utilities better address potential climate change impacts affecting their operations. SWMM, first released in 1971, models hydrology and hydrauli...

  20. Phenomenological Quark Mass Matrix Model with Two Adjustable Parameters

    Koide, Yoshio

    1993-01-01

    A phenomenological quark mass matrix model which includes only two adjustable parameters is proposed from the point of view of the unification of quark and lepton mass matrices. The model can provide reasonable values of quark mass ratios and Kobayashi-Maskawa matrix parameters.

  1. Model-Based Estimates of the Effects of Efavirenz on Bedaquiline Pharmacokinetics and Suggested Dose Adjustments for Patients Coinfected with HIV and Tuberculosis

    Svensson, Elin M.; Aweeka, Francesca; Park, Jeong-Gun; Marzan, Florence; Dooley, Kelly E; Karlsson, Mats O

    2013-01-01

    Safe, effective concomitant treatment regimens for tuberculosis (TB) and HIV infection are urgently needed. Bedaquiline (BDQ) is a promising new anti-TB drug, and efavirenz (EFV) is a commonly used antiretroviral. Due to EFV's induction of cytochrome P450 3A4, the metabolic enzyme responsible for BDQ biotransformation, the drugs are expected to interact. Based on data from a phase I, single-dose pharmacokinetic study, a nonlinear mixed-effects model characterizing BDQ pharmacokinetics and int...

  2. RICE DEMAND IN JAMBI PROVINCE (Application of a Partial Adjustment Model)

    Wasi Riyanto

    2013-07-01

    The purpose of this study is to determine the effect of the price of rice, the price of flour, population, income of the population, and the previous year's rice demand on rice demand, the elasticity of rice demand, and the rice demand forecast in Jambi Province. This study uses secondary data, comprising time series data for 22 years, from 1988 to 2009. The study used several variables: rice demand (Qdt), the price of rice (Hb), the price of wheat flour (Hg), population (Jp), the income of the population (PDRB), and the demand for rice in the previous year (Qdt-1). The methods of this study are multiple regression and dynamic analysis with a Partial Adjustment Model, where the demand for rice is the dependent variable and the price of rice, the price of flour, population, income of the population, and the previous year's rice demand are the independent variables. The Partial Adjustment Model analysis showed that the effects of changes in the prices of rice and flour on changes in the demand for rice are not significant. The population and the previous year's rice demand have a positive and significant impact on the demand for rice, while the income of the population has a negative and significant effect on rice demand. The demand for rice is inelastic with respect to the price of rice, the income of the population, and the price of flour, because rice is not a normal good but a necessity, so that there is no substitution (replacement of rice with other commodities) in Jambi Province. Based on the analysis, it is recommended that the government control the rate of population increase, given that population is one of the factors that affect the demand for rice. It is also expected that the government promote non-rice food consumption to control the increasing demand for rice. Finally, it is suggested that the government develop the diversification of staple foods other than rice.
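
    A sketch of partial adjustment estimation in general: regress current demand on the covariates and lagged demand, where the lag coefficient lambda gives the adjustment speed (1 - lambda) and long-run effects are short-run coefficients divided by (1 - lambda). The data below are simulated, not the Jambi series.

      import numpy as np

      rng = np.random.default_rng(0)
      T = 100
      price, income = rng.normal(0, 1, T), rng.normal(0, 1, T)
      qd = np.zeros(T)
      for t in range(1, T):                      # true lambda = 0.6
          qd[t] = (1.0 - 0.2 * price[t] + 0.3 * income[t]
                   + 0.6 * qd[t - 1] + rng.normal(0, 0.05))

      X = np.column_stack([np.ones(T - 1), price[1:], income[1:], qd[:-1]])
      b, *_ = np.linalg.lstsq(X, qd[1:], rcond=None)
      lam = b[3]
      print("short-run price effect:", round(b[1], 3),
            " long-run:", round(b[1] / (1 - lam), 3))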

  3. An adjusted location model for SuperDARN backscatter echoes

    E. X. Liu

    2012-12-01

    The radars that form the Super Dual Auroral Radar Network (SuperDARN) receive scatter from ionospheric irregularities in both the E- and F-regions, as well as from the Earth's surface, either ground or sea. For ionospheric scatter, the current SuperDARN standard software assumes straight-line propagation from the radar to the scattering zone, with an altitude assigned by a standard height model. Knowledge of the group delay to a scattering volume is not sufficient for an exact determination of the location of the irregularities. In this study, the difference between the locations of the backscatter echoes determined by the SuperDARN standard software and by ray tracing has been evaluated, using ionosonde data collected at Sodankylä, which is in the field of view of the Hankasalmi SuperDARN radar. By studying the elevation angle information of backscattered echoes from the 2008 data sets of the Hankasalmi radar, we propose an adjusted fitting location model determined by slant range and elevation angle. To test the reliability of the adjusted model, an independent data set from 2009 was selected. The results show that the difference between the adjusted model and ray tracing is significantly reduced, and that the adjusted model can provide a more accurate location for backscatter targets.

  4. Contrast Enhancement Algorithm Based on Gap Adjustment for Histogram Equalization

    Chiu, Chung-Cheng; Ting, Chih-Chung

    2016-01-01

    Image enhancement methods have been widely used to improve the visual effects of images. Owing to its simplicity and effectiveness, histogram equalization (HE) is one of the methods used for enhancing image contrast. However, HE may result in over-enhancement and feature loss problems that lead to an unnatural look and loss of details in the processed images. Researchers have proposed various HE-based methods to solve the over-enhancement problem; however, they have largely ignored the feature loss problem. Therefore, a contrast enhancement algorithm based on gap adjustment for histogram equalization (CegaHE) is proposed. It builds on a visual contrast enhancement algorithm based on histogram equalization (VCEA), which generates visually pleasing enhanced images, and improves upon the enhancement effects of VCEA. CegaHE adjusts the gaps between two gray values based on an adjustment equation, which takes the properties of human visual perception into consideration, to solve the over-enhancement problem. It also alleviates the feature loss problem and further enhances the textures in the dark regions of the images to improve the quality of the processed images for human visual perception. Experimental results demonstrate that CegaHE is a reliable method for contrast enhancement and that it significantly outperforms VCEA and other methods. PMID:27338412
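
    For reference, plain histogram equalization, the baseline mapping whose output gray-level gaps CegaHE then adjusts (the gap-adjustment equation itself is not reproduced here).

      import numpy as np

      def equalize(img):                          # img: uint8 grayscale array
          hist = np.bincount(img.ravel(), minlength=256)
          cdf = hist.cumsum()
          cdf_min = cdf[cdf > 0].min()
          lut = np.clip(np.round((cdf - cdf_min) / (img.size - cdf_min) * 255),
                        0, 255).astype(np.uint8)
          return lut[img]

      img = np.random.default_rng(0).integers(60, 120, (64, 64), dtype=np.uint8)
      out = equalize(img)
      print("range before:", img.min(), img.max(), " after:", out.min(), out.max())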

  5. Automatic Adjustment of Wide-Base Google Street View Panoramas

    Boussias-Alexakis, E.; Tsironisa, V.; Petsa, E.; Karras, G.

    2016-06-01

    This paper focuses on the issue of sparse matching in cases of extremely wide-base panoramic images, such as those acquired by Google Street View in narrow urban streets. In order to effectively use affine point operators for bundle adjustment, panoramas must be suitably rectified to simulate affinity. To this end, a custom piecewise planar projection (triangular prism projection) is applied. On the assumption that the image baselines run parallel to the street façades, the estimated locations of the vanishing lines of the façade plane allow effectively removing projectivity and applying the ASIFT point operator to panorama pairs. Results from comparisons with multi-panorama adjustment, based on manually measured image points, and with ground truth indicate that such an approach, if further elaborated, may well provide a realistic answer to the matching problem in the case of demanding panorama configurations.

  6. Rice Demand in Jambi Province (Application of a Partial Adjustment Model)

    Wasi Riyanto

    2015-04-01

    The purpose of this study is to determine the effect of the price of rice, the price of flour, population, income of the population, and the previous year's rice demand on rice demand, the elasticity of rice demand, and the rice demand forecast in Jambi Province. This study uses secondary data, comprising time series data for 22 years, from 1988 to 2009. The study used several variables: rice demand (Qdt), the price of rice (Hb), the price of wheat flour (Hg), population (Jp), the income of the population (PDRB), and the demand for rice in the previous year (Qdt-1). The methods of this study are multiple regression and dynamic analysis with a Partial Adjustment Model, where the demand for rice is the dependent variable and the price of rice, the price of flour, population, income of the population, and the previous year's rice demand are the independent variables. The Partial Adjustment Model analysis showed that the effects of changes in the prices of rice and flour on changes in the demand for rice are not significant. The population and the previous year's rice demand have a positive and significant impact on the demand for rice, while the income of the population has a negative and significant effect on rice demand. The demand for rice is inelastic with respect to the price of rice, the income of the population, and the price of flour, because rice is not a normal good but a necessity, so that there is no substitution (replacement of rice with other commodities) in Jambi Province. Based on the analysis, it is recommended that the government control the rate of population increase, given that population is one of the factors that affect the demand for rice. It is also expected that the government promote non-rice food consumption to control the increasing demand for rice. Finally, it is suggested that the government develop the diversification of staple foods other than rice. Keywords: Demand, Rice, Income of the Population

  7. Thickness and Shape Synthetical Adjustment for DC Mill Based on Dynamic Nerve-Fuzzy Control

    JIA Chun-yu; WANG Ying-rui; ZHOU Hui-feng

    2004-01-01

    Due to the complexity of the thickness and shape synthetical adjustment system and the difficulty of building a mathematical model, a thickness and shape synthetical adjustment scheme for a DC mill based on dynamic nerve-fuzzy control is put forward, and a self-organizing fuzzy control model is established. The structure of the network can be optimized dynamically: in the course of learning, the network automatically adjusts its structure to the specific problem and makes its structure optimal. The inputs and outputs of the network are fuzzy sets, and the trained network can carry out the composite relation, i.e., the fuzzy inference. To decrease the off-line training time of the BP network, the fuzzy sets are encoded. The simulation results indicate that self-organizing fuzzy control based on a dynamic neural network is better than traditional decoupling PID control.

  8. Prediction of Insurance Loss Based on Zero-Adjusted Random-Effect Regression Models

    孟生旺; 李政宵

    2015-01-01

    In the classification ratemaking of general insurance, the insurance company mainly focuses on predicting the aggregate claim losses of policies. The main method of predicting aggregate loss is to establish a Tweedie regression model. However, a Tweedie regression model may produce large deviations when predicting zero claims, as a zero claim has a very high probability, far greater than the probability at zero under a Tweedie distribution. Based on the assumption that aggregate insurance loss follows a zero-adjusted distribution, a zero-adjusted regression model can be established. If a categorical variable that contains many levels is treated as a random effect, and quadratic functions of the continuous variables are also introduced into the regression, the accuracy of prediction can be further improved. Based on an empirical study of motor third-party liability insurance losses, several regression models with random or fixed effects are compared under different distributions; the empirical results show the superiority of zero-adjusted random-effect regression models in predicting insurance losses.

  9. Variance-based fingerprint distance adjustment algorithm for indoor localization

    Xiaolong Xu; Yu Tang; Xinheng Wang; Yun Zhang

    2015-01-01

    The multipath effect and movements of people in indoor environments lead to inaccurate localization. Through tests, calculation, and analysis of the received signal strength indication (RSSI) and the variance of RSSI, we propose a novel variance-based fingerprint distance adjustment algorithm (VFDA). Based on the rule that the variance decreases as the RSSI mean increases, VFDA calculates the RSSI variance from the mean value of the received RSSIs to obtain a correction weight, and then adjusts the fingerprint distances with this correction weight. Besides, a threshold value is applied to VFDA to improve its performance further. VFDA and VFDA with the threshold value were applied in two typical real indoor environments deployed with several Wi-Fi access points: one a square lab room, the other a long, narrow corridor of a building. Experimental results and performance analysis show that, in indoor environments, both VFDA and VFDA with the threshold have better positioning accuracy and environmental adaptability than the current typical positioning methods based on the k-nearest neighbor algorithm and the weighted k-nearest neighbor algorithm, with similar computational costs.
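
    A sketch of weighted k-nearest-neighbour fingerprint localization with variance-derived weights on the fingerprint distance, which is the general idea behind VFDA; the exact VFDA weighting differs, and all fingerprints and positions below are invented.

      import numpy as np

      fingerprints = np.array([[-50, -62], [-55, -60], [-70, -48]])  # mean RSSI per AP
      positions    = np.array([[0.0, 0.0], [2.0, 0.0], [0.0, 4.0]])  # reference points (m)
      measured     = np.array([-53, -61])
      var_weight   = 1.0 / (1.0 + np.array([4.0, 1.0]))   # larger RSSI variance -> smaller weight

      d = np.sqrt(((fingerprints - measured) ** 2 * var_weight).sum(axis=1))
      k = 2
      idx = np.argsort(d)[:k]                              # k nearest fingerprints
      w = 1.0 / (d[idx] + 1e-6)
      estimate = (positions[idx] * w[:, None]).sum(axis=0) / w.sum()
      print("estimated position:", np.round(estimate, 2))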

  10. Eighty years of observations on the adjusted monetary base: 1918-1997

    Richard G. Anderson; Robert H. Rasche

    1999-01-01

    Recent trends in empirical macroeconomic research - embedding long-run relationships in models via cointegration, modeling the correlation between seasonal cycles and business cycles, building endogenous growth models, and the interest of policymakers in inflation targeting - have increased the importance of long time series of macroeconomic data. Among the more important of such data are quantitative measures of monetary policy, such as the adjusted monetary base. Previously published data f...

  11. Use of a physiologically-based pharmacokinetic model to simulate artemether dose adjustment for overcoming the drug-drug interaction with efavirenz

    Siccardi, Marco; Olagunju, Adeniyi; Seden, Kay; Ebrahimjee, Farid; Rannard, Steve; Back, David; Owen, Andrew

    2013-01-01

    Purpose: To treat malaria, HIV-infected patients normally receive artemether (80 mg twice daily) concurrently with antiretroviral therapy, and drug-drug interactions can potentially occur. Artemether is a substrate of CYP3A4 and CYP2B6; antiretrovirals such as efavirenz induce these enzymes and have the potential to reduce artemether pharmacokinetic exposure. The aim of this study was to develop an in vitro in vivo extrapolation (IVIVE) approach to model the interaction between efavirenz and ar...

  12. Capacitance-Based Frequency Adjustment of Micro Piezoelectric Vibration Generator

    Xinhua Mao

    2014-01-01

    Full Text Available Micro piezoelectric vibration generators are widely used in the field of microelectronics. The natural frequency of such a generator is fixed once it is manufactured; when it does not match the frequency of the vibration source, resonance cannot occur, the output voltage of the piezoelectric generator declines sharply, and the generator cannot supply power to electronic devices normally. In order to bring the natural frequency of the generator close to the frequency of the vibration source, capacitance-based frequency adjustment (capacitance FM) is adopted in this paper. Different capacitance FM schemes are designed for different locations of the adjustment layer, and the corresponding capacitance FM models are established. The characteristics and effect of the capacitance FM are simulated with these models. Experimental results show that the natural frequency of the generator varies from 46.5 Hz to 42.4 Hz as the bypass capacitance increases from 0 nF to 30 nF, so the natural frequency of a piezoelectric vibration generator can be continuously adjusted by this method.

  13. A Four-Part Model of Autonomy during Emerging Adulthood: Associations with Adjustment

    Lamborn, Susie D.; Groh, Kelly

    2009-01-01

    We found support for a four-part model of autonomy that links connectedness, separation, detachment, and agency to adjustment during emerging adulthood. Based on self-report surveys of 285 American college students, expected associations among the autonomy variables were found. In addition, agency, as measured by self-reliance, predicted lower…

  14. Emotional closeness to parents and grandparents: A moderated mediation model predicting adolescent adjustment.

    Attar-Schwartz, Shalhevet

    2015-09-01

    Warm and emotionally close relationships with parents and grandparents have been found in previous studies to be linked with better adolescent adjustment. The present study, informed by Family Systems Theory and Intergenerational Solidarity Theory, uses a moderated mediation model analyzing the contribution of the dynamics of these intergenerational relationships to adolescent adjustment. Specifically, it examines the mediating role of emotional closeness to the closest grandparent in the relationship between emotional closeness to a parent (the offspring of the closest grandparent) and adolescent adjustment difficulties. The model also examines the moderating role of emotional closeness to parents in the relationship between emotional closeness to grandparents and adjustment difficulties. The study was based on a sample of 1,405 Jewish Israeli secondary school students (ages 12-18) who completed a structured questionnaire. It was found that emotional closeness to the closest grandparent was more strongly associated with reduced adjustment difficulties among adolescents with higher levels of emotional closeness to their parents. In addition, the relationship between emotional closeness to parents and adolescent adjustment was partially mediated by emotional closeness to grandparents. Examining the family conditions under which adolescents' relationships with grandparents are stronger and more beneficial for them can help elucidate variations in grandparent-grandchild ties and expand our understanding of the mechanisms that shape child outcomes. PMID:26237053

  15. Adjusting a cancer mortality-prediction model for disease status-related eligibility criteria

    Kimmel Marek

    2011-05-01

    Full Text Available Abstract Background: Volunteering participants in disease studies tend to be healthier than the general population, partially due to specific enrollment criteria. Using modeling to accurately predict outcomes of cohort studies enrolling volunteers requires adjusting for the bias introduced in this way. Here we propose a new method to account for the effect of a specific form of healthy-volunteer bias, the one resulting from imposing disease status-related eligibility criteria, on disease-specific mortality, by explicitly modeling the length of the time interval between the moment when the subject becomes ineligible for the study and the outcome. Methods: Using survival time data from 1190 newly diagnosed lung cancer patients at MD Anderson Cancer Center, we model the time from clinical lung cancer diagnosis to death using an exponential distribution to approximate the length of this interval for a study where lung cancer death serves as the outcome. Incorporating this interval into our previously developed lung cancer risk model, we adjust for the effect of disease status-related eligibility criteria in predicting the number of lung cancer deaths in the control arm of CARET. The effect of the adjustment using the MD Anderson-derived approximation is compared to that based on SEER data. Results: Using the adjustment developed in conjunction with our existing lung cancer model, we are able to accurately predict the number of lung cancer deaths observed in the control arm of CARET. Conclusions: The resulting adjustment was accurate in predicting the lower rates of disease observed in the early years while still maintaining reasonable prediction ability in the later years of the trial. This method could be used to adjust for, or predict, the duration and relative effect of any possible biases related to disease-specific eligibility criteria in modeling studies of volunteer-based cohorts.
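
    A minimal sketch of the interval adjustment in Python: fit an exponential distribution to observed diagnosis-to-death times, then shift model-predicted event times by the mean interval before counting deaths inside the follow-up horizon. The function and variable names are hypothetical:

        import numpy as np

        def fit_rate(times_dx_to_death):
            # Maximum-likelihood rate of the exponential interval model.
            return 1.0 / np.mean(times_dx_to_death)

        def expected_deaths(predicted_event_times, rate, horizon):
            # Shift each predicted event by the mean diagnosis-to-death
            # interval and count events still inside the follow-up window.
            shifted = np.asarray(predicted_event_times) + 1.0 / rate
            return int(np.sum(shifted <= horizon))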

  16. Logistics Distribution Optimisation Model Based on Acceleration-Parameter Self-Adjusting PSO

    黄日胜; 黄锡波

    2015-01-01

    We present an improved particle swarm optimisation (PSO) algorithm in which the acceleration parameters are adjusted according to the individual fitness value, and use it to address the multimodal premature-convergence problem of the logistics distribution optimisation model. First, from the perspectives of algorithm behaviour analysis and vector analysis, we design a simple and practical acceleration-parameter self-adjustment strategy based on the current particle fitness and the best fitness value in the population. Second, theoretical and numerical analyses yield the global convergence conditions of the algorithm, providing a theoretical basis for its practical application. Finally, we study the logistics distribution model in combination with the improved PSO algorithm. Experiments show that the fitness-based acceleration-parameter adaptation strategy balances well the two key evolutionary processes of PSO, "deep exploitation" and "global exploration". The improvement is simple, does not increase the time complexity of the algorithm, and can effectively optimise the logistics distribution model.
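
    A minimal sketch of a PSO whose acceleration coefficients are adjusted per particle from its fitness relative to the best fitness in the swarm; the specific adjustment rule below is an illustrative assumption, not the authors' formula:

        import numpy as np

        def pso(f, dim, n=30, iters=200, bounds=(-10.0, 10.0), w=0.72):
            lo, hi = bounds
            x = np.random.uniform(lo, hi, (n, dim))
            v = np.zeros((n, dim))
            pbest = x.copy()
            pval = np.apply_along_axis(f, 1, x)
            for _ in range(iters):
                fx = np.apply_along_axis(f, 1, x)
                better = fx < pval
                pbest[better], pval[better] = x[better], fx[better]
                g = pbest[pval.argmin()]
                # Self-adjustment: particles far from the best fitness get a
                # larger social pull (c2); particles near it keep exploring (c1).
                rel = (fx - pval.min()) / (fx.max() - pval.min() + 1e-12)
                c1 = 2.5 - rel
                c2 = 0.5 + 2.0 * rel
                r1 = np.random.rand(n, dim)
                r2 = np.random.rand(n, dim)
                v = w * v + c1[:, None] * r1 * (pbest - x) + c2[:, None] * r2 * (g - x)
                x = np.clip(x + v, lo, hi)
            return pbest[pval.argmin()], pval.min()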

  17. Health-Based Capitation Risk Adjustment in Minnesota Public Health Care Programs

    Gifford, Gregory A.; Edwards, Kevan R.; Knutson, David J.

    2004-01-01

    This article documents the history and implementation of health-based capitation risk adjustment in Minnesota public health care programs, and identifies key implementation issues. Capitation payments in these programs are risk adjusted using an historical, health plan risk score, based on concurrent risk assessment. Phased implementation of capitation risk adjustment for these programs began January 1, 2000. Minnesota's experience with capitation risk adjustment suggests that: (1) implementa...

  18. Incremental Training for SVM-Based Classification with Keyword Adjusting

    SUN Jin-wen; YANG Jian-wu; LU Bin; XIAO Jian-guo

    2004-01-01

    This paper analyzes the theory of incremental learning for SVM (support vector machine) and points out a shortcoming of present research on SVM incremental learning: only the optimization of support vectors is considered. Based on the significance of keywords in training, a new incremental training method with keyword adjusting is proposed, which eliminates the difference between incremental learning and batch learning through the keyword adjusting. The experimental results show that the improved method outperforms the method without keyword adjusting and achieves the same precision as the batch method.

  19. Adjustable wideband reflective converter based on cut-wire metasurface

    Zhang, Linbo; Zhou, Peiheng; Chen, Haiyan; Lu, Haipeng; Xie, Jianliang; Deng, Longjiang

    2015-10-01

    We present the design, analysis, and measurement of a broadband reflective converter using a cut-wire (CW) metasurface. Based on the characteristics of LC resonances, the proposed reflective converter can rotate a linearly polarized (LP) wave into its cross-polarized wave at three resonance frequencies, or convert the LP wave to a circularly polarized (CP) wave at two other resonance frequencies. Furthermore, the broadband properties of the polarization conversion can be sustained when the incident wave is a CP wave. The polarization states can be adjusted easily by changing the length and width of the CW. The measured results show that a polarization conversion ratio (PCR) over 85% can be achieved from 6.16 GHz to 16.56 GHz for both LP and CP incident waves. The origin of the polarization conversion is interpreted by the theory of microwave antennas, with equivalent impedance and electromagnetic (EM) field distributions. With its simple geometry and multiple broad frequency bands, the proposed converter has potential applications in the area of selective polarization control.

  20. Combinatorial Optimization Method for Operation of Pumping Station with Adjustable Blade and Variable Speed Based on Experimental Optimization of Subsystem

    Yi Gong; Jilin Cheng

    2014-01-01

    A decomposition-dynamic programming aggregation method based on experimental optimization of subsystems was proposed to solve the mathematical model of optimal operation for a single pumping station with adjustable blade and variable speed. Taking minimal daily electricity cost as the objective function and the water quantity pumped by units as the coordinated variable, this model was decomposed into several submodels of daily optimal operation with adjustable blade and variable speed for a single pump unit, which w...

  1. The Promoting Role of Financial Development in Industrial Structure Adjustment Based on a VAR Model

    时文龙; 曹臣

    2014-01-01

    Analyzing empirically the relationship between financial development and industrial structure adjustment, based on indicator data from 1978 to 2013 and a VAR model, we found that the relationship is a bidirectional causality and that financial development has an obvious positive effect on industrial restructuring in the long run. In order to give full play to this mutual promotion, the government should deepen the financial system, improve the corporate governance structure of financial institutions, enhance their ability to withstand risks, and gradually improve their overall strength. Meanwhile, the government should speed up equity structural reforms, enable businesses to carry out mergers and acquisitions by means of shareholding, holding and acquisition, and increase financial support for technological transformation and product structure adjustment.
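
    A minimal sketch of the reported analysis with statsmodels, assuming annual series for a financial-development indicator (fd) and an industrial-structure indicator (ind); the column names are hypothetical:

        from statsmodels.tsa.api import VAR

        def var_analysis(df, maxlags=4):
            # df: DataFrame with columns "fd" and "ind", e.g. 1978-2013.
            res = VAR(df[["fd", "ind"]]).fit(maxlags=maxlags, ic="aic")
            # Granger tests in both directions, matching the reported
            # bidirectional causality.
            fd_to_ind = res.test_causality("ind", ["fd"], kind="f")
            ind_to_fd = res.test_causality("fd", ["ind"], kind="f")
            return res, fd_to_ind.summary(), ind_to_fd.summary()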

  2. Bias-adjusted satellite-based rainfall estimates for predicting floods: Narayani Basin

    Shrestha, M.S.; Artan, G.A.; Bajracharya, S.R.; Gautam, D.K.; Tokar, S.A.

    2011-01-01

    In Nepal, as the spatial distribution of rain gauges is not sufficient to provide a detailed perspective on the highly varied spatial nature of rainfall, satellite-based rainfall estimates provide the opportunity for timely estimation. This paper presents flood prediction for the Narayani Basin at the Devghat hydrometric station (32,000 km2) using bias-adjusted satellite rainfall estimates and the Geospatial Stream Flow Model (GeoSFM), a spatially distributed, physically based hydrologic model. The GeoSFM with gridded gauge-observed rainfall inputs, using kriging interpolation, was calibrated on 2003 data and validated on 2004 data for stream flow simulation, both with a Nash-Sutcliffe Efficiency above 0.7. With the National Oceanic and Atmospheric Administration Climate Prediction Center's rainfall estimates (CPC-RFE2.0) and the same calibrated parameters, model performance for 2003 deteriorated, but it improved after recalibration with CPC-RFE2.0, indicating the need to recalibrate the model with satellite-based rainfall estimates. Adjusting the CPC-RFE2.0 by seasonal, monthly and 7-day moving-average ratios achieved further improvement in model performance. Furthermore, a new gauge-satellite merged rainfall estimate obtained by ingesting local rain gauge data resulted in significant improvement in flood predictability. The results indicate the applicability of satellite-based rainfall estimates in flood prediction with appropriate bias correction. © 2011 The Authors. Journal of Flood Risk Management © 2011 The Chartered Institution of Water and Environmental Management.
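
    A minimal sketch of a moving-average ratio adjustment of the kind described, assuming daily pandas Series with a datetime index; the window length and caps are illustrative:

        import pandas as pd

        def ratio_adjust(satellite, gauge, window="7D", eps=0.1):
            # Gauge/satellite ratio over a moving window; eps guards
            # against division by zero during dry spells.
            ratio = (gauge.rolling(window).sum() + eps) / \
                    (satellite.rolling(window).sum() + eps)
            # Cap extreme corrections before rescaling the satellite field.
            return satellite * ratio.clip(0.2, 5.0)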

  3. Setting of Agricultural Insurance Premium Rate and the Adjustment Model

    Huang, Ya-Lin

    2012-01-01

    First, using the law of large numbers, we analyze the setting principle of the agricultural insurance premium rate, taking the setting of the adult sow premium rate as a case study, and draw the conclusion that, with the continuous promotion of agricultural insurance and increases in the types of agricultural insurance and the number of the insured, the premium rate should also be adjusted opportunely. Then, on the basis of Bayes' theorem, we adjust and calibrate the claim frequency and the...

  4. Setting of Agricultural Insurance Premium Rate and the Adjustment Model

    HUANG Ya-lin

    2012-01-01

    First, using the law of large numbers, I analyze the setting principle of the agricultural insurance premium rate, taking the setting of the adult sow premium rate as a case study, and draw the conclusion that, with the continuous promotion of agricultural insurance and increases in the types of agricultural insurance and the number of the insured, the premium rate should also be adjusted opportunely. Then, on the basis of Bayes' theorem, I adjust and calibrate the claim frequency and the average claim in order to correctly adjust the agricultural insurance premium rate, taking forest insurance as the case for premium rate adjustment analysis. In setting and adjusting the agricultural insurance premium rate, in order to bring the expected results close to the real results, it is necessary to apply probability estimates over a large number of risk units and to focus on the establishment of an agricultural risk database, so as to adjust the agricultural insurance premium rate in a timely manner.
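
    A minimal sketch of a Bayes-style premium-rate update: with a Gamma(alpha, beta) prior on the Poisson claim rate, the posterior after observing a claim count over a given exposure is again Gamma, and the adjusted premium scales with the posterior mean. The prior values and loading are assumptions:

        def adjusted_rate(claims, exposure, alpha=2.0, beta=100.0):
            # Gamma-Poisson conjugate update: posterior mean claim
            # frequency, in claims per policy-year.
            return (alpha + claims) / (beta + exposure)

        def adjusted_premium(claims, exposure, avg_claim, loading=1.2):
            return loading * adjusted_rate(claims, exposure) * avg_claim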

  5. R.M. Solow Adjusted Model of Economic Growth

    Ion Gh. Rosca

    2007-05-01

    The analysis part of the model is based on the study of the equilibrium in the continuous case, with some interpretations of the discrete one, by using the state diagram. The optimization problem at the economic level is also treated; it is built up of a specified number of representative consumers and firms in order to reveal the interaction between these elements.

  6. Detailed Theoretical Model for Adjustable Gain-Clamped Semiconductor Optical Amplifier

    Lin Liu

    2012-01-01

    Full Text Available The adjustable gain-clamped semiconductor optical amplifier (AGC-SOA) uses two SOAs in a ring-cavity topology: one to amplify the signal and the other to control the gain. The device was designed to maximize the saturated output power while adjusting gain to regulate power differences between packets without loss of linearity. This type of subsystem can be used for power equalisation and linear amplification in packet-based dynamic systems such as passive optical networks (PONs). A detailed theoretical model is presented in this paper to simulate the operation of the AGC-SOA, which gives a better understanding of the underlying gain-clamping mechanism. Simulations and comparisons with steady-state and dynamic gain modulation experimental performance are given which validate the model.

  7. Lithium-ion Open Circuit Voltage (OCV) curve modelling and its ageing adjustment

    Lavigne, L.; Sabatier, J.; Francisco, J. Mbala; Guillemard, F.; Noury, A.

    2016-08-01

    This paper is a contribution to lithium-ion battery modelling that takes ageing effects into account. It first analyses the impact of ageing on electrode stoichiometry and then on the lithium-ion cell Open Circuit Voltage (OCV) curve. Through some hypotheses and an appropriate definition of the cell state of charge, it shows that each electrode equilibrium potential, and also the whole cell equilibrium potential, can be modelled by a polynomial that requires only one adjustment parameter during ageing. An adjustment algorithm, based on the idea that for two fixed OCVs the state of charge between these two equilibrium states is unique for a given ageing level, is then proposed. Its efficiency is evaluated on a battery pack constituted of four cells.

  8. Radar adjusted data versus modelled precipitation: a case study over Cyprus

    M. Casaioli

    2006-01-01

    Full Text Available In the framework of the European VOLTAIRE project (Fifth Framework Programme), simulations of relatively heavy precipitation events, which occurred over the island of Cyprus, were performed by means of numerical atmospheric models. One of the aims of the project was the comparison of modelled rainfall fields with multi-sensor observations. Thus, for the 5 March 2003 event, the 24-h accumulated precipitation forecast of the BOlogna Limited Area Model (BOLAM) was compared with the available observations reconstructed from ground-based radar data and estimated from rain gauge data. Since radar data may be affected by errors depending on the distance from the radar, these data can be range-adjusted by using other sensors. In this case, the Precipitation Radar aboard the Tropical Rainfall Measuring Mission (TRMM) satellite was used to adjust the ground-based radar data with a two-parameter scheme. Thus, in this work, two observational fields were employed: the rain gauge gridded analysis and the observational analysis obtained by merging the range-adjusted radar and rain gauge fields. In order to verify the modelled precipitation, both non-parametric skill scores and the contiguous rain area (CRA) analysis were applied. Skill score results show some differences when using the two observational fields. CRA results are instead in good agreement, showing that in general a 0.27° eastward shift optimizes the forecast with respect to the two observational analyses. This result is also supported by a subjective inspection of the shifted forecast field, whose gross features agree with the analysis pattern better than the non-shifted one. However, some open questions, especially regarding the effect of other range-adjustment techniques, remain and need to be addressed in future work.
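
    A minimal sketch of a CRA-style shift search: translate the forecast grid over a range of offsets and keep the shift that minimizes the mean squared error against the observed field (np.roll wraps at the edges, a simplification over a proper windowed search):

        import numpy as np

        def best_shift(forecast, observed, max_shift=5):
            best = (0, 0, np.inf)
            for dy in range(-max_shift, max_shift + 1):
                for dx in range(-max_shift, max_shift + 1):
                    shifted = np.roll(np.roll(forecast, dy, axis=0), dx, axis=1)
                    mse = np.mean((shifted - observed) ** 2)
                    if mse < best[2]:
                        best = (dy, dx, mse)
            return best  # (rows, cols, mse); multiply by grid spacing for degrees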

  9. A finite element model updating technique for adjustment of parameters near boundaries

    Gwinn, Allen Fort, Jr.

    Even though there have been many advances in research related to methods of updating finite element models based on measured normal-mode vibration characteristics, there is yet to be a widely accepted method that works reliably with a wide range of problems. This dissertation focuses on the specific class of problems having to do with changes in stiffness near the clamped boundary of plate structures. This class of problems is especially important as it relates to the performance of turbine engine blades, where a change in stiffness at the base of the blade can be indicative of structural damage. The method presented herein is a new technique for resolving the differences between the physical structure and the finite element model. It is a semi-iterative technique that incorporates a "physical expansion" of the measured eigenvectors, along with appropriate scaling of these expanded eigenvectors, into an iterative loop that uses Engel's model modification method to calculate adjusted stiffness parameters for the finite element model. Three example problems are presented that use eigenvalues and mass-normalized eigenvectors calculated from experimentally obtained accelerometer readings. The test articles were all thin plates with one edge fully clamped; each had a cantilevered length of 8.5 inches and a width of 4 inches, and the three plates differed from one another in thickness from 0.100 inches to 0.188 inches. These dimensions were selected in order to approximate a gas turbine engine blade. The semi-iterative modification technique is shown to do an excellent job of calculating the necessary adjustments to the finite element model, so that the analytically determined eigenvalues and eigenvectors for the adjusted model match the corresponding values from the experimental data with good agreement. Furthermore, the semi-iterative method is quite robust: for the examples presented here, the method consistently converged

  10. Adjustable ultraviolet sensitive detectors based on amorphous silicon

    TOPIC, M; Stiebig, H.; Krause, M.; Wagner, H.

    2001-01-01

    Thin-film detectors made of hydrogenated amorphous silicon (a-Si:H) and amorphous silicon carbide (a-SiC:H) with adjustable sensitivity in the ultraviolet (UV) spectrum were developed. Thin PIN diodes deposited on glass substrates in N-I-P layer sequence, with a total thickness down to 33 nm and a semitransparent Ag front contact, were fabricated. The optimized diodes with a 10 nm Ag contact exhibit spectral response values above 80 mA/W in the wavelength range from 295 to 395 nm with a max...

  11. FLC based adjustable speed drives for power quality enhancement

    Sukumar Darly

    2010-01-01

    Full Text Available This study describes a new approach, based on a fuzzy algorithm, to suppress the harmonic content of the current in the output of an inverter. Inverter systems using fuzzy controllers provide ride-through capability during voltage sags, reduce harmonics, improve power factor and reliability, produce less electromagnetic interference noise and low common-mode noise, and extend the output voltage range. A feasibility test is implemented by building a model of a three-phase impedance source inverter, which is designed and controlled on the basis of the proposed considerations. It is verified from the practical point of view that these new approaches are more effective and acceptable for minimizing the harmonic distortion and improving the quality of power. Due to the complex algorithm, their realization often calls for a compromise between cost and performance. The proposed optimizing strategies may be applied in variable-frequency dc-ac inverters, UPSs, and ac drives.

  12. Processing Approach of Non-linear Adjustment Models in the Space of Non-linear Models

    LI Chaokui; ZHU Qing; SONG Chengfang

    2003-01-01

    This paper investigates the mathematical features of non-linear models and discusses how to process the non-linear factors that contribute to the non-linearity of a non-linear model. On the basis of the error definition, the paper puts forward a new adjustment criterion, SGPE. Finally, it investigates the solution of a non-linear regression model in the non-linear model space and compares the estimated values in non-linear model space with those in linear model space.

  13. Assessing climate change effects on long-term forest development: adjusting growth, phenology, and seed production in a gap model

    Meer, van der P.J.; Jorritsma, I.T.M.; Kramer, K.

    2002-01-01

    The sensitivity of forest development to climate change is assessed using a gap model. Process descriptions in the gap model of growth, phenology, and seed production were adjusted for climate change effects using a detailed process-based growth model and a regression analysis. Simulation runs over 4

  14. Permutation-Based Adjustments for the Significance of Partial Regression Coefficients in Microarray Data Analysis

    Wagner, Brandie D.; Zerbe, Gary O; Mexal, Sharon; Leonard, Sherry S.

    2008-01-01

    The aim of this paper is to generalize permutation methods for multiple testing adjustment of significant partial regression coefficients in a linear regression model used for microarray data. Using a permutation method outlined by Anderson and Legendre [1999] and the permutation P-value adjustment from Simon et al. [2004], the significance of disease related gene expression will be determined and adjusted after accounting for the effects of covariates, which are not restricted to be categori...

  15. Bias adjustment of satellite rainfall data through stochastic modeling: Methods development and application to Nepal

    Müller, Marc F.; Thompson, Sally E.

    2013-10-01

    Estimating precipitation over large spatial areas remains a challenging problem for hydrologists. Sparse ground-based gauge networks do not provide a robust basis for interpolation, and the reliability of remote sensing products, although improving, is still imperfect. Current techniques to estimate precipitation rely on combining these different kinds of measurements to correct the bias in the satellite observations. We propose a novel procedure that, unlike existing techniques, (i) allows correcting the possibly confounding effects of different sources of errors in satellite estimates, (ii) explicitly accounts for the spatial heterogeneity of the biases and (iii) allows the use of non-overlapping historical observations. The proposed method spatially aggregates and interpolates gauge data at the satellite grid resolution by focusing on parameters that describe the frequency and intensity of the rainfall observed at the gauges. The resulting gridded parameters can then be used to adjust the probability density function of satellite rainfall observations at each grid cell, accounting for spatial heterogeneity. Unlike alternate methods, we explicitly adjust biases in rainfall frequency in addition to its intensity. Adjusted rainfall distributions can then readily be applied as input in stochastic rainfall generators or frequency-domain hydrological models. Finally, we also provide a procedure for using them to correct remotely sensed rainfall time series. We apply the method to adjust the distributions of daily rainfall observed by the TRMM satellite in Nepal, which exemplifies the challenges associated with a sparse gauge network and large biases due to complex topography. In a cross-validation analysis on daily rainfall from TRMM 3B42 v6, we find that, using a small subset of the available gauges, the proposed method outperforms local rainfall estimations that use the complete network of available gauges to directly interpolate local rainfall or correct TRMM by adjusting
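
    A minimal sketch of the two-moment idea: match the gauge wet-day frequency by thresholding, then quantile-map wet-day intensities between gamma fits. The inputs (gauge-fitted parameters interpolated to the grid cell) are assumed, and the paper's actual procedure differs in detail:

        import numpy as np
        from scipy import stats

        def adjust_cell(sat_rain, p_wet_gauge, gamma_gauge, gamma_sat):
            # 1) Frequency: satellite threshold reproducing the gauge
            #    wet-day probability in this cell.
            thr = np.quantile(sat_rain, 1.0 - p_wet_gauge)
            wet = sat_rain > thr
            out = np.zeros_like(sat_rain, dtype=float)
            # 2) Intensity: gamma-to-gamma quantile mapping of wet amounts.
            a_s, loc_s, b_s = gamma_sat      # shape, loc, scale of satellite fit
            a_g, loc_g, b_g = gamma_gauge    # shape, loc, scale of gauge fit
            u = stats.gamma.cdf(sat_rain[wet], a_s, loc_s, b_s)
            out[wet] = stats.gamma.ppf(u, a_g, loc_g, b_g)
            return out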

  16. Using Green's Functions to initialize and adjust a global, eddying ocean biogeochemistry general circulation model

    Brix, H.; Menemenlis, D.; Hill, C.; Dutkiewicz, S.; Jahn, O.; Wang, D.; Bowman, K.; Zhang, H.

    2015-11-01

    The NASA Carbon Monitoring System (CMS) Flux Project aims to attribute changes in the atmospheric accumulation of carbon dioxide to spatially resolved fluxes by utilizing the full suite of NASA data, models, and assimilation capabilities. For the oceanic part of this project, we introduce ECCO2-Darwin, a new ocean biogeochemistry general circulation model based on combining the following pre-existing components: (i) a full-depth, eddying, global-ocean configuration of the Massachusetts Institute of Technology general circulation model (MITgcm), (ii) an adjoint-method-based estimate of ocean circulation from the Estimating the Circulation and Climate of the Ocean, Phase II (ECCO2) project, (iii) the MIT ecosystem model "Darwin", and (iv) a marine carbon chemistry model. Air-sea gas exchange coefficients and initial conditions of dissolved inorganic carbon, alkalinity, and oxygen are adjusted using a Green's Functions approach in order to optimize modeled air-sea CO2 fluxes. Data constraints include observations of carbon dioxide partial pressure (pCO2) for 2009-2010, global air-sea CO2 flux estimates, and the seasonal cycle of the Takahashi et al. (2009) Atlas. The model sensitivity experiments (or Green's Functions) include simulations that start from different initial conditions as well as experiments that perturb air-sea gas exchange parameters and the ratio of particulate inorganic to organic carbon. The Green's Functions approach yields a linear combination of these sensitivity experiments that minimizes model-data differences. The resulting initial conditions and gas exchange coefficients are then used to integrate the ECCO2-Darwin model forward. Despite the small number (six) of control parameters, the adjusted simulation is significantly closer to the data constraints (37% cost function reduction, i.e., reduction in the model-data difference, relative to the baseline simulation) and to independent observations (e.g., alkalinity). The adjusted air-sea gas
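
    A minimal sketch of the Green's Functions step: find the linear combination of sensitivity experiments that minimizes the weighted model-data misfit. Here J holds one column of perturbation responses per experiment and d is the data-minus-baseline vector; the names are hypothetical:

        import numpy as np

        def greens_function_weights(J, d, obs_var=None):
            # Weighted least squares: eta = (J^T W J)^{-1} J^T W d.
            W = np.eye(len(d)) if obs_var is None else np.diag(1.0 / obs_var)
            eta = np.linalg.solve(J.T @ W @ J, J.T @ W @ d)
            residual = d - J @ eta
            return eta, residual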

  17. The Optimal Solution of the Model with Physical and Human Capital Adjustment Costs

    RAO Lan-lan; CAI Dong-han

    2004-01-01

    We prove that the model with physical and human capital adjustment costs has an optimal solution when the production function exhibits increasing returns, and that the structure of the vector fields of the model changes substantially when the production function turns from decreasing returns to increasing returns. It is also shown that the economy is improved when the coefficients of the adjustment costs become small.

  18. Study on Posture Adjusting System of Spacecraft Based on Stewart Mechanism

    Gao, Feng; Feng, Wei; Dai, Wei-Bing; Yi, Wang-Min; Liu, Guang-Tong; Zheng, Sheng-Yu

    In this paper, the design principles of adjusting parallel mechanisms are introduced, covering the mechanical subsystem, the control subsystem and the software subsystem. According to these design principles, key technologies for systems of adjusting parallel mechanisms are analyzed. Finally, design specifications for systems of adjusting parallel mechanisms are proposed based on the requirements of spacecraft integration, applicable to cabin docking, solar array panel docking and camera docking.

  19. Demography-adjusted tests of neutrality based on genome-wide SNP data

    Rafajlović, Marina

    2014-08-01

    Tests of the neutral evolution hypothesis are usually built on the standard model, which assumes that mutations are neutral and the population size remains constant over time. However, it is unclear how such tests are affected if the last assumption is dropped. Here, we extend the unifying framework for tests based on the site frequency spectrum, introduced by Achaz and Ferretti, to populations of varying size. Key ingredients are the first two moments of the site frequency spectrum. We show how these moments can be computed analytically if a population has experienced two instantaneous size changes in the past. We apply our method to data from ten human populations gathered in the 1000 Genomes Project, estimate their demographies and define demography-adjusted versions of Tajima's D, Fay and Wu's H, and Zeng's E. Our results show that demography-adjusted test statistics facilitate the direct comparison between populations and that most of the differences among populations seen in the original unadjusted tests can be explained by their underlying demographies. Upon carrying out whole-genome screens for deviations from neutrality, we identify candidate regions of recent positive selection. We provide track files with values of the adjusted and unadjusted tests for upload to the UCSC genome browser. © 2014 Elsevier Inc.

  20. Family support and acceptance, gay male identity formation, and psychological adjustment: a path model.

    Elizur, Y; Ziv, M

    2001-01-01

    While heterosexist family undermining has been demonstrated to be a developmental risk factor in the life of persons with same-gender orientation, the issue of protective family factors is both controversial and relatively neglected. In this study of Israeli gay males (N = 114), we focused on the interrelations of family support, family acceptance and family knowledge of gay orientation, and gay male identity formation, and their effects on mental health and self-esteem. A path model was proposed based on the hypotheses that family support, family acceptance, family knowledge, and gay identity formation have an impact on psychological adjustment, and that family support has an effect on gay identity formation that is mediated by family acceptance. The assessment of gay identity formation was based on an established stage model that was streamlined for cross-cultural practice by defining three basic processes of same-gender identity formation: self-definition, self-acceptance, and disclosure (Elizur & Mintzer, 2001). The testing of our conceptual path model demonstrated an excellent fit with the data. An alternative model that hypothesized effects of gay male identity on family acceptance and family knowledge did not fit the data. Interpreting these results, we propose that the main effect of family support/acceptance on gay identity is related to the process of disclosure, and that both general family support and family acceptance of same-gender orientation play a significant role in the psychological adjustment of gay men. PMID:11444052

  1. Bundle Block Adjustment of Airborne Three-Line Array Imagery Based on Rotation Angles

    Yongjun Zhang

    2014-05-01

    Full Text Available With the rapid development of electronic instruments and remote sensing technologies, airborne three-line array sensors and their applications are being widely promoted, and plentiful research related to data processing and high-precision geo-referencing technologies is under way. The exterior orientation parameters (EOPs), which are measured by the integrated positioning and orientation system (POS) of airborne three-line sensors, however, have inevitable systematic errors, so the precision of direct geo-referencing is not sufficient for surveying and mapping applications. Consequently, a few ground control points are necessary to refine the exterior orientation parameters, and this paper discusses bundle block adjustment models based on systematic error compensation and the orientation image, considering the imaging principle of the sensor and the characteristics of the integrated POS. Unlike the models available in the literature, which mainly use a quaternion to represent the rotation matrix of exterior orientation, three rotation angles are used directly in order to effectively model and eliminate the systematic errors of the POS observations. Very good experimental results have been achieved with several real datasets, verifying the correctness and effectiveness of the proposed adjustment models.

  2. Concession period adjustment model for infrastructure BOT projects based on risk allocation

    宋金波; 宋丹荣; 富怡雯; 戴大双

    2012-01-01

    To enable the government and the project company to share risk in infrastructure BOT projects, criteria are proposed for four cases of concession period adjustment: shortening, extension, no adjustment and invalid adjustment. If the net present value ratio (NPVR) of the project exceeds the upper limit, a decision model for shortening the concession period is built with the objective of maximizing social welfare; if the NPVR falls below the lower limit, a decision model for extending the concession period is built with the objective of maximizing the project company's benefit. Monte Carlo simulation is applied to solve a real case, and the cumulative probability of realizing the expected return is calculated and compared under different discount rates, which proves the effectiveness of risk allocation by means of concession period adjustment when a single price-adjustment method is unsuitable.

  3. Testing a Social Ecological Model for Relations between Political Violence and Child Adjustment in Northern Ireland

    Cummings, E. Mark; Merrilees, Christine E.; Schermerhorn, Alice C.; Goeke-Morey, Marcie C.; Shirlow, Peter; Cairns, Ed

    2010-01-01

    Relations between political violence and child adjustment are matters of international concern. Past research demonstrates the significance of community, family and child psychological processes in child adjustment, supporting study of inter-relations between multiple social ecological factors and child adjustment in contexts of political violence. Testing a social ecological model, 300 mothers and their children (M = 12.28 years, SD = 1.77) from Catholic and Protestant working-class neighborh...

  4. Management Practices and Financial Performance of Agricultural Cooperatives: A Partial Adjustment Model

    Azzam, Azzeddine M.; Turner, Michael S.

    1991-01-01

    This paper uses the Nerlovian partial adjustment model to test the hypothesis that the rate of a cooperative's adjustment to a desired financial position is partially determined by its management practices. The results indicate that management practices that are board responsibilities are not contributing to the speed of adjustment in reaching the desired financial performance, which is the responsibility of the board of directors. But management, when independently pursuing management's resp...

  5. Rank-Defect Adjustment Model for Survey-Line Systematic Errors in Marine Survey Net

    2002-01-01

    In this paper, the structure of systematic and random errors in a marine survey net is discussed in detail, and an adjustment method for the observations of a marine survey net is studied, in which a rank-defect characteristic is identified for the first time. On the basis of the survey-line systematic error model, the formulae of the rank-defect adjustment model are deduced according to modern adjustment theory. An example calculation with really observed data is carried out to demonstrate the efficiency of this adjustment model. Moreover, it is proved that the semi-systematic error correction method used at present in marine gravimetry in China is a special case of the adjustment model presented in this paper.

  6. Structural Adjustment Policy Experiments: The Use of Philippine CGE Models

    Cororaton, Caesar B.

    1994-01-01

    This paper reviews the general structure of the following computable general equilibrium (CGE) models: the APEX model, Habito's second version of the PhilCGE model, Cororaton's CGE model and Bautista's first CGE model. These models are chosen as they represent the range of recently constructed CGE models of the Philippine economy. They also represent two schools of thought in CGE modeling: the well-defined neoclassical, Walrasian, general equilibrium school where the market-clearing variable...

  7. RGB-D SLAM Based on Extended Bundle Adjustment with 2D and 3D Information.

    Di, Kaichang; Zhao, Qiang; Wan, Wenhui; Wang, Yexin; Gao, Yunjun

    2016-01-01

    In the study of the SLAM problem using an RGB-D camera, depth information and visual information, as two types of primary measurement data, are rarely tightly coupled during refinement of the camera pose estimation. In this paper, a new method of RGB-D camera SLAM is proposed based on extended bundle adjustment with integrated 2D and 3D information on the basis of a new projection model. First, the geometric relationship between the image plane coordinates and the depth values is constructed through RGB-D camera calibration. Then, 2D and 3D feature points are automatically extracted and matched between consecutive frames to build a continuous image network. Finally, extended bundle adjustment based on the new projection model, which takes both image and depth measurements into consideration, is applied to the image network for high-precision pose estimation. Field experiments show that the proposed method has a notably better performance than the traditional method, and the experimental results demonstrate the effectiveness of the proposed method in improving localization accuracy. PMID:27529256

  8. RGB-D SLAM Based on Extended Bundle Adjustment with 2D and 3D Information

    Kaichang Di

    2016-08-01

    Full Text Available In the study of the SLAM problem using an RGB-D camera, depth information and visual information, as two types of primary measurement data, are rarely tightly coupled during refinement of the camera pose estimation. In this paper, a new method of RGB-D camera SLAM is proposed based on extended bundle adjustment with integrated 2D and 3D information on the basis of a new projection model. First, the geometric relationship between the image plane coordinates and the depth values is constructed through RGB-D camera calibration. Then, 2D and 3D feature points are automatically extracted and matched between consecutive frames to build a continuous image network. Finally, extended bundle adjustment based on the new projection model, which takes both image and depth measurements into consideration, is applied to the image network for high-precision pose estimation. Field experiments show that the proposed method has a notably better performance than the traditional method, and the experimental results demonstrate the effectiveness of the proposed method in improving localization accuracy.
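
    A minimal sketch of the extended-bundle-adjustment idea: stack 2D reprojection residuals and depth residuals in one least-squares problem over camera poses and points. The projection details below are simplified assumptions, not the paper's exact model:

        import numpy as np
        from scipy.optimize import least_squares
        from scipy.spatial.transform import Rotation

        def residuals(params, observations, K, n_cams, n_pts):
            poses = params[:6 * n_cams].reshape(n_cams, 6)  # rvec + tvec per camera
            pts = params[6 * n_cams:].reshape(n_pts, 3)
            res = []
            for cam_i, pt_i, uv, depth in observations:
                R = Rotation.from_rotvec(poses[cam_i, :3]).as_matrix()
                pc = R @ pts[pt_i] + poses[cam_i, 3:]       # point in camera frame
                proj = K @ pc
                res.extend(proj[:2] / proj[2] - uv)         # 2D reprojection residual
                res.append(pc[2] - depth)                   # depth (3D) residual
            return np.asarray(res)

        # solution = least_squares(residuals, x0,
        #                          args=(observations, K, n_cams, n_pts))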

  9. Fission-product cross section evaluation, integral tests and adjustment based on integral data

    Recent activities of the Fission-Product Nuclear Data Working Group in JNDC are briefly reviewed. This review consists of the following three parts. 1. The JENDL-2 fission product data file was recently completed (Ref. 1), which contains neutron cross sections for 100 nuclides from Kr to Tb. The evaluation was made by using the latest data on capture cross sections and resonance parameters. The optical model parameters and level density parameters were re-evaluated. The results of the previous integral tests using the data of the STEK sample reactivity and CFRMF sample activation were also reflected in the evaluations. Details are reported in Refs. (2-4). 2. The integral test of the JENDL-2 fission-product cross sections is now in progress using the EBR-II sample irradiation data and the STEK and CFRMF data. The 70-group constants were generated by the MINX code with self-shielding factor tables. The values of the normal and adjoint fluxes and their uncertainties necessary for the 70-group evaluation were obtained by spline-fitting interpolation using the values of Ref. (14). 3. The adjustment of evaluated cross sections based on the integral data is also in progress using the Bayesian least-squares method. The data adjustment will be made especially for (1) the nuclides for which only integral data are available (e.g., Xe-131, 132, 134, Pm-147, Eu-152, 154) and (2) those for which the differential and integral data are mutually inconsistent (e.g., Tc-99, Ag-109, Eu-151, 153). The cross section covariances are generated by the ''strength function model'' taking into account the statistical model uncertainty (Ref. 5). The uncertainties of neutron spectra and adjoint spectra were also taken into account as ''method uncertainties''. Interim results of the integral test and adjustment are presented and discussed. The near-future scope of the work and the plan for JENDL-3 are briefly described. (author)

  10. Impacts of parameters adjustment of relativistic mean field model on neutron star properties

    An analysis of the parameter adjustment effects in the isovector as well as the isoscalar sector of the effective-field-based relativistic mean field (E-RMF) model on symmetric nuclear matter and neutron-rich matter properties has been performed. The impacts of the adjustment on slowly rotating neutron stars are systematically investigated. It is found that the mass-radius relation obtained from the adjusted parameter set G2** is compatible not only with the neutron star masses from 4U 0614+09 and 4U 1636-536, but also with the ones from the thermal radiation measurement in RX J1856 and with the radius range of the canonical neutron star of X7 in 47 Tuc, respectively. It is also found that the moment of inertia of PSR J0737-3039A and the strain amplitude of the gravitational wave in the Earth's vicinity of PSR J0437-4715 as predicted by the E-RMF parameter sets used are in reasonable agreement with the constraints extracted from these observations and from isospin diffusion data. (author)

  11. Dynamic Adjustment of Third-Party Logistics Liability Insurance Premium Based on an NCD Model

    霍艳芳; 付叶; 杨立向

    2013-01-01

    In view of the problems hindering the widespread adoption of logistics liability insurance in China, such as high premiums and an unscientific calculation basis, we propose a solution on the microscopic level: introducing the NCD (no-claim discount) dynamic charging model into the adjustment of the logistics liability insurance premium. In light of the characteristics of the third-party logistics industry, and comprehensively considering the influence of the number of claims, the amount claimed, and the total coverage, we redefine the transition rule and improve the original NCD model. Finally, a case analysis demonstrates that the improved NCD model makes the charging basis more reasonable and yields premiums that better reflect the risk level of a third-party logistics enterprise.
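
    A minimal sketch of an NCD transition of the kind described, where the move between discount classes depends on the number of claims, the amount claimed and the total coverage; the class factors and the severity test are illustrative assumptions, not the paper's rule:

        FACTORS = [1.50, 1.25, 1.00, 0.85, 0.75, 0.70]  # premium multiplier per class

        def next_class(cur, n_claims, claim_amount, coverage):
            if n_claims == 0:
                return min(cur + 1, len(FACTORS) - 1)   # one step up, capped
            severe = claim_amount > 0.1 * coverage      # assumed severity test
            step = n_claims + (1 if severe else 0)
            return max(cur - step, 0)                   # drop faster for severe claims

        def premium(base, cur, n_claims, claim_amount, coverage):
            return base * FACTORS[next_class(cur, n_claims, claim_amount, coverage)]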

  12. A Dynamic Flexible Partial-Adjustment Model of International Diffusion of the Internet

    Lee, Minkyu; Heshmati, Almas

    2006-01-01

    The paper introduces a dynamic, flexible partial-adjustment model and uses it to analyze the diffusion of Internet connectivity. It specifies and estimates desired levels of Internet diffusion and the speed at which countries achieve the target levels. The target levels and speed of adjustment are both country and time specific. Factors affecting Internet diffusion across countries are identified, and, using nonlinear least squares, the Gompertz growth model is generalized and estimated using...

  13. Statistical Methods to Adjust for Measurement Error in Risk Prediction Models and Observational Studies

    Braun, Danielle

    2013-01-01

    The first part of this dissertation focuses on methods to adjust for measurement error in risk prediction models. In chapter one, we propose a nonparametric adjustment for measurement error in time to event data. Measurement error in time to event data used as a predictor will lead to inaccurate predictions. This arises in the context of self-reported family history, a time to event covariate often measured with error, used in Mendelian risk prediction models. Using validation data, we propos...

  14. Dynamic Air-Route Adjustments - Model, Algorithm, and Sensitivity Analysis

    GENG Rui; CHENG Peng; CUI Deguang

    2009-01-01

    Dynamic airspace management (DAM) is an important approach to extend limited airspace resources by using them more efficiently and flexibly. This paper analyzes the use of the dynamic air-route adjustment (DARA) method as a core procedure in DAM systems. The DARA method makes dynamic decisions on when and how to adjust the current air-route network with minimum cost. This model differs from the air traffic flow management (ATFM) problem because it considers dynamic opening and closing of air-route segments instead of only arranging flights on a given air traffic network, and it takes into account several new constraints, such as the shortest opening time constraint. The DARA problem is solved using a two-step heuristic algorithm. The sensitivities of important coefficients in the model are analyzed to determine proper values for these coefficients. The computational results based on practical data from the Beijing ATC region show that the two-step heuristic algorithm gives results as good as CPLEX in less or equal time in most cases.

  15. Simulation of γ spectrum-shifting based on the parameter adjustment of Gaussian function space

    Based on the statistical characteristics of the energy spectrum and the features of spectrum-shifting in spectrometry, a parameter adjustment method in Gaussian function space was applied to the simulation of spectrum-shifting. The transient characteristics of the energy spectrum are described by the Gaussian function space, the Gaussian function space is then transformed by the parameter adjustment method, and thereby the spectrum-shifting observed in energy spectrum measurement is simulated. A worked example shows that the parameters can be adjusted flexibly by this method to meet the various requirements of energy spectrum-shifting simulation. The method is a parameterized simulation approach with good performance in practical applications. (authors)
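
    A minimal sketch of the idea: represent the spectrum as a sum of Gaussian peaks and simulate a shift by transforming the peak parameters; the linear energy-shift map used here is an assumption:

        import numpy as np

        def spectrum(channels, peaks):
            # peaks: list of (amplitude, centroid, sigma) triples.
            x = np.asarray(channels, dtype=float)
            return sum(a * np.exp(-((x - mu) ** 2) / (2.0 * s * s))
                       for a, mu, s in peaks)

        def shift_peaks(peaks, g0=2.0, g1=1.01):
            # Assumed linear shift of centroids: mu -> g0 + g1 * mu.
            return [(a, g0 + g1 * mu, s) for a, mu, s in peaks]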

  16. A Robust PCT Method Based on Complex Least Squares Adjustment Method

    Haiqiang, F.; Jianjun, Z.; Changcheng, W.; Qinghua, X.; Rong, Z.

    2013-07-01

    The Polarization Coherence Tomography (PCT) method performs well in deriving the vertical structure of vegetation. However, errors caused by temporal decorrelation, vegetation height and ground phase always propagate into the data analysis and contaminate the results. In order to overcome this disadvantage, we exploit the Complex Least Squares Adjustment Method to compute vegetation height and ground phase based on the Random Volume over Ground and Volume Temporal Decorrelation (RVoG + VTD) model. By fusing different polarimetric InSAR data, we can use more observations to obtain more robust estimates of temporal decorrelation and vegetation height, and we then introduce them into PCT to acquire a more accurate vegetation vertical structure. Finally the new approach is validated on E-SAR data of Oberpfaffenhofen, Germany. The results demonstrate that the robust method can greatly improve the acquisition of vegetation vertical structure.

  17. LQR self-adjusting based control for the planar double inverted pendulum

    Zhang, Jiao-long; Zhang, Wei

    Firstly, the mathematical model of the planar double inverted pendulum is established by means of the analytical dynamics method. Based on linear quadratic optimal theory, an LQR self-adjusting controller with an optimization factor is presented. The output of the LQR controller is further refined through the optimization factor, which is a function of the states of the planar pendulum, and on that account the control action exerted on the pendulum is improved. Simulation results together with a pilot-scale experiment verify the efficacy of the suggested scheme. The results show that the designed controller is simple and performs well in real time in the laboratory; moreover, it ensures fast response, good stability and robustness under different operating conditions.
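
    A minimal sketch of the scheme: a standard LQR gain from the continuous-time Riccati equation, with the control scaled by a state-dependent factor; the plant matrices and the factor's form are assumptions standing in for the paper's optimization factor:

        import numpy as np
        from scipy.linalg import solve_continuous_are

        def lqr_gain(A, B, Q, R):
            # K = R^{-1} B^T P, with P from the continuous-time Riccati equation.
            P = solve_continuous_are(A, B, Q, R)
            return np.linalg.solve(R, B.T @ P)

        def control(K, x, k0=1.0, k1=0.5):
            factor = k0 + k1 * np.tanh(np.linalg.norm(x))  # assumed optimize factor
            return -factor * (K @ x)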

  18. Experimental Tuned Mass Damper Based on Eddy Currents Damping Effect and Adjustable Stiffness

    LO FEUDO, Stefania; Allani, Anissa; Cumunel, Gwendal; Argoul, Pierre; Bruno, Domenico

    2015-01-01

    An experimental Tuned Mass Damper (TMD) is proposed in order to damp vibrations induced by external excitations. This TMD is based on the eddy-current damping effect and is designed in such a way as to allow manual adjustment of its own stiffness and inherent damping. The estimation of the TMD's modal parameters is therefore carried out by applying the Continuous Wavelet Transform to the signals obtained experimentally. The influence of the manual adjustment of the T...

  19. Confirmation of the Dimensional Adjustment Model of Organizational Structure in Municipal Sports Organizations

    Sofia Nikolaidou

    2015-09-01

    Full Text Available The presence of municipal sport organizations indicates the priority given by local authorities to the well-being of citizens; it also constitutes the basis upon which sport is built at the national level. Each of these organizations has an organizational structure, a system registering the positions of employment and the relations that govern them. Its basic dimensions are concentration, complexity and formalization. The purpose of this study is to confirm or contradict the proposed, literature-based model of organizational structure in municipal sports organizations. The Sport Commission Organization Structure Survey questionnaire was used, with 100 Greek municipal sport organizations participating. Factor analysis detected four factors: departmentalization, concentration, specialization and formalization. The results partially confirmed the proposed model. Cronbach's alpha was used to calculate the reliability of the factors, which ranged from .40 to .70. Confirmatory factor analysis was used to determine whether the new model fits the data, and it revealed that the fit was only marginally acceptable. Finally, although there was a marginal confirmation of the new model, the questions of this survey require further improvement.

  20. Applicability of the cross section adjustment method based on random sampling technique for burnup calculation

    The applicability of the cross section adjustment method based on the random sampling (RS) technique to burnup calculations is investigated. The cross section adjustment method is a technique for reducing prediction uncertainties in reactor core analysis and has been widely applied to fast reactors. As a practical method, the cross section adjustment method based on the RS technique is newly developed for application to light water reactors (LWRs). In this method, covariances among cross sections and neutronics parameters are statistically estimated by the RS technique, and cross sections are adjusted without calculating the sensitivity coefficients of neutronics parameters, which are necessary in the conventional cross section adjustment method. Since sensitivity coefficients are not used, the RS-based method is expected to be practically applicable to LWR core analysis, in which considerable computational costs are required for the estimation of sensitivity coefficients. Through a simple pin-cell burnup calculation, the applicability of the present method to burnup calculations is investigated. The calculation results indicate that the present method can adequately adjust cross sections including burnup characteristics. (author)
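
    A minimal sketch of the RS idea for a single measured neutronics parameter: sample cross sections from their covariance, run a (surrogate) calculation per sample, estimate the cross-covariance statistically, and adjust toward the measurement without sensitivity coefficients. Here calc() stands in for the real lattice/burnup code:

        import numpy as np

        def rs_adjust(sigma0, cov_sigma, calc, measured, var_meas, n=500):
            rng = np.random.default_rng(0)
            samples = rng.multivariate_normal(sigma0, cov_sigma, size=n)
            responses = np.array([calc(s) for s in samples])
            # Statistical covariance between each cross section and the response.
            cov_sr = np.cov(samples.T, responses)[:-1, -1]
            var_r = responses.var(ddof=1)
            gain = cov_sr / (var_r + var_meas)          # Kalman-style gain
            return sigma0 + gain * (measured - responses.mean())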

  1. Experimental Study on Well Pattern Adjustment using Large-Scale Natural Sandstone Flat Model with Ultra-Low Permeability

    Tian Wenbo; Xu Xuan; Yang Zhengming; Xiao Qianhua; Zhang Yapu

    2013-01-01

    Aimed at ultra-low permeability reservoirs, the recovery effect of an inverted nine-spot equilateral well pattern is studied through experiments on a large-scale natural sandstone flat model. Two adjustment schemes were proposed based on the original well pattern. This paper puts forward the concept of pressure sweep efficiency for evaluating the driving efficiency. Pressure gradient fields under different drawdown pressures were measured. The seepage area of the model was divided into immobilized area...

  2. Adjustment and validation of a simulation tool for CSP plants based on parabolic trough technology

    García-Barberena, Javier; Ubani, Nora

    2016-05-01

    The present work presents the validation process carried out for a simulation tool especially designed for the energy yield assessment of concentrating solar plants based on parabolic trough (PT) technology. The validation has been carried out by comparing the model estimations with real data collected from a commercial CSP plant. In order to adjust the model parameters used for the simulation, 12 different days were selected from one year of operational data measured at the real plant. The 12 days were simulated and the estimations compared with the measured data, focusing on the most important variables from the simulation point of view: temperatures, pressures and mass flow of the solar field, gross power, parasitic power, and net power delivered by the plant. Based on these 12 days, the key parameters for the simulation were properly fixed and the simulation of a whole year performed. The results obtained for the complete-year simulation showed very good agreement for the total gross and net electric production, with a bias of 1.47% and 2.02%, respectively. The results proved that the simulation software describes the real operation of the power plant with great accuracy and correctly reproduces its transient behavior.

  3. A model of the western Laurentide Ice Sheet, using observations of glacial isostatic adjustment

    Gowan, Evan J.; Tregoning, Paul; Purcell, Anthony; Montillet, Jean-Philippe; McClusky, Simon

    2016-05-01

    We present the results of a new numerical model of the late glacial western Laurentide Ice Sheet, constrained by observations of glacial isostatic adjustment (GIA), including relative sea level indicators, uplift rates from permanent GPS stations, contemporary differential lake level change, and postglacial tilt of glacial lake level indicators. The latter two datasets have been underutilized in previous GIA-based ice sheet reconstructions. The ice sheet model, called NAICE, is constructed using simple ice physics on the basis of changing margin location and basal shear stress conditions in order to produce the ice volumes required to match GIA. The model matches the majority of the observations while maintaining a relatively realistic ice sheet geometry. Our model has a peak volume at 18,000 yr BP, with a dome located just east of Great Slave Lake with a peak thickness of 4000 m and a surface elevation of 3500 m. The modelled ice volume loss between 16,000 and 14,000 yr BP amounts to about 7.5 m of sea level equivalent, which is consistent with the hypothesis that a large portion of Meltwater Pulse 1A was sourced from this part of the ice sheet. The southern part of the ice sheet was thin and had a low elevation profile. This model provides an accurate representation of ice thickness and paleo-topography, and can be used to assess present day uplift and infer past climate.

  4. Adjustment model of thermoluminescence experimental data

    Moreno y Moreno, A. [Departamento de Apoyo en Ciencias Aplicadas, Benemerita Universidad Autonoma de Puebla, 4 Sur 104, Centro Historico, 72000 Puebla (Mexico); Moreno B, A. [Facultad de Ciencias Quimicas, UNAM, 04510 Mexico D.F. (Mexico)

    2002-07-01

    This model adjusts experimental thermoluminescence results according to the equation I(T) = Σ_i a_i exp(-(T - c_i)² / b_i), where a_i, b_i and c_i are the parameters of the i-th peak fitted to a Gaussian curve. The curve adjustment can be operated manually or analytically using the macro functions and the Solver add-in (solver.xla) previously installed in the computational system. This work shows: 1. experimental data from a LiF glow curve obtained from the Physics Institute of UNAM, for which the data adjustment model is operated via macros; 2. a four-peak LiF curve based on Harshaw data simulated in Microsoft Excel, discussed in previous works, used as a reference without macros. (Author)
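
    A hedged modern equivalent of the described Excel/Solver fit is a nonlinear least-squares fit of a sum of Gaussian peaks; the sketch below uses scipy.optimize.curve_fit on synthetic two-peak data (the peak parameterization follows the reconstructed equation above; all values are illustrative).

        import numpy as np
        from scipy.optimize import curve_fit

        def glow_curve(T, *p):
            # p holds (a_i, b_i, c_i) triples: amplitude, width, position.
            I = np.zeros_like(T, dtype=float)
            for a, b, c in zip(p[0::3], p[1::3], p[2::3]):
                I += a * np.exp(-((T - c) ** 2) / b)
            return I

        # Synthetic two-peak data standing in for a measured LiF curve.
        T = np.linspace(300.0, 550.0, 200)
        I_meas = glow_curve(T, 1.0, 400.0, 380.0, 0.6, 600.0, 470.0)
        I_meas = I_meas + np.random.default_rng(1).normal(0.0, 0.01, T.size)

        p0 = [1.0, 500.0, 375.0, 0.5, 500.0, 465.0]  # initial guess per peak
        popt, _ = curve_fit(glow_curve, T, I_meas, p0=p0)
        print(popt.reshape(-1, 3))                   # fitted (a, b, c) rows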

  5. Designing a model to improve first year student adjustment to university

    Nasrin Nikfal Azar; Hamideh Reshadatjoo

    2014-01-01

    The increase in the number of universities in Iran over the last decade increases the need for higher education institutions to manage their enrollment more effectively. The purpose of this study is to design a model to improve first-year university student adjustment by examining the effects of academic self-efficacy, academic motivation, satisfaction, high school GPA and demographic variables on students' adjustment to university. The study selects a sample of 357 students out of 4585 b...

  6. An Adjusted profile likelihood for non-stationary panel data models with fixed effects

    Dhaene, Geert; Jochmans, Koen

    2011-01-01

    We calculate the bias of the profile score for the autoregressive parameters p and covariate slopes in the linear model for N x T panel data with p lags of the dependent variable, exogenous covariates, fixed effects, and unrestricted initial observations. The bias is a vector of multivariate polynomials in p with coefficients that depend only on T. We center the profile score and, on integration, obtain an adjusted profile likelihood. When p = 1, the adjusted profile likelihood coincides wi...

  7. A Simulation-Based Comparison of Covariate Adjustment Methods for the Analysis of Randomized Controlled Trials

    Pierre Chaussé

    2016-04-01

    Full Text Available Covariate adjustment methods are frequently used when baseline covariate information is available for randomized controlled trials. Using a simulation study, we compared the analysis of covariance (ANCOVA) with three nonparametric covariate adjustment methods with respect to point and interval estimation for the difference between means. The three alternative methods were based on important members of the generalized empirical likelihood (GEL) family, specifically the empirical likelihood (EL) method, the exponential tilting (ET) method, and the continuously updated estimator (CUE) method. Two criteria were considered for the comparison of the four statistical methods: the root mean squared error and the empirical coverage of the nominal 95% confidence intervals for the difference between means. Based on the results of the simulation study, for sensitivity analysis purposes, we recommend the use of ANCOVA (with robust standard errors when heteroscedasticity is present) together with the CUE-based covariate adjustment method.
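
    As a concrete reference for the recommended baseline method, the sketch below runs an ANCOVA with HC3 robust standard errors in statsmodels on synthetic heteroscedastic trial data; the variable names (treat, x, y) are illustrative, not from the paper.

        import numpy as np
        import pandas as pd
        import statsmodels.formula.api as smf

        rng = np.random.default_rng(2)
        n = 200
        df = pd.DataFrame({
            "treat": rng.integers(0, 2, n),   # randomized arm
            "x": rng.normal(size=n),          # baseline covariate
        })
        # Heteroscedastic outcome: noise variance grows with x.
        df["y"] = (1.0 + 0.5 * df.treat + 0.8 * df.x
                   + rng.normal(scale=1.0 + df.x ** 2))

        # ANCOVA with HC3 robust standard errors.
        fit = smf.ols("y ~ treat + x", data=df).fit(cov_type="HC3")
        print(fit.summary().tables[1])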

  8. A Model of Divorce Adjustment for Use in Family Service Agencies.

    Faust, Ruth Griffith

    1987-01-01

    Presents a combined educationally and therapeutically oriented model of treatment to (1) control and lessen disruptive experiences associated with divorce; (2) enable individuals to improve their skill in coping with adjustment reactions to divorce; and (3) modify the pressures and response of single parenthood. Describes the model's four-session…

  9. Community Influences on Adjustment in First Grade: An Examination of an Integrated Process Model

    Caughy, Margaret O'Brien; Nettles, Saundra M.; O'Campo, Patricia J.

    2007-01-01

    We examined the impact of neighborhood characteristics both directly and indirectly as mediated by parent coaching and the parent/child affective relationship on behavioral and school adjustment in a sample of urban dwelling first graders. We used structural equations modeling to assess model fit and estimate direct, indirect, and total effects of…

  10. School Adjustment in the Early Grades: Toward an Integrated Model of Neighborhood, Parental, and Child Processes

    Nettles, Saundra Murray; Caughy, Margaret O'Brien; O'Campo, Patricia J.

    2008-01-01

    Examining recent research on neighborhood influences on child development, this review focuses on social influences on school adjustment in the early elementary years. A model to guide community research and intervention is presented. The components of the model of integrated processes are neighborhoods and their effects on academic outcomes and…

  11. Enhancing multiple-point geostatistical modeling: 1. Graph theory and pattern adjustment

    Tahmasebi, Pejman; Sahimi, Muhammad

    2016-03-01

    In recent years, higher-order geostatistical methods have been used for modeling a wide variety of large-scale porous media, such as groundwater aquifers and oil reservoirs. Their popularity stems from their ability to account for qualitative data and the great flexibility that they offer for conditioning the models to hard (quantitative) data, which endow them with the capability of generating realistic realizations of porous formations with very complex channels, as well as features that are mainly a barrier to fluid flow. One group of such models consists of pattern-based methods that use a set of data points for generating stochastic realizations by which the large-scale structure and highly connected features are reproduced accurately. The cross correlation-based simulation (CCSIM) algorithm, proposed previously by the authors, is a member of this group that has been shown to be capable of simulating multimillion-cell models in a matter of a few CPU seconds. The method is, however, sensitive to a pattern's specifications, such as boundaries and the number of replicates. In this paper the original CCSIM algorithm is reconsidered and two significant improvements are proposed for accurately reproducing large-scale patterns of heterogeneities in porous media. First, an effective boundary-correction method based on graph theory is presented, by which one identifies the optimal cutting path/surface for removing the patchiness and discontinuities in the realization of a porous medium. Next, a new pattern adjustment method is proposed that automatically transfers the features in a pattern to one that seamlessly matches the surrounding patterns. The original CCSIM algorithm is then combined with the two methods and is tested using various complex two- and three-dimensional examples. It should, however, be emphasized that the methods proposed in this paper are applicable to other pattern-based geostatistical simulation methods.
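
    The paper's graph-based boundary correction is described only at a high level here; a classical concrete instance of an "optimal cutting path" is the minimum-error-boundary cut solved by dynamic programming (as used in image quilting). The following hedged sketch computes such a cut through the overlap of two patches.

        import numpy as np

        def min_error_cut(patch_a, patch_b):
            """Cheapest vertical cut (one column index per row) through
            the squared-difference error surface of an overlap region."""
            e = (patch_a - patch_b) ** 2
            E = e.copy()
            for i in range(1, e.shape[0]):        # accumulate path costs
                for j in range(e.shape[1]):
                    lo, hi = max(j - 1, 0), min(j + 2, e.shape[1])
                    E[i, j] += E[i - 1, lo:hi].min()
            path = [int(E[-1].argmin())]          # backtrack from the end
            for i in range(e.shape[0] - 2, -1, -1):
                j = path[-1]
                lo, hi = max(j - 1, 0), min(j + 2, e.shape[1])
                path.append(lo + int(E[i, lo:hi].argmin()))
            return path[::-1]

        rng = np.random.default_rng(3)
        print(min_error_cut(rng.random((8, 5)), rng.random((8, 5))))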

  12. Bias adjustment of satellite-based precipitation estimation using gauge observations: A case study in Chile

    Yang, Zhongwen; Hsu, Kuolin; Sorooshian, Soroosh; Xu, Xinyi; Braithwaite, Dan; Verbist, Koen M. J.

    2016-04-01

    Satellite-based precipitation estimates (SPEs) are promising alternative precipitation data for climatic and hydrological applications, especially for regions where ground-based observations are limited. However, existing satellite-based rainfall estimations are subject to systematic biases. This study aims to adjust the biases in the Precipitation Estimation from Remotely Sensed Information using Artificial Neural Networks-Cloud Classification System (PERSIANN-CCS) rainfall data over Chile, using gauge observations as reference. A novel bias adjustment framework, termed QM-GW, is proposed based on the nonparametric quantile mapping approach and a Gaussian weighting interpolation scheme. The PERSIANN-CCS precipitation estimates (daily, 0.04°×0.04°) over Chile are adjusted for the period of 2009-2014. The historical data (satellite and gauge) for 2009-2013 are used to calibrate the methodology; nonparametric cumulative distribution functions of satellite and gauge observations are estimated at every 1°×1° box region. One year (2014) of gauge data was used for validation. The results show that the biases of the PERSIANN-CCS precipitation data are effectively reduced. The spatial patterns of adjusted satellite rainfall show high consistency to the gauge observations, with reduced root-mean-square errors and mean biases. The systematic biases of the PERSIANN-CCS precipitation time series, at both monthly and daily scales, are removed. The extended validation also verifies that the proposed approach can be applied to adjust SPEs into the future, without further need for ground-based measurements. This study serves as a valuable reference for the bias adjustment of existing SPEs using gauge observations worldwide.
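
    The quantile-mapping half of the QM-GW framework can be sketched directly from its definition: map each satellite value through the satellite empirical CDF estimated over the calibration period, then read off the gauge distribution at the same probability. The Gaussian-weighted spatial interpolation between 1°×1° boxes is omitted in this hedged toy version.

        import numpy as np

        def quantile_map(sat_cal, gauge_cal, sat_new):
            # Non-exceedance probability of each new satellite value under
            # the satellite calibration CDF ...
            probs = np.searchsorted(np.sort(sat_cal), sat_new) / len(sat_cal)
            probs = np.clip(probs, 0.0, 1.0)
            # ... read off the gauge distribution at the same probabilities.
            return np.quantile(gauge_cal, probs)

        rng = np.random.default_rng(4)
        sat = rng.gamma(2.0, 3.0, 2000)     # biased satellite estimates
        gauge = rng.gamma(2.0, 2.0, 2000)   # reference gauge observations
        print(quantile_map(sat, gauge, np.array([1.0, 5.0, 20.0])))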

  13. Modeling and Predicting the EUR/USD Exchange Rate: The Role of Nonlinear Adjustments to Purchasing Power Parity

    Jesús Crespo Cuaresma; Anna Orthofer

    2010-01-01

    Reliable medium-term forecasts are essential for forward-looking monetary policy decision making. Traditionally, predictions of the exchange rate tend to be linked to the equilibrium concept implied by the purchasing power parity (PPP) theory. In particular, the traditional benchmark for exchange rate models is based on a linear adjustment of the exchange rate to the level implied by PPP. In the presence of aggregation effects, transaction costs or uncertainty, however, economic theory predict...

  14. Risk-based surveillance: Estimating the effect of unwarranted confounder adjustment

    Willeberg, Preben; Nielsen, Liza Rosenbaum; Salman, Mo

    2011-01-01

    We estimated the effects of confounder adjustment as a part of the underlying quantitative risk assessments on the performance of a hypothetical example of a risk-based surveillance system, in which a single risk factor would be used to identify high risk sampling units for testing. The differenc...

  15. LC Filter Design for Wide Band Gap Device Based Adjustable Speed Drives

    Vadstrup, Casper; Wang, Xiongfei; Blaabjerg, Frede

    This paper presents a simple design procedure for LC filters used in wide band gap device based adjustable speed drives. Wide band gap devices offer fast turn-on and turn-off times, thus producing high dV/dt at the motor terminals. The high dV/dt can be harmful to the motor windings and bearings...
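
    For orientation, sizing such a filter typically starts from the resonance relation f_c = 1/(2π√(LC)), placing the cutoff between the drive's fundamental and the switching frequency. The numbers in the sketch below are illustrative assumptions, not the paper's design values.

        import math

        L = 1.0e-3      # filter inductance [H], assumed
        f_c = 4.0e3     # target cutoff [Hz], well below an assumed 50 kHz f_sw
        C = 1.0 / (L * (2.0 * math.pi * f_c) ** 2)
        f_check = 1.0 / (2.0 * math.pi * math.sqrt(L * C))
        print(f"C = {C * 1e6:.2f} uF, cutoff check = {f_check / 1e3:.1f} kHz")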

  16. Variational assimilation of streamflow into operational distributed hydrologic models: effect of spatiotemporal adjustment scale

    H. Lee

    2012-01-01

    Full Text Available State updating of distributed rainfall-runoff models via streamflow assimilation is subject to overfitting because large dimensionality of the state space of the model may render the assimilation problem seriously under-determined. To examine the issue in the context of operational hydrology, we carry out a set of real-world experiments in which streamflow data is assimilated into gridded Sacramento Soil Moisture Accounting (SAC-SMA and kinematic-wave routing models of the US National Weather Service (NWS Research Distributed Hydrologic Model (RDHM with the variational data assimilation technique. Study basins include four basins in Oklahoma and five basins in Texas. To assess the sensitivity of data assimilation performance to dimensionality reduction in the control vector, we used nine different spatiotemporal adjustment scales, where state variables are adjusted in a lumped, semi-distributed, or distributed fashion and biases in precipitation and potential evaporation (PE are adjusted hourly, 6-hourly, or kept time-invariant. For each adjustment scale, three different streamflow assimilation scenarios are explored, where streamflow observations at basin interior points, at the basin outlet, or at both interior points and the outlet are assimilated. The streamflow assimilation experiments with nine different basins show that the optimum spatiotemporal adjustment scale varies from one basin to another and may be different for streamflow analysis and prediction in all of the three streamflow assimilation scenarios. The most preferred adjustment scale for seven out of nine basins is found to be the distributed, hourly scale, despite the fact that several independent validation results at this adjustment scale indicated the occurrence of overfitting. Basins with highly correlated interior and outlet flows tend to be less sensitive to the adjustment scale and could benefit more from streamflow assimilation. In comparison to outlet flow assimilation

  17. Understanding property market dynamics: insights from modelling the supply-side adjustment mechanism

    Nanda Nanthakumaran; Craig Watkins; Allison Orr

    2000-01-01

    The volatility of commercial property markets in the United Kingdom has stimulated the development of explanatory models of 'price' determination. These models have tended to focus on the demand side as the driver of change. A corollary of this is that, despite the fact that construction lags are known to exacerbate cyclical fluctuations, the supply-side adjustment mechanism has been subject to relatively little research effort. In this paper the authors develop a new model of commercial prope...

  18. Model Minority Stereotyping, Perceived Discrimination, and Adjustment Among Adolescents from Asian American Backgrounds.

    Kiang, Lisa; Witkow, Melissa R; Thompson, Taylor L

    2016-07-01

    The model minority image is a common and pervasive stereotype that Asian American adolescents must navigate. Using multiwave data from 159 adolescents from Asian American backgrounds (mean age at initial recruitment = 15.03, SD = .92; 60 % female; 74 % US-born), the current study targeted unexplored aspects of the model minority experience in conjunction with more traditionally measured experiences of negative discrimination. When examining normative changes, perceptions of model minority stereotyping increased over the high school years while perceptions of discrimination decreased. Both experiences were not associated with each other, suggesting independent forms of social interactions. Model minority stereotyping generally promoted academic and socioemotional adjustment, whereas discrimination hindered outcomes. Moreover, in terms of academic adjustment, the model minority stereotype appears to protect against the detrimental effect of discrimination. Implications of the complex duality of adolescents' social interactions are discussed. PMID:26251100

  19. A Threshold Model of Social Support, Adjustment, and Distress after Breast Cancer Treatment

    Mallinckrodt, Brent; Armer, Jane M.; Heppner, P. Paul

    2012-01-01

    This study examined a threshold model that proposes that social support exhibits a curvilinear association with adjustment and distress, such that support in excess of a critical threshold level has decreasing incremental benefits. Women diagnosed with a first occurrence of breast cancer (N = 154) completed survey measures of perceived support…

  20. Comparing a thermo-mechanical Weichselian Ice Sheet reconstruction to reconstructions based on the sea level equation: aspects of ice configurations and glacial isostatic adjustment

    Schmidt, P.; Lund, B; Näslund, J-O.; Fastook, J.

    2014-01-01

    In this study we compare a recent reconstruction of the Weichselian Ice Sheet as simulated by the University of Maine ice sheet model (UMISM) to two reconstructions commonly used in glacial isostatic adjustment (GIA) modelling: ICE-5G and ANU (Australian National University, also known as RSES). The UMISM reconstruction is carried out on a regional scale based on thermo-mechanical modelling, whereas ANU and ICE-5G are global models based on the sea level equation. The three ...

  1. POSITIONING BASED ON INTEGRATION OF MULTI-SENSOR SYSTEMS USING KALMAN FILTER AND LEAST SQUARES ADJUSTMENT

    M. Omidalizarandi

    2013-09-01

    Full Text Available Sensor fusion combines data from different sensors and sources in order to build a more accurate model. In this research, different sensors (optical speed sensor, Bosch sensor, odometer, XSENS, Silicon and a GPS receiver) were utilized to obtain datasets for implementing the multi-sensor system and comparing the accuracy of each sensor with the others. The scope of this research is to estimate the current position and orientation of the van; the van's position can also be estimated by integrating its velocity and direction over time. To make these components work together, an interface is needed that can bridge them in a data acquisition module. The interface in this research was developed in the LabVIEW software environment. Data were transferred to the PC via an A/D converter (LabJack). In order to synchronize all the sensors, the calibration parameters of each sensor were determined in a preparatory step. Each sensor delivers results in a sensor-specific coordinate system with a different location on the object, a different definition of the coordinate axes, and different dimensions and units. Different test scenarios (straight-line approach and circle approach) with different algorithms (Kalman filter, least squares adjustment) were examined and the results of the different approaches compared.
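
    A minimal version of the estimation core is sketched below: a one-dimensional constant-velocity Kalman filter updated with noisy GPS position fixes. The thesis fuses several more sensors and also compares against least squares adjustment; all values here are synthetic.

        import numpy as np

        dt = 0.1
        F = np.array([[1.0, dt], [0.0, 1.0]])   # state transition [pos, vel]
        H = np.array([[1.0, 0.0]])              # GPS measures position only
        Q = np.diag([0.01, 0.1])                # process noise
        R = np.array([[4.0]])                   # GPS noise variance [m^2]

        x = np.zeros(2)                         # state estimate
        P = np.eye(2) * 10.0                    # state covariance

        rng = np.random.default_rng(5)
        true_pos, true_vel = 0.0, 2.0
        for _ in range(50):
            true_pos += true_vel * dt
            x = F @ x                           # predict
            P = F @ P @ F.T + Q
            z = np.array([true_pos + rng.normal(0.0, 2.0)])
            K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)
            x = x + K @ (z - H @ x)             # update with GPS fix
            P = (np.eye(2) - K @ H) @ P
        print("estimated position/velocity:", x)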

  2. Rational Multi-curve Models with Counterparty-risk Valuation Adjustments

    Crépey, Stéphane; Macrina, Andrea; Nguyen, Tuyet Mai;

    2016-01-01

    We develop a multi-curve term structure set-up in which the modelling ingredients are expressed by rational functionals of Markov processes. We calibrate to London Interbank Offer Rate swaptions data and show that a rational two-factor log-normal multi-curve model is sufficient to match market data with accuracy. We elucidate the relationship between the models developed and calibrated under a risk-neutral measure Q and their consistent equivalence class under the real-world probability measure P. The consistent P-pricing models are applied to compute the risk exposures which may be required to comply with regulatory obligations. In order to compute counterparty-risk valuation adjustments, such as credit valuation adjustment, we show how default intensity processes with rational form can be derived. We flesh out our study by applying the results to a basis swap contract.

  3. Improving the global applicability of the RUSLE model - adjustment of the topographical and rainfall erosivity factors

    Naipal, V.; Reick, C.; Pongratz, J.; Van Oost, K.

    2015-09-01

    Large uncertainties exist in estimated rates and the extent of soil erosion by surface runoff on a global scale. This limits our understanding of the global impact that soil erosion might have on agriculture and climate. The Revised Universal Soil Loss Equation (RUSLE) model is, due to its simple structure and empirical basis, a frequently used tool in estimating average annual soil erosion rates at regional to global scales. However, large spatial-scale applications often rely on coarse data input, which is not compatible with the local scale on which the model is parameterized. Our study aims at providing the first steps in improving the global applicability of the RUSLE model in order to derive more accurate global soil erosion rates. We adjusted the topographical and rainfall erosivity factors of the RUSLE model and compared the resulting erosion rates to extensive empirical databases from the USA and Europe. By scaling the slope according to the fractal method to adjust the topographical factor, we managed to improve the topographical detail in a coarse resolution global digital elevation model. Applying the linear multiple regression method to adjust rainfall erosivity for various climate zones resulted in values that compared well to high resolution erosivity data for different regions. However, this method needs to be extended to tropical climates, for which erosivity is biased due to the lack of high resolution erosivity data. After applying the adjusted and the unadjusted versions of the RUSLE model on a global scale we find that the adjusted version shows a global higher mean erosion rate and more variability in the erosion rates. Comparison to empirical data sets of the USA and Europe shows that the adjusted RUSLE model is able to decrease the very high erosion rates in hilly regions that are observed in the unadjusted RUSLE model results. Although there are still some regional differences with the empirical databases, the results indicate that the
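
    For reference, the model structure being adjusted is the per-cell RUSLE product A = R·K·LS·C·P. The hedged sketch below evaluates it on synthetic factor grids; the paper's actual contribution concerns how LS (via fractal slope scaling) and R (via regression) are derived, which is not reproduced here.

        import numpy as np

        rng = np.random.default_rng(6)
        shape = (4, 4)
        R = rng.uniform(500, 3000, shape)   # rainfall erosivity
        K = rng.uniform(0.01, 0.05, shape)  # soil erodibility
        LS = rng.uniform(0.1, 5.0, shape)   # topographical factor
        C = rng.uniform(0.001, 0.3, shape)  # cover management
        P = np.ones(shape)                  # support practice

        A = R * K * LS * C * P              # soil loss [t/(ha yr)]
        print(f"mean erosion rate: {A.mean():.2f} t/ha/yr")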

  4. Entering the Chinese e-merging market: A single case study of business model adjustment

    Byhlin, Hanna; Holm, Emma

    2012-01-01

    The business model is a concept that has gained increasing attention in recent years. It is seen as a firm's ticket to success and has been studied by researchers and managers alike in search of the ultimate template for prosperity. Little research has, however, been conducted on the necessary adjustments of a business model in the case of new market entry. Globalization has inspired companies to grow internationally, and firms increasingly look for new markets to capture; China has become one of ...

  5. Estimation of emission adjustments from the application of four-dimensional data assimilation to photochemical air quality modeling

    Four-dimensional data assimilation applied to photochemical air quality modeling is used to suggest adjustments to the emissions inventory of the Atlanta, Georgia metropolitan area. In this approach, a three-dimensional air quality model, coupled with direct sensitivity analysis, develops spatially and temporally varying concentration and sensitivity fields that account for chemical and physical processing, and receptor analysis is used to adjust source strengths. Proposed changes to domain-wide NOx, volatile organic compounds (VOCs) and CO emissions from anthropogenic sources and for VOC emissions from biogenic sources were estimated, as well as modifications to sources based on their spatial location (urban vs. rural areas). In general, domain-wide anthropogenic VOC emissions were increased approximately two times their base case level to best match observations, domain-wide anthropogenic NOx and biogenic VOC emissions (BEIS2 estimates) remained close to their base case value and domain-wide CO emissions were decreased. Adjustments for anthropogenic NOx emissions increased their level of uncertainty when adjustments were computed for mobile and area sources (or urban and rural sources) separately, due in part to the poor spatial resolution of the observation field of nitrogen-containing species. Estimated changes to CO emissions also suffer from poor spatial resolution of the measurements. Results suggest that rural anthropogenic VOC emissions appear to be severely underpredicted. The FDDA approach was also used to investigate the speciation profiles of VOC emissions, and results warrant revision of these profiles. In general, the results obtained here are consistent with what are viewed as the current deficiencies in emissions inventories as derived by other top-down techniques, such as tunnel studies and analysis of ambient measurements. (Author)
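
    The receptor-analysis step can be caricatured as a linear inverse problem: with modeled sensitivities S of receptor concentrations to source-category scaling factors, solve for the adjustment that best matches observations. The sketch below is a hedged toy version with random sensitivities, not the study's model.

        import numpy as np

        rng = np.random.default_rng(7)
        n_receptors, n_sources = 30, 3      # e.g. NOx, VOC, CO categories

        S = rng.uniform(0.1, 1.0, (n_receptors, n_sources))  # sensitivities
        e_true = np.array([1.0, 2.0, 0.7])  # "true" scaling vs. inventory
        c_base = S @ np.ones(n_sources)     # base-case concentrations
        c_obs = S @ e_true + rng.normal(0.0, 0.05, n_receptors)

        # Least-squares adjustment of the emission scaling factors.
        de, *_ = np.linalg.lstsq(S, c_obs - c_base, rcond=None)
        print("suggested inventory scaling:", 1.0 + de)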

  6. Evolution Scenarios at the Romanian Economy Level, Using the R.M. Solow Adjusted Model

    Stelian Stancu

    2008-06-01

    Full Text Available Besides the models of M. Keynes, R.F. Harrod, E. Domar, D. Romer, the Ramsey-Cass-Koopmans model, etc., the R.M. Solow model belongs to the category of models characterizing economic growth. The paper presents the R.M. Solow adjusted model with specific simulation characteristics and an economic growth scenario. On this basis, the values obtained at the economy level from the simulations are presented: the ratio of capital to output volume, output volume per employee (equal to the current labour efficiency), and the labour efficiency value.
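
    The mechanics underlying such scenarios can be illustrated with the textbook Solow recursion for capital per effective worker, k_{t+1} = k_t + s·k_t^α − (n+g+δ)·k_t; the parameter values below are illustrative, not the paper's estimates for the Romanian economy.

        alpha, s, n, g, d = 0.33, 0.22, 0.01, 0.02, 0.05

        k = 1.0                              # capital per effective worker
        for _ in range(200):
            k += s * k ** alpha - (n + g + d) * k

        k_star = (s / (n + g + d)) ** (1.0 / (1.0 - alpha))
        print(f"simulated k: {k:.3f}, analytic steady state: {k_star:.3f}")
        # Capital-output ratio and output per effective worker follow directly:
        print(f"K/Y: {k / k ** alpha:.3f}, y = k^alpha: {k ** alpha:.3f}")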

  7. NKG201xGIA - first results for a new model of glacial isostatic adjustment in Fennoscandia

    Steffen, Holger; Barletta, Valentina; Kollo, Karin; Milne, Glenn A.; Nordman, Maaria; Olsson, Per-Anders; Simpson, Matthew J. R.; Tarasov, Lev; Ågren, Jonas

    2016-04-01

    Glacial isostatic adjustment (GIA) is a dominant process in northern Europe, which is observed with several geodetic and geophysical methods. The observed land uplift due to this process amounts to about 1 cm/year in the northern Gulf of Bothnia. GIA affects the establishment and maintenance of reliable geodetic and gravimetric reference networks in the Nordic countries. To support a high level of accuracy in the determination of position, adequate corrections have to be applied with dedicated models. Currently, there are efforts within a Nordic Geodetic Commission (NKG) activity towards a model of glacial isostatic adjustment for Fennoscandia. The new model, NKG201xGIA, to be developed in the near future will complement the forthcoming empirical NKG land uplift model, which will substitute the currently used empirical land uplift model NKG2005LU (Ågren & Svensson, 2007). Together, the models will be a reference for vertical and horizontal motion, gravity and geoid change and more. NKG201xGIA will also provide uncertainty estimates for each field. Following former investigations, the GIA model is based on a combination of an ice and an earth model. The selected reference ice model, GLAC, for Fennoscandia, the Barents/Kara seas and the British Isles is provided by Lev Tarasov and co-workers. Tests of different ice and earth models will be performed based on the expertise of each involved modeler. This includes studies on high resolution ice sheets, different rheologies, lateral variations in lithosphere and mantle viscosity and more. This will also be done in co-operation with scientists outside NKG who help in the development and testing of the model. References Ågren, J., Svensson, R. (2007): Postglacial Land Uplift Model and System Definition for the New Swedish Height System RH 2000. Reports in Geodesy and Geographical Information Systems Rapportserie, LMV-Rapport 4, Lantmäteriet, Gävle.

  8. Glacial isostatic adjustment model with composite 3-D Earth rheology for Fennoscandia

    van der Wal, Wouter; Barnhoorn, Auke; Stocchi, Paolo; Gradmann, Sofie; Wu, Patrick; Drury, Martyn; Vermeersen, Bert

    2013-07-01

    Models for glacial isostatic adjustment (GIA) can provide constraints on rheology of the mantle if past ice thickness variations are assumed to be known. The Pleistocene ice loading histories that are used to obtain such constraints are based on an a priori 1-D mantle viscosity profile that assumes a single deformation mechanism for mantle rocks. Such a simplified viscosity profile makes it hard to compare the inferred mantle rheology to inferences from seismology and laboratory experiments. It is unknown what constraints GIA observations can provide on more realistic mantle rheology with an ice history that is not based on an a priori mantle viscosity profile. This paper investigates a model for GIA with a new ice history for Fennoscandia that is constrained by palaeoclimate proxies and glacial sediments. Diffusion and dislocation creep flow law data are taken from a compilation of laboratory measurements on olivine. Upper-mantle temperature data sets down to 400 km depth are derived from surface heatflow measurements, a petrochemical model for Fennoscandia and seismic velocity anomalies. Creep parameters below 400 km are taken from an earlier study and are only varying with depth. The olivine grain size and water content (a wet state, or a dry state) are used as free parameters. The solid Earth response is computed with a global spherical 3-D finite-element model for an incompressible, self-gravitating Earth. We compare predictions to sea level data and GPS uplift rates in Fennoscandia. The objective is to see if the mantle rheology and the ice model is consistent with GIA observations. We also test if the inclusion of dislocation creep gives any improvements over predictions with diffusion creep only, and whether the laterally varying temperatures result in an improved fit compared to a widely used 1-D viscosity profile (VM2). We find that sea level data can be explained with our ice model and with information on mantle rheology from laboratory experiments

  9. Glacial isostatic adjustment in Fennoscandia from GRACE data and comparison with geodynamical models

    Steffen, Holger; Denker, Heiner; Müller, Jürgen

    2008-10-01

    The Earth's gravity field observed by the Gravity Recovery and Climate Experiment (GRACE) satellite mission shows variations due to the integral effect of mass variations in the atmosphere, hydrosphere and geosphere. Several institutions, such as the GeoForschungsZentrum (GFZ) Potsdam, the University of Texas at Austin, Center for Space Research (CSR) and the Jet Propulsion Laboratory (JPL), Pasadena, provide GRACE monthly solutions, which differ slightly due to the application of different reduction models and centre-specific processing schemes. The GRACE data are used to investigate the mass variations in Fennoscandia, an area which is strongly influenced by glacial isostatic adjustment (GIA). Hence the focus is set on the computation of secular trends. Different filters (e.g. isotropic and non-isotropic filters) are discussed for the removal of high frequency noise to permit the extraction of the GIA signal. The resulting GRACE based mass variations are compared to global hydrology models (WGHM, LaDWorld) in order to (a) separate possible hydrological signals and (b) validate the hydrology models with regard to long period and secular components. In addition, a pattern matching algorithm is applied to localise the uplift centre, and finally the GRACE signal is compared with the results from a geodynamical modelling. The GRACE data clearly show temporal gravity variations in Fennoscandia. The secular variations are in good agreement with former studies and other independent data. The uplift centre is located over the Bothnian Bay, and the whole uplift area comprises the Scandinavian Peninsula and Finland. The secular variations derived from the GFZ, CSR and JPL monthly solutions differ up to 20%, which is not statistically significant, and the largest signal of about 1.2 μGal/year is obtained from the GFZ solution. Besides the GIA signal, two peaks with positive trend values of about 0.8 μGal/year exist in central eastern Europe, which are not GIA-induced, and

  10. Adjustable grazing incidence x-ray optics based on thin PZT films

    Cotroneo, Vincenzo; Davis, William N.; Marquez, Vanessa; Reid, Paul B.; Schwartz, Daniel A.; Johnson-Wilke, Raegan L.; Trolier-McKinstry, Susan E.; Wilke, Rudeger H. T.

    2012-10-01

    The direct deposition of piezoelectric thin films on thin substrates offers an appealing technology for the realization of lightweight adjustable mirrors capable of sub-arcsecond resolution. This solution will make it possible to realize X-ray telescopes with both large effective area and exceptional angular resolution and, in particular, it will enable the realization of the adjustable optics for the proposed mission Square Meter Arcsecond Resolution X-ray Telescope (SMART-X). In past years we demonstrated for the first time the possibility of depositing a working piezoelectric thin film (1-5 µm) of lead-zirconate-titanate (PZT) on glass. Here we review recent progress in film deposition and influence function characterization and comparison with finite element models. The suitability of the deposited films is analyzed and some constraints on the piezoelectric film performance are derived. The future steps in the development of the technology are described.

  11. Improving depth resolution of diffuse optical tomography with an exponential adjustment method based on maximum singular value of layered sensitivity

    Haijing Niu; Ping Guo; Xiaodong Song; Tianzi Jiang

    2008-01-01

    The sensitivity of diffuse optical tomography (DOT) imaging decreases exponentially with increasing photon penetration depth, which leads to poor depth resolution for DOT. In this letter, an exponential adjustment method (EAM) based on the maximum singular value of the layered sensitivity is proposed. Optimal depth resolution can be achieved by compensating for the reduced sensitivity in the deep medium. Simulations are performed using a semi-infinite model, and the results show that the EAM method can substantially improve the depth resolution of deeply embedded objects in the medium. Consequently, the image quality and the reconstruction accuracy for these objects are largely improved.
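
    One hedged reading of the adjustment is to rescale each depth layer of the sensitivity matrix by the maximum singular value of that layer, boosting the weak deep layers; the sketch below implements that reading on a synthetic, exponentially decaying sensitivity, and is not the authors' exact formulation.

        import numpy as np

        rng = np.random.default_rng(9)
        n_meas, nx, nz = 32, 20, 6
        # Synthetic sensitivity decaying exponentially with depth layer z.
        J = np.stack([rng.random((n_meas, nx)) * np.exp(-1.5 * z)
                      for z in range(nz)], axis=2)

        weights = []
        for z in range(nz):
            s_max = np.linalg.svd(J[:, :, z], compute_uv=False)[0]
            weights.append(1.0 / s_max)      # boost weak (deep) layers
        J_adj = J * np.array(weights)[None, None, :]
        print("layer weights:", np.round(weights, 2))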

  12. Plasmonic-multimode-interference-based logic circuit with simple phase adjustment

    Ota, Masashi; Sumimura, Asahi; Fukuhara, Masashi; Ishii, Yuya; Fukuda, Mitsuo

    2016-04-01

    All-optical logic circuits using surface plasmon polaritons have the potential for high-speed information processing with high-density integration beyond the diffraction limit of propagating light. However, the number of logic gates that can be cascaded is limited by complicated signal phase adjustment. In this study, we demonstrate a half-adder operation with simple phase adjustment using plasmonic multimode interference (MMI) devices, composed of dielectric stripes on a metal film, which can be fabricated by a complementary metal-oxide-semiconductor (CMOS)-compatible process. Simultaneous operation of XOR and AND gates is also substantiated experimentally by combining 1 × 1 MMI based phase adjusters and 2 × 2 MMI based intensity modulators. An experimental on-off ratio of at least 4.3 dB is confirmed using scanning near-field optical microscopy. The proposed structure will contribute to high-density plasmonic circuits fabricated by CMOS-compatible processes or printing techniques.

  13. Steganography Algorithm in Different Colour Model Using an Energy Adjustment Applied with Discrete Wavelet Transform

    B.E. Carvajal-Gámez

    2012-08-01

    Full Text Available When color images are processed in different color models to implement steganographic algorithms, it is important to study the quality of the host and retrieved images, since digital filters are typically used and can leave images visibly deformed. When a steganographic algorithm is used, numerical calculations performed by the computer introduce errors and alterations in the test images, so we apply a proposed scaling factor that depends on the number of bits of the image to adjust for these errors.

  14. Missing Aggregate Dynamics: On the Slow Convergence of Lumpy Adjustment Models

    Caballero, Ricardo J.; Eduardo M.R.A. Engel

    2003-01-01

    The dynamic response of aggregate variables to shocks is one of the central concerns of applied macroeconomics. The main measurement procedure for these dynamics consists of estimating ARMA models or vector autoregressions (VARs, for short). In non- or semi-structural approaches, the characterization of dynamics stops there. In other, more structural approaches, researchers try to uncover underlying adjustment cost parameters from the estimated VARs. Yet, in others, such as in RBC models, these estimates are used ...

  15. How Buyers Evaluate Product Bundles: A Model of Anchoring and Adjustment.

    Yadav, Manjit S

    1994-01-01

    Bundling, the joint offering of two or more items, is a common selling strategy, yet little research has been conducted on buyers' evaluation of bundle offers. We developed and tested a model of bundle evaluation in which the buyers anchored their evaluation on the item perceived as most important and then made adjustments on the basis of their evaluations of the remaining bundle items. The results of two computerized laboratory experiments suggested that people tend to examine bundle items i...

  16. Cost of capital adjusted for governance risk through a multiplicative model of expected returns

    Apreda, Rodolfo

    2008-01-01

    This paper sets forth another contribution to the long-standing debate over the cost of capital, firstly by introducing a multiplicative model that translates the inner structure of the weighted average cost of capital rate and, secondly, by adjusting such a rate for governance risk. The conventional wisdom states that the cost of capital may be figured out by means of a weighted average of debt and capital. But this is a linear approximation only, which may bring about miscalculations, whereas the mu...

  17. Solution model of nonlinear integral adjustment including different kinds of observing data with different precisions

    郭金运; 陶华学

    2003-01-01

    In order to process different kinds of observation data with different precisions, a new solution model of nonlinear dynamic integral least squares adjustment was put forward that does not depend on derivatives. The partial derivative of each component in the target function is not computed while iteratively solving the problem. Especially when the nonlinear target function is complex and the problem very difficult to solve, the method can greatly reduce the computing load.

  19. Stress and Personal Resource as Predictors of the Adjustment of Parents to Autistic Children: A Multivariate Model

    Siman-Tov, Ayelet; Kaniel, Shlomo

    2011-01-01

    The research validates a multivariate model that predicts successful parental adjustment to coping with an autistic child. The model comprises four elements: parental stress, parental resources, parental adjustment and the child's autism symptoms. 176 parents of children aged 6 to 16 diagnosed with PDD answered several questionnaires…

  20. Optimal fuzzy PID controller with adjustable factors based on flexible polyhedron search algorithm

    谭冠政; 肖宏峰; 王越超

    2002-01-01

    A new kind of optimal fuzzy PID controller is proposed, which contains two parts. One is an on-line fuzzy inference system, and the other is a conventional PID controller. In the fuzzy inference system, three adjustable factors xp, xi, and xd are introduced. Their function is to further modify and optimize the result of the fuzzy inference so as to give the controller an optimal control effect on a given object. The optimal values of these adjustable factors are determined based on the ITAE criterion and the Nelder-Mead flexible polyhedron search algorithm. This optimal fuzzy PID controller has been used to control the executive motor of the intelligent artificial leg designed by the authors. The results of computer simulation indicate that this controller is very effective and can be widely used to control different kinds of objects and processes.
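
    The tuning loop can be reproduced with scipy's Nelder-Mead implementation of the flexible polyhedron search, minimizing the ITAE criterion for a toy first-order plant under plain PID control (the paper optimizes fuzzy adjustment factors instead; this is a simplified stand-in).

        from scipy.optimize import minimize

        def itae(gains, dt=0.01, t_end=5.0):
            kp, ki, kd = gains
            y, integ, prev_err, cost = 0.0, 0.0, 1.0, 0.0
            for k in range(int(t_end / dt)):
                err = 1.0 - y                     # unit step setpoint
                integ += err * dt
                deriv = (err - prev_err) / dt
                u = kp * err + ki * integ + kd * deriv
                y += dt * (-y + u)                # plant: dy/dt = -y + u
                prev_err = err
                cost += (k * dt) * abs(err) * dt  # ITAE: integral of t*|e|
            return cost

        res = minimize(itae, x0=[1.0, 1.0, 0.0], method="Nelder-Mead")
        print("ITAE-optimal PID gains:", res.x)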

  1. Biologically Inspired Visual Model With Preliminary Cognition and Active Attention Adjustment.

    Qiao, Hong; Xi, Xuanyang; Li, Yinlin; Wu, Wei; Li, Fengfu

    2015-11-01

    Recently, many computational models have been proposed to simulate the visual cognition process. For example, the hierarchical Max-Pooling (HMAX) model was proposed according to the hierarchical and bottom-up structure of V1 to V4 in the ventral pathway of the primate visual cortex, which can achieve position- and scale-tolerant recognition. In our previous work, we introduced memory and association into the HMAX model to simulate the visual cognition process. In this paper, we improve our theoretical framework by mimicking a more elaborate structure and function of the primate visual cortex. We mainly focus on the new formation of memory and association in visual processing under different circumstances, as well as preliminary cognition and active adjustment in the inferior temporal cortex, which are absent in the HMAX model. The main contributions of this paper are: 1) in the memory and association part, we apply deep convolutional neural networks to extract various episodic features of the objects, since people use different features for object recognition. Moreover, to achieve fast and robust recognition in the retrieval and association process, different types of features are stored in separate clusters and the feature binding of the same object is stimulated in a loop discharge manner; and 2) in the preliminary cognition and active adjustment part, we introduce preliminary cognition to classify different types of objects, since distinct neural circuits in the human brain are used for the identification of various types of objects. Furthermore, active cognitive adjustment for occlusion and orientation is implemented in the model to mimic the top-down effect in the human cognition process. Finally, our model is evaluated on two face databases, CAS-PEAL-R1 and AR. The results demonstrate that our model performs efficiently on the visual recognition process, with a much lower memory storage requirement and better performance compared with the traditional purely computational

  2. Optical design of the focal adjustable flashlight based on a power white-LED

    Cai, Jhih-You; Lo, Yi-Chien; Sun, Ching-Cherng

    2011-10-01

    In this paper, we design a focal adjustable flashlight which can provide both spotlight and wide-angle illumination in different modes. Most users request these two illumination modes: one gives a light pattern with high energy density and the other a uniform light pattern over a wide field of view. In designing the focal adjustable flashlight, we first build a precise optical model of the high-power LED produced by CREE Inc., verified in the mid-field, to ensure the accuracy of our simulation. Typically a lens is used as the key component of an adjustable flashlight, but its optical efficiency is low. Here, we introduce the concept of a total internal reflection (TIR) lens into the design of the flashlight. By defocusing the TIR lens, the flashlight can quickly change the beam size and energy density for various applications. We design two segments of the side of the TIR lens so that they can serve the two modes, and the flashlight provides high optical efficiency in each mode. The illuminance at the center of the light pattern at a distance of 2 m from the lamp is also higher than with a lens, in both spotlight and wide-angle illumination. This provides good lighting functions for users.

  3. Interfacial free energy adjustable phase field crystal model for homogeneous nucleation.

    Guo, Can; Wang, Jincheng; Wang, Zhijun; Li, Junjie; Guo, Yaolin; Huang, Yunhao

    2016-05-18

    To describe the homogeneous nucleation process, an interfacial free energy adjustable phase-field crystal model (IPFC) was proposed by reconstructing the energy functional of the original phase field crystal (PFC) methodology. Compared with the original PFC model, the additional interface term in the IPFC model can effectively adjust the magnitude of the interfacial free energy without affecting the equilibrium phase diagram or the interfacial energy anisotropy. The IPFC model overcomes the limitation that the interfacial free energy of the original PFC model is much lower than theoretical results. Using the IPFC model, we investigated some basic issues in homogeneous nucleation. From the simulation viewpoint, we carried out an in situ observation of the cluster fluctuation process and obtained snapshots quite similar to colloidal crystallization experiments. We also counted the size distribution of crystal-like clusters and the nucleation rate. Our simulations show that the size distribution is independent of the evolution time and that the nucleation rate remains constant after a period of relaxation, which is consistent with experimental observations. The linear relation between the logarithmic nucleation rate and the reciprocal driving force also conforms to steady state nucleation theory. PMID:27117814

  4. Adjustment method of deterministic control rods worth computation based on measurements and auxiliary Monte Carlo runs

    Highlights: • 3-group cross sections are collapsed by WIMS and SN2; the core is calculated by CITATION. • Engineering adjustments are made to generate better few-group cross sections. • Validation is made against JRR-3M measurements and Monte Carlo simulation. - Abstract: The control rod (CR) worth is a key parameter for research reactor (RR) operation and utilization. Control rod worth computation is a challenge for a fully deterministic calculation methodology, comprising few-group cross section generation and core analysis. The purpose of this work is to describe our code system and its applicability to obtaining reliable CR worths through some engineering adjustments. Cross section collapsing into three energy groups is performed by the WIMS and SN2 codes, while the core analysis is performed by CITATION. We use these codes for the design, construction and operation of our research reactor CMRR (China Mianyang Research Reactor). However, due to the intrinsic deficiencies of diffusion theory and the homogenization approximation, directly obtained results such as CR worths and neutron flux distributions are not satisfactory. Therefore, two simple adjustments are made to generate the few-group cross sections with the assistance of measurements and auxiliary Monte Carlo runs. The first step is to adjust the fuel cross sections by suitably changing the mass of a non-fissile material, such as the mass of the two 0.4 mm Cd wires at both sides of each uranium plate, so that the CITATION core model yields a good eigenvalue when all CRs are completely extracted. The second step is to revise the shim absorber cross section of the CRs by adjusting the hafnium mass, so that the CITATION model reproduces the correct critical rod position. In this manuscript, the JRR-3M (Japan Research Reactor No. 3 Modified) reactor is employed as a demonstration. Final revised results are validated against stochastic simulation and experimental measurement values, including the

  5. Analysis of substitution experiments in ZED-2 with physically realistic model adjustments

    Substitution experiments involve several types of reactor simulation. When an experiment on a power reactor is impracticable, such as a loss-of-coolant accident, a simulation of its lattice must be set up in a lattice-testing reactor, such as ZED-2. A full core of such a test lattice may not go critical, because of the size limitation, and/or may be expensive. A substitution experiment simulates such a full core by setting up a few channels of the experimental lattice, surrounded by a 'driving' lattice, to make a critical assembly. A corresponding 'reference' experiment, with a pure driver lattice, permits the characteristics of the experimental lattice to be inferred by comparison of the two experiments. This inference requires mathematical modelling of the experiments. Measurements of the flux distributions should enable refinement of the model. However, previous analyses have required that the model of outer parts of the reactor, such as the graphite reflector, be replaced by arbitrary extrapolation lengths, so that these can be varied to correspondingly adjust the calculated fluxes. This arbitrary replacement may lose more accuracy than the adjustment of the model gains. The FITEXPTS family of substitution experiment simulation programs permits the adjustment to consist instead of variations in the modelling of small, unknown details of the experiment, the best choice of which depends on the experiment. Examples are: the flux depression inside the support structures in the bottom ends of the channels; the effective thicknesses of the irregular graphite reflectors; the reactivity of a ring of 'booster rods', which are sometimes necessary around the periphery of the driver lattice; and the extrapolation length used of necessity at the unreflected top of the core. This flexibility leads to improved accuracy. The paper expands on techniques and testing. (author). 8 refs., 5 figs

  6. A spatial model of bird abundance as adjusted for detection probability

    Gorresen, P.M.; Mcmillan, G.P.; Camp, R.J.; Pratt, T.K.

    2009-01-01

    Modeling the spatial distribution of animals can be complicated by spatial and temporal effects (i.e. spatial autocorrelation and trends in abundance over time) and other factors such as imperfect detection probabilities and observation-related nuisance variables. Recent advances in modeling have demonstrated various approaches that handle most of these factors but which require a degree of sampling effort (e.g. replication) not available to many field studies. We present a two-step approach that addresses these challenges to spatially model species abundance. Habitat, spatial and temporal variables were handled with a Bayesian approach which facilitated modeling hierarchically structured data. Predicted abundance was subsequently adjusted to account for imperfect detection and the area effectively sampled for each species. We provide examples of our modeling approach for two endemic Hawaiian nectarivorous honeycreepers: 'i'iwi Vestiaria coccinea and 'apapane Himatione sanguinea. © 2009 Ecography.

  7. Model-based Predictive Adjustment of Ice Storage Central Air Conditioning for a Shopping Mall in Xi'an

    郭凯

    2015-01-01

    Considering the characteristics and working principle of ice storage air conditioning and the summer climate of the Xi'an region, this paper presents a model-based predictive optimization scheme for the summer cooling of the SAGA shopping mall in Xi'an that relies on time-of-use electricity pricing. A detailed building simulation of one floor of the mall was developed with the TRNSYS transient energy simulation software, and system identification techniques were applied to the TRNSYS data to derive a simplified linear thermal model, clarifying the influence of the various factors on indoor temperature and allowing the indoor cooling demand to be calculated. Finally, based on linear goal programming, the use of cooling over a whole day was optimized, achieving clear benefits in energy savings and electricity cost.
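
    A hedged toy version of the cost optimization is a plain linear program (a simplification of the paper's linear goal programming): meet each hour's cooling demand from the chiller or from night-charged ice storage so that expensive-tariff electricity is avoided. All numbers below are invented.

        import numpy as np
        from scipy.optimize import linprog

        hours = np.arange(24)
        demand = np.where((hours >= 10) & (hours <= 20), 800.0, 200.0)  # cooling kWh
        price = np.where((hours >= 8) & (hours <= 21), 1.2, 0.3)        # tariff

        # Variables: chiller cooling x[0:24], ice discharge x[24:48].
        # Ice is charged overnight at the cheap tariff; its cost is treated
        # as a fixed charging term and omitted from the hourly objective.
        c = np.concatenate([price / 3.0, np.zeros(24)])   # chiller COP ~ 3
        A_eq = np.hstack([np.eye(24), np.eye(24)])        # meet hourly demand
        b_eq = demand
        A_ub = np.concatenate([np.zeros(24), np.ones(24)])[None, :]
        b_ub = [3000.0]                                   # total ice charge
        bounds = [(0.0, 1000.0)] * 24 + [(0.0, 400.0)] * 24

        res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
                      bounds=bounds)
        print(f"daytime electricity cost: {res.fun:.1f}")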

  8. A novel wavelength-adjusting method in InGaN-based light-emitting diodes

    Deng, Zhen; Jiang, Yang; Ma, Ziguang; Wang, Wenxin; Jia, Haiqiang; Zhou, Junming; Chen, Hong

    2013-01-01

    The pursuit of high internal quantum efficiency (IQE) in the green emission spectral regime is referred to as the “green gap” challenge. Researchers now place their hopes on InGaN-based materials to develop high-brightness green light-emitting diodes. However, IQE drops quickly when the emission wavelength of an InGaN LED is increased by changing the growth temperature or well thickness. In this paper, a new wavelength-adjusting method is proposed and the optical properties of the LED are investigated. By additional pro...

  9. Comparison of Satellite-based Basal and Adjusted Evapotranspiration for Several California Crops

    Johnson, L.; Lund, C.; Melton, F. S.

    2013-12-01

    There is a continuing need to develop new sources of information on agricultural crop water consumption in the arid Western U.S. Pursuant to the California Water Conservation Act of 2009, for instance, the stakeholder community has developed a set of quantitative indicators involving measurement of evapotranspiration (ET) or crop consumptive use (Calif. Dept. Water Resources, 2012). Fraction of reference ET (or, crop coefficients) can be estimated from a biophysical description of the crop canopy involving green fractional cover (Fc) and height as per the FAO-56 practice standard of Allen et al. (1998). The current study involved 19 fields in California's San Joaquin Valley and Central Coast during 2011-12, growing a variety of specialty and commodity crops: lettuce, raisin, tomato, almond, melon, winegrape, garlic, peach, orange, cotton, corn and wheat. Most crops were on surface or subsurface drip, though micro-jet, sprinkler and flood were represented as well. Fc was retrospectively estimated every 8-16 days by optical satellite data and interpolated to a daily timestep. Crop height was derived as a capped linear function of Fc using published guideline maxima. These variables were used to generate daily basal crop coefficients (Kcb) per field through most or all of each respective growth cycle by the density coefficient approach of Allen & Pereira (2009). A soil water balance model for both topsoil and root zone, based on FAO-56 and using on-site measurements of applied irrigation and precipitation, was used to develop daily soil evaporation and crop water stress coefficients (Ke, Ks). Key meteorological variables (wind speed, relative humidity) were extracted from the California Irrigation Management Information System (CIMIS) for climate correction. Basal crop ET (ETcb) was then derived from Kcb using CIMIS reference ET. Adjusted crop ET (ETc_adj) was estimated by the dual coefficient approach involving Kcb, Ke, and incorporating Ks. Cumulative ETc
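
    The dual-coefficient relations used in the study reduce to one line each: ETcb = Kcb·ETo and ETc_adj = (Ks·Kcb + Ke)·ETo. The sketch below evaluates them on placeholder daily values (the study derives Kcb from fractional cover and height, and Ke, Ks from a soil water balance, none of which is reproduced here).

        import numpy as np

        eto = np.array([5.1, 5.4, 6.0])     # CIMIS reference ET [mm/day]
        kcb = np.array([0.85, 0.90, 0.95])  # basal crop coefficient
        ke = np.array([0.10, 0.05, 0.02])   # soil evaporation coefficient
        ks = np.array([1.0, 0.9, 0.8])      # water stress coefficient

        etcb = kcb * eto                    # basal crop ET
        etc_adj = (ks * kcb + ke) * eto     # adjusted crop ET
        print("ETcb:", etcb, "ETc_adj:", etc_adj)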

  10. Impact of water price adjustment based on an input-output price model

    倪红珍; 王浩; 赵博; 马伟

    2013-01-01

    Using an input-output price model with Beijing as an example, we calculated the effects of individual and combined water price changes on the prices of goods and services and on the water fee rates of other economic sectors, providing a quantitative basis for price policies that can relieve the stress on water supply. The results show that, assuming the prices of the various water supply sectors are independent and unaffected by other sectors, water price increases have only a weak impact on the prices of non-water-supply sectors; the impact is more pronounced for residents, administrative and public institutions, and some water-intensive services, with education affected most. Water fee rates are most sensitive to fluctuations in industrial and commercial water prices. The results also show that when the price of the sewage treatment industry rises, the water fee rate of the recycled water industry increases more than that of other sectors. Even if all water prices were doubled, the ratio of total water fees to consumption expenditure would remain below 0.5% for non-water-supply sectors, except for residents, where it is slightly above 1%. These results suggest that, if each sector's water consumption remains unchanged, water prices could rise at least threefold before the water fee rate reaches 2%, the minimum bearing-capacity standard for users, and that water price increases would not produce a large impact on the economy and society. For vigorous water saving, water price reform is a necessary and urgent measure.
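
    The core computation of a Leontief price-impact model fits in a few lines: treating the water sector's price as exogenous, the induced change in the other sectors' prices is Δp_N = (I − A_NN^T)^(-1) · a_wN · Δp_w, where a_wN holds the water input coefficients. The coefficients below are toy values, not Beijing's input-output table.

        import numpy as np

        # Input coefficients a[i, j]: input of sector i per unit output of
        # sector j. Sector 0 = water supply; sectors 1..3 = other sectors.
        A = np.array([
            [0.00, 0.04, 0.01, 0.02],
            [0.10, 0.20, 0.15, 0.05],
            [0.05, 0.10, 0.25, 0.10],
            [0.02, 0.05, 0.10, 0.15],
        ])
        A_NN = A[1:, 1:]      # inter-industry block of non-water sectors
        a_wN = A[0, 1:]       # water input coefficients of those sectors

        dp_w = 1.0            # water price doubles (+100 %)
        dp_N = np.linalg.solve(np.eye(3) - A_NN.T, a_wN * dp_w)
        print("induced price increases (%):", np.round(dp_N * 100, 2))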

  11. Adjustable grazing incidence x-ray optics: measurement of actuator influence functions and comparison with modeling

    Cotroneo, Vincenzo; Davis, William N.; Reid, Paul B.; Schwartz, Daniel A.; Trolier-McKinstry, Susan; Wilke, Rudeger H. T.

    2011-09-01

    The present generation of X-ray telescopes emphasizes either high image quality (e.g., Chandra, with sub-arcsecond resolution) or large effective area (e.g., XMM-Newton), while future observatories under consideration (e.g., Athena, AXSIO) aim to greatly enhance the effective area while maintaining moderate (~10 arcsecond) image quality. To go beyond the limits of present and planned missions, thin adjustable optics for the control of low-order figure error are needed to obtain the high image quality of precisely figured mirrors along with the large effective area of thin mirrors. The adjustable mirror prototypes under study at the Smithsonian Astrophysical Observatory are based on two different principles and designs: 1) thin-film lead-zirconate-titanate (PZT) piezoelectric actuators deposited directly on the mirror back surface, with the strain direction parallel to the glass surface (for sub-arcsecond angular resolution and large effective area), and 2) conventional lead-magnesium-niobate (PMN) electrostrictive actuators with their strain direction perpendicular to the mirror surface (for 3-5 arcsecond resolution and moderate effective area). We have built and operated flat test mirrors of these adjustable optics. We present the comparison between theoretical influence functions obtained by finite element analysis and the measured influence functions obtained from the two test configurations.

  12. Inventory models - shortcomings, the necessary adjustments and improvements for real use of models in companies and supply chains

    Červenka, Daniel

    2010-01-01

    The aim of this thesis is to find an appropriate approach to inventory control for a small e-shop, with the greatest emphasis placed on non-seasonal goods. No existing model was found that respects all the needs of the shop. From a series of models, the stochastic model with continuous demand was chosen as the most applicable. Adjusting the cost function, changing the delivery time from constant to variable, determining the optimal inventory level and other modifications brought the model closer to real...

  13. Multivariate Risk Adjustment of Primary Care Patient Panels in a Public Health Setting: A Comparison of Statistical Models.

    Hirozawa, Anne M; Montez-Rath, Maria E; Johnson, Elizabeth C; Solnit, Stephen A; Drennan, Michael J; Katz, Mitchell H; Marx, Rani

    2016-01-01

    We compared prospective risk adjustment models for adjusting patient panels at the San Francisco Department of Public Health. We used 4 statistical models (linear regression, two-part model, zero-inflated Poisson, and zero-inflated negative binomial) and 4 subsets of predictor variables (age/gender categories, chronic diagnoses, homelessness, and a loss to follow-up indicator) to predict primary care visit frequency. Predicted visit frequency was then used to calculate patient weights and adjusted panel sizes. The two-part model using all predictor variables performed best (R² = 0.20). This model, designed specifically for safety net patients, may prove useful for panel adjustment in other public health settings. PMID:27576054
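    A two-part model of this general kind can be sketched as follows; this is a minimal illustration on synthetic data (the paper's covariates and weighting formula are not reproduced, and the retransformation of the log-scale part is deliberately naive):

```python
# Minimal two-part model sketch: part 1 predicts whether a patient has any
# visits (logistic), part 2 predicts visit counts among users (linear on logs).
# Predicted frequency = P(any visit) * E[visits | any visit].
# Synthetic data; not the San Francisco DPH dataset.
import numpy as np
from sklearn.linear_model import LinearRegression, LogisticRegression

rng = np.random.default_rng(0)
n = 2000
X = rng.normal(size=(n, 3))                # stand-ins for age/diagnoses/homelessness
beta = np.array([0.8, 0.5, -0.4])
p_any = 1 / (1 + np.exp(-(0.3 + X @ beta)))
any_visit = rng.random(n) < p_any
visits = np.where(any_visit,
                  rng.poisson(np.exp(0.5 + X @ np.array([0.3, 0.2, 0.1]))) + 1,
                  0)

part1 = LogisticRegression().fit(X, any_visit)
users = visits > 0
part2 = LinearRegression().fit(X[users], np.log(visits[users]))

# naive retransformation (ignores smearing/retransformation bias for brevity)
pred = part1.predict_proba(X)[:, 1] * np.exp(part2.predict(X))
weights = pred / pred.mean()               # relative workload weights per patient
print("example patient weights:", np.round(weights[:5], 2))
```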

  14. Experimental Study on Well Pattern Adjustment using Large-Scale Natural Sandstone Flat Model with Ultra-Low Permeability

    Tian Wenbo

    2013-05-01

    Full Text Available Aimed at ultra-low permeability reservoirs, the recovery performance of an inverted nine-spot equilateral well pattern is studied through large-scale natural sandstone flat model experiments. Two adjustment schemes were proposed based on the original well pattern, and the concept of pressure sweep efficiency is put forward for evaluating the driving efficiency. Pressure gradient fields under different drawdown pressures were measured. The seepage area of the model was divided into an immobilized area, a nonlinear seepage area and a quasi-linear seepage area, in combination with nonlinear seepage experiments on small twin cores. The results showed that the ultra-low permeability sandstone flat model clearly exhibited nonlinear seepage behavior and a threshold pressure gradient. For one quarter of the inverted nine-spot equilateral well pattern, the middle region is difficult to develop. The recovery performance can be improved by adjusting production wells or adding injection wells, and the best solution is transforming the corner production well into an injection well.

  15. Adjustable low frequency and broadband metamaterial absorber based on magnetic rubber plate and cross resonator

    Cheng, Yongzhi; Nie, Yan; Wang, Xian; Gong, Rongzhou

    2014-02-01

    In this paper, a magnetic rubber plate absorber (MRPA) and a metamaterial absorber (MA) based on an MRP substrate are proposed and studied numerically and experimentally. Based on the characteristics of its L-C resonances, experimental results show that the MA, composed of a cross resonator (CR) embedded in a single-layer MRP, can be adjusted easily by changing the wire length and width of the CR structure and the MRP thickness. Finally, experimental results show that an MA composed of a CR embedded in a two-layer MRP with a total thickness of 2.42 mm exhibits a -10 dB absorption bandwidth from 1.65 GHz to 3.7 GHz, which is 1.86 times wider than that of an MRPA of the same thickness.

  16. New Strategy for Congestion Control based on Dynamic Adjustment of Congestion Window

    Gamal Attiya

    2012-03-01

    Full Text Available This paper presents a new mechanism for end-to-end congestion control, called EnewReno. The proposed mechanism enhances both the congestion avoidance and fast recovery algorithms of TCP NewReno so as to improve its performance. Its basic idea is to adjust the congestion window of the TCP sender dynamically, based on the level of congestion in the network, so as to allow more packets to be transferred to the destination. The performance of the proposed mechanism is evaluated and compared with the most recent mechanisms through simulation studies using the well-known Network Simulator NS-2 and the realistic topology generator GT-ITM.
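    The abstract does not give EnewReno's exact update rules; the general idea of a congestion-level-dependent window adjustment can nevertheless be sketched with an illustrative AIMD-style toy:

```python
# Illustrative congestion-window controller: grow the window faster when the
# network looks uncongested, back off in proportion to the congestion signal.
# This is a toy AIMD variant, not the EnewReno rules from the paper.

def update_cwnd(cwnd: float, ssthresh: float, loss: bool,
                congestion_level: float) -> tuple[float, float]:
    """congestion_level in [0, 1], e.g., inferred from delay/ECN observations."""
    if loss:
        ssthresh = max(cwnd * (1.0 - 0.5 * congestion_level), 2.0)
        return ssthresh, ssthresh            # multiplicative decrease, scaled
    if cwnd < ssthresh:
        return cwnd * 2.0, ssthresh          # slow start
    return cwnd + (1.0 - congestion_level), ssthresh  # damped additive increase

cwnd, ssthresh = 1.0, 64.0
events = [(False, 0.1)] * 6 + [(True, 0.8)] + [(False, 0.3)] * 3
for rtt, (loss, level) in enumerate(events):
    cwnd, ssthresh = update_cwnd(cwnd, ssthresh, loss, level)
    print(f"RTT {rtt}: cwnd={cwnd:.1f} ssthresh={ssthresh:.1f}")
```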

  17. Adjusted Wald Confidence Interval for a Difference of Binomial Proportions Based on Paired Data

    Bonett, Douglas G.; Price, Robert M.

    2012-01-01

    Adjusted Wald intervals for binomial proportions in one-sample and two-sample designs have been shown to perform about as well as the best available methods. The adjusted Wald intervals are easy to compute and have been incorporated into introductory statistics courses. An adjusted Wald interval for paired binomial proportions is proposed here and…
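    One widely used style of adjusted Wald interval for paired proportions adds 0.5 to each discordant cell before applying the Wald formula; a sketch of that variant follows (the constants in the Bonett-Price proposal may differ from this illustration):

```python
# Adjusted Wald interval for a difference of paired binomial proportions.
# Sketch of the add-0.5-to-each-discordant-cell style of adjustment; the exact
# Bonett-Price constants may differ from this illustration.
from math import sqrt
from scipy.stats import norm

def adjusted_wald_paired(n12: int, n21: int, n: int, conf: float = 0.95):
    """n12, n21: discordant cell counts; n: number of pairs."""
    nt = n + 2
    p12 = (n12 + 0.5) / nt
    p21 = (n21 + 0.5) / nt
    d = p12 - p21
    se = sqrt((p12 + p21 - d**2) / nt)
    z = norm.ppf(0.5 + conf / 2)
    return d - z * se, d + z * se

print(adjusted_wald_paired(n12=12, n21=5, n=100))
```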

  18. Adaptive Kalman Filter Based on Adjustable Sampling Interval in Burst Detection for Water Distribution System

    Doo Yong Choi

    2016-04-01

    Full Text Available Rapid detection of bursts and leaks in water distribution systems (WDSs) can reduce the social and economic costs incurred through direct loss of water into the ground, additional energy demand for water supply, and service interruptions. Many real-time burst detection models have been developed in accordance with the use of supervisory control and data acquisition (SCADA) systems and the establishment of district meter areas (DMAs). Nonetheless, no consideration has been given to how frequently a flow meter should measure and transmit data for predicting breaks and leaks in pipes. This paper analyzes the effect of the sampling interval when an adaptive Kalman filter is used for detecting bursts in a WDS. A new sampling algorithm is presented that adjusts the sampling interval depending on the normalized residuals of flow after filtering. The proposed algorithm is applied to a virtual sinusoidal flow curve and to real DMA flow data obtained from Jeongeup city in South Korea. The simulation results show that the self-adjusting algorithm for determining the sampling interval is efficient and maintains reasonable accuracy in burst detection. The proposed sampling method has significant potential for water utilities to build and operate real-time DMA monitoring systems combined with smart customer metering systems.
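    The core idea can be sketched with a scalar Kalman filter whose next sampling interval is chosen from the normalized residual (synthetic flow data; all thresholds and noise levels are made-up values, not those of the Jeongeup DMA):

```python
# Scalar Kalman filter tracking DMA flow, with the sampling interval widened
# when the normalized residual is small and shortened when it is large.
# Synthetic sinusoidal flow; all tuning constants are illustrative.
import numpy as np

rng = np.random.default_rng(1)
q, r = 0.5, 4.0                 # process / measurement noise variances
x, p = 50.0, 10.0               # state estimate and its variance
t, dt = 0.0, 5.0                # time (min) and current sampling interval

for _ in range(20):
    true_flow = 50 + 10 * np.sin(2 * np.pi * t / 1440)   # daily pattern
    z = true_flow + rng.normal(0, np.sqrt(r))            # flow meter reading

    p += q * dt                                          # predict
    s = p + r                                            # innovation variance
    resid = (z - x) / np.sqrt(s)                         # normalized residual
    k = p / s                                            # Kalman gain
    x += k * (z - x)                                     # update
    p *= 1 - k

    # adjust the next sampling interval from the residual magnitude
    if abs(resid) > 2.0:
        dt = max(dt / 2, 1.0)    # possible burst: sample faster
    elif abs(resid) < 0.5:
        dt = min(dt * 2, 30.0)   # quiet period: save bandwidth
    t += dt
    print(f"t={t:6.1f} min  z={z:5.1f}  xhat={x:5.1f}  dt={dt:4.1f}")
```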

  19. Trans-Reflective Color Filters Based on a Phase Compensated Etalon Enabling Adjustable Color Saturation.

    Park, Chul-Soon; Shrestha, Vivek Raj; Lee, Sang-Shin; Choi, Duk-Yong

    2016-01-01

    Trans-reflective color filters, which take advantage of a phase compensated etalon (silver-titania-silver-titania) based nano-resonator, have been demonstrated to feature a variable spectral bandwidth at a constant resonant wavelength. Such adjustment of the bandwidth is presumed to translate into flexible control of the color saturation for the transmissive and reflective output colors produced by the filters. The thickness of the metallic mirror is primarily altered to tailor the bandwidth, which however entails a phase shift associated with the etalon. As a result, the resonant wavelength is inevitably displaced. In order to mitigate this issue, we attempted to compensate for the induced phase shift by introducing a dielectric functional layer on top of the etalon. The phase compensation mediated by the functional layer was meticulously investigated in terms of the thickness of the metallic mirror, from the perspective of the resonance condition. The proposed color filters were capable of providing additive colors of blue, green, and red for the transmission mode while exhibiting subtractive colors of yellow, magenta, and cyan for the reflection mode. The corresponding color saturation was estimated to be efficiently adjusted both in transmission and reflection. PMID:27150979

  20. Trans-Reflective Color Filters Based on a Phase Compensated Etalon Enabling Adjustable Color Saturation

    Park, Chul-Soon; Shrestha, Vivek Raj; Lee, Sang-Shin; Choi, Duk-Yong

    2016-05-01

    Trans-reflective color filters, which take advantage of a phase compensated etalon (silver-titania-silver-titania) based nano-resonator, have been demonstrated to feature a variable spectral bandwidth at a constant resonant wavelength. Such adjustment of the bandwidth is presumed to translate into flexible control of the color saturation for the transmissive and reflective output colors produced by the filters. The thickness of the metallic mirror is primarily altered to tailor the bandwidth, which however entails a phase shift associated with the etalon. As a result, the resonant wavelength is inevitably displaced. In order to mitigate this issue, we attempted to compensate for the induced phase shift by introducing a dielectric functional layer on top of the etalon. The phase compensation mediated by the functional layer was meticulously investigated in terms of the thickness of the metallic mirror, from the perspective of the resonance condition. The proposed color filters were capable of providing additive colors of blue, green, and red for the transmission mode while exhibiting subtractive colors of yellow, magenta, and cyan for the reflection mode. The corresponding color saturation was estimated to be efficiently adjusted both in transmission and reflection.

  1. Improvement for Speech Signal based on Post Wiener Filter and Adjustable Beam-Former

    Xiaorong Tong

    2013-06-01

    Full Text Available In this study, a two-stage filter structure is introduced for speech enhancement. The first stage is an adjustable filter-and-sum beamformer with a four-microphone array; the beamforming filter is controlled by adjusting only a single variable. Unlike an adaptive beamforming filter, the proposed structure does not introduce adaptive error noise and thus does not complicate the second stage of speech signal processing. The second stage is a Wiener filter. The signal power spectrum for the Wiener filter is estimated from the cross-correlation between the primary outputs of two adjacent directional beams, under the assumption that the noise outputs of the two adjacent beams come from independent noise sources while their speech outputs come from the same speech source. Simulation results show that the proposed algorithm can improve the signal-to-noise ratio (SNR) by about 6 dB.
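    The cross-spectrum trick in the second stage can be sketched with scipy under the abstract's independence assumption (the beam signals here are synthetic stand-ins for the beamformer outputs):

```python
# Wiener post-filter whose speech PSD is estimated from the cross-spectrum of
# two adjacent beam outputs: shared speech correlates across beams, while
# independent noise does not. Synthetic beams stand in for real beamformer outputs.
import numpy as np
from scipy.signal import csd, welch

fs = 16000
rng = np.random.default_rng(2)
speech = np.sin(2 * np.pi * 440 * np.arange(fs) / fs)        # toy "speech"
beam1 = speech + rng.normal(0, 0.5, fs)                      # independent noise
beam2 = speech + rng.normal(0, 0.5, fs)

f, s_xy = csd(beam1, beam2, fs=fs, nperseg=512)              # ~ speech PSD
_, s_11 = welch(beam1, fs=fs, nperseg=512)                   # speech + noise PSD

gain = np.clip(np.abs(s_xy) / s_11, 0.0, 1.0)                # Wiener gain
print("gain near 440 Hz:", gain[np.argmin(np.abs(f - 440))].round(2))
print("gain at 6 kHz:", gain[np.argmin(np.abs(f - 6000))].round(2))
```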

  2. Measurement of the Economic Growth and Add-on of the R.M. Solow Adjusted Model

    Ion Gh. Rosca

    2007-08-01

    Full Text Available Besides the models of M. Keynes, R.F. Harrod, E. Domar, D. Romer, the Ramsey-Cass-Koopmans model, etc., the R.M. Solow model belongs to the category of models that characterize economic growth. The aim of the paper is the measurement of economic growth with, and an extension of, the adjusted R.M. Solow model.

  3. Generic Role of Polymer Supports in the Fine Adjustment of Interfacial Interactions between Solid Substrates and Model Cell Membranes.

    Rossetti, Fernanda F; Schneck, Emanuel; Fragneto, Giovanna; Konovalov, Oleg V; Tanaka, Motomu

    2015-04-21

    To understand the generic role of soft, hydrated biopolymers in adjusting interfacial interactions at biological interfaces, we designed a defined model of the cell-extracellular matrix contacts based on planar lipid membranes deposited on polymer supports (polymer-supported membranes). Highly uniform polymer supports made out of regenerated cellulose allow for the control of film thickness without changing the surface roughness and without osmotic dehydration. The complementary combination of specular neutron reflectivity and high-energy specular X-ray reflectivity yields the equilibrium membrane-substrate distances, which can quantitatively be modeled by computing the interplay of van der Waals interaction, hydration repulsion, and repulsion caused by the thermal undulation of membranes. The obtained results help to understand the role of a biopolymer in the interfacial interactions of cell membranes from a physical point of view and also open a large potential to generally bridge soft, biological matter and hard inorganic materials. PMID:25794040

  4. Performance analysis of adjustable window based FIR filter for noisy ECG Signal Filtering

    N. Mahawar

    2013-09-01

    Full Text Available Recording of the electrical activity associated with heart function is known as electrocardiography (ECG). The ECG is a quasi-periodic, rhythmic signal synchronized with the function of the heart, which acts as a generator of bioelectric events. ECG signals are low-level signals, sensitive to external contamination, and are often corrupted by noise of electrical or electrophysiological origin. The noise tends to alter the signal morphology, hindering correct diagnosis. In order to remove the unwanted noise, a digital filtering technique based on adjustable windows is proposed in this paper: a low-pass finite impulse response (FIR) filter is designed for the ECG signal using the windowing method. The results obtained from the different techniques are compared on the basis of popular signal error measures such as SNR, PRD, PRD1 and MSE.
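    Window-based FIR design of this kind is readily reproduced with standard tools; a minimal sketch follows, where the cutoff, tap count and window parameter are illustrative choices rather than the paper's settings:

```python
# Window-method low-pass FIR filter for an ECG-like signal.
# Cutoff, tap count and the Kaiser window parameter are illustrative choices.
import numpy as np
from scipy.signal import filtfilt, firwin

fs = 360                      # Hz, a common ECG sampling rate
taps = firwin(numtaps=101, cutoff=40, window=("kaiser", 8.0), fs=fs)

t = np.arange(0, 2, 1 / fs)
ecg_like = np.sin(2 * np.pi * 1.2 * t)                 # toy "heartbeat"
noisy = ecg_like + 0.3 * np.sin(2 * np.pi * 60 * t)    # powerline interference

clean = filtfilt(taps, [1.0], noisy)   # zero-phase filtering with the FIR taps
print("residual noise power:", np.mean((clean - ecg_like) ** 2).round(4))
```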

  5. Development of the adjusted nuclear cross-section library based on JENDL-3.2 for large FBR

    JNC (and PNC) had developed an adjusted nuclear cross-section library in which the results of the JUPITER experiments were reflected. Using this adjusted library, a distinct improvement in the accuracy of the nuclear design of FBR cores had been achieved. More recently, JNC has been developing a database of other integral data in addition to the JUPITER experiments, aiming at further improvements in accuracy and reliability. In 1991, the adjusted library based on JENDL-2, JFS-3-J2(ADJ91R), was developed, and it has been used in design research for FBRs. As an evaluated nuclear library, however, JENDL-3.2 is now in use. Therefore, the authors developed an adjusted library based on JENDL-3.2, called JFS-3-J3.2(ADJ98). It is known that the adjusted library based on JENDL-2 overestimated the sodium void reactivity worth by 10-20%; the adjusted library based on JENDL-3.2 is expected to solve this problem. JFS-3-J3.2(ADJ98) was produced with the same method as JFS-3-J2(ADJ91R) but used more integral parameters from the JUPITER experiments. This report also describes the estimation of design accuracy for a 600 MWe class FBR with the adjusted library JFS-3-J3.2(ADJ98). Its main nuclear design parameters (multiplication factor, burn-up reactivity loss, breeding ratio, etc.), except the sodium void reactivity worth, are almost the same as those predicted with JFS-3-J2(ADJ91R). As for the sodium void reactivity, JFS-3-J3.2(ADJ98) estimates about 4% less than JFS-3-J2(ADJ91R) because of the change of the basic nuclear library from JENDL-2 to JENDL-3.2. (author)

  6. A novel micro-accelerometer with adjustable sensitivity based on resonant tunnelling diodes

    Resonant tunnelling diodes (RTDs) exhibit a negative differential resistance effect, and their current-voltage characteristics change as a function of external stress, which is regarded as the meso-piezoresistance effect of RTDs. In this paper, a novel micro-accelerometer based on AlAs/GaAs/In0.1Ga0.9As/GaAs/AlAs RTDs is designed and fabricated as a four-beam-mass structure, and an RTD-Wheatstone bridge measurement system is established to test the basic properties of this novel accelerometer. According to the experimental results, the sensitivity of the RTD-based micro-accelerometer is adjustable over a range of three orders of magnitude as the bias voltage of the sensor changes. The largest sensitivity of this RTD-based micro-accelerometer is 560.2025 mV/g, about 10 times larger than that of a silicon-based micro piezoresistive accelerometer, while the smallest is 1.49135 mV/g. (condensed matter: electronic structure, electrical, magnetic, and optical properties)

  7. A novel micro-accelerometer with adjustable sensitivity based on resonant tunnelling diodes

    Xiong Ji-Jun; Mao Hai-Yang; Zhang Wen-Dong; Wang Kai-Qun

    2009-01-01

    Resonant tunnelling diodes (RTDs) have a negative differential resistance effect, and the current-voltage characteristics change as a function of external stress, which is regarded as the meso-piezoresistance effect of RTDs. In this paper, a novel micro-accelerometer based on AlAs/GaAs/In0.1Ga0.9As/GaAs/AlAs RTDs is designed and fabricated as a four-beam-mass structure, and an RTD-Wheatstone bridge measurement system is established to test the basic properties of this novel accelerometer. According to the experimental results, the sensitivity of the RTD-based micro-accelerometer is adjustable over a range of three orders of magnitude as the bias voltage of the sensor changes. The largest sensitivity of this RTD-based micro-accelerometer is 560.2025 mV/g, about 10 times larger than that of a silicon-based micro piezoresistive accelerometer, while the smallest is 1.49135 mV/g.

  8. Voltage adjusting characteristics in terahertz transmission through Fabry-Pérot-based metamaterials

    Jun Luo

    2015-10-01

    Full Text Available Metallic electric split-ring resonators (SRRs) with feature sizes on the micrometer scale, connected by thin metal wires, are patterned to form a periodically distributed planar array. The arrayed metallic SRRs are fabricated on an n-doped gallium arsenide (n-GaAs) layer grown directly on a semi-insulating gallium arsenide (SI-GaAs) wafer. The patterned metal microstructures and the n-GaAs layer form a Schottky diode, which supports an external voltage applied to modify the device properties. The developed architectures present typical functional metamaterial characteristics and are thus used to reveal voltage-adjustment characteristics in the transmission of terahertz waves at normal incidence. We also demonstrate the terahertz transmission characteristics of the voltage-controlled Fabry-Pérot-based metamaterial device composed of arrayed metallic SRRs. To date, many metamaterials developed in earlier works have been used to regulate the transmission amplitude or phase at specific frequencies in the terahertz range, dominated mainly by the inductance-capacitance (LC) resonance mechanism. In our work, however, an externally voltage-controlled metamaterial device is developed, and extraordinary transmission regulation characteristics based on both the Fabry-Pérot (FP) resonance and a relatively weak surface plasmon polariton (SPP) resonance in the 0.025-1.5 THz range are presented. Our research therefore shows a potential application of the dual-mode-resonance-based metamaterial for improving terahertz transmission regulation.

  9. Iterative Dense Correspondence Correction Through Bundle Adjustment Feedback-Based Error Detection

    Hess-Flores, M A; Duchaineau, M A; Goldman, M J; Joy, K I

    2009-11-23

    A novel method to detect and correct inaccuracies in a set of unconstrained dense correspondences between two images is presented. Starting with a robust, general-purpose dense correspondence algorithm, an initial pose estimate and dense 3D scene reconstruction are obtained and bundle-adjusted. Reprojection errors are then computed for each correspondence pair and used as a metric to distinguish high- from low-error correspondences. An affine neighborhood-based coarse-to-fine iterative search algorithm is then applied only to the high-error correspondences to correct their positions. Such an error detection and correction mechanism is novel for unconstrained dense correspondences, for example those not obtained through epipolar geometry-based guided matching. Results indicate that correspondences in regions with issues such as occlusions, repetitive patterns and moving objects can be identified and corrected, such that a more accurate set of dense correspondences results from the feedback-based process, as shown by more accurate pose and structure estimates.

  10. Uncertainty study of the PWR pressure vessel fluence. Adjustment of the nuclear data base

    A code system devoted to the calculation of the sensitivity and uncertainty of the neutron flux and reaction rates calculated by transport codes has been developed. Adjustment of the basic data to experimental results can be performed as well. Various sources of uncertainty can be taken into account, such as those due to uncertainties in the cross sections, response functions, fission spectrum and space distribution of the neutron source, as well as geometry and material composition uncertainties. One- as well as two-dimensional analyses can be performed. Linear perturbation theory is applied. The code system is sufficiently general to be used for various analyses in the fields of fission and fusion. The principal objective of our studies concerns the capsule dosimetry study realized in the framework of the 900 MWe PWR pressure vessel surveillance program. The analysis indicates that the present calculations, performed by the code TRIPOLI-2 using the ENDF/B-IV-based, non-perturbed neutron cross-section library in 315 energy groups, allow the neutron flux and the reaction rates to be estimated in the surveillance capsules and in the most exposed pressure vessel location; adjustment based on the comparison between calculated and measured reaction rates permits these uncertainties to be reduced. The results obtained with the adjusted iron cross sections, response functions and fission spectrum show that the agreement between calculation and experiment improved to within approximately 10%. The neutron flux deduced from the experiment is then extrapolated from the capsule to the most exposed pressure vessel location using the calculated lead factor. The uncertainty in this factor was estimated to be about 7%. (author). 39 refs., 52 figs., 30 tabs

  11. Development and Testing of a Decision Making Based Method to Adjust Automatically the Harrowing Intensity

    Roland Gerhards

    2013-05-01

    Full Text Available Harrowing is often used to reduce weed competition, generally using a constant intensity across a whole field. The efficacy of weed harrowing in wheat and barley can be optimized if site-specific conditions of soil, weed infestation and crop growth stage are taken into account. This study aimed to develop and test an algorithm to automatically adjust the harrowing intensity by varying the tine angle and number of passes. The field variability of crop leaf cover, weed density and soil density was acquired with geo-referenced sensors to investigate harrowing selectivity and crop recovery. Crop leaf cover and weed density were assessed with bispectral cameras through differential image analysis. The draught force of the soil opposite to the direction of travel was measured with an electronic load cell sensor connected to a rigid tine mounted in front of the harrow. Optimal harrowing intensity levels were derived in previously implemented experiments, based on weed control efficacy and yield gain. The assessments of crop leaf cover, weed density and soil density were combined, via rules, with the aforementioned optimal intensities in a linguistic fuzzy inference system (LFIS). The system was evaluated in two field experiments that compared constant intensities with variable intensities inferred by the system. A greater reduction in weed density could be achieved when the harrowing intensity was not kept constant across the cultivated plot. Varying the intensity tended to reduce the crop leaf cover, though slightly improving crop yield. A real-time intensity adjustment with this system is achievable if the cameras are attached at the front and at the rear or sides of the harrow.

  12. Evaluation and prediction of color-tunable organic light-emitting diodes based on carrier/exciton adjusting interlayer

    Liu, Shengqiang; Li, Jie; Du, Chunlei; Yu, Junsheng

    2015-07-01

    A color tuning index (ICT) parameter for evaluating the color change capability of color-tunable organic light-emitting diodes (CT-OLEDs) was proposed and formulated, and a series of CT-OLEDs, consisting of five different carrier/exciton adjusting interlayers (C/EALs) inserted between two complementary emitting layers, was fabricated to disclose the relationship between the ICT and the C/EALs. The results showed that the trend of the electroluminescence spectra of the CT-OLEDs accords well with the ICT values, indicating that the ICT parameter is feasible for evaluating color variation. Meanwhile, by changing the energy level and the C/EAL thickness, the optimized device with the widest color tuning range was based on an N,N'-dicarbazolyl-3,5-benzene C/EAL, exhibiting the highest ICT value of 41.2%. Based on carrier quadratic hopping theory and an exciton transfer model, two fitted ICT formulas derived from the highest occupied molecular orbital (HOMO) energy level and the triplet energy level were simulated. Finally, a color tuning prediction (CTP) model was developed to deduce the ICT from the C/EAL HOMO and triplet energy levels, and was verified with the fabricated OLEDs with five different C/EALs. We believe that the CTP model, assisted by the ICT parameter, will be helpful for fabricating high-performance CT-OLEDs with a broad range of color tuning.

  13. Evaluation and prediction of color-tunable organic light-emitting diodes based on carrier/exciton adjusting interlayer

    A color tuning index (ICT) parameter for evaluating the color change capability of color-tunable organic light-emitting diodes (CT-OLEDs) was proposed and formulated, and a series of CT-OLEDs, consisting of five different carrier/exciton adjusting interlayers (C/EALs) inserted between two complementary emitting layers, was fabricated to disclose the relationship between the ICT and the C/EALs. The results showed that the trend of the electroluminescence spectra of the CT-OLEDs accords well with the ICT values, indicating that the ICT parameter is feasible for evaluating color variation. Meanwhile, by changing the energy level and the C/EAL thickness, the optimized device with the widest color tuning range was based on an N,N′-dicarbazolyl-3,5-benzene C/EAL, exhibiting the highest ICT value of 41.2%. Based on carrier quadratic hopping theory and an exciton transfer model, two fitted ICT formulas derived from the highest occupied molecular orbital (HOMO) energy level and the triplet energy level were simulated. Finally, a color tuning prediction (CTP) model was developed to deduce the ICT from the C/EAL HOMO and triplet energy levels, and was verified with the fabricated OLEDs with five different C/EALs. We believe that the CTP model, assisted by the ICT parameter, will be helpful for fabricating high-performance CT-OLEDs with a broad range of color tuning.

  14. Capital and Portfolio Risk under Solvency Regulation Supervision: An Analysis Based on a Partial Adjustment Model of Property-Liability Companies

    王丽珍; 李秀芳

    2012-01-01

    Since 2007, capital increases and share issuance have been trending upward among China's insurance companies: the required capital size is expanding and the frequency of capital raising is rising. Although this phenomenon is inevitable, the costliness and scarcity of capital mean that both the insurance industry and the China Insurance Regulatory Commission have paid close attention to the capital-raising upsurge. Furthermore, if insurance companies cannot raise capital in time, they cannot operate normally, which endangers social stability and the interests of insurance consumers. It is therefore meaningful to study capital under solvency regulation supervision at this special stage of development. Capital serves to absorb unexpected losses; in this sense, adjustments of capital correspond to the risk level. Thus, we combine capital with risk to study the development of China's property-liability companies. Following the research paradigm of banking studies on capital structure and portfolio risk, we apply a partial adjustment model to the insurance area. Using panel data on 34 property-liability insurance companies, we estimate a simultaneous equations model by three-stage least squares (3SLS) and examine the determination of capital and portfolio risk under solvency regulation. In addition, we consider other factors within the research framework, such as the structure of business lines, asset scale, the reinsurance ratio and the return on assets. Robustness tests on five types of subsamples under two broad headings also present consistent results. Our key findings include four aspects. First of all, we find that well-capitalized insurers increase capital faster than undercapitalized insurers, which differs from the American experience. This result implies that, on account of the rapid development of the insurance industry currently, insurance ...

  15. Calculation of Credit Valuation Adjustment Based on Least Square Monte Carlo Methods

    Qian Liu

    2015-01-01

    Full Text Available Counterparty credit risk has become one of the highest-profile risks facing participants in the financial markets. Despite this, relatively little is known about how counterparty credit risk is actually priced mathematically. We examine this issue using interest rate swaps. This widely traded financial product allows us to clearly identify the risk profiles of both institutions and their counterparties. Concretely, the Hull-White model for the rate and a mean-reverting model for the default intensity have proven to correspond with reality and to be well suited for financial institutions. Besides, we find that the least squares Monte Carlo method is quite efficient in the calculation of the credit valuation adjustment (CVA), as it avoids the redundant step of generating inner scenarios; as a result, it accelerates the convergence of the CVA estimators. In the second part, we propose a new method to calculate bilateral CVA that avoids the double counting present in the existing bibliography, where several copula functions are adopted to describe the dependence of the two first-to-default times.
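    The LSMC step can be sketched for a toy swap: the time-t swap value is regressed on the simulated rate, so no nested simulation is needed. Vasicek dynamics and a flat hazard rate stand in for the paper's Hull-White and mean-reverting intensity models:

```python
# Sketch of unilateral CVA for a toy receive-float swap by least squares
# Monte Carlo. Vasicek short rate and a flat hazard rate are simplifications;
# all parameters are illustrative.
import numpy as np

rng = np.random.default_rng(3)
n_paths, n_steps, dt = 20000, 20, 0.25          # 5y quarterly toy swap
a, theta, sigma, r0 = 0.1, 0.03, 0.01, 0.02     # Vasicek parameters
K, lam, R = 0.03, 0.02, 0.4                     # fixed rate, hazard, recovery

# simulate short-rate paths
r = np.full((n_paths, n_steps + 1), r0)
for i in range(n_steps):
    r[:, i + 1] = (r[:, i] + a * (theta - r[:, i]) * dt
                   + sigma * np.sqrt(dt) * rng.standard_normal(n_paths))

disc = np.exp(-np.cumsum(r[:, 1:] * dt, axis=1))   # discount factors to each date
cash = (r[:, 1:] - K) * dt                         # toy floating-minus-fixed cashflows

cva = 0.0
for i in range(n_steps - 1):
    # realized value at t_{i+1} of the remaining cashflows, per path
    future = (cash[:, i + 1:] * disc[:, i + 1:]).sum(axis=1) / disc[:, i]
    basis = np.vander(r[:, i + 1], 3)              # regression basis in the rate
    coef, *_ = np.linalg.lstsq(basis, future, rcond=None)
    v = basis @ coef                               # LSMC value estimate (no nesting)
    ee = np.mean(np.maximum(v, 0.0) * disc[:, i])  # discounted expected exposure
    pd = np.exp(-lam * i * dt) - np.exp(-lam * (i + 1) * dt)
    cva += (1 - R) * ee * pd

print(f"toy CVA ~ {cva:.6f} per unit notional")
```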

  16. Model-based geostatistics

    Diggle, Peter J

    2007-01-01

    Model-based geostatistics refers to the application of general statistical principles of modeling and inference to geostatistical problems. This volume provides a treatment of model-based geostatistics, with emphasis on statistical methods and applications. It also features analyses of datasets from a range of scientific contexts.

  17. Models of traumatic experiences and children's psychological adjustment: the roles of perceived parenting and the children's own resources and activity.

    Punamäki, R L; Qouta, S; el Sarraj, E

    1997-08-01

    The relations between traumatic events, perceived parenting styles, children's resources, political activity, and psychological adjustment were examined among 108 Palestinian boys and girls 11-12 years of age. The results showed that exposure to traumatic events increased psychological adjustment problems directly and via two mediating paths. First, the more traumatic events children had experienced, the more negative the parenting they perceived; and the poorer the perceived parenting, the more they suffered from high neuroticism and low self-esteem. Second, the more traumatic events children had experienced, the more political activity they showed; and the more active they were, the more they suffered from psychological adjustment problems. Good perceived parenting protected children's psychological adjustment by making them less vulnerable in two ways. First, traumatic events decreased children's intellectual, creative, and cognitive resources, and a lack of resources predicted many psychological adjustment problems in a model excluding perceived parenting. Second, political activity increased psychological adjustment problems in the same model, but not in the model including good parenting. PMID:9306648

  18. Real time adjustment of slow changing flow components in distributed urban runoff models

    Borup, Morten; Grum, M.; Mikkelsen, Peter Steen

    2011-01-01

    ...model states governing the infiltrating inflow based on downstream flow measurements. The fact that the infiltration processes follow a relatively large time scale is used to estimate the part of the model residuals, at a gauged downstream location, that can be attributed to infiltration processes. This ... improvements for regular simulations as well as for up to 10-hour forecasts. The updating method reduces the impact of non-representative precipitation estimates as well as model structural errors, and leads to better overall modelling results.

  19. ADJUSTMENT FACTORS AND ADJUSTMENT STRUCTURE

    Tao Benzao

    2003-01-01

    In this paper, the adjustment factors J and R put forward by Professor Zhou Jiangwen are introduced, and the nature of these adjustment factors and their role in evaluating adjustment structure are discussed and proved.

  20. A new glacial isostatic adjustment model of the Innuitian Ice Sheet, Arctic Canada

    Simon, K. M.; James, T. S.; Dyke, A. S.

    2015-07-01

    A reconstruction of the Innuitian Ice Sheet (IIS) is developed that incorporates first-order constraints on its spatial extent and history as suggested by regional glacial geology studies. Glacial isostatic adjustment modelling of this ice sheet provides relative sea-level predictions that are in good agreement with measurements of post-glacial sea-level change at 18 locations. The results indicate peak thicknesses of the Innuitian Ice Sheet of approximately 1600 m, up to 400 m thicker than the minimum peak thicknesses estimated from glacial geology studies, but between approximately 1000 and 1500 m thinner than the peak thicknesses present in previous GIA models. The thickness history of the best-fit Innuitian Ice Sheet model developed here, termed SJD15, differs from the ICE-5G reconstruction and provides an improved fit to sea-level measurements from the lowland sector of the ice sheet. Both models provide a similar fit to relative sea-level measurements from the alpine sector. The vertical crustal motion predictions of the best-fit IIS model are in general agreement with limited GPS observations, after correction for a significant elastic crustal response to present-day ice mass change. The new model provides an approximately 2.7 m equivalent contribution to global sea-level rise, an increase of +0.6 m compared to the Innuitian portion of ICE-5G. SJD15 is qualitatively more similar to the recent ICE-6G ice sheet reconstruction, which appears to also include more spatially extensive ice cover in the Innuitian region than ICE-5G.

  1. Stochastic Dynamic Model on the Consumption – Saving Decision for Adjusting Products and Services Supply According with Consumers` Attainability

    Gabriela Prelipcean

    2014-02-01

    Full Text Available The recent crisis and turbulence have significantly changed consumer behavior, especially through its possibilities of access and satisfaction, but also through the new, dynamically flexible adjustment of the supply of goods and services. Access possibilities and consumer satisfaction should be analyzed in the broader context of corporate responsibility, including financial institutions. This contribution responds to the current situation in Romania, an emerging country strongly affected by the global crisis. Empowering producers and harmonizing their interests with those of consumers genuinely require a significant revision of the quantitative models used to study long-term consumption-saving behavior, with a new model adapted to current post-crisis conditions in Romania. Based on the general idea of the model developed by Hai, Krueger and Postlewaite (2013), we propose a new way of exploiting the results, considering the dynamics of innovative adaptation based on Brownian motion, the integration of the cyclicality concept, the stochastic shocks analyzed by Lévy, and extensive interaction with capital markets characterized by higher returns and volatility.

  2. Real-time video fusion based on multistage hashing and hybrid transformation with depth adjustment

    Zhao, Hongjian; Xia, Shixiong; Yao, Rui; Niu, Qiang; Zhou, Yong

    2015-11-01

    Concatenating multicamera videos with differing centers of projection into a single panoramic video is a critical technology for many important applications. We propose a real-time video fusion approach to create wide field-of-view video. To provide fast and accurate video registration, we propose multistage hashing to find matched feature-point pairs from coarse to fine: in the first stage, a short compact binary code is learned from all feature points, and the Hamming distance between each pair of points is computed to find candidate matches; in the second stage, a long binary code is obtained by remapping the candidate points for fine matching. To tackle the distortion and scene-depth variation of multiview frames, we build a hybrid transformation with depth adjustment, and the depth compensation between two adjacent frames extends to multiple frames in an iterative model for successive video frames. We conduct several experiments with different dynamic scenes and camera numbers to verify the performance of the proposed real-time video fusion approach.

  3. Adjustable Autonomy Testbed

    Malin, Jane T.; Schrenkenghost, Debra K.

    2001-01-01

    The Adjustable Autonomy Testbed (AAT) is a simulation-based testbed located in the Intelligent Systems Laboratory in the Automation, Robotics and Simulation Division at NASA Johnson Space Center. The purpose of the testbed is to support evaluation and validation of prototypes of adjustable autonomous agent software for control and fault management for complex systems. The AAT project has developed prototype adjustable autonomous agent software and human interfaces for cooperative fault management. This software builds on current autonomous agent technology by altering the architecture, components and interfaces for effective teamwork between autonomous systems and human experts. Autonomous agents include a planner, flexible executive, low level control and deductive model-based fault isolation. Adjustable autonomy is intended to increase the flexibility and effectiveness of fault management with an autonomous system. The test domain for this work is control of advanced life support systems for habitats for planetary exploration. The CONFIG hybrid discrete event simulation environment provides flexible and dynamically reconfigurable models of the behavior of components and fluids in the life support systems. Both discrete event and continuous (discrete time) simulation are supported, and flows and pressures are computed globally. This provides fast dynamic simulations of interacting hardware systems in closed loops that can be reconfigured during operations scenarios, producing complex cascading effects of operations and failures. Current object-oriented model libraries support modeling of fluid systems, and models have been developed of physico-chemical and biological subsystems for processing advanced life support gases. In FY01, water recovery system models will be developed.

  4. Thermo-adjustable mechanical properties of polymer, lipid-based complex fluids

    Firestone, Millicent; Lee, Sungwon

    2012-02-01

    Combined rheology (oscillatory and steady shear) and SAXS studies reveal details on the temperature-dependent, reversible mechanical properties of nonionic polymer, lipid-based complex fluids. Compositions prepared by introduction of the polymer as either a lipid conjugate or a triblock copolymer form an elastic gel as the temperature is increased to 18 °C. The network is produced from PEO chain entanglement and physical crosslinks confined within the intervening aqueous layers of a multilamellar structured lyotropic mesophase. Although the complex fluids are weak gels, tuning of the gel strength can be achieved by temperature adjustment. The sol state formed at reduced temperature arises as a consequence of the well-solvated PEO chains adopting a non-interacting, conformational state. Complex fluids prepared with the triblock copolymers exhibit greater tunability in viscoelasticity than those containing the PEGylated-lipid conjugate because of the temperature-dependent water solubility of the central PPO block. The water solubility of PPO at reduced temperatures results in the polymer being expelled from the self-assembled amphiphilic bilayer, causing collapse of the swollen lamellar structure and loss of the PEO network. At elevated temperatures, the triblock reinserts into the bilayer producing an elastic gel. These studies identify macromolecular architectures for the facile preparation of dynamic soft materials possessing a range of mechanical properties that can be tuned by modest temperature control.

  5. Adjusting Strategy Based on Motivation

    杨洪

    2012-01-01

    There exists a relationship of supply and demand between tour operators and tourists; tour operators must therefore understand tourists' behavior and motivations well in order to adjust their business strategies to meet tourists' needs. In this paper, based on as thorough an understanding as possible of both the old and new motivations of tourists, the operating strategies of tour operators are studied in a targeted way.

  6. Adjusting multistate capture-recapture models for misclassification bias: manatee breeding proportions

    Kendall, W.L.; Hines, J.E.; Nichols, J.D.

    2003-01-01

    Matrix population models are important tools for research and management of populations. Estimating the parameters of these models is an important step in applying them to real populations. Multistate capture-recapture methods have provided a useful means for estimating survival and parameters of transition between locations or life history states but have mostly relied on the assumption that the state occupied by each detected animal is known with certainty. Nevertheless, in some cases animals can be misclassified. Using multiple capture sessions within each period of interest, we developed a method that adjusts estimates of transition probabilities for bias due to misclassification. We applied this method to 10 years of sighting data for a population of Florida manatees (Trichechus manatus latirostris) in order to estimate the annual probability of transition from nonbreeding to breeding status. Some sighted females were unequivocally classified as breeders because they were clearly accompanied by a first-year calf. The remainder were classified, sometimes erroneously, as nonbreeders because an attendant first-year calf was not observed or was classified as more than one year old. We estimated a conditional breeding probability of 0.31 ± 0.04 (estimate ± 1 SE) when we ignored misclassification bias, and 0.61 ± 0.09 when we accounted for misclassification.
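    The direction of the bias is easy to illustrate with a toy calculation (the numbers are invented, and the study's multi-session estimator is far more sophisticated): if a true breeder is classified as such only with probability c, the naive estimate understates the breeding probability by exactly that factor.

```python
# Toy illustration of misclassification bias in a breeding-state estimate:
# breeders are detected as breeders only with probability c (the calf is
# missed otherwise), so naive estimates are biased low. Numbers are invented.
c = 0.5            # P(classified as breeder | true breeder), hypothetical
p_true = 0.6       # true breeding probability, hypothetical

p_obs = c * p_true                 # expected naive estimate
p_corrected = p_obs / c            # correction when c is known
print(f"naive: {p_obs:.2f}, corrected: {p_corrected:.2f}")
```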

  7. [Construction and validation of a multidimensional model of students' adjustment to college context].

    Soares, Ana Paula; Guisande, M Adelina; Diniz, António M; Almeida, Leandro S

    2006-05-01

    This article presents a model of the interaction of personal and contextual variables in the prediction of academic performance and psychosocial development of Portuguese college students. The sample consists of 560 first-year college students at the University of Minho. The path analysis results suggest that students' initial expectations of their involvement in academic life were an effective predictor of their involvement during the first year, and that the social climate of the classroom influenced their involvement, well-being and satisfaction. However, these relationships were not strong enough to influence the criterion variables in the model (academic performance and psychosocial development). Academic performance was predicted by high school grades and college entrance examination scores, and the level of psychosocial development was determined by the level of development shown at entry to college. Though more research is needed, these results point to the importance of students' pre-college characteristics when considering the quality of their adjustment to college. PMID:17296040

  8. A novel space-based observation strategy for GEO objects based on daily pointing adjustment of multi-sensors

    Hu, Yun-peng; Li, Ke-bo; Xu, Wei; Chen, Lei; Huang, Jian-yu

    2016-08-01

    The space-based visible (SBV) program has proved to have a large advantage in observing geosynchronous earth orbit (GEO) objects. Since SBV observation started in 1996, many strategies have emerged for observing GEO objects more efficiently. However, it is a big challenge to visit all GEO objects in a relatively short time because of the distribution characteristics of the GEO belt and the limited field of view (FOV) of a sensor, and it is also difficult to maintain a high daily coverage of the GEO belt throughout a whole year. In this paper, a space-based observation strategy for GEO objects is designed based on the characteristics of the GEO belt. The mathematical formula of the GEO belt is deduced and the evolution of GEO objects is illustrated. There are basically two kinds of orientation strategies for most observation satellites, i.e., earth-oriented and inertially fixed. The influences of both strategies on their observation regions are analyzed and compared with each other. A passive optical instrument with daily attitude-adjusting strategies is proposed to increase the daily coverage rate of GEO objects over a whole year. Furthermore, in order to observe more GEO objects in a relatively short time, a strategy using a satellite with multiple sensors is proposed. The installation parameters of the different sensors are optimized; more than 98% of GEO satellites can be observed every day, and almost all GEO satellites can be observed every two days, with 3 sensors (FOV: 6° × 6°) on the satellite under the daily pointing adjustment strategy over a whole year.

  9. Dynamic capacity adjustment for virtual-path based networks using neuro-dynamic programming

    Şahin, Cem

    2003-01-01

    Cataloged from PDF version of article. Dynamic capacity adjustment is the process of updating the capacity reservation of a virtual path via signalling in the network. There are two important issues to be considered: bandwidth (resource) utilization and signaling traffic. Changing the capacity too frequently will lead to efficient usage of resources but has the disadvantage of increasing signaling traffic among the network elements. On the other hand, if the capacity is adjust...

  10. Adjustments of the TaD electron density reconstruction model with GNSS-TEC parameters for operational application purposes

    Belehaki Anna

    2012-12-01

    Full Text Available Validation results for the latest version of the TaD model (TaDv2) show realistic reconstruction of electron density profiles (EDPs), with an average error of 3 TECU, similar to the error obtained for GNSS-TEC-derived parameters. The work presented here aims to further improve the accuracy of the TaD topside reconstruction by adjusting the TEC parameter calculated by the TaD model to the TEC parameter derived from RINEX files provided by GNSS receivers co-located with the Digisondes. The performance of the new version is tested during a storm period, demonstrating further improvements with respect to the previous version. Statistical comparison of modeled and observed TEC confirms the validity of the proposed adjustment. A significant benefit of the proposed upgrade is that it facilitates the real-time implementation of TaD. The model needs a reliable measure of the scale height at the peak height, which is supposed to be provided by the Digisondes. Often, the automatic scaling software fails to correctly calculate the scale height at the peak, Hm, due to interference in the received signal; consequently, the model-estimated topside scale height is wrongly calculated, leading to unrealistic modeled EDPs. The proposed TEC adjustment forces the model to correctly reproduce the topside scale height despite inaccurate values of Hm. This adjustment is very important for the application of TaD in an operational environment.

  11. An assessment of the ICE6G_C(VM5a) glacial isostatic adjustment model

    Purcell, A.; Tregoning, P.; Dehecq, A.

    2016-05-01

    The recent release of the next-generation global ice history model, ICE6G_C(VM5a), is likely to be of interest to a wide range of disciplines including oceanography (sea level studies), space gravity (mass balance studies), glaciology, and, of course, geodynamics (Earth rheology studies). In this paper we make an assessment of some aspects of the ICE6G_C(VM5a) model and show that the published present-day radial uplift rates are too high along the eastern side of the Antarctic Peninsula (by ~8.6 mm/yr) and beneath the Ross Ice Shelf (by ~5 mm/yr). Furthermore, the published spherical harmonic coefficients—which are meant to represent the dimensionless present-day changes due to glacial isostatic adjustment (GIA)—contain excessive power for degree ≥90, do not agree with physical expectations and do not represent accurately the ICE6G_C(VM5a) model. We show that the excessive power in the high-degree terms produces erroneous uplift rates when the empirical relationship of Purcell et al. (2011) is applied, but when correct Stokes coefficients are used, the empirical relationship produces excellent agreement with the fully rigorous computation of the radial velocity field, subject to the caveats first noted by Purcell et al. (2011). Using the Australian National University (ANU) group's CALSEA software package, we recompute the present-day GIA signal for the ice thickness history and Earth rheology used by Peltier et al. (2015) and provide dimensionless Stokes coefficients that can be used to correct satellite altimetry observations for GIA over oceans and by the space gravity community to separate GIA and present-day mass balance change signals. We denote the new data sets as ICE6G_ANU.

  12. Research on the Low-Carbon Transition of Tianjin's Industrial Structure Based on a Linear Programming Model

    吕明元; 李彦超; 宫璐一

    2014-01-01

    Since reform and opening up, Tianjin's economy has been developing rapidly, yet under an inefficient model of economic growth. In July 2010, Tianjin was designated one of China's low-carbon pilot cities; it must therefore adjust its industrial structure and put a low-carbon transition strategy into practice. This article establishes a multi-objective linear programming model and determines the parameters of the economic development objectives and the carbon emission targets. By solving the model, the carbon emissions of Tianjin's three industrial sectors in 2015 are predicted, and it is expected that Tianjin's energy-saving and emission-reduction targets can be achieved. Finally, based on the model results, suggestions are made for optimizing Tianjin's industrial structure and for supporting policies.
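    After scalarizing the objectives or constraining emissions, a model of this shape reduces to a standard linear program; a toy sketch with three invented sectors follows (not the paper's Tianjin data):

```python
# Toy emissions-constrained output LP: maximize value added across three
# industries subject to a carbon cap and capacity bounds. All numbers are
# invented for illustration; the paper's model and data are not reproduced.
import numpy as np
from scipy.optimize import linprog

value_added = np.array([0.2, 0.5, 0.7])    # per unit output: primary/secondary/tertiary
carbon = np.array([0.8, 1.5, 0.3])         # emissions per unit output
carbon_cap = 120.0

# linprog minimizes, so negate the objective to maximize value added
res = linprog(
    c=-value_added,
    A_ub=[carbon],
    b_ub=[carbon_cap],
    bounds=[(10, 60), (20, 80), (30, 150)],   # sector output bounds
)
print("optimal outputs:", res.x.round(1))
print("total value added:", -res.fun.round(1))
print("carbon used:", (carbon @ res.x).round(1))
```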

  13. Modeling and Dynamic Simulation of the Adjust and Control System Mechanism for Reactor CAREM-25

    The adjust and control system mechanism, MSAC, is an advanced and, in some senses, unique hydromechanical device. The efforts in modeling this mechanism are aimed at: getting a deep understanding of the physical phenomena involved; identifying the set of parameters relevant to the dynamics of the system; allowing numerical simulation of the system; predicting the behavior of the mechanism in conditions other than those obtainable within the range of operation of the experimental setup (CEM); and helping to define the design of the CAPEM (the loop for testing the mechanism under high-pressure/high-temperature conditions). Thanks to the close interaction between the mechanical engineers, the experimenters, and the modelists that compose the MSAC task force, it has been possible to suggest improvements not only in the design of the mechanism, but also in the design and operation of the pulse generator (GDP) and the rest of the CEM. This effort has led to a design mature enough to be tested in a high-pressure loop.

  14. Assessment of an adjustment factor to model radar range dependent error

    Sebastianelli, S.; Russo, F.; Napolitano, F.; Baldini, L.

    2012-09-01

    Quantitative radar precipitation estimates are affected by errors from many causes, such as radar miscalibration, range degradation, attenuation, ground clutter, variability of the Z-R relation, variability of the drop size distribution, vertical air motion, anomalous propagation and beam blocking. Range degradation (including beam broadening and sampling of precipitation at an increasing altitude) and signal attenuation determine a range-dependent behavior of the error. The aim of this work is to model the range-dependent error through an adjustment factor derived from the trend of the G/R ratio with range, where G and R are the corresponding rain gauge and radar rainfall amounts computed at each rain gauge location. Since range degradation and signal attenuation effects are negligible close to the radar, results show that within 40 km of the radar the overall range error is independent of the distance from Polar 55C and no range correction is needed. Nevertheless, up to this distance, the G/R ratio can show a concave trend with range, which is due to interception of the melting layer by the radar beam during stratiform events.

  15. Homoclinic connections and subcritical Neimark bifurcation in a duopoly model with adaptively adjusted productions

    Agliari, Anna [Dipartimento di Scienze Economiche e Sociali, Universita Cattolica del Sacro Cuore, Via Emilia Parmense, 84, 29100 Piacenza (Italy)]. E-mail: anna.agliari@unicatt.it

    2006-08-15

    In this paper we study some global bifurcations arising in Puu's oligopoly model when we assume that the producers do not adjust to the best reply but use an adaptive process to obtain the new production at each step. Such bifurcations cause the appearance of a pair of closed invariant curves, one attracting and one repelling, the latter being involved in the subcritical Neimark bifurcation of the Cournot equilibrium point. The aim of the paper is to highlight the relationship between the global bifurcations causing the appearance/disappearance of two invariant closed curves and the homoclinic connections of some saddle cycle, already conjectured in [Agliari A, Gardini L, Puu T. Some global bifurcations related to the appearance of closed invariant curves. Comput Math Simul 2005;68:201-19]. We refine the results obtained in that paper, showing that the appearance/disappearance of closed invariant curves is not necessarily related to the existence of an attracting cycle. The characterization of the periodicity tongues (i.e. regions of the parameter space in which an attracting cycle exists) associated with a subcritical Neimark bifurcation is also discussed.

  17. Propagation of biases in climate models from the synoptic to the regional scale: Implications for bias adjustment

    Addor, Nans; Rohrer, Marco; Furrer, Reinhard; Seibert, Jan

    2016-03-01

    Bias adjustment methods usually do not account for the origins of biases in climate models and instead perform empirical adjustments. Biases in the synoptic circulation are for instance often overlooked when postprocessing regional climate model (RCM) simulations driven by general circulation models (GCMs). Yet considering atmospheric circulation helps to establish links between the synoptic and the regional scale, and thereby provides insights into the physical processes leading to RCM biases. Here we investigate how synoptic circulation biases impact regional climate simulations and influence our ability to mitigate biases in precipitation and temperature using quantile mapping. We considered 20 GCM-RCM combinations from the ENSEMBLES project and characterized the dominant atmospheric flow over the Alpine domain using circulation types. We report in particular a systematic overestimation of the frequency of westerly flow in winter. We show that it contributes to the generalized overestimation of winter precipitation over Switzerland, and this wet regional bias can be reduced by improving the simulation of synoptic circulation. We also demonstrate that statistical bias adjustment relying on quantile mapping is sensitive to circulation biases, which leads to residual errors in the postprocessed time series. Overall, decomposing GCM-RCM time series using circulation types reveals connections missed by analyses relying on monthly or seasonal values. Our results underscore the necessity to better diagnose process misrepresentation in climate models to progress with bias adjustment and impact modeling.
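
    Empirical quantile mapping, the bias-adjustment method examined here, can be sketched minimally (our simplification, not the ENSEMBLES postprocessing chain itself): each simulated value is mapped through the calibration-period CDFs of model and observations.

```python
import numpy as np

def quantile_map(sim_cal, obs_cal, sim_new):
    """Adjust simulated values so their distribution matches observations
    over a calibration period (empirical quantile mapping sketch)."""
    quantiles = np.linspace(0.01, 0.99, 99)
    sim_q = np.quantile(sim_cal, quantiles)   # simulated calibration CDF
    obs_q = np.quantile(obs_cal, quantiles)   # observed calibration CDF
    # Rank each new value in the simulated CDF, then read off the observed CDF.
    probs = np.interp(sim_new, sim_q, quantiles)
    return np.interp(probs, quantiles, obs_q)
```

    As the abstract stresses, a mapping of this kind corrects distributions, not circulation: biases in the frequency of, e.g., westerly flow leave residual errors in the postprocessed series.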

  18. Subthreshold-swing-adjustable tunneling-field-effect-transistor-based random-access memory for nonvolatile operation

    Huh, In; Cheon, Woo Young; Choi, Woo Young

    2016-04-01

    A subthreshold-swing-adjustable tunneling-field-effect-transistor-based random-access memory (SAT RAM) has been proposed and fabricated for low-power nonvolatile memory applications. The proposed SAT RAM cell demonstrates an adjustable subthreshold swing (SS) depending on the stored information: small SS in the erase state ("1" state) and large SS in the program state ("0" state). Thus, SAT RAM cells can achieve a low read voltage (Vread) with a large memory window, in addition to effective suppression of ambipolar behavior. These unique features of the SAT RAM originate from the locally stored charge, which modulates the tunneling barrier width (Wtun) of the source-to-channel tunneling junction.

  19. Evaluation and prediction of color-tunable organic light-emitting diodes based on carrier/exciton adjusting interlayer

    Liu, Shengqiang; Li, Jie; Yu, Junsheng, E-mail: jsyu@uestc.edu.cn [State Key Laboratory of Electronic Thin Films and Integrated Devices, School of Optoelectronic Information, University of Electronic Science and Technology of China (UESTC), Chengdu 610054 (China); Du, Chunlei [Chengdu Institute of Optics and Electronics, Chinese Academy of Sciences, Chengdu 610209 (China)

    2015-07-27

    A color tuning index (I_CT) parameter for evaluating the color change capability of color-tunable organic light-emitting diodes (CT-OLEDs) was proposed and formulated. A series of CT-OLEDs, consisting of five different carrier/exciton adjusting interlayers (C/EALs) inserted between two complementary emitting layers, was fabricated and used to disclose the relationship between I_CT and the C/EALs. The results showed that the trend of the electroluminescence spectra of the CT-OLEDs accords well with the I_CT values, indicating that the I_CT parameter is feasible for evaluating color variation. Meanwhile, by changing the energy level and C/EAL thickness, the optimized device with the widest color tuning range was based on an N,N′-dicarbazolyl-3,5-benzene C/EAL, exhibiting the highest I_CT value of 41.2%. Based on carrier quadratic hopping theory and an exciton transfer model, two fitted I_CT formulas derived from the highest occupied molecular orbital (HOMO) energy level and the triplet energy level were simulated. Finally, a color tuning prediction (CTP) model was developed to deduce I_CT from the C/EAL HOMO and triplet energy levels, and verified against the fabricated OLEDs with five different C/EALs. We believe that the CTP model, assisted by the I_CT parameter, will be helpful for fabricating high-performance CT-OLEDs with a broad color tuning range.

  20. Use and Impact of Covariance Data in the Japanese Latest Adjusted Library ADJ2010 Based on JENDL-4.0

    Yokoyama, K., E-mail: yokoyama.kenji09@jaea.go.jp; Ishikawa, M.

    2015-01-15

    The current status of covariance applications to fast reactor analysis and design in Japan is summarized. In Japan, the covariance data are mainly used for three purposes: (1) to quantify the uncertainty of nuclear core parameters, (2) to identify important nuclides, reactions and energy ranges which dominate the uncertainty of core parameters, and (3) to improve the accuracy of core design values by adopting integral data such as critical experiments and power reactor operation data. For the last purpose, cross section adjustment based on the Bayesian theorem is used. After the release of JENDL-4.0, a development project for the new adjusted group-constant set ADJ2010 was started in 2010 and completed in 2013. In the present paper, the final results of ADJ2010 are briefly summarized. In addition, the adjustment results of ADJ2010 are discussed from the viewpoint of the use and impact of nuclear data covariances, focusing on 239Pu capture cross section alterations. For this purpose three kinds of indices, called "degree of mobility," "adjustment motive force," and "adjustment potential," are proposed.
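
    The cross section adjustment mentioned here is the familiar generalized-least-squares (Bayesian) update; a sketch in our own notation, not the ADJ2010 codes: prior cross sections T with covariance M, sensitivity matrix S of the integral parameters, measured integral values E with covariance V, and calculated values C.

```python
import numpy as np

def adjust(T, M, S, E, C, V):
    """GLS / Bayesian cross section adjustment sketch."""
    K = M @ S.T @ np.linalg.inv(S @ M @ S.T + V)   # Kalman-like gain
    T_adj = T + K @ (E - C)                        # posterior cross sections
    M_adj = M - K @ S @ M                          # reduced posterior covariance
    return T_adj, M_adj
```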

  2. Model for U.S. Farm Financial Adjustment Analysis of Alternative Public Policies

    Doye, Damona G.; Robert W Jolly

    1987-01-01

    As the agricultural sector adjusts to financial stress and constantly changing national and international policies, additional structural changes are expected. The capacity for adjustment through existing agricultural asset markets depends on both the extent of farm restructuring and the resiliency of the markets and agricultural institutions. Research is needed to estimate farm financial restructuring needs and the expected duration of the restructuring process. Projecting the magnitude of c...

  3. Micro-econometric models for analysing capital adjustment on Dutch pig farms

    Gardebroek, C.

    2001-01-01

    Farmers operate their business in a dynamic environment. Fluctuating prices, evolving agricultural and environmental policies, technological change and increasing consumer demands for product quality (e.g. with respect to environmental friendly production methods, animal welfare and food safety) frequently require adjustment of production and input levels on individual farms. Quantities of variable production factors like animal feed or pesticides can usually be adjusted easily together with ...

  4. Improvement for Speech Signal based on Post Wiener Filter and Adjustable Beam-Former

    Xiaorong Tong; Xiangfeng Meng

    2013-01-01

    In this study, a two-stage filter structure is introduced for speech enhancement. The first stage is an adjustable filter-and-sum beam-former with a four-microphone array. Control of the beam-forming filter is realized by adjusting only a single control variable. Unlike an adaptive beam-forming filter, the proposed structure does not introduce adaptive error noise and thus does not complicate the second stage of speech signal processing. The second stage o...

  5. Convexity Adjustments

    M. Gaspar, Raquel; Murgoci, Agatha

    2010-01-01

    A convexity adjustment (or convexity correction) in fixed income markets arises when one uses prices of standard (plain vanilla) products plus an adjustment to price nonstandard products. We explain the basic and appealing idea behind the use of convexity adjustments and focus on the situations o...

  7. Role of atmospheric adjustments in the tropical Indian Ocean warming during the 20th century in climate models

    Du, Yan; Xie, Shang-Ping

    2008-04-01

    The tropical Indian Ocean has been warming steadily since the 1950s, a trend simulated by a large ensemble of climate models. In the models, changes in net surface heat flux are small and the warming is trapped in the top 125 m. Analysis of the model output suggests the following quasi-equilibrium adjustments among the various surface heat flux components. The warming is triggered by the greenhouse gas-induced increase in downward longwave radiation, and amplified by the water vapor feedback and by atmospheric adjustments, such as weakened winds, that act to suppress turbulent heat flux from the ocean. The sea surface temperature dependency of evaporation is the major damping mechanism. The simulated changes in surface solar radiation vary considerably among models and are highly correlated with inter-model variability in the SST trend, illustrating the need to reduce uncertainties in cloud simulation.

  8. DaMoScope and its internet graphics for the visual control of adjusting mathematical models describing experimental data

    Belousov, V. I.; Ezhela, V. V.; Kuyanov, Yu. V.; Tkachenko, N. P.

    2015-12-01

    The experience of using the dynamic atlas of the experimental data and mathematical models of their description in the problems of adjusting parametric models of observable values depending on kinematic variables is presented. The functional possibilities of an image of a large number of experimental data and the models describing them are shown by examples of data and models of observable values determined by the amplitudes of elastic scattering of hadrons. The Internet implementation of an interactive tool DaMoScope and its interface with the experimental data and codes of adjusted parametric models with the parameters of the best description of data are schematically shown. The DaMoScope codes are freely available.

  9. Linear identification and model adjustment of a PEM fuel cell stack

    Kunusch, C.; Puleston, P.F.; More, J.J. [LEICI, Departamento de Electrotecnia, Universidad Nacional de La Plata, calle 1 esq. 47 s/n, 1900 La Plata (Argentina); Consejo de Investigaciones Cientificas y Tecnicas (CONICET) (Argentina); Husar, A. [Institut de Robotica i Informatica Industrial (CSIC-UPC), c/ Llorens i Artigas 4-6, 08028 Barcelona (Spain); Mayosky, M.A. [LEICI, Departamento de Electrotecnia, Universidad Nacional de La Plata, calle 1 esq. 47 s/n, 1900 La Plata (Argentina); Comision de Investigaciones Cientificas (CIC), Provincia de Buenos Aires (Argentina)

    2008-07-15

    In the context of fuel cell stack control a major challenge is modeling the interdependence of various complex subsystem dynamics. In many cases, the interaction of states is modeled through several look-up tables, decision blocks and piecewise continuous functions. Many internal variables are inaccessible to measurement and cannot be used in control algorithms. To make significant contributions in this area, it is necessary to develop reliable models for control and design purposes. In this paper, a linear model based on experimental identification of a 7-cell stack was developed. The procedure followed to obtain a linear model of the system consisted of performing spectroscopy tests on four different single-input single-output subsystems. The inputs considered for the tests were the stack current and the cathode oxygen flow rate, while the measured outputs were the stack voltage and the cathode total pressure. The resulting model can be used either for model-based control design or for on-line analysis and error detection.
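
    As a hedged illustration of this kind of frequency-domain identification, the sketch below fits a first-order transfer function G(s) = K/(τs + 1) to frequency-response points; the synthetic "measured" data and the first-order structure are our assumptions, not the paper's model.

```python
import numpy as np
from scipy.optimize import least_squares

w = np.logspace(-2, 2, 40)                 # test frequencies, rad/s
G_meas = 2.0 / (1j * w * 0.5 + 1)          # synthetic "measured" response

def residual(p):
    K, tau = p
    G = K / (1j * w * tau + 1)             # candidate first-order model
    return np.abs(G - G_meas)              # complex misfit magnitude

fit = least_squares(residual, x0=[1.0, 1.0])
K_hat, tau_hat = fit.x                     # recovers K=2, tau=0.5 here
```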

  10. Internal Working Models and Adjustment of Physically Abused Children: The Mediating Role of Self-Regulatory Abilities

    Hawkins, Amy L.; Haskett, Mary E.

    2014-01-01

    Background: Abused children's internal working models (IWM) of relationships are known to relate to their socioemotional adjustment, but mechanisms through which negative representations increase vulnerability to maladjustment have not been explored. We sought to expand the understanding of individual differences in IWM of abused children and…

  11. Seat Adjustment Design of an Intelligent Robotic Wheelchair Based on the Stewart Platform

    Po Er Hsu

    2013-03-01

    A wheelchair user makes direct contact with the wheelchair seat, which serves as the interface between the user and the wheelchair, for much of any given day. Seat adjustment design is of crucial importance in providing proper seating posture and comfort. This paper presents a multiple-DOF (degrees of freedom) seat adjustment mechanism, which is intended to increase the independence of the wheelchair user while maintaining a concise structure, light weight, and intuitive control interface. This four-axis Stewart platform is capable of heaving, pitching, and swaying to provide seat elevation, tilt-in-space, and sideways movement functions. The geometry and types of joints of this mechanism are carefully arranged so that only one actuator needs to be controlled, enabling the wheelchair user to adjust the seat by simply pressing a button. The seat is also equipped with soft pressure-sensing pads to provide pressure management by adjusting the seat mechanism once continuous and concentrated pressure is detected. Finally, compared with a manual wheelchair, the proposed mechanism demonstrated easier and more convenient operation, with less effort needed for transfer assistance.

  12. Adjustable Robust Optimizations with Decision Rules Based on Inexact Revealed Data

    de Ruiter, F.J.C.T.; Ben-Tal, A.; Brekelmans, R.C.M.; den Hertog, D.

    2014-01-01

    Adjustable robust optimization (ARO) is a technique to solve dynamic (multistage) optimization problems. In ARO, the decision in each stage is a function of the information accumulated from the previous periods on the values of the uncertain parameters. This information, however, is often

  13. Architectures and Evaluation for Adjustable Control Autonomy for Space-Based Life Support Systems

    Malin, Jane T.; Schreckenghost, Debra K.

    2001-01-01

    In the past five years, a number of automation applications for control of crew life support systems have been developed and evaluated in the Adjustable Autonomy Testbed at NASA's Johnson Space Center. This paper surveys progress on an adjustable autonomous control architecture for situations where software and human operators work together to manage anomalies and other system problems. When problems occur, the level of control autonomy can be adjusted, so that operators and software agents can work together on diagnosis and recovery. In 1997 adjustable autonomy software was developed to manage gas transfer and storage in a closed life support test. Four crewmembers lived and worked in a chamber for 91 days, with both air and water recycling. CO2 was converted to O2 by gas processing systems and wheat crops. With the automation software, significantly fewer hours were spent monitoring operations. System-level validation testing of the software by interactive hybrid simulation revealed problems both in software requirements and implementation. Since that time, we have been developing multi-agent approaches for automation software and human operators, to cooperatively control systems and manage problems. Each new capability has been tested and demonstrated in realistic dynamic anomaly scenarios, using the hybrid simulation tool.

  14. Improving the global applicability of the RUSLE modeladjustment of the topographical and rainfall erosivity factors

    V. Naipal

    2015-03-01

    Large uncertainties exist in estimated rates and the extent of soil erosion by surface runoff on a global scale, and this limits our understanding of the global impact that soil erosion might have on agriculture and climate. The Revised Universal Soil Loss Equation (RUSLE) model is, due to its simple structure and empirical basis, a frequently used tool for estimating average annual soil erosion rates at regional to global scales. However, large spatial scale applications often rely on coarse data input, which is not compatible with the local scale at which the model is parameterized. This study aimed at providing the first steps in improving the global applicability of the RUSLE model in order to derive more accurate global soil erosion rates. We adjusted the topographical and rainfall erosivity factors of the RUSLE model and compared the resulting soil erosion rates to extensive empirical databases on soil erosion from the USA and Europe. Adjusting the topographical factor required scaling of slope according to the fractal method, which resulted in improved topographical detail in a coarse resolution global digital elevation model. Applying the linear multiple regression method to adjust rainfall erosivity for various climate zones resulted in values that compare well with high resolution erosivity data for different regions. However, this method needs to be extended to tropical climates, for which erosivity is biased due to the lack of high resolution erosivity data. After applying the adjusted and the unadjusted versions of the RUSLE model on a global scale we find that the adjusted RUSLE model not only shows a higher global mean soil erosion rate but also more variability in the soil erosion rates. Comparison to empirical datasets of the USA and Europe shows that the adjusted RUSLE model is able to decrease the very high erosion rates in hilly regions that are observed in the unadjusted RUSLE model results. Although there are still
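
    For orientation, the RUSLE product and a classic LS formulation can be sketched as below (hedged: the paper's fractal slope rescaling and regression-based erosivity adjustment are not reproduced; thresholds and values are illustrative).

```python
import numpy as np

def ls_factor(lam, beta_deg):
    """Classic Wischmeier-Smith LS factor from slope length lam (m) and
    slope angle beta; the length exponent m is a common simplification."""
    beta = np.radians(beta_deg)
    m = np.where(np.tan(beta) >= 0.05, 0.5,
        np.where(np.tan(beta) >= 0.03, 0.4, 0.3))
    return (lam / 22.13) ** m * (65.41 * np.sin(beta) ** 2
                                 + 4.56 * np.sin(beta) + 0.065)

# RUSLE: A = R * K * LS * C * P (illustrative factor values)
R, K, C, P = 900.0, 0.03, 0.2, 1.0
A = R * K * ls_factor(lam=120.0, beta_deg=8.0) * C * P
```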

  15. Modeling and simulation of M/M/c queuing pharmacy system with adjustable parameters

    Rashida, A. R.; Fadzli, Mohammad; Ibrahim, Safwati; Goh, Siti Rohana

    2016-02-01

    This paper studies discrete event simulation (DES) as a computer-based modelling approach that imitates the real system of a pharmacy unit. M/M/c queuing theory is used to model and analyse the characteristics of the queuing system at the pharmacy unit of Hospital Tuanku Fauziah, Kangar, in Perlis, Malaysia. The input of this model is based on statistical data collected over 20 working days in June 2014. Currently, patient waiting time at the pharmacy unit is more than 15 minutes. The actual operation of the pharmacy unit is a mixed queuing server with an M/M/2 queuing model, where the pharmacists are the servers. The DES approach and ProModel simulation software are used to simulate the queuing model and to propose improvements to the queuing system of this pharmacy. Waiting time for each server is analysed, showing that Counters 3 and 4 have the highest waiting times, 16.98 and 16.73 minutes. Three scenarios, M/M/3, M/M/4 and M/M/5, are simulated, and waiting times for the actual and experimental queuing models are compared. The simulation results show that adding a server (pharmacist) reduces patient waiting time considerably: almost 50% of the average patient waiting time is eliminated when one pharmacist is added to the counters. However, it is not necessary to fully utilize all counters, because even though M/M/4 and M/M/5 produce further reductions in patient waiting time, this is ineffective since Counter 5 is rarely used.
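
    The M/M/c quantities used in such a comparison come from the standard Erlang C formulas; a textbook sketch (arrival and service rates illustrative, not the hospital's):

```python
import math

def erlang_c(lam, mu, c):
    """Probability an arriving patient must wait (Erlang C)."""
    a = lam / mu                          # offered load
    rho = a / c                           # server utilisation, must be < 1
    s = sum(a**k / math.factorial(k) for k in range(c))
    top = a**c / (math.factorial(c) * (1 - rho))
    return top / (s + top)

def mean_wait(lam, mu, c):
    """Mean queue wait: Wq = C(c, a) / (c*mu - lam)."""
    return erlang_c(lam, mu, c) / (c * mu - lam)

# Illustrative check: a third server cuts the wait sharply (~4.3 -> ~0.3).
print(mean_wait(lam=1.8, mu=1.0, c=2), mean_wait(lam=1.8, mu=1.0, c=3))
```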

  16. An effect of physical activity-based recreation programs on children's optimism, humor styles, and school life adjustment.

    Koo, Jae-Eun; Lee, Gwang-Uk

    2015-06-01

    The purpose of this study is to identify the effect of participation in physical activity-based recreation programs on children's optimism, humor styles, and school life adjustment. To this end, 190 senior elementary school students who participated in physical activity-based recreation in metropolitan areas in 2014 were sampled. Questionnaires were used as the research instrument, and reliability analysis, factor analysis, correlation analysis, and multiple regression analysis were conducted using SPSS 18.0. The results are as follows: First, regarding the effect of participation on optimism, participation frequency and participation intensity had an effect on optimism, while participation period had a significant effect on positivity among the sub-factors of optimism. Second, participation in physical activity-based recreation programs had a significant effect on humor styles. Third, regarding the effect of participation on school life adjustment, participation period and participation intensity had a significant effect on school life adjustment, while participation frequency had a significant effect on regulation-observance and school life satisfaction. PMID:26171384

  17. Data Assimilation and Adjusted Spherical Harmonic Model of VTEC Map over Thailand

    Klinngam, Somjai; Maruyama, Takashi; Tsugawa, Takuya; Ishii, Mamoru; Supnithi, Pornchai; Chiablaem, Athiwat

    2016-07-01

    The global navigation satellite system (GNSS) and high frequency (HF) communication are vulnerable to ionospheric irregularities, especially when the signal travels through the low-latitude region and around the magnetic equator, known as the equatorial ionization anomaly (EIA) region. In order to study ionospheric effects on communications performance in this region, a regional map of the observed total electron content (TEC) can show the characteristics and irregularities of the ionosphere. In this work, we develop a two-dimensional (2D) map of vertical TEC (VTEC) over Thailand using the adjusted spherical harmonic model (ASHM) and a data assimilation technique. We calculate the VTEC from the receiver independent exchange (RINEX) files recorded by dual-frequency global positioning system (GPS) receivers on July 8th, 2012 (a quiet day) at 12 stations around Thailand: 0°N to 25°N and 95°E to 110°E. These stations are managed by the Department of Public Works and Town & Country Planning (DPT), Thailand, the South East Asia Low-latitude Ionospheric Network (SEALION) project operated by the National Institute of Information and Communications Technology (NICT), Japan, and King Mongkut's Institute of Technology Ladkrabang (KMITL). We compute the median observed VTEC (OBS-VTEC) in grids with a spatial resolution of 2.5°x5° in latitude and longitude and a time resolution of 2 hours. We assimilate the OBS-VTEC with the VTEC estimated from the International Reference Ionosphere model (IRI-VTEC) as well as the ionosphere map exchange (IONEX) files provided by the International GNSS Service (IGS-VTEC). The results show that the estimation of the 15-degree ASHM can be improved when both IRI-VTEC and IGS-VTEC are weighted by latitude-dependent factors before assimilation with the OBS-VTEC. However, the IRI-VTEC assimilation improves the ASHM estimation more than the IGS-VTEC assimilation. Acknowledgment: This work is partially funded by the
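
    A generic least-squares spherical-harmonic surface fit gives the flavor of the ASHM estimation (degree 4 here for brevity, versus degree 15 in the abstract; the latitude-dependent weighting and assimilation steps are not reproduced, and the data are synthetic):

```python
import numpy as np
from scipy.special import lpmv

def design_matrix(lat, lon, nmax=4):
    """Spherical-harmonic basis evaluated at (lat, lon) in radians."""
    cols = []
    for n in range(nmax + 1):
        for m in range(n + 1):
            p = lpmv(m, n, np.sin(lat))        # associated Legendre P_n^m
            cols.append(p * np.cos(m * lon))
            if m > 0:
                cols.append(p * np.sin(m * lon))
    return np.column_stack(cols)

def fit_vtec(lat, lon, vtec, nmax=4):
    A = design_matrix(lat, lon, nmax)
    coef, *_ = np.linalg.lstsq(A, vtec, rcond=None)
    return coef

# Synthetic stations over the Thai sector, fitted by least squares.
lat = np.radians(np.random.uniform(0, 25, 200))
lon = np.radians(np.random.uniform(95, 110, 200))
vtec = 30 + 5 * np.sin(3 * lat) + np.random.normal(0, 1, 200)   # TECU
coef = fit_vtec(lat, lon, vtec)
```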

  18. Uncertainties in Tidally Adjusted Estimates of Sea Level Rise Flooding (Bathtub Model) for the Greater London

    Ali P. Yunus; Ram Avtar; Steven Kraines; Masumi Yamamuro; Fredrik Lindberg; C. S. B. Grimmond

    2016-01-01

    Sea-level rise (SLR) from global warming may have severe consequences for coastal cities, particularly when combined with predicted increases in the strength of tidal surges. Predicting the regional impact of SLR flooding is strongly dependent on the modelling approach and accuracy of topographic data. Here, the areas under risk of sea water flooding for London boroughs were quantified based on the projected SLR scenarios reported in Intergovernmental Panel on Climate Change (IPCC) fifth asse...

  20. Filling Gaps in the Acculturation Gap-Distress Model: Heritage Cultural Maintenance and Adjustment in Mexican-American Families.

    Telzer, Eva H; Yuen, Cynthia; Gonzales, Nancy; Fuligni, Andrew J

    2016-07-01

    The acculturation gap-distress model purports that immigrant children acculturate faster than do their parents, resulting in an acculturation gap that leads to family and youth maladjustment. However, empirical support for the acculturation gap-distress model has been inconclusive. In the current study, 428 Mexican-American adolescents (50.2 % female) and their primary caregivers independently completed questionnaires assessing their levels of American and Mexican cultural orientation, family functioning, and youth adjustment. Contrary to the acculturation gap-distress model, acculturation gaps were not associated with poorer family or youth functioning. Rather, adolescents with higher levels of Mexican cultural orientations showed positive outcomes, regardless of their parents' orientations to either American or Mexican cultures. Findings suggest that youths' heritage cultural maintenance may be most important for their adjustment. PMID:26759225

  1. The adjusted churn: an index of competitive balance for sports leagues based on changes in team standings over time

    Daniel Mizak; Anthony Stair; John Neral

    2007-01-01

    This paper introduces an index called the adjusted churn, designed to measure competitive balance in sports leagues based on changes in team standings over time. This is a simple yet powerful index that varies between zero and one. A value of zero indicates no change in league standings from year to year and therefore minimal competitive balance. A value of one indicates the maximum possible turnover in league standings from year to year and therefore a high level of competitive balance. Appl...
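
    One way to formalize the index described here (hedged; the paper's exact normalization should be checked against the original): churn is the mean absolute change in standings from one season to the next, and the adjusted churn divides it by the churn of a complete reversal of the table, so the index lies in [0, 1].

```python
import numpy as np

def adjusted_churn(prev_rank, curr_rank):
    """Mean absolute rank change, normalized by a full table reversal."""
    prev, curr = np.asarray(prev_rank), np.asarray(curr_rank)
    n = len(prev)
    churn = np.abs(curr - prev).mean()
    reversal = np.abs(np.arange(1, n + 1) - np.arange(n, 0, -1)).mean()
    return churn / reversal

# Four teams, every pair swaps places: modest turnover -> 0.5
print(adjusted_churn([1, 2, 3, 4], [2, 1, 4, 3]))
```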

  2. The self-adjusting file (SAF) system: An evidence-based update

    Zvi Metzger

    2014-01-01

    Current rotary file systems are effective tools. Nevertheless, they have two main shortcomings: they are unable to effectively clean and shape oval canals, depending too much on the irrigant to do the cleaning, which is an illusion; and they may jeopardize the long-term survival of the tooth via unnecessary, excessive removal of sound dentin and the creation of micro-cracks in the remaining root dentin. The new Self-Adjusting File (SAF) technology uses a hollow, compressible NiTi fi...

  3. A Proportional Hazards Regression Model for the Subdistribution with Covariates-adjusted Censoring Weight for Competing Risks Data

    He, Peng; Eriksson, Frank; Scheike, Thomas H.; Zhang, Mei Jie

    2016-01-01

    With competing risks data, one often needs to assess the treatment and covariate effects on the cumulative incidence function. Fine and Gray proposed a proportional hazards regression model for the subdistribution of a competing risk under the assumption that the censoring distribution and the covariates are independent. Covariate-dependent censoring sometimes occurs in medical studies. In this paper, we study the proportional hazards regression model for the subdistribution of a competing risk with proper adjustments for covariate-dependent censoring. We consider a covariate-adjusted weight … approach works well for the variance estimator as well. We illustrate our methods with bone marrow transplant data from the Center for International Blood and Marrow Transplant Research. Here, cancer relapse and death in complete remission are two competing risks.

  4. Nonlinear Adjustment Model with Integral and Its Application to Super Resolution Image Reconstruction

    Zhu, Jianjun; Fan, Donghao; Zhou, Cui; Zhou, Jinghong

    2015-01-01

    The process of super resolution image reconstruction is such a process that multiple observations are taken on the same target to obtain low resolution images, then the low resolution images are used to reconstruct the real image of the target, namely high resolution image. This process is similar to that in the field of surveying and mapping, in which the same target is observed repeatedly and the optimal values is calculated with surveying adjustment methods. In this paper, the method of su...

  5. Emergy-Based Adjustment of the Agricultural Structure in a Low-Carbon Economy in Manas County of China

    Sergio Ulgiati; Xinshi Zhang; Baohua Yu; Xiaobin Dong; Yufang Zhang; Weijia Cui; Bin Xun

    2011-01-01

    The emergy concept, integrated with a multi-objective linear programming method, was used to model the agricultural structure of Xinjiang Uygur Autonomous Region under the consideration of the need to develop a low-carbon economy. The emergy indices before and after the structural optimization were evaluated. In the reconstructed model, the proportions of agriculture, forestry and artificial grassland should be adjusted from 19:2:1 to 5.2:1:2.5; the Emergy Yield Ratio (1.48) was higher than t...

  6. Intercomparison of predicted displacement rates based on neutron spectrum adjustments (REAL-80 exercise)

    Zijp, W.L.; Nolthenius, H.J.; Szondi, E.J.; Verhaag, G.C.; Zsolnay, E.M.

    1984-11-01

    The aim of the interlaboratory REAL-80 exercise, organized by the International Atomic Energy Agency, was to determine the state of the art in 1981 of the capabilities of laboratories to adjust neutron spectrum information on the basis of a set of experimental activation rates, and to subsequently predict the number of displacements in steel, together with its uncertainty. The input information distributed to participating laboratories comprised values, variances, and covariances for a set of input fluence rates, for a set of activation and damage cross-section data, and for a set of experimentally measured reaction rates. The exercise dealt with two clearly different spectra: the thermal Oak Ridge Research Reactor (ORR) spectrum and the fast YAYOI spectrum. Out of 30 laboratories asked to participate, 13 laboratories contributed 33 solutions for ORR and 35 solutions for YAYOI. The spectral shapes of the solution spectra showed considerable spread, both for the ORR and YAYOI spectra. When the series of predicted activation rates in nickel and the predicted displacement rates in steel derived for all solutions is considered, one cannot observe significant differences due to the adjustment algorithm used. The largest deviations seem to be due to effects related to group structure and/or changes in the input data. When comparing the predicted activation rate in nickel with its available measured value, the authors observe that the predicted value (averaged over all solutions) is lower than the measured value.

  8. The relationship between effectiveness and costs measured by a risk-adjusted case-mix system: multicentre study of Catalonian population data bases

    Flor-Serra Ferran

    2009-06-01

    Background: The main objective of this study is to measure the relationship between morbidity, direct health care costs and the degree of clinical effectiveness (resolution) of health centres and health professionals by the retrospective application of Adjusted Clinical Groups in a Spanish population setting. The secondary objectives are to determine the factors leading to inadequate correlations and the opinion of health professionals on these instruments. Methods/Design: We will carry out a multi-centre, retrospective study using patient records from 15 primary health care centres and population data bases. The main measurements will be: general variables (age and sex, centre, service [family medicine, paediatrics], and medical unit), dependent variables (mean number of visits, episodes and direct costs), co-morbidity (Johns Hopkins University Adjusted Clinical Groups Case-Mix System) and effectiveness. The totality of centres/patients will be considered as the standard for comparison. The efficiency index for visits, tests (laboratory, radiology, others), referrals, pharmaceutical prescriptions and total will be calculated as the ratio: observed variables/variables expected by indirect standardization. The model of cost/patient/year will differentiate fixed/semi-fixed (visits) costs of the variables for each patient attended/year (N = 350,000 inhabitants). The mean relative weights of the cost of care will be obtained. Effectiveness will be measured using a set of 50 indicators of process, efficiency and/or health results, and an adjusted synthetic index will be constructed (method: percentile 50). The correlation between the efficiency (relative weights) and synthetic (by centre and physician) indices will be established using the coefficient of determination. The opinion/degree of acceptance of physicians (N = 1,000) will be measured using a structured questionnaire including various dimensions. Statistical analysis: multiple regression
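
    The efficiency index described here (observed over expected, with expectations from indirect standardization on the case mix) is simple enough to sketch; the table and column names below are illustrative, not the study's data:

```python
import pandas as pd

df = pd.DataFrame({
    "centre":   ["A", "A", "B", "B"],
    "acg":      ["g1", "g2", "g1", "g2"],   # morbidity (ACG) group
    "patients": [100, 50, 80, 120],
    "visits":   [450, 400, 300, 900],
})

# Standard rate per ACG group, pooled over all centres.
totals = df.groupby("acg")[["visits", "patients"]].sum()
rate = totals["visits"] / totals["patients"]

# Expected visits for each centre's own case mix; EI = observed / expected.
df["expected"] = df["acg"].map(rate) * df["patients"]
ei = df.groupby("centre")[["visits", "expected"]].sum()
ei["EI"] = ei["visits"] / ei["expected"]   # >1: more use than case mix predicts
print(ei)
```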

  9. Storage tube with a base adjustable for height by remote handling, particularly for spent fuel storage in pools

    One aim of this invention is the fabrication of a storage tube with a base adjustable for height by remote handling, for the in-pools storing of irradiated nuclear fuels. This device possesses the following features with respect to the mechanism for placing the base in the supporting position: - use of rotation without rectilinear friction or gears, - impossibility for dust to accumulate on the mechanism, - possible control by handling pole, - simplicity and low mass production cost. Such features can of course be used to advantage for the tubes to store elements of various lengths irrespective of the nuclear energy

  10. Dynamic Online Bandwidth Adjustment Scheme Based on Kalai-Smorodinsky Bargaining Solution

    Kim, Sungwook

    Virtual Private Network (VPN) is a cost-effective method to provide integrated multimedia services. Usually heterogeneous multimedia data can be categorized into different types according to the required Quality of Service (QoS). Therefore, a VPN should support prioritization among different services. In order to support multiple types of services with different QoS requirements, efficient bandwidth management algorithms are an important issue. In this paper, I employ the Kalai-Smorodinsky Bargaining Solution (KSBS) for the development of an adaptive bandwidth adjustment algorithm. In addition, to effectively manage the bandwidth in VPNs, the proposed control paradigm is realized in a dynamic online approach, which is practical for real network operations. The simulations show that the proposed scheme can significantly improve system performance.
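
    A minimal KSBS sketch for link bandwidth (an illustrative utility model, not the paper's exact VPN formulation): every service class moves the same fraction of the way from its guaranteed minimum toward its ideal demand, subject to capacity.

```python
import numpy as np

def ksbs_allocation(d, ideal, C):
    """Kalai-Smorodinsky split: x_i = d_i + t*(I_i - d_i), common t."""
    d, ideal = np.asarray(d, float), np.asarray(ideal, float)
    spare = C - d.sum()
    if spare < 0:
        raise ValueError("capacity cannot cover the guaranteed minima")
    t = min(1.0, spare / (ideal - d).sum())   # shared fraction of each gap
    return d + t * (ideal - d)

# Three classes on a 100 Mb/s link: each gets the same 65% of its gap.
print(ksbs_allocation(d=[10, 20, 5], ideal=[40, 60, 35], C=100))
```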

  11. Adjustment and Prediction of Machine Factors Based on Neural Artificial Intelligence

    Since the discovery of X-rays, their use in examination has become an integral part of medical diagnostic radiology. The use of X-rays is harmful to human beings, but recent technological advances and regulatory constraints have made medical X-rays much safer than they were at the beginning of the 20th century. However, the potential benefits of the engineered safety features cannot be fully realized unless the operators are aware of these safety features. The aim of this work is to adjust and predict X-ray machine factors (current and voltage) using an artificial neural network in order to obtain an effective dose within the range of the dose limitation system and assure radiological safety.

  13. Repatriation Adjustment: Literature Review

    Gamze Arman

    2009-12-01

    Expatriation is a widely studied area of research in work and organizational psychology. After expatriates accomplish their missions in host countries, they return to their home countries, a process called repatriation. Adjustment constitutes a crucial part of repatriation research. In the present literature review, research on repatriation adjustment was reviewed with the aim of defining the whole picture of this phenomenon. The research was classified on the basis of a theoretical model of repatriation adjustment. The basic frame consisted of antecedents, adjustment and outcomes as main variables, and personal characteristics/coping strategies and organizational strategies as moderating variables.

  14. Model-based segmentation

    Heimann, Tobias; Delingette, Hervé

    2011-01-01

    This chapter starts with a brief introduction to model-based segmentation, explaining the basic concepts and different approaches. Subsequently, two segmentation approaches are presented in more detail: First, the method of deformable simplex meshes is described, explaining the special properties of the simplex mesh and the formulation of the internal forces. Common choices for image forces are presented, as well as how to evolve the mesh to adapt to certain structures. Second, the method of point...

  15. Constructing seasonally adjusted data with time-varying confidence intervals

    Koopman, Siem Jan; Franses, Philip Hans

    2001-01-01

    Seasonal adjustment methods transform observed time series data into estimated data, where these estimated data are constructed such that they show no or almost no seasonal variation. An advantage of model-based methods is that these can provide confidence intervals around the seasonally adjusted data. One particularly useful time series model for seasonal adjustment is the basic structural time series model [BSM]. The usual premise of the BSM is that the variance of each of the c...
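
    A hedged sketch of model-based seasonal adjustment with a BSM, using statsmodels' UnobservedComponents on synthetic monthly data; the series, and reading the smoothed seasonal component and its variance this way, are our assumptions rather than the paper's setup:

```python
import numpy as np
import statsmodels.api as sm

# Synthetic monthly series: trend + annual cycle + noise.
rng = np.random.default_rng(1)
t = np.arange(144)
y = 10 + 0.05 * t + 3 * np.sin(2 * np.pi * t / 12) + rng.normal(0, 0.5, 144)

# BSM: local linear trend plus a period-12 seasonal component.
mod = sm.tsa.UnobservedComponents(y, level="local linear trend", seasonal=12)
res = mod.fit(disp=False)

seasonally_adjusted = y - res.seasonal.smoothed      # point estimate
seasonal_sd = np.sqrt(res.seasonal.smoothed_cov)     # feeds time-varying CIs
```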

  16. A Model of Mother-Child Adjustment in Arab Muslim Immigrants to the US

    Hough, Edythe S.; Templin, Thomas N.; Kulwicki, Anahid; Ramaswamy, Vidya; Katz, Anne

    2009-01-01

    We examined mother-child adjustment and child behavior problems in Arab Muslim immigrant families residing in the USA. The sample of 635 mother-child dyads comprised mothers who had emigrated in 1989 or later and had at least one early adolescent child between 11 and 15 years old who was also willing to participate. Arabic-speaking research assistants collected the data from the mothers and children using established measures of maternal and child stressors, coping, and ...

  17. Adjustment disorder

    American Psychiatric Association. Diagnostic and statistical manual of mental disorders. 5th ed. Arlington, VA: American Psychiatric Publishing. 2013. Powell AD. Grief, bereavement, and adjustment disorders. In: Stern TA, Fava ...

  19. The Design of Fiscal Adjustments

    Alesina, Alberto Francesco; Ardagna, Silvia

    2013-01-01

    This paper offers three results. First, in line with the previous literature, we confirm that fiscal adjustments based mostly on the spending side are less likely to be reversed. Second, spending based fiscal adjustments have caused smaller recessions than tax based fiscal adjustments. Finally, certain combinations of policies have made it possible for spending based fiscal adjustments to be associated with growth in the economy even on impact rather than with a recession. Thus, expansionary ...

  20. Automatic standard plane adjustment on mobile C-Arm CT images of the calcaneus using atlas-based feature registration

    Brehler, Michael; Görres, Joseph; Wolf, Ivo; Franke, Jochen; von Recum, Jan; Grützner, Paul A.; Meinzer, Hans-Peter; Nabers, Diana

    2014-03-01

    Intraarticular fractures of the calcaneus are routinely treated by open reduction and internal fixation, followed by intraoperative imaging to validate the repositioning of bone fragments. C-Arm CT offers surgeons the possibility to directly verify the alignment of the fracture parts in 3D. Although the device provides more mobility, there is no sufficient information about the device-to-patient orientation for standard plane reconstruction. Hence, physicians have to manually align the image planes in a position that intersects with the articular surfaces. This can be a time-consuming step, and imprecise adjustments lead to diagnostic errors. We address this issue by introducing novel semi-/automatic methods for adjustment of the standard planes on mobile C-Arm CT images. With the semi-automatic method, physicians can quickly adjust the planes by setting six points based on anatomical landmarks. The automatic method reconstructs the standard planes in two steps: first, SURF keypoints (2D and newly introduced pseudo-3D) are generated for each image slice; second, these features are registered to an atlas point set and the parameters of the image planes are transformed accordingly. The accuracy of our method was evaluated on 51 mobile C-Arm CT images from clinical routine, with standard planes manually adjusted by three physicians of different expertise. The average time of the experts (46 s) deviated from that of the intermediate user (55 s) by 9 seconds. By applying 2D SURF keypoints, 88% of the articular surfaces were intersected correctly by the transformed standard planes, with a calculation time of 10 seconds. The pseudo-3D features performed even better, with 91% and 8 seconds.
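
    The registration step lends itself to a sketch. A common way to recover a rigid transform from matched 3D point pairs, such as image keypoints matched to an atlas point set, is the Kabsch algorithm; it is used here as a stand-in, since the paper's actual registration may differ.

```python
import numpy as np

def kabsch(src, dst):
    """Least-squares rigid transform: R @ src_i + t ~= dst_i."""
    src_c, dst_c = src - src.mean(0), dst - dst.mean(0)
    H = src_c.T @ dst_c                       # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))    # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = dst.mean(0) - R @ src.mean(0)
    return R, t

# A standard plane stored in the atlas as (point, normal) then maps into the
# current image as: point' = R @ point + t ; normal' = R @ normal
```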

  1. Model Based Definition

    Rowe, Sidney E.

    2010-01-01

    In September 2007, the Engineering Directorate at the Marshall Space Flight Center (MSFC) created the Design System Focus Team (DSFT). MSFC was responsible for the in-house design and development of the Ares 1 Upper Stage and the Engineering Directorate was preparing to deploy a new electronic Configuration Management and Data Management System with the Design Data Management System (DDMS) based upon a Commercial Off The Shelf (COTS) Product Data Management (PDM) System. The DSFT was to establish standardized CAD practices and a new data life cycle for design data. Of special interest here, the design teams were to implement Model Based Definition (MBD) in support of the Upper Stage manufacturing contract. It is noted that this MBD does use partially dimensioned drawings for auxiliary information to the model. The design data lifecycle implemented several new release states to be used prior to formal release that allowed the models to move through a flow of progressive maturity. The DSFT identified some 17 Lessons Learned as outcomes of the standards development, pathfinder deployments and initial application to the Upper Stage design completion. Some of the high value examples are reviewed.

  2. Virtual Environments, Online Racial Discrimination, and Adjustment among a Diverse, School-Based Sample of Adolescents

    Tynes, Brendesha M.; Rose, Chad A.; Hiss, Sophia; Umaña-Taylor, Adriana J.; Mitchell, Kimberly; Williams, David

    2015-01-01

    Given the recent rise in online hate activity and the increased amount of time adolescents spend with media, more research is needed on their experiences with racial discrimination in virtual environments. This cross-sectional study examines the association between amount of time spent online, traditional and online racial discrimination and adolescent adjustment, including depressive symptoms, anxiety and externalizing behaviors. The study also explores the role that social identities, including race and gender, play in these associations. Online surveys were administered to 627 sixth through twelfth graders in K-8, middle and high schools. Multiple regression results revealed that discrimination online was associated with all three outcome variables. Additionally, a significant interaction between online discrimination by time online was found for externalizing behaviors indicating that increased time online and higher levels of online discrimination are associated with more problem behavior. This study highlights the need for clinicians, educational professionals and researchers to attend to race-related experiences online as well as in traditional environments. PMID:27134698

  3. Research on Gear-box Fault Diagnosis Method Based on Adjusting-learning-rate PSO Neural Network

    PAN Hong-xia; MA Qing-feng

    2006-01-01

    Based on research on the Particle Swarm Optimization (PSO) learning rate, two learning rates are varied linearly as the velocity formula evolves, in order to adjust the proportions of the social and cognitive components; the method is then applied to BP neural network training, which strongly accelerates the convergence rate and avoids locally optimal solutions. Based on actual data from a two-stage compound gearbox in a vibration lab, signals are analysed and their characteristic values are extracted. Applying the trained BP neural networks to compound-gearbox fault diagnosis indicates that the methods are effective.
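
    A hedged sketch of PSO with linearly adjusted learning rates in the spirit described above; the particular schedules (c1 from 2.5 to 1.0, c2 from 1.0 to 2.5) and the test function are our assumptions:

```python
import numpy as np

def pso(f, dim, n=30, iters=200, w=0.7):
    x = np.random.uniform(-1, 1, (n, dim))
    v = np.zeros((n, dim))
    pbest, pval = x.copy(), np.apply_along_axis(f, 1, x)
    g = pbest[pval.argmin()].copy()
    for k in range(iters):
        c1 = 2.5 - 1.5 * k / iters          # cognitive part decays
        c2 = 1.0 + 1.5 * k / iters          # social part grows
        r1, r2 = np.random.rand(n, dim), np.random.rand(n, dim)
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = x + v
        val = np.apply_along_axis(f, 1, x)
        better = val < pval
        pbest[better], pval[better] = x[better], val[better]
        g = pbest[pval.argmin()].copy()
    return g, pval.min()

best, fbest = pso(lambda z: np.sum(z**2), dim=4)   # sphere test function
```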

  4. Reaction norms models in the adjusted weight at 550 days of age for Polled Nellore cattle in Northeast Brazil

    Diego Pagung Ambrosini

    2014-07-01

    The objective of this study was to evaluate the genotype-environment interaction (GEI) in body weight adjusted to 550 days of age (W550) of Polled Nellore cattle raised in Northeastern Brazil using reaction norm (RN) models. Hierarchical RN models included fixed effects for age of cow (linear and quadratic) and random effects for contemporary groups (CG) and additive genetic RN level and slope. Four hierarchical RN models (RNHM) were used. The RNHM2S uses the solutions of contemporary groups estimated by the standard animal model (AM) and considers them as the environmental level for predicting the reaction norms, while the RNHM1S jointly estimates these two sets of unknowns. Two versions were considered for both models, one with a homogeneous (Hm) and another with a heterogeneous (He) residual variance. The one-step homogeneous residual variance model (RNHM1SHm) offered the best adjustment to the data when compared with the other models. For the RNHM1SHm model, estimates of additive genetic variance and heritability increased with environment improvement (260.75±75.80 kg2 to 4298.39±356.56 kg2 and 0.22±0.05 to 0.82±0.01, for low- and high-performance environments, respectively). High correlation (0.97±0.01) between the intercept and the slope of the RN shows that animals with higher genetic values respond better to environment improvement. In the evaluation of breeding sires with higher genetic values in the various environments using Spearman's correlation, values between 0 and 0.98 were observed, pointing to substantial reclassification, especially between genetic values obtained by the animal model and those obtained via RNHM1SHm. The existence of GEI is confirmed, as is the need for specific evaluations for low, medium and high production environments.
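
    The reaction-norm decomposition behind results like these can be sketched directly: with additive (co)variances for the RN intercept and slope, the additive genetic variance, and hence heritability, changes along the environmental gradient. The values below are illustrative, not the paper's estimates.

```python
import numpy as np

# Additive (co)variances of the RN intercept (v00), slope (v11) and their
# covariance (v01); residual variance assumed homogeneous, as in RNHM1SHm.
v00, v01, v11 = 400.0, 250.0, 180.0
ve = 900.0

# var_a(x) = v00 + 2*x*v01 + x**2*v11 at standardized environment level x.
x = np.linspace(-2, 2, 5)
var_a = v00 + 2 * x * v01 + x**2 * v11
h2 = var_a / (var_a + ve)
print(np.round(h2, 2))    # heritability rises with environment quality
```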

  5. Mathematical models for adjustment of in vitro gas production at different incubation times and kinetics of corn silages

    João Pedro Velho

    2014-09-01

    The present work, with whole-plant corn silage at different stages of maturity, aimed to evaluate the Exponential, France, Gompertz and Logistic mathematical models for studying the kinetics of in vitro gas production over 24- and 48-hour incubations. A semi-automated in vitro gas production technique was used with incubation periods of one, three, six, eight, ten, 12, 14, 16, 22, 24, 31, 36, 42 and 48 hours. Model adjustment was evaluated by means of the mean square error, mean bias, root mean square prediction error and residual error. The Gompertz mathematical model gave the best adjustment for describing the gas production kinetics of maize silages, regardless of incubation period. The France model was not adequate to describe gas kinetics for incubation periods of 48 hours or less. The in vitro gas production technique was efficient in detecting differences in the nutritional value of maize silages from different growth stages. Twenty-four-hour in vitro incubation periods do not mask treatment effects, whilst 48-hour periods are inadequate to measure silage digestibility.
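
    As a hedged illustration of fitting one of these models, the sketch below fits the Zwietering form of the Gompertz curve to synthetic cumulative gas volumes; the paper's exact parameterisation and data are not reproduced.

```python
import numpy as np
from scipy.optimize import curve_fit

def gompertz(t, vf, mu, lag):
    """Zwietering Gompertz: asymptote vf, max rate mu, lag time lag."""
    return vf * np.exp(-np.exp(mu * np.e / vf * (lag - t) + 1.0))

# Synthetic cumulative gas production (mL) at the incubation times above.
t = np.array([1, 3, 6, 8, 10, 12, 14, 16, 22, 24, 31, 36, 42, 48], float)
gas = np.array([2, 6, 14, 19, 25, 31, 36, 40, 50, 52, 58, 61, 63, 64], float)

popt, _ = curve_fit(gompertz, t, gas, p0=[65.0, 3.0, 2.0], maxfev=10000)
resid = gas - gompertz(t, *popt)
rmse = np.sqrt(np.mean(resid**2))   # one adjustment criterion, as in the text
```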

  6. Influence of the discretisation net on the accuracy of results in a diagnostic model for wind field adjustment

    Lucyna Brzozowska

    2013-04-01

    The computational efficiency of a model is a key factor that determines its practical application. The paper presents algorithms which ensure the computational efficiency of a model of the air velocity field. The main step in modelling the air velocity field by means of the diagnostic model is the procedure of adjusting the initial field. The initial wind field is computed by interpolation of data from meteorological stations. The goal of adjusting the initial field is to ensure that the air velocity field satisfies the continuity equation in an area with a complex landform. The task is reduced to solving the Poisson equation. Finite difference methods with equidistant and non-equidistant nodes are applied. The discretisation net must be suitably dense for a complex terrain. For an equidistant net this means that the computing time is extended and a numerical simulation might not be efficient. This problem can be reduced by using a non-equidistant mesh, in which the nodes are condensed near the places where we expect a significant change in the air velocity. In this paper the non-equidistant net is adapted for an example of terrain with an isolated hill. A hybrid approach is proposed in this work: a parabolic function for node distribution is used in the horizontal direction, while in the vertical direction Chebyshev nodes are applied. The results of the numerical analysis show the usefulness of a non-equidistant net in terms of accuracy and computational effectiveness.
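
    The two node layouts named here are easy to sketch (domain sizes and node counts are illustrative):

```python
import numpy as np

def parabolic_nodes(x_min, x_max, x0, n):
    """Horizontal nodes clustered quadratically around x0 (e.g., a hill)."""
    s = np.linspace(-1.0, 1.0, n)
    span = np.where(s < 0, x0 - x_min, x_max - x0)
    return x0 + span * np.sign(s) * s**2

def chebyshev_nodes(z_min, z_max, n):
    """Vertical Chebyshev-Gauss-Lobatto nodes, dense near both boundaries."""
    cheb = np.cos(np.pi * np.arange(n) / (n - 1))   # points in [-1, 1]
    return z_min + 0.5 * (z_max - z_min) * (1.0 - cheb)

x = parabolic_nodes(0.0, 10_000.0, x0=4_000.0, n=41)   # dense near the hill
z = chebyshev_nodes(0.0, 3_000.0, n=25)                # dense near the surface
```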

  7. Ratios as a size adjustment in morphometrics.

    Albrecht, G H; Gelvin, B R; Hartman, S E

    1993-08-01

    Simple ratios in which a measurement variable is divided by a size variable are commonly used but known to be inadequate for eliminating size correlations from morphometric data. Deficiencies in the simple ratio can be alleviated by incorporating regression coefficients describing the bivariate relationship between the measurement and size variables. Recommendations have included: 1) subtracting the regression intercept to force the bivariate relationship through the origin (intercept-adjusted ratios); 2) exponentiating either the measurement or the size variable using an allometry coefficient to achieve linearity (allometrically adjusted ratios); or 3) both subtracting the intercept and exponentiating (fully adjusted ratios). These three strategies for deriving size-adjusted ratios imply different data models for describing the bivariate relationship between the measurement and size variables (i.e., the linear, simple allometric, and full allometric models, respectively). Algebraic rearrangement of the equation associated with each data model leads to a correctly formulated adjusted ratio whose expected value is constant (i.e., size correlation is eliminated). Alternatively, simple algebra can be used to derive an expected-value function for assessing whether any proposed ratio formula is effective in eliminating size correlations. Some published ratio adjustments were incorrectly formulated, as indicated by expected values that remain a function of size after ratio transformation. Regression coefficients incorporated into adjusted ratios must be estimated using least-squares regression of the measurement variable on the size variable; use of parameters estimated by any other regression technique (e.g., major axis or reduced major axis) results in residual correlations between size and the adjusted measurement variable. Correctly formulated adjusted ratios, whose parameters are estimated by least-squares methods, do control for size correlations.
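
    In symbols (our notation, reconstructed from the description above, with Y the measurement variable, X the size variable, and a, b and k estimated by least squares), the three data models and the correctly formulated adjusted ratio each implies are:

```latex
\begin{align*}
\text{linear: }            & Y = a + bX     &&\Rightarrow\;\; (Y-a)/X \\
\text{simple allometric: } & Y = bX^{k}     &&\Rightarrow\;\; Y/X^{k} \\
\text{full allometric: }   & Y = a + bX^{k} &&\Rightarrow\;\; (Y-a)/X^{k}
\end{align*}
```

    In each case the expected value of the adjusted ratio is the constant b, free of X; an adjusted ratio whose expected value still depends on X is, by this criterion, incorrectly formulated.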

  8. Evaluation of CMIP5 continental precipitation simulations relative to satellite-based gauge-adjusted observations

    Mehran, Ali [Univ. of California, Irvine, CA (United States). Dept. of Civil and Environmental Engineering; AghaKouchak, Amir [Univ. of California, Irvine, CA (United States). Dept. of Civil and Environmental Engineering; Phillips, Thomas J. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)

    2014-02-25

    Numerous studies have emphasized that climate simulations are subject to various biases and uncertainties. The objective of this study is to cross-validate 34 Coupled Model Intercomparison Project Phase 5 (CMIP5) historical simulations of precipitation against the Global Precipitation Climatology Project (GPCP) data, quantifying model pattern discrepancies and biases for both entire data distributions and their upper tails. The results of the Volumetric Hit Index (VHI) analysis of the total monthly precipitation amounts show that most CMIP5 simulations are in good agreement with GPCP patterns in many areas, but that their replication of observed precipitation over arid regions and certain sub-continental regions (e.g., northern Eurasia, eastern Russia, central Australia) is problematic. Overall, the VHI values of the multi-model ensemble mean and median are also superior to those of the individual CMIP5 models. However, at high quantiles of the reference data (e.g., the 75th and 90th percentiles), all climate models display low skill in simulating precipitation, except over North America, the Amazon, and central Africa. Analyses of total bias (B) in CMIP5 simulations reveal that most models overestimate precipitation over regions of complex topography (e.g., western North and South America, southern Africa, and Asia), while underestimating it over arid regions. Also, while most climate model simulations show low biases over Europe, inter-model variations in bias over Australia and Amazonia are considerable. The Quantile Bias (QB) analyses indicate that CMIP5 simulations are even more biased at high quantiles of precipitation. Lastly, we found that a simple mean-field bias removal improves the overall B and VHI values, but does not significantly improve these model performance metrics at high quantiles of precipitation.
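
    The exact formulas for B and QB are given in the paper; the sketch below shows one common way such diagnostics are computed, with total bias taken as the ratio of simulated to observed totals and the quantile bias restricting that ratio to values above a reference percentile. Data and definitions here are illustrative assumptions:

```python
# Illustrative bias diagnostics for simulated vs. observed precipitation.
import numpy as np

def total_bias(sim, obs):
    """Ratio of simulated to observed totals (1.0 = unbiased)."""
    return np.sum(sim) / np.sum(obs)

def quantile_bias(sim, obs, q=0.75):
    """Same ratio, restricted to cells where obs exceeds its q-quantile."""
    mask = obs >= np.quantile(obs, q)
    return np.sum(sim[mask]) / np.sum(obs[mask])

rng = np.random.default_rng(0)
obs = rng.gamma(2.0, 50.0, size=1000)            # hypothetical monthly totals
sim = 1.1 * obs + rng.normal(0.0, 10.0, 1000)    # a model that overestimates
print(total_bias(sim, obs), quantile_bias(sim, obs, q=0.90))
```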

  9. Pervasive Computing Location-aware Model Based on Ontology

    PU Fang; CAI Hai-bin; CAO Qi-ying; SUN Dao-qing; LI Tong

    2008-01-01

    In order to integrate heterogeneous location-aware systems into a pervasive computing environment, a novel pervasive computing location-aware model based on ontology is presented, and a location-aware model ontology (LMO) is constructed. The location-aware model has the capabilities of sharing knowledge, reasoning, and dynamically adjusting the usage policies of services through a unified semantic location manner. Finally, the working process of the proposed location-aware model is illustrated with an application scenario.

  10. Enhanced TDMA Based Anti-Collision Algorithm with a Dynamic Frame Size Adjustment Strategy for Mobile RFID Readers

    Shin, Kwang Cheol; Park, Seung Bo; Jo, Geun Sik

    2009-01-01

    In the fields of production, manufacturing and supply chain management, Radio Frequency Identification (RFID) is regarded as one of the most important technologies. Nowadays, Mobile RFID, which is often installed in carts or forklift trucks, is increasingly being applied to the search for and checkout of items in warehouses, supermarkets, libraries and other industrial fields. In using Mobile RFID, since the readers are continuously moving, they can interfere with each other when they attempt to read the tags. In this study, we suggest a Time Division Multiple Access (TDMA) based anti-collision algorithm for Mobile RFID readers. Our algorithm automatically adjusts the frame size of each reader without using manual parameters by adopting the dynamic frame size adjustment strategy when collisions occur at a reader. Through experiments on a simulated environment for Mobile RFID readers, we show that the proposed method improves the number of successful transmissions by about 228% on average, compared with Colorwave, a representative TDMA based anti-collision algorithm. PMID:22399942

  11. Enhanced TDMA Based Anti-Collision Algorithm with a Dynamic Frame Size Adjustment Strategy for Mobile RFID Readers

    Kwang Cheol Shin

    2009-02-01

    In the fields of production, manufacturing and supply chain management, Radio Frequency Identification (RFID) is regarded as one of the most important technologies. Nowadays, Mobile RFID, which is often installed in carts or forklift trucks, is increasingly being applied to the search for and checkout of items in warehouses, supermarkets, libraries and other industrial fields. In using Mobile RFID, since the readers are continuously moving, they can interfere with each other when they attempt to read the tags. In this study, we suggest a Time Division Multiple Access (TDMA) based anti-collision algorithm for Mobile RFID readers. Our algorithm automatically adjusts the frame size of each reader without using manual parameters by adopting the dynamic frame size adjustment strategy when collisions occur at a reader. Through experiments on a simulated environment for Mobile RFID readers, we show that the proposed method improves the number of successful transmissions by about 228% on average, compared with Colorwave, a representative TDMA based anti-collision algorithm.
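
    The abstract does not spell out the adjustment rule, but a dynamic frame-size strategy of the kind described typically grows a reader's TDMA frame when collisions are observed and shrinks it when many slots go idle. The doubling/halving rule, thresholds and bounds below are illustrative assumptions, not the authors' exact algorithm:

```python
# Illustrative dynamic TDMA frame-size adjustment for an RFID reader.
def adjust_frame_size(frame_size, collisions, idle_slots,
                      min_size=4, max_size=256):
    if collisions > 0:
        return min(frame_size * 2, max_size)     # back off on collision
    if idle_slots > frame_size // 2:
        return max(frame_size // 2, min_size)    # reclaim unused slots
    return frame_size

size = 8
for collisions, idle in [(2, 0), (1, 0), (0, 20), (0, 12)]:
    size = adjust_frame_size(size, collisions, idle)
    print(size)  # 16, 32, 16, 8
```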

  13. Self-Tuning Insulin Adjustment Algorithm for Type 1 Diabetic Patients based on Multi-Doses Regime

    D. U. Campos-Delgado

    2005-01-01

    A self-tuning algorithm is presented for on-line insulin dosage adjustment in type 1 diabetic patients (chronic stage). The suggested algorithm does not need information about the patient's insulin-glucose dynamics (it is model-free). Three doses are programmed daily, where a combination of two types of insulin, rapid/short- and intermediate/long-acting, is injected into the patient through a subcutaneous route. The dose adaptation is performed by reducing the error of the blood glucose level from euglycemia. In this way, a total of five doses are tuned per day, three rapid/short and two intermediate/long, with a large penalty to avoid hypoglycemic scenarios. Closed-loop simulation results are illustrated using a detailed nonlinear model of the subcutaneous insulin-glucose dynamics in a type 1 diabetic patient with meal intake.
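
    A model-free dose update of the kind described can be sketched as follows: each programmed dose is nudged in proportion to the deviation of the measured blood glucose from the euglycemic target, with a much larger corrective gain when glucose falls below target, reflecting the penalty on hypoglycemia. All gains, targets and bounds are illustrative assumptions, not the authors' tuning rules:

```python
# Illustrative self-tuning update for one of the five daily insulin doses.
def update_dose(dose, glucose, target=100.0,
                gain_hyper=0.01, gain_hypo=0.05,
                dose_min=0.0, dose_max=40.0):
    error = glucose - target              # mg/dL above (+) or below (-) target
    gain = gain_hyper if error >= 0 else gain_hypo   # penalize hypoglycemia
    return min(max(dose + gain * error, dose_min), dose_max)

dose = 10.0
for g in [180.0, 150.0, 90.0, 60.0]:      # successive glucose readings
    dose = update_dose(dose, g)
    print(round(dose, 2))                 # 10.8, 11.3, 10.8, 8.8
```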

  14. Adjustment of Intensive Care Unit (ICU) Data in Fuzzy C-Regression Models

    Mohd Saifullah Rusiman

    2013-02-01

    This research presents a methodology for data modification using the analytic hierarchy process (AHP) technique and the fuzzy c-means (FCM) model. Continuous data were built from binary data using AHP, whereas binary data were created from continuous data using the FCM model. The models used in this research are fuzzy c-regression models (FCRM). A case study on a scale of health in an intensive care unit (ICU) ward using the AHP technique, the FCM model and FCRM models was carried out. Six independent variables were involved in this study, and four cases were considered as a result of applying the AHP technique and the FCM model to the independent data. After comparing the four cases, case 4 appeared to be the best model, having the lowest mean square error (MSE). The original data have an MSE of 97.33, while the data of case 4 have an MSE of 83.48. This means that the AHP technique can lower the MSE, while the FCM model cannot, in modelling the scale of health in the ICU. In other words, the AHP technique can increase the accuracy of model prediction.

  15. Adjustments to Accounting Profit in Determination of the Income Tax Base: Evolution in the Czech Republic

    Mejzlík, Ladislav; Vítek, Leoš; Roe, Jana

    2014-01-01

    The article analyzes the main trends in income, tax base, and tax deductions for Czech companies in the years 1993–2012. After an initial survey of the problem, the article describes the issue of national accounting policy regulation in relation to IFRS and shows the evolution of the main macroeconomic indicators of profitability and corporate taxation in the EU and the Czech Republic. The following part is based on data from the Ministry of Finance of the Czech Republic and monitors the development...

  16. Emergy-Based Adjustment of the Agricultural Structure in a Low-Carbon Economy in Manas County of China

    Sergio Ulgiati

    2011-09-01

    The emergy concept, integrated with a multi-objective linear programming method, was used to model the agricultural structure of Xinjiang Uygur Autonomous Region in light of the need to develop a low-carbon economy. The emergy indices before and after the structural optimization were evaluated. In the reconstructed model, the proportions of agriculture, forestry and artificial grassland should be adjusted from 19:2:1 to 5.2:1:2.5; the Emergy Yield Ratio (1.48) was higher than the average local (0.49) and national (0.27) levels; the Emergy Investment Ratio (11.1) was higher than that of the current structure (4.93) and that obtained from the 2003 data (0.055) in Xinjiang Uygur Autonomous Region; and the Water Emergy Cost (0.055) should be reduced compared to that before the adjustment (0.088). The measurement of all the parameters validated the positive impact of the modeled agricultural structure. The self-sufficiency ratio of the system increased from the original level of 0.106 to 0.432, which indicates a better coupling effect among the subsystems of the whole system. The comparative advantage index between the two systems before and after optimization was approximately 2:1. When the mountain ecosystem service value was considered, excessive animal husbandry led to an indirect economic loss of 1.41 × 10¹⁰ RMB·a⁻¹, which was 4.15 times the GDP during the same period. The functional improvement of the modeled structure supports the plan to "construct a central oasis and protect the surrounding mountains and deserts" in developing a sustainable agricultural system. Conserved natural grassland can make a large contribution to carbon storage, and it is therefore a wise alternative that promotes a low-carbon economic development strategy.

  17. Comparison of sire breed solutions for growth traits adjusted by mean expected progeny differences to a 1993 base.

    Barkhouse, K L; Van Vleck, L D; Cundiff, L V; Buchanan, D S; Marshall, D M

    1998-09-01

    Records on growth traits were obtained from five Midwestern agricultural experiment stations as part of a beef cattle crossbreeding project (NC-196). Records on birth weight (BWT, n = 3,490), weaning weight (WWT, n = 3,237), and yearling weight (YWT, n = 1,372) were analyzed within locations and pooled across locations to obtain estimates of breed of sire differences. Solutions for breed of sire differences were adjusted to the common base year of 1993. Then, factors to use with within-breed expected progeny differences (EPD) to obtain across-breed EPD were calculated. These factors were compared with factors obtained from similar analyses of records from the U.S. Meat Animal Research Center (MARC). Progeny of Brahman sires mated to Bos taurus cows were heaviest at birth and among the lightest at weaning. Simmental and Gelbvieh sires produced the heaviest progeny at weaning. Estimates of heritability pooled across locations were .34, .19, and .07 for BWT, WWT, and YWT, respectively. Regression coefficients of progeny performance on EPD of sire were 1.25±.09, .98±.13, and .62±.18 for BWT, WWT, and YWT, respectively. Rankings of breeds of sire generally did not change when adjusted for sire sampling. Rankings were generally similar to those previously reported for MARC data, except for Limousin and Charolais sires, which ranked lower for BWT and WWT at NC-196 locations than at MARC. Adjustment factors used to obtain across-breed EPD were largest for Brahman for BWT and for Gelbvieh for WWT. The data for YWT allow only comparison of Angus with Simmental and of Gelbvieh with Limousin. PMID:9781484

  18. Variational assimilation of streamflow into operational distributed hydrologic models: effect of spatiotemporal scale of adjustment

    Lee, H.; Seo, D.-J.; Liu, Y; Koren, V.; McKee, P.; Corby, R.

    2012-01-01

    State updating of distributed rainfall-runoff models via streamflow assimilation is subject to overfitting because large dimensionality of the state space of the model may render the assimilation problem seriously under-determined. To examine the issue in the context of operational hydrologic forecasting, we carried out a set of real-world experiments in which streamflow data is assimilated into the gridded Sacramento Soil Moisture Accounting (SAC-SMA) and kinematic-wave routing models of the...

  19. Variational assimilation of streamflow into operational distributed hydrologic models: effect of spatiotemporal adjustment scale

    Lee, H.; D.-J. Seo; Liu, Y; Koren, V.; McKee, P.; Corby, R.

    2012-01-01

    State updating of distributed rainfall-runoff models via streamflow assimilation is subject to overfitting because large dimensionality of the state space of the model may render the assimilation problem seriously under-determined. To examine the issue in the context of operational hydrology, we carry out a set of real-world experiments in which streamflow data is assimilated into gridded Sacramento Soil Moisture Accounting (SAC-SMA) and kinematic-wave routing models of the US National W...

  20. Dynamic Modeling of Adjustable-Speed Pumped Storage Hydropower Plant: Preprint

    Muljadi, E.; Singh, M.; Gevorgian, V.; Mohanpurkar, M.; Havsapian, R.; Koritarov, V.

    2015-04-06

    Hydropower is the largest producer of renewable energy in the U.S. More than 60% of the total renewable generation comes from hydropower. There is also approximately 22 GW of pumped storage hydropower (PSH). Conventional PSH uses a synchronous generator, and thus the rotational speed is constant at synchronous speed. This work details a hydrodynamic model and generator/power converter dynamic model. The optimization of the hydrodynamic model is executed by the hydro-turbine controller, and the electrical output real/reactive power is controlled by the power converter. All essential controllers to perform grid-interface functions and provide ancillary services are included in the model.

  1. Users Guide to SAMINT: A Code for Nuclear Data Adjustment with SAMMY Based on Integral Experiments

    Sobes, Vladimir [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Leal, Luiz C. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Arbanas, Goran [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States)

    2014-10-01

    The purpose of this project is to couple differential and integral data evaluation in a continuous-energy framework. More specifically, the goal is to use the Generalized Linear Least Squares methodology employed in TSURFER to update the parameters of a resolved resonance region evaluation directly. Recognizing that the GLLS methodology in TSURFER is identical to the mathematical description of the simple Bayesian updating carried out in SAMMY, the computer code SAMINT was created to help use the mathematical machinery of SAMMY to update resolved resonance parameters based on integral data. Minimal modifications of SAMMY are required when used with SAMINT to make resonance parameter updates based on integral experimental data.
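
    The GLLS update that TSURFER applies, and that SAMMY's Bayesian scheme reproduces, can be stated generically as follows (our notation: p is the vector of resonance parameters with prior covariance C_p, m the integral measurements with covariance C_m, R(p) the computed responses, and S = ∂R/∂p the sensitivity matrix):

```latex
\begin{align*}
\mathbf{p}' &= \mathbf{p} + C_{p} S^{\mathsf T}
  \left( S C_{p} S^{\mathsf T} + C_{m} \right)^{-1}
  \left( \mathbf{m} - R(\mathbf{p}) \right), \\
C_{p}' &= C_{p} - C_{p} S^{\mathsf T}
  \left( S C_{p} S^{\mathsf T} + C_{m} \right)^{-1} S C_{p}.
\end{align*}
```

    SAMINT's role is then to supply the integral responses, sensitivities and covariances in the form this update requires, so that the adjusted quantities are the resolved resonance parameters themselves rather than multigroup cross sections.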

  2. 20 CFR 418.1201 - When will we determine your income-related monthly adjustment amount based on the modified...

    2010-04-01

    ... Recent Tax Year's Modified Adjusted Gross Income § 418.1201 When will we determine your income-related...-changing event results in a significant reduction in your modified adjusted gross income for the year which... reduction in your modified adjusted gross income is one that results in the decrease or elimination of...

  3. Sliding Mode Robustness Control Strategy for Shearer Height Adjusting System

    Xiuping Su

    2013-09-01

    This paper first establishes a mathematical model of the height-adjusting hydraulic cylinder of the shearer, as well as the state-space equation of the shearer height-adjusting system. Second, an automatic height-adjusting controller is designed using a sliding mode robust control strategy. The controller includes a sliding mode surface switching function based on the Ackermann formula, and a sliding mode control function with an improved Butterworth filter. Simulation of the height-adjusting controller shows that the sliding mode robust control eliminates the chattering of a typical controller and achieves automatic control of the shearer's rolling drum.

  4. Adaptive Kalman Filter Based on Adjustable Sampling Interval in Burst Detection for Water Distribution System

    Doo Yong Choi; Seong-Won Kim; Min-Ah Choi; Zong Woo Geem

    2016-01-01

    Rapid detection of bursts and leaks in water distribution systems (WDSs) can reduce the social and economic costs incurred through direct loss of water into the ground, additional energy demand for water supply, and service interruptions. Many real-time burst detection models have been developed in accordance with the use of supervisory control and data acquisition (SCADA) systems and the establishment of district meter areas (DMAs). Nonetheless, no consideration has been given to how frequen...

  6. Characterization of the CALIBAN Critical Assembly Neutron Spectra using Several Adjustment Methods Based on Activation Foils Measurement

    Casoli, Pierre; Grégoire, Gilles; Rousseau, Guillaume; Jacquet, Xavier; Authier, Nicolas

    2016-02-01

    CALIBAN is a metallic critical assembly managed by the Criticality, Neutron Science and Measurement Department located at the French CEA Center of Valduc. The reactor is extensively used for benchmark experiments dedicated to the evaluation of nuclear data, for electronic hardening, and to study the effect of neutrons on various materials. CALIBAN's irradiation characteristics, and especially its central cavity neutron spectrum, therefore have to be evaluated very accurately. In order to strengthen our knowledge of this spectrum, several adjustment methods based on activation foil measurements have been studied over the past few years in the laboratory. First, two codes included in the UMG package were tested and compared: MAXED and GRAVEL. More recently, the CALIBAN cavity spectrum has been studied using CALMAR, a new adjustment tool currently under development at the CEA Center of Cadarache. The article discusses and compares the results and the quality of spectrum rebuilding obtained with the UMG codes and with the CALMAR software, from a set of activation measurements carried out in the CALIBAN irradiation cavity.

  7. Is an adjusted rhizosphere-based method valid for field assessment of metal phytoavailability? Application to non-contaminated soils

    The previously recommended rhizosphere-based (RHIZO) method, applied to moist rhizosphere soils, was extended to moist bulk soils and termed the adjusted-RHIZO (A-RHIZO) method. The A-RHIZO and RHIZO methods were systematically compared with the EDTA, DTPA, CaCl2 and first-step Community Bureau of Reference (BCR1) methods for assessing metal phytoavailability under field conditions. The results suggest that moist bulk soils are equally suited, or even better suited, than rhizosphere soils for estimating metal phytoavailability. The A-RHIZO method was preferred over the other methods for predicting the phytoavailability of Ni, Cu, Zn, Cd, Pb and Mn to wheat roots, with correlation coefficients of 0.730 (P < 0.001), 0.854 (P < 0.001), 0.887 (P < 0.001), 0.739 (P < 0.001), 0.725 (P < 0.001) and 0.469 (P < 0.05), respectively. When soil properties were included, other extraction methods were also able to predict phytoavailability reasonably well for some metals. Soil pH, organic matter and Fe-Mn oxide contents, and cation-exchange capacity most strongly influenced the extraction and phytoavailability of metals. - An adjusted-RHIZO method was the most promising approach for predicting metal phytoavailability to wheat under field conditions

  8. Co-Registration of Airborne LiDAR Point Cloud Data and Synchronous Digital Images Based on Combined Adjustment

    Yang, Z. H.; Zhang, Y. S.; Zheng, T.; Lai, W. B.; Zou, Z. R.; Zou, B.

    2016-06-01

    Aiming at the problem of co-registering airborne laser point cloud data with synchronous digital images, this paper proposes a registration method based on combined adjustment. By integrating tie points and point cloud data with elevation-constraint pseudo-observations, and using the principle of least-squares adjustment to solve for corrections to the exterior orientation elements of each image, high-precision registration results can be obtained. To ensure the reliability of the tie points and the effectiveness of the pseudo-observations, this paper proposes a point-cloud-constrained SIFT matching and optimizing method that ensures the tie points are located in flat terrain areas. In experiments with airborne laser point cloud data and synchronous digital images, there is an error of about 43 pixels in image space when the original POS data are used. If only the bore-sight of the POS system is considered, an error of 1.3 pixels remains in image space. The proposed method treats the corrections of the exterior orientation elements of each image as unknowns, and the errors are reduced to 0.15 pixels.

  9. Autonomous Parameter Adjustment for SSVEP-Based BCIs with a Novel BCI Wizard

    Gembler, Felix; Stawicki, Piotr; Volosyak, Ivan

    2015-01-01

    Brain-Computer Interfaces (BCIs) transfer human brain activities into computer commands and enable a communication channel without requiring movement. Among other BCI approaches, steady-state visual evoked potential (SSVEP)-based BCIs have the potential to become accurate, assistive technologies for persons with severe disabilities. Those systems require customization of different kinds of parameters (e.g., stimulation frequencies). Calibration usually requires selecting predefined parameters...

  10. Comparison of different risk-adjustment models in assessing short-term surgical outcome after transthoracic esophagectomy in patients with esophageal cancer

    Bosch, D. J.; Pultrum, B.B.; de Bock, G H; Oosterhuis, J. K.; Rodgers, M. G. G.; Plukker, J.T.M.

    2011-01-01

    BACKGROUND: Different risk-prediction models have been developed, but none is generally accepted for selecting patients for esophagectomy. This study evaluated the 5 most frequently used risk-prediction models, including the American Society of Anesthesiologists classification, the Portsmouth-modified Physiological and Operative Severity Score for the enUmeration of Mortality and morbidity (P-POSSUM) and its adjusted version for Oesophagogastric surgery (O-POSSUM), and the Charlson and Age-adjusted Charlson scores, to as...

  11. A Test of the Family Stress Model on Toddler-Aged Children’s Adjustment Among Hurricane Katrina Impacted and Nonimpacted Low-Income Families

    Scaramella, Laura V.; Sohr-Preston, Sara L.; Callahan, Kristin L.; Mirabile, Scott P.

    2008-01-01

    Hurricane Katrina dramatically altered the level of social and environmental stressors for the residents of the New Orleans area. The Family Stress Model describes a process whereby felt financial strain undermines parents’ mental health, the quality of family relationships, and child adjustment. Our study considered the extent to which the Family Stress Model explained toddler-aged adjustment among Hurricane Katrina affected and nonaffected families. Two groups of very low-income mothers and...

  12. LSTM based Conversation Models

    Luan, Yi; Ji, Yangfeng; Ostendorf, Mari

    2016-01-01

    In this paper, we present a conversational model that incorporates both context and participant role for two-party conversations. Different architectures are explored for integrating participant role and context information into a Long Short-term Memory (LSTM) language model. The conversational model can function as a language model or a language generation model. Experiments on the Ubuntu Dialog Corpus show that our model can capture multiple turn interaction between participants. The propos...

  13. Construction project investment control model based on instant information

    WANG Xue-tong

    2006-01-01

    Changes in construction conditions influence project investment by causing the loss of construction work time and extending the duration. To resolve the difficulty of dynamic control of the construction plan, this article presents a concept of instant optimization that adjusts the operation time of each working procedure so as to minimize investment change. Based on this concept, a mathematical model is established and a strict mathematical justification is performed. The instant optimization model takes advantage of instant information in the construction process to complete timely adjustment of construction; thus the cost efficiency of project investment is maximized.

  14. A Model of Appropriate Self-Adjustment of Farmers who Grow Para Rubber (Hevea brasiliensis) in Northeast Thailand

    Montri Srirajlao

    2010-01-01

    Problem statement: Para rubber is an economically important tree crop grown in Northeast Thailand, playing an economic and social role. The objectives of this research were to study: (1) the economic, social and cultural lifestyle and (2) the appropriate adjustment model of agriculturists or farmers growing Para rubber in Northeast Thailand. Approach: The research area covered 6 provinces: Mahasarakam, Roi-ed, Khon Kaen, Nongkai, Udontani and Loei. The samples were selected by purposive sampling and included 90 experts, 60 practitioners and 60 general people. The instruments used for collecting data were: (1) the interview form, (2) the observation form, (3) focus group discussion and (4) workshop, validated by triangulation. Data were analyzed according to the specified objectives and presented as descriptive analysis. Results: The farmers' traditional lifestyle in Northeast Thailand was to earn their living by producing for themselves and sharing resources with each other, including rice farming, paddy cultivation, vegetable garden growing, and searching for natural food without spending capital. In the period of change, when the price of traditional industrial crops fell, the agriculturists began to grow Para rubber instead, following promotion by the governmental industrial sector. Regarding the economic, social and cultural changes, it was found that agriculturists with Para rubber plantations had more revenue, but the stability of the market price and sales mechanism was tied to the political situation. Regarding the pattern of adjustment of agriculturists growing Para rubber in Northeast Thailand, there was adjustment at the individual level through self-directed learning, applying knowledge gained from the experience of successful people and from employment cutting Para rubber in Southern Thailand, as well as academic support and selling arrangements serving the needs of farmers. Conclusion/Recommendations: Para Rubber

  15. Salary adjustments

    HR Department

    2008-01-01

    In accordance with decisions taken by the Finance Committee and Council in December 2007, salaries are adjusted with effect from 1 January 2008. Scale of basic salaries and scale of stipends paid to fellows (Annex R A 5 and R A 6 respectively): increased by 0.71% with effect from 1 January 2008. As a result of the stability of the Geneva consumer price index, the following elements do not increase: a) Family Allowance, Child Allowance and Infant Allowance (Annex R A 3); b) Reimbursement of education fees: maximum amounts of reimbursement (Annex R A 4.01) for the academic year 2007/2008. Related adjustments will be applied, wherever applicable, to Paid Associates and Students. As in the past, the actual percentage increase of each salary position may vary, due to the application of a constant step value and rounding effects. Human Resources Department Tel. 73566

  17. Adjusting particle-size distributions to account for aggregation in tephra-deposit model forecasts

    Mastin, Larry G.; Van Eaton, Alexa R.; Durant, Adam J.

    2016-07-01

    Volcanic ash transport and dispersion (VATD) models are used to forecast tephra deposition during volcanic eruptions. Model accuracy is limited by the fact that fine ash aggregates (clumps into clusters), thus altering patterns of deposition. In most models this is accounted for by ad hoc changes to model input, representing fine ash as aggregates with density ρagg and a log-normal size distribution with median μagg and standard deviation σagg. Optimal values may vary between eruptions. To test the variance, we used the Ash3d tephra model to simulate four deposits: 18 May 1980 Mount St. Helens; 16-17 September 1992 Crater Peak (Mount Spurr); 17 June 1996 Ruapehu; and 23 March 2009 Mount Redoubt. In 192 simulations, we systematically varied μagg and σagg, holding ρagg constant at 600 kg m⁻³. We evaluated the fit using three indices that compare modeled versus measured (1) mass load at sample locations; (2) mass load versus distance along the dispersal axis; and (3) isomass area. For all deposits, under these inputs, the best-fit value of μagg ranged narrowly between ~2.3 and 2.7φ (0.20-0.15 mm), despite large variations in erupted mass (0.25-50 Tg), plume height (8.5-25 km), mass fraction of fine ash, and water content between these eruptions. This close agreement suggests that aggregation may be treated as a discrete process that is insensitive to eruptive style or magnitude. This result offers the potential for a simple, computationally efficient parameterization scheme for use in operational model forecasts. Further research may indicate whether this narrow range also reflects physical constraints on processes in the evolving cloud.
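
    Under the parameterization described above, the aggregate size distribution is Gaussian in φ units (log-normal in diameter). A sketch of how such a distribution might be discretized into grain-size bins follows; μagg and σagg are set near the reported best fit, while the bin edges are illustrative:

```python
# Discretize a log-normal (Gaussian-in-phi) aggregate size distribution.
import numpy as np
from scipy.stats import norm

mu_agg, sigma_agg = 2.5, 0.5      # phi units; best-fit mu ~ 2.3-2.7 phi
rho_agg = 600.0                   # aggregate density (kg/m^3), held fixed

phi_edges = np.arange(-1.0, 6.5, 0.5)                 # grain-size bin edges
cdf = norm.cdf(phi_edges, loc=mu_agg, scale=sigma_agg)
bin_fractions = np.diff(cdf)                          # mass fraction per bin

diam_mm = 2.0 ** -phi_edges[:-1]     # phi -> diameter in mm (lower bin edges)
for d, f in zip(diam_mm, bin_fractions):
    print(f"{d:8.3f} mm  {f:6.3f}")
```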

  18. Signal Amplification in Field Effect-Based Sandwich Enzyme-Linked Immunosensing by Tuned Buffer Concentration with Ionic Strength Adjuster.

    Kumar, Satyendra; Kumar, Narendra; Panda, Siddhartha

    2016-04-01

    Miniaturization of the sandwich enzyme-based immunosensor has several advantages but could result in lower signal strength due to lower enzyme loading. Hence, technologies for amplification of the signal are needed. Signal amplification in a field effect-based electrochemical immunosensor utilizing chip-based ELISA is presented in this work. First, the molarities of phosphate buffer saline (PBS) and concentrations of KCl as ionic strength adjuster were optimized to maximize the GOx glucose-based enzymatic reactions in a beaker for signal amplification measured by change in the voltage shift with an EIS device (using 20 μl of solution) and validated with a commercial pH meter (using 3 ml of solution). The PBS molarity of 100 μM with 25 mM KCl provided the maximum voltage shift. These optimized buffer conditions were further verified for GOx immobilized on silicon chips, and similar trends with decreased PBS molarity were obtained; however, the voltage shift values obtained on chip reaction were lower as compared to the reactions occurring in the beaker. The decreased voltage shift with immobilized enzyme on chip could be attributed to the increased Km (Michaelis-Menten constant) values in the immobilized GOx. Finally, a more than sixfold signal enhancement (from 8 to 47 mV) for the chip-based sandwich immunoassay was obtained by altering the PBS molarity from 10 to 100 μM with 25 mM KCl. PMID:26801818

  19. Adjustable collimator

    In a rotating fan beam tomographic scanner there is included an adjustable collimator and shutter assembly. The assembly includes a fan angle collimation cylinder having a plurality of different length slots through which the beam may pass for adjusting the fan angle of the beam. It also includes a beam thickness cylinder having a plurality of slots of different widths for adjusting the thickness of the beam. Further, some of the slots have filter materials mounted therein so that the operator may select from a plurality of filters. Also disclosed is a servo motor system which allows the operator to select the desired fan angle, beam thickness and filter from a remote location. An additional feature is a failsafe shutter assembly which includes a spring biased shutter cylinder mounted in the collimation cylinders. The servo motor control circuit checks several system conditions before the shutter is rendered openable. Further, the circuit cuts off the radiation if the shutter fails to open or close properly. A still further feature is a reference radiation intensity monitor which includes a tuning-fork shaped light conducting element having a scintillation crystal mounted on each tine. The monitor is placed adjacent the collimator between it and the source with the pair of crystals to either side of the fan beam

  20. From skin to bulk: An adjustment technique for assimilation of satellite-derived temperature observations in numerical models of small inland water bodies

    Javaheri, Amir; Babbar-Sebens, Meghna; Miller, Robert N.

    2016-06-01

    Data Assimilation (DA) has been proposed for multiple water resources studies that require rapid employment of incoming observations to update and improve accuracy of operational prediction models. The usefulness of DA approaches in assimilating water temperature observations from different types of monitoring technologies (e.g., remote sensing and in-situ sensors) into numerical models of in-land water bodies (e.g., lakes and reservoirs) has, however, received limited attention. In contrast to in-situ temperature sensors, remote sensing technologies (e.g., satellites) provide the benefit of collecting measurements with better X-Y spatial coverage. However, assimilating water temperature measurements from satellites can introduce biases in the updated numerical model of water bodies because the physical region represented by these measurements do not directly correspond with the numerical model's representation of the water column. This study proposes a novel approach to address this representation challenge by coupling a skin temperature adjustment technique based on available air and in-situ water temperature observations, with an ensemble Kalman filter based data assimilation technique. Additionally, the proposed approach used in this study for four-dimensional analysis of a reservoir provides reasonably accurate surface layer and water column temperature forecasts, in spite of the use of a fairly small ensemble. Application of the methodology on a test site - Eagle Creek Reservoir - in Central Indiana demonstrated that assimilation of remotely sensed skin temperature data using the proposed approach improved the overall root mean square difference between modeled surface layer temperatures and the adjusted remotely sensed skin temperature observations from 5.6°C to 0.51°C (i.e., 91% improvement). In addition, the overall error in the water column temperature predictions when compared with in-situ observations also decreased from 1.95°C (before assimilation
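
    A minimal sketch of the ensemble Kalman filter analysis step used to fold the (bias-adjusted) skin temperatures into the water-temperature state is given below. The observation operator, ensemble size and error levels are placeholders; the study's skin-to-bulk adjustment itself, which precedes this step, is model-specific and not reproduced here:

```python
# Perturbed-observation EnKF analysis step (illustrative).
import numpy as np

def enkf_update(X, y, H, obs_err_std, rng):
    """X: (n_state, n_ens) ensemble; y: (n_obs,) adjusted observations."""
    n_obs, n_ens = len(y), X.shape[1]
    A = X - X.mean(axis=1, keepdims=True)            # state anomalies
    HX = H @ X
    HA = HX - HX.mean(axis=1, keepdims=True)         # observed anomalies
    P_yy = HA @ HA.T / (n_ens - 1) + obs_err_std**2 * np.eye(n_obs)
    K = (A @ HA.T / (n_ens - 1)) @ np.linalg.inv(P_yy)   # Kalman gain
    Y = y[:, None] + obs_err_std * rng.standard_normal((n_obs, n_ens))
    return X + K @ (Y - HX)

rng = np.random.default_rng(1)
X = 20.0 + rng.standard_normal((50, 30))   # 50 layer temps (C), 30 members
H = np.eye(5, 50)                          # observe the 5 surface-layer cells
y = np.full(5, 21.2)                       # adjusted skin temperatures (C)
Xa = enkf_update(X, y, H, obs_err_std=0.5, rng=rng)
```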

  1. Wavelet-Based Color Pathological Image Watermark through Dynamically Adjusting the Embedding Intensity

    Liu, Guoyan; Liu, Hongjun; Kadir, Abdurahman

    2012-01-01

    This paper proposes a new dynamic and robust blind watermarking scheme for color pathological image based on discrete wavelet transform (DWT). The binary watermark image is preprocessed before embedding; firstly it is scrambled by Arnold cat map and then encrypted by pseudorandom sequence generated by robust chaotic map. The host image is divided into n × n blocks, and the encrypted watermark is embedded into the higher frequency domain of blue component. The mean and variance of the subbands are calculated, to dynamically modify the wavelet coefficient of a block according to the embedded 0 or 1, so as to generate the detection threshold. We research the relationship between embedding intensity and threshold and give the effective range of the threshold to extract the watermark. Experimental results show that the scheme can resist against common distortions, especially getting advantage over JPEG compression, additive noise, brightening, rotation, and cropping. PMID:23243463

  2. Tunable fluorescence enhancement based on bandgap-adjustable 3D Fe3O4 nanoparticles

    Hu, Fei; Gao, Suning; Zhu, Lili; Liao, Fan; Yang, Lulu; Shao, Mingwang

    2016-06-01

    Great progress has been made in fluorescence-based detection utilizing solid-state enhancement substrates in recent years. However, it is still difficult to achieve reliable substrates with tunable enhancement factors. The present work demonstrates liquid fluorescence-enhancing substrates consisting of suspensions of Fe3O4 nanoparticles (NPs), which assemble into a 3D photonic crystal under an external magnetic field. The photonic bandgap, induced by the equilibrium of the attractive magnetic force and the repulsive electrostatic force between adjacent Fe3O4 NPs, is utilized to enhance the fluorescence intensity of dye molecules (including R6G, RB, Cy5 and DMTPS-DCV) in a reversible and controllable manner. The results show that a maximum 12.3-fold fluorescence enhancement is realized in the 3D Fe3O4 NP substrates, without the use of metal particles, for PCs/DMTPS-DCV (1.0 × 10⁻⁷ M, water fraction f_w = 90%).

  4. Optimal adjustment of the atmospheric forcing parameters of ocean models using sea surface temperature data assimilation

    Meinvielle, M.; Brankart, J.-M.; Brasseur, P.; Barnier, B.; Dussin, R.; Verron, J.

    2013-10-01

    In ocean general circulation models, near-surface atmospheric variables used to specify the atmospheric boundary condition remain one of the main sources of error. The objective of this research is to constrain the surface forcing function of an ocean model by sea surface temperature (SST) data assimilation. For that purpose, a set of corrections for ERAinterim (hereafter ERAi) reanalysis data is estimated for the period of 1989-2007, using a sequential assimilation method, with ensemble experiments to evaluate the impact of uncertain atmospheric forcing on the ocean state. The control vector of the assimilation method is extended to atmospheric variables to obtain monthly mean parameter corrections by assimilating monthly SST and sea surface salinity (SSS) climatological data in a low resolution global configuration of the NEMO model. In this context, the careful determination of the prior probability distribution of the parameters is an important matter. This paper demonstrates the importance of isolating the impact of forcing errors in the model to perform relevant ensemble experiments. The results obtained for every month of the period between 1989 and 2007 show that the estimated parameters produce the same kind of impact on the SST as the analysis itself. The objective is then to evaluate the long-term time series of the forcing parameters focusing on trends and mean error corrections of air-sea fluxes. Our corrections tend to equilibrate the net heat-flux balance at the global scale (highly positive in ERAi database), and to remove the potentially unrealistic negative trend (leading to ocean cooling) in the ERAi net heat flux over the whole time period. More specifically in the intertropical band, we reduce the warm bias of ERAi data by mostly modifying the latent heat flux by wind speed intensification. Consistently, when used to force the model, the corrected parameters lead to a better agreement between the mean SST produced by the model and mean SST

  5. Optimal adjustment of the atmospheric forcing parameters of ocean models using sea surface temperature data assimilation

    M. Meinvielle

    2012-07-01

    In ocean general circulation models, near-surface atmospheric variables used to specify the atmospheric boundary condition remain one of the main sources of error. The objective of this research is to constrain the surface forcing function of an ocean model by sea surface temperature (SST) data assimilation. For that purpose, a set of corrections for ERAinterim (hereafter ERAi) reanalysis data is estimated for the period from 1989 to 2007 using a sequential assimilation method, with ensemble experiments to evaluate the impact of uncertain atmospheric forcing on the ocean state. The control vector of the assimilation method is extended to atmospheric variables to obtain monthly mean parameter corrections by assimilating monthly SST and sea surface salinity (SSS) climatological data in a low resolution global configuration of the NEMO model. In this context, the careful determination of the prior probability distribution of the parameters is an important matter. This paper demonstrates the importance of isolating the impact of forcing errors in the model to perform relevant ensemble experiments.

    The results obtained for every month of the period between 1989 and 2007 show that the estimated parameters produce the same kind of impact on the SST as the analysis itself. The objective is then to evaluate the long-term time series of the forcing parameters, focusing on trends and mean error corrections of air-sea fluxes. Our corrections tend to equilibrate the net heat flux balance at the global scale (highly positive in the ERAi database), and to remove the potentially unrealistic negative trend (leading to ocean cooling) in the ERAi net heat flux over the whole time period. More specifically, in the intertropical band, we reduce the warm bias of ERAi data mostly by modifying the latent heat flux through wind velocity intensification. Consistently, when used to force the model, the corrected parameters lead to a better agreement between the mean SST produced by the model and

  6. Initial pre-stress finding procedure and structural performance research for Levy cable dome based on linear adjustment theory

    2007-01-01

    The cable-strut structural system is statically and kinematically indeterminate, and the initial pre-stress is a key factor in determining its shape and load-carrying capacity. A new numerical algorithm is presented herein for the initial pre-stress finding procedure of the complete cable-strut assembly. This method is based on linear adjustment theory and does not take material behavior into account. By using this method, the initial pre-stress of the multiple self-stress modes can be found easily, and the calculation process is simple and efficient. Finally, the initial pre-stress and structural performance of a particular Levy cable dome are analyzed comprehensively. The algorithm has proven to be efficient and correct, and the numerical results are valuable for the practical design of Levy cable domes.

  7. Technical Document for Price Adjustment

    Zheng Tian; Mulugeta Kahsai; Randall Jackson

    2014-01-01

    This document presents the basis for the price adjustment mechanisms in a time series IO model. The essentials of the price adjustment and price change propagation algorithms are presented, along with a matrix permutation algorithm that facilitates the implementation of the price adjustment mechanism. The Matlab function is provided.

  8. Citizens' Perceptions of Flood Hazard Adjustments: An Application of the Protective Action Decision Model

    Terpstra, Teun; Lindell, Michael K.

    2013-01-01

    Although research indicates that adoption of flood preparations among Europeans is low, only a few studies have attempted to explain citizens' preparedness behavior. This article applies the Protective Action Decision Model (PADM) to explain flood preparedness intentions in the Netherlands. Survey data (N = 1,115) showed that…

  9. CFD modeling of a prismatic spouted bed with two adjustable gas inlets

    Gryczka, Oliver; Heinrich, Stefan; Deen, Niels G.; Sint Annaland, van Martin; Kuipers, Johannes A.M.; Mörl, Lothar

    2009-01-01

    Since the invention of the spouted bed technology by Mathur and Gishler (1955), different kinds of apparatus designs have been developed and a huge number of applications in nearly all branches of industry have emerged. Modeling of spouted beds by means of modern simulation tools, like discrete particle m

  11. An empirically adjusted approach to reproductive number estimation for stochastic compartmental models: A case study of two Ebola outbreaks.

    Brown, Grant D; Oleson, Jacob J; Porter, Aaron T

    2016-06-01

    The various thresholding quantities grouped under the "Basic Reproductive Number" umbrella are often confused, but represent distinct approaches to estimating epidemic spread potential, and address different modeling needs. Here, we contrast several common reproduction measures applied to stochastic compartmental models, and introduce a new quantity dubbed the "empirically adjusted reproductive number" with several advantages. These include: more complete use of the underlying compartmental dynamics than common alternatives, use as a potential diagnostic tool to detect the presence and causes of intensity process underfitting, and the ability to provide timely feedback on disease spread. Conceptual connections between traditional reproduction measures and our approach are explored, and the behavior of our method is examined under simulation. Two illustrative examples are developed: First, the single location applications of our method are established using data from the 1995 Ebola outbreak in the Democratic Republic of the Congo and a traditional stochastic SEIR model. Second, a spatial formulation of this technique is explored in the context of the ongoing Ebola outbreak in West Africa with particular emphasis on potential use in model selection, diagnosis, and the resulting applications to estimation and prediction. Both analyses are placed in the context of a newly developed spatial analogue of the traditional SEIR modeling approach. PMID:26574727
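
    For orientation, the discrete-time stochastic (chain-binomial) SEIR dynamics on which such reproduction measures operate can be sketched as below. The empirically adjusted reproductive number itself is defined in the paper; the parameters here are rough, illustrative time scales, not fitted values:

```python
# Chain-binomial stochastic SEIR simulation (illustrative skeleton).
import numpy as np

def seir_step(S, E, I, R, beta, sigma, gamma, N, rng):
    p_inf = 1.0 - np.exp(-beta * I / N)             # P(susceptible is exposed)
    new_E = rng.binomial(S, p_inf)
    new_I = rng.binomial(E, 1.0 - np.exp(-sigma))   # E -> I transitions
    new_R = rng.binomial(I, 1.0 - np.exp(-gamma))   # I -> R transitions
    return S - new_E, E + new_E - new_I, I + new_I - new_R, R + new_R

rng = np.random.default_rng(7)
N = 10_000
S, E, I, R = N - 10, 5, 5, 0
beta, sigma, gamma = 0.35, 1 / 9.0, 1 / 7.0    # rough Ebola-like time scales
for _ in range(120):                           # 120 days
    S, E, I, R = seir_step(S, E, I, R, beta, sigma, gamma, N, rng)
print("final attack fraction:", R / N)
```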

  12. Modeling of multivariate longitudinal phenotypes in family genetic studies with Bayesian multiplicity adjustment

    Ding, Lili; Kurowski, Brad G.; He, Hua; Alexander, Eileen S.; Mersha, Tesfaye B.; Fardo, David W.; Zhang, Xue; Pilipenko, Valentina V; Kottyan, Leah; Martin, Lisa J.

    2014-01-01

    Genetic studies often collect data on multiple traits. Most genetic association analyses, however, consider traits separately and ignore potential correlation among traits, partially because of difficulties in statistical modeling of multivariate outcomes. When multiple traits are measured in a pedigree longitudinally, additional challenges arise because in addition to correlation between traits, a trait is often correlated with its own measures over time and with measurements of other family...

  13. All-glass shell scale models made with an adjustable mould

    Belis, JLIF Jan; Pronk, ADC Arno; Schuurmans, WB; Blancke, T

    2011-01-01

    Ever since Lucio Blandini developed a doubly curved synclastic shell with adhesively bonded glass components, the concept of building a self-supporting glass-only shell has almost come within reach. In the current contribution, a small-scale experimental concept is presented for a self-supporting anticlastic all-glass shell scale model, created by means of an adaptable mould. First, different manufacturing parameters of relatively small shells are investigated, such as mould type, glass s

  14. Models of Quality-Adjusted Life Years when Health varies over Time: Survey and Analysis

    Hansen, Kristian Schultz; Østerdal, Lars Peter

    2006-01-01

    time trade-off (TTO) and standard gamble (SG) scores. We investigate deterministic and probabilistic models and consider five different families of discounting functions in all. This discussion includes questioning the SG method as the gold standard of the health state index, re-examining the role of...... the constant-proportional trade-off condition, revisiting the problem of double discounting of QALYs, and suggesting that it is not a matter of choosing between TTO and SG procedures as the combination of these two can be used to disentangle risk aversion from discounting. We find that caution must be...

  15. Models of quality-adjusted life years when health varies over time

    Hansen, Kristian Schultz; Østerdal, Lars Peter

    2006-01-01

    time tradeoff (TTO) and standard gamble (SG) scores. We investigate deterministic and probabilistic models and consider five different families of discounting functions in all. The second part of the paper discusses four issues recurrently debated in the literature. This discussion includes questioning...... the SG method as the gold standard for estimation of the health state index, reexamining the role of the constantproportional tradeoff condition, revisiting the problem of double discounting of QALYs, and suggesting that it is not a matter of choosing between TTO and SG procedures as the combination...

  16. Devising a model brand loyalty in tires industry: the adjustment role of customer perceived value

    Davoud Feiz

    2015-06-01

    Full Text Available Today, brands receive considerable attention from companies and market agents. Factors such as customer loyalty to a brand affect sales and profit. The present paper studies the impact of brand experience, trust and satisfaction on loyalty to the Barez Tire brand in the city of Kerman, and provides a model for this case. The research population consists of all Barez Tire consumers in Kerman; a simple random sample of 171 was drawn. Data were collected with a standard questionnaire whose reliability was measured with Cronbach's alpha. The research is applied in terms of purpose and descriptive-correlational in terms of data collection. To analyze the data, confirmatory factor analysis (CFA) and structural equation modeling (SEM) in SPSS and LISREL were used. The findings indicate that brand experience, brand trust and brand satisfaction significantly affect loyalty to the Barez Tire brand in Kerman. Notably, the impact of these factors is stronger when the moderating role of perceived value is taken into account.

  17. A Unified Framework for Quasi-Linear Bundle Adjustment

    Bartoli, Adrien

    2002-01-01

    Obtaining 3D models from long image sequences is a major issue in computer vision. One of the main tools used to obtain accurate structure and motion estimates is bundle adjustment. Bundle adjustment is usually performed using nonlinear Newton-type optimizers such as Levenberg-Marquardt which might be quite slow when handling a large number of points or views. We investigate an algorithm for bundle adjustment based on quasi-linear optimization. The method is straightforward to implement and r...
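
    To make the idea concrete, the sketch below runs a toy bundle adjustment (joint refinement of structure and motion by minimizing reprojection error) with scipy's least_squares. It uses a simplified 1D-translation pinhole setup with hypothetical values, and a standard trust-region optimizer rather than the quasi-linear scheme the paper investigates.

    import numpy as np
    from scipy.optimize import least_squares

    rng = np.random.default_rng(0)
    f = 500.0                                   # known focal length (pixels)
    t_true = np.array([0.0, 1.0, 2.0])          # camera translations along x
    P_true = np.column_stack([rng.uniform(-2, 2, 8),   # point X coordinates
                              rng.uniform(4, 8, 8)])   # point depths Z

    def project(P, t):
        # pinhole projection u_ij = f * (X_i - t_j) / Z_i for all points/cameras
        return f * (P[:, 0, None] - t[None, :]) / P[:, 1, None]

    obs = project(P_true, t_true) + rng.normal(0.0, 0.5, (8, 3))  # noisy pixels

    def residuals(params):
        # first camera and the baseline are held fixed to remove gauge freedom
        t = np.array([0.0, 1.0, params[0]])
        P = params[1:].reshape(8, 2)
        return (project(P, t) - obs).ravel()

    x0 = np.concatenate([[1.5], (P_true + rng.normal(0, 0.3, (8, 2))).ravel()])
    sol = least_squares(residuals, x0)          # joint structure+motion refinement
    print("estimated third-camera translation:", sol.x[0])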

  18. Model-Based Reasoning

    Ifenthaler, Dirk; Seel, Norbert M.

    2013-01-01

    In this paper, there will be a particular focus on mental models and their application to inductive reasoning within the realm of instruction. A basic assumption of this study is the observation that the construction of mental models and related reasoning is a slowly developing capability of cognitive systems that emerges effectively with proper…

  19. Fast Cloud Adjustment to Increasing CO2 in a Superparameterized Climate Model

    Marat Khairoutdinov

    2012-05-01

    Full Text Available Two-year simulation experiments with a superparameterized climate model, SP-CAM, are performed to understand the fast tropical (30S-30N) cloud response to an instantaneous quadrupling of CO2 concentration with SST held fixed at present-day values. The greenhouse effect of the CO2 perturbation quickly warms the tropical land surfaces by an average of 0.5 K. This shifts rising motion, surface precipitation, and cloud cover at all levels from the ocean to the land, with only small net tropical-mean cloud changes. There is a widespread average reduction of about 80 m in the depth of the trade inversion capping the marine boundary layer (MBL) over the cooler subtropical oceans. One apparent contributing factor is CO2-enhanced downwelling longwave radiation, which reduces boundary-layer radiative cooling, a primary driver of turbulent entrainment through the trade inversion. A second contributor is a slight CO2-induced heating of the free troposphere above the MBL, which strengthens the trade inversion and also inhibits entrainment. There is a corresponding downward displacement of MBL clouds with a very slight decrease in mean cloud cover and albedo. Two-dimensional cloud-resolving model (CRM) simulations of this MBL response are run to steady state using composite SP-CAM simulated thermodynamic and wind profiles from a representative cool subtropical ocean regime, for the control and 4xCO2 cases. Simulations with a CRM grid resolution equal to that of SP-CAM are compared with much finer resolution simulations. The coarse-resolution simulations maintain a cloud fraction and albedo comparable to SP-CAM, but the fine-resolution simulations have a much smaller cloud fraction. Nevertheless, both CRM configurations simulate a reduction in inversion height comparable to SP-CAM. The changes in low cloud cover and albedo in the CRM simulations are small, but both simulations predict a slight reduction in low cloud albedo as in SP-CAM.

  20. Casemix-based funding of Northern Territory public hospitals: adjusting for severity and socio-economic variations.

    Beaver, C; Zhao, Y; McDermid, S; Hindle, D

    1998-02-01

    The Northern Territory intends to make use of Australian National Diagnosis Related Groups (DRGs) and their cost relativities as the basis for the allocation of budgets among public hospitals. The study reported here attempted to assess the extent to which there are variations in severity of illness and socio-economic status which are not adequately explained by DRG alone and, if so, to develop a DRG payment adjustment index by use of routinely available data items. The investigation was undertaken by use of a database containing all discharges between July 1992 and June 1995. Hospital length of stay was used as a proxy for cost. Multivariate analysis was undertaken and it was found that several variables were associated with cost variations within DRGs. Stepwise multiple linear regression was used to develop a model in which 14 variables were able to explain 45% of the variations. Index values were subsequently computed from the regression model for each of eight categories of admitted patient episodes which are the intersections of three binary variables: Aborigine or non-Aborigine, rural or urban usual place of residence of the patient and hospital type (teaching or other). It is intended that these index values will be used to compute differential funding rates for each hospital in the Territory. PMID:9541084

  1. Model-based software design

    Iscoe, Neil; Liu, Zheng-Yang; Feng, Guohui; Yenne, Britt; Vansickle, Larry; Ballantyne, Michael

    1992-01-01

    Domain-specific knowledge is required to create specifications, generate code, and understand existing systems. Our approach to automating software design is based on instantiating an application domain model with industry-specific knowledge and then using that model to achieve the operational goals of specification elicitation and verification, reverse engineering, and code generation. Although many different specification models can be created from any particular domain model, each specification model is consistent and correct with respect to the domain model.

  2. Weather Radar Adjustment Using Runoff from Urban Surfaces

    Ahm, Malte; Rasmussen, Michael Robdrup

    2016-01-01

    Weather radar data used for urban drainage applications are traditionally adjusted to point ground references, e.g., rain gauges. However, the available rain gauge density for the adjustment is often low, which may lead to significant representativeness errors. Yet, in many urban catchments, rainfall is often measured indirectly through runoff sensors. This paper presents a method for weather radar adjustment on the basis of runoff observations (Z-Q adjustment) as an alternative to the traditional Z-R adjustment on the basis of rain gauges. Data from a new monitoring station in Aalborg, Denmark, were used to evaluate the flow-based weather radar adjustment method against the traditional rain-gauge adjustment. The evaluation was performed by comparing radar-modeled runoff to observed runoff. The methodology was tested both on an event basis and on multiple events combined. The results...
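
    A minimal sketch of the flavor of such a flow-based adjustment (not the authors' exact procedure) is an event-scale bias factor that rescales radar rainfall so the radar-implied runoff volume matches the observed runoff volume; the catchment values and the lumped runoff coefficient below are assumptions.

    import numpy as np

    def zq_bias_factor(radar_rain_mm, obs_runoff_m3, area_m2, runoff_coeff=0.8):
        """Ratio of observed runoff volume to the runoff volume implied by
        radar rainfall over the catchment (lumped relation is an assumption)."""
        radar_runoff_m3 = radar_rain_mm / 1000.0 * area_m2 * runoff_coeff
        return obs_runoff_m3.sum() / radar_runoff_m3.sum()

    radar = np.array([1.2, 3.4, 0.8])        # radar event accumulations (mm)
    runoff = np.array([95.0, 300.0, 70.0])   # observed runoff volumes (m3)
    F = zq_bias_factor(radar, runoff, area_m2=1.0e5)
    adjusted = F * radar                     # bias-adjusted radar rainfall
    print(F, adjusted)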

  3. Model-based Software Engineering

    Kindler, Ekkart

    2010-01-01

    The vision of model-based software engineering is to make models the main focus of software development and to automatically generate software from these models. Part of that idea already works today. But there are still difficulties when it comes to behaviour. Actually, there is no lack of models...

  4. Principles of models based engineering

    Dolin, R.M.; Hefele, J.

    1996-11-01

    This report describes a Models Based Engineering (MBE) philosophy and implementation strategy that has been developed at Los Alamos National Laboratory's Center for Advanced Engineering Technology. A major theme in this discussion is that models based engineering is an information management technology enabling the development of information driven engineering. Unlike other information management technologies, models based engineering encompasses the breadth of engineering information, from design intent through product definition to consumer application.

  5. Model Construct Based Enterprise Model Architecture and Its Modeling Approach

    2002-01-01

    To support enterprise integration, a model construct based enterprise model architecture and its modeling approach are studied in this paper. First, the structural makeup and internal relationships of the enterprise model architecture are discussed. Then, the concept of the reusable model construct (MC), which belongs to the control view and can help to derive other views, is proposed. The modeling approach based on model constructs consists of three steps: reference model architecture synthesis, enterprise model customization, and system design and implementation. Following the MC-based modeling approach, a case study set in one-of-a-kind-product machinery manufacturing enterprises is illustrated. It is shown that the proposed model construct based enterprise model architecture and modeling approach are practical and efficient.

  6. Exploration of Loggerhead Shrike Habitats in Grassland National Park of Canada Based on in Situ Measurements and Satellite-Derived Adjusted Transformed Soil-Adjusted Vegetation Index (ATSAVI

    Li Shen

    2013-01-01

    Full Text Available The population of loggerhead shrike (Lanius ludovicianus excubitorides) in Grassland National Park of Canada (GNPC) has undergone a severe decline due to habitat loss and limitation. Shrike habitat availability is highly impacted by the biophysical characteristics of grassland landscapes. This study was conducted in the west block of the GNPC. The overall purpose was to extract important biophysical and topographical variables from both SPOT satellite imagery and in situ measurements. Statistical analyses including analysis of variance (ANOVA), the coefficient of variation (CV), and regression analysis were applied to the variables obtained from both imagery and in situ measurements. Vegetation spatial variation and heterogeneity among active, inactive and control nesting sites at 20 m × 20 m, 60 m × 60 m and 100 m × 100 m scales were investigated. Results indicated that shrikes prefer to nest in open areas with scattered shrubs, particularly thick or thorny species of smaller size, to discourage mammalian predators. The most important topographical characteristic is that active sites are located far away from roads at higher elevation. The vegetation index was identified as a good indicator of vegetation characteristics for shrike habitats due to its significant relation to most relevant biophysical factors. Spatial variation analysis showed that at all spatial scales, active sites have the lowest vegetation abundance and the highest heterogeneity among the three types of nesting sites. For all shrike habitat types, vegetation abundance decreases with increasing spatial scale while habitat heterogeneity increases with increasing spatial scale. This research also indicated that suitable shrike habitat for the GNPC can be mapped using a logistic model with ATSAVI and dead material in shrub canopy as the independent variables.
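
    For reference, the ATSAVI named in the title is usually written (following Baret and Guyot, 1991) in terms of the red (R) and near-infrared (NIR) reflectances and the soil line slope a and intercept b, with the adjustment constant X commonly set to 0.08:

        $$\mathrm{ATSAVI}=\frac{a\,(\mathrm{NIR}-a\,R-b)}{R+a\,\mathrm{NIR}-ab+X\,(1+a^{2})},\qquad X=0.08.$$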

  7. Parental Perceptions of Family Adjustment in Childhood Developmental Disabilities

    Thompson, Sandra; Hiebert-Murphy, Diane; Trute, Barry

    2013-01-01

    Based on the adjustment phase of the double ABC-X model of family stress (McCubbin and Patterson, 1983) this study examined the impact of parenting stress, positive appraisal of the impact of child disability on the family, and parental self-esteem on parental perceptions of family adjustment in families of children with disabilities. For mothers,…

  8. The Impact of Paradoxical Comorbidities on Risk-Adjusted...

    U.S. Department of Health & Human Services — Persistent uncertainty remains regarding assessments of patient comorbidity based on administrative data for mortality risk adjustment. Some models include comorbid...

  9. Gradient-based adaptation of continuous dynamic model structures

    La Cava, William G.; Danai, Kourosh

    2016-01-01

    A gradient-based method of symbolic adaptation is introduced for a class of continuous dynamic models. The proposed model structure adaptation method starts with the first-principles model of the system and adapts its structure after adjusting its individual components in symbolic form. A key contribution of this work is its introduction of the model's parameter sensitivity as the measure of symbolic changes to the model. This measure, which is essential to defining the structural sensitivity of the model, not only accommodates algebraic evaluation of candidate models in lieu of more computationally expensive simulation-based evaluation, but also makes possible the implementation of gradient-based optimisation in symbolic adaptation. The proposed method is applied to models of several virtual and real-world systems that demonstrate its potential utility.

  10. Adjustable relevance search in Web-based parts library (面向Web零件库的可拓关联搜索)

    顾复; 张树有

    2011-01-01

    To solve the data heterogeneity and information immensity problems of Web-based parts libraries, Adjustable Relevance Search (ARS) oriented to Web-based parts libraries was put forward. Resource Description Framework Schema (RDFS) was used to construct ontology models of parts information resources as nodes. Nodes were connected with each other by various semantic relationships to form a semantic web so as to realize extended relevance search. Based on this semantic web model, the working process and algorithm of ARS were put forward and explained. The feasibility and practicality of ARS in a Web-based parts library built on a semantic network were demonstrated by a simple programming example designed for injection molding machines.

  11. Graph Model Based Indoor Tracking

    Jensen, Christian Søndergaard; Lu, Hua; Yang, Bin

    2009-01-01

    The tracking of the locations of moving objects in large indoor spaces is important, as it enables a range of applications related to, e.g., security and indoor navigation and guidance. This paper presents a graph model based approach to indoor tracking that offers a uniform data management infrastructure for different symbolic positioning technologies, e.g., Bluetooth and RFID. More specifically, the paper proposes a model of indoor space that comprises a base graph and mappings that represent the topology of indoor space at different levels. The resulting model can be used for one or several indoor positioning technologies. Focusing on RFID-based positioning, an RFID specific reader deployment graph model is built from the base graph model. This model is then used in several algorithms for constructing and refining trajectories from raw RFID readings. Empirical studies with implementations...

  12. Macroeconomic Fluctuations and Propagation Mechanisms: An Agent-Based Simulation Model

    Sella Lisa

    2009-01-01

    This paper proposes an agent-based simulation model exploring aggregate business fluctuations in an artificial market economy. It is inspired by the C++ agent-based simulation model in Howitt (2006), and proposes a modified NetLogo model, which provides new procedures and parameters aimed at analyzing the endogenous dynamics of market adjustment processes.

  13. Cluster Based Text Classification Model

    Nizamani, Sarwat; Memon, Nasrullah; Wiil, Uffe Kock

    2011-01-01

    We propose a cluster based classification model for suspicious email detection and other text classification tasks. The text classification tasks comprise many training examples that require a complex classification model. Using clusters for classification makes the model simpler and increases the...... classifier is trained on each cluster having reduced dimensionality and less number of examples. The experimental results show that the proposed model outperforms the existing classification models for the task of suspicious email detection and topic categorization on the Reuters-21578 and 20 Newsgroups...... datasets. Our model also outperforms A Decision Cluster Classification (ADCC) and the Decision Cluster Forest Classification (DCFC) models on the Reuters-21578 dataset....
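
    A generic cluster-then-classify sketch in the spirit of this record (not the authors' exact model) is shown below with scikit-learn: documents are partitioned with k-means and a simple classifier is fitted per cluster. Features would be, e.g., TF-IDF vectors; the dense arrays and parameter values here are assumptions.

    import numpy as np
    from sklearn.cluster import KMeans
    from sklearn.linear_model import LogisticRegression

    def fit_cluster_classifiers(X, y, k=5, seed=0):
        km = KMeans(n_clusters=k, random_state=seed, n_init=10).fit(X)
        models = {}
        for c in range(k):
            mask = km.labels_ == c
            # assumes every cluster contains more than one class; real code
            # would fall back to a majority-vote rule for single-class clusters
            models[c] = LogisticRegression(max_iter=1000).fit(X[mask], y[mask])
        return km, models

    def predict(km, models, X):
        labels = km.predict(X)              # route each document to its cluster
        return np.array([models[c].predict(x[None, :])[0]
                         for c, x in zip(labels, X)])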

  14. Concept Tree Based Information Retrieval Model

    Chunyan Yuan

    2014-05-01

    Full Text Available This paper proposes a novel concept-based query expansion technique named the Markov concept tree model (MCTM), which discovers term relationships through a concept tree deduced from a term Markov network. We address two important issues for query expansion: the selection and the weighting of expansion search terms. In contrast to earlier methods, queries are expanded by adding those terms that are most similar to the concept of the query, rather than by selecting terms that are similar to single query terms. Utilizing a Markov network constructed from the co-occurrence information of the terms in the collection, the method generates a concept tree for each original query term, removes the redundant and irrelevant nodes in the concept tree, and then adjusts the weight of the original query and the weights of the expansion terms based on a pruning algorithm. We use this model for query expansion and evaluate its effectiveness by examining the accuracy and robustness of the expansion methods. Compared with the baseline model, experiments on a standard dataset reveal that this method achieves better query quality.
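
    A far simpler co-occurrence-based expansion, sketched below, conveys the underlying idea (it does not reproduce the MCTM tree construction or pruning): terms whose co-occurrence profiles are most similar to the query term's profile are chosen as expansion terms. The toy matrix is hypothetical.

    import numpy as np

    def cooccurrence(doc_term):
        """Term-term co-occurrence counts from a binary document-term matrix."""
        return doc_term.T @ doc_term

    def expand_query(query_idx, C, top_k=3):
        # cosine similarity between the query term's co-occurrence profile
        # and every other term's profile
        norms = np.linalg.norm(C, axis=1) + 1e-12
        sims = (C @ C[query_idx]) / (norms * norms[query_idx])
        sims[query_idx] = -1.0                 # exclude the original term
        return np.argsort(sims)[::-1][:top_k]  # indices of expansion terms

    docs = np.array([[1, 1, 0, 0],             # toy binary doc-term matrix
                     [1, 1, 1, 0],
                     [0, 0, 1, 1]])
    print(expand_query(0, cooccurrence(docs)))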

  15. Adjusting Monetary Measures of Poverty to Non-Monetary Aspects: An Analysis Based on Sri Lankan Data

    Weerahewa, Jeevika; Wickramasinghe, Kanchana

    2005-01-01

    This paper reassesses the status of poverty in Sri Lanka using a monetary measure which was adjusted for people's perceptions about the social climate. Data collected by the Sri Lanka Integrated Survey was used to obtain incidences of poverty using cost of basic need (CBN) poverty lines and poverty lines adjusted for people's perceptions. The results reveal that the poverty measurements significantly differ with the two approaches though poverty ranking remains more or less consistent.

  16. Energetics of geostrophic adjustment in rotating flow

    Juan, Fang; Rongsheng, Wu

    2002-09-01

    Energetics of geostrophic adjustment in rotating flow is examined in detail with a linear shallow water model. The initial unbalanced flow considered first falls under two classes. The first is similar to that adopted by Gill and is here referred to as a mass imbalance model, for the flow is initially motionless but with a sea surface displacement. The other is the same as that considered by Rossby and is referred to as a momentum imbalance model since there is only a velocity perturbation in the initial field. The significant feature of the energetics of geostrophic adjustment for the above two extreme models is that although the energy conversion ratio has a large case-to-case variability for different initial conditions, its value is bounded below by 0 and above by 1/2. Based on the discussion of the above extreme models, the energetics of adjustment for an arbitrary initial condition is investigated. It is found that the characteristics of the energetics of geostrophic adjustment mentioned above are also applicable to adjustment of the general unbalanced flow, under the condition that the energy conversion ratio is redefined as the conversion ratio between the changes of kinetic energy and potential energy of the deviational fields.
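
    In the notation such studies typically use (an assumed but standard definition, not spelled out in the abstract), the conversion ratio in question is

        $$r=\frac{\Delta KE}{-\,\Delta PE},\qquad 0\le r\le \tfrac{1}{2},$$

    where ΔKE is the kinetic energy gained by the adjusting flow and −ΔPE the potential energy released; for the textbook Rossby/Gill adjustment of an initial step in surface elevation, the classical result is r = 1/3, the remaining two thirds being radiated away by inertia-gravity waves.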

  17. Risk Adjusted Deposit Insurance for Japanese Banks

    Ryuzo Sato; Rama V. Ramachandran; Bohyong Kang

    1990-01-01

    The purpose of this paper is to evaluate the Japanese deposit insurance scheme by contrasting the flat insurance rate with a market-determined risk-adjusted rate. The model used to calculate the risk-adjusted rate is that of Ronn and Verma (1986). It utilizes the notion of Merton (1977) that deposit insurance can be based on a one-to-one relation between it and a put option; this permits the application of the Black and Scholes (1973) model for the calculation of the insurance rate. The ris...
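
    Under stated assumptions, the put-option analogy can be sketched in a few lines of Python. The convention below follows the common textbook reduction in which the riskless rate nets out because insured deposits grow at that rate; the inference of asset value and volatility from equity prices that Ronn and Verma add is omitted, and the numbers are illustrative only.

    from math import log, sqrt
    from statistics import NormalDist

    def insurance_premium_per_dollar(V, B, sigma, T=1.0):
        """Merton (1977)-style deposit guarantee valued as a put on bank assets
        V with strike equal to the insured deposit base B at horizon T (years)."""
        N = NormalDist().cdf
        d1 = (log(V / B) + 0.5 * sigma ** 2 * T) / (sigma * sqrt(T))
        d2 = d1 - sigma * sqrt(T)
        put = B * N(-d2) - V * N(-d1)   # riskless rate nets out in this convention
        return put / B                  # premium per dollar of deposits

    # Hypothetical bank: assets 5% above deposits, 5% asset volatility.
    print(insurance_premium_per_dollar(V=105.0, B=100.0, sigma=0.05))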

  18. Realistic PIC modelling of laser-plasma interaction: a direct implicit method with adjustable damping and high order weight functions

    This research thesis proposes a new formulation of the relativistic implicit direct method, based on the weak formulation of the wave equation which is solved by means of a Newton algorithm. The first part of this thesis deals with the properties of the explicit particle-in-cell (PIC) methods: properties and limitations of an explicit PIC code, linear analysis of a numerical plasma, numerical heating phenomenon, interest of a higher order interpolation function, and presentation of two applications in high density relativistic laser-plasma interaction. The second and main part of this report deals with adapting the direct implicit method to laser-plasma interaction: presentation of the state of the art, formulating of the direct implicit method, resolution of the wave equation. The third part concerns various numerical and physical validations of the ELIXIRS code: case of laser wave propagation in vacuum, demonstration of the adjustable damping which is a characteristic of the proposed algorithm, influence of space-time discretization on energy conservation, expansion of a thermal plasma in vacuum, two cases of plasma-beam unsteadiness in relativistic regime, and then a case of the overcritical laser-plasma interaction

  19. KNOWLEDGE BASED APPROACH TO SOFTWARE DEVELOPMENT PROCESS MODELING

    Jan Kozusznik; Svatopluk Stolfa

    2011-01-01

    Modeling a software process is one way a company can decide which software process and/or its adjustment is the best solution for the current project. Modeling is the way the process is presented or simulated. Since there are many different approaches to modeling and all of them have pros and cons, the very first task is the selection of an appropriate and useful modeling approach for the current goal and selected conditions. In this paper, we propose an approach based on ontologies.

  1. A Computational Tool for Testing Dose-related Trend Using an Age-adjusted Bootstrap-based Poly-k Test

    Hojin Moon; Hongshik Ahn; Kodell, Ralph L

    2006-01-01

    A computational tool for testing for a dose-related trend and/or a pairwise difference in the incidence of an occult tumor via an age-adjusted bootstrap-based poly-k test and the original poly-k test is presented in this paper. The poly-k test (Bailer and Portier 1988) is a survival-adjusted Cochran-Armitage test, which achieves robustness to effects of differential mortality across dose groups. The original poly-k test is asymptotically standard normal under the null hypothesis. However, the...

  2. Base Flow Model Validation Project

    National Aeronautics and Space Administration — The program focuses on turbulence modeling enhancements for predicting high-speed rocket base flows. A key component of the effort is the collection of...

  3. Consequences of spatial autocorrelation for niche-based models

    Segurado, P.; Araújo, Miguel B.; Kunin, W. E.

    2006-01-01

    1. Spatial autocorrelation is an important source of bias in most spatial analyses. We explored the bias introduced by spatial autocorrelation on the explanatory and predictive power of species' distribution models, and make recommendations for dealing with the problem. 2. Analyses were based...... of significance based on randomizations were obtained. 3. Spatial autocorrelation was shown to represent a serious problem for niche-based species' distribution models. Significance values were found to be inflated up to 90-fold. 4. In general, GAM and CTA performed better than GLM, although all three methods...... autocorrelated variables, these need to be adjusted. The reliability and value of niche-based distribution models for management and other applied ecology purposes can be improved if certain techniques and procedures, such as the null model approach recommended in this study, are implemented during the model......

  4. Modeling Guru: Knowledge Base for NASA Modelers

    Seablom, M. S.; Wojcik, G. S.; van Aartsen, B. H.

    2009-05-01

    Modeling Guru is an on-line knowledge-sharing resource for anyone involved with or interested in NASA's scientific models or High End Computing (HEC) systems. Developed and maintained by the NASA's Software Integration and Visualization Office (SIVO) and the NASA Center for Computational Sciences (NCCS), Modeling Guru's combined forums and knowledge base for research and collaboration is becoming a repository for the accumulated expertise of NASA's scientific modeling and HEC communities. All NASA modelers and associates are encouraged to participate and provide knowledge about the models and systems so that other users may benefit from their experience. Modeling Guru is divided into a hierarchy of communities, each with its own set forums and knowledge base documents. Current modeling communities include those for space science, land and atmospheric dynamics, atmospheric chemistry, and oceanography. In addition, there are communities focused on NCCS systems, HEC tools and libraries, and programming and scripting languages. Anyone may view most of the content on Modeling Guru (available at http://modelingguru.nasa.gov/), but you must log in to post messages and subscribe to community postings. The site offers a full range of "Web 2.0" features, including discussion forums, "wiki" document generation, document uploading, RSS feeds, search tools, blogs, email notification, and "breadcrumb" links. A discussion (a.k.a. forum "thread") is used to post comments, solicit feedback, or ask questions. If marked as a question, SIVO will monitor the thread, and normally respond within a day. Discussions can include embedded images, tables, and formatting through the use of the Rich Text Editor. Also, the user can add "Tags" to their thread to facilitate later searches. The "knowledge base" is comprised of documents that are used to capture and share expertise with others. The default "wiki" document lets users edit within the browser so others can easily collaborate on the

  5. Holocene sea-level changes along the North Carolina Coastline and their implications for glacial isostatic adjustment models

    Horton, B.P.; Peltier, W.R.; Culver, S.J.; Drummond, R.; Engelhart, S.E.; Kemp, A.C.; Mallinson, D.; Thieler, E.R.; Riggs, S.R.; Ames, D.V.; Thomson, K.H.

    2009-01-01

    We have synthesized new and existing relative sea-level (RSL) data to produce a quality-controlled, spatially comprehensive database from the North Carolina coastline. The RSL database consists of 54 sea-level index points that are quantitatively related to an appropriate tide level and assigned an error estimate, and a further 33 limiting dates that confine the maximum and minimum elevations of RSL. The temporal distribution of the index points is very uneven with only five index points older than 4000 cal a BP, but the form of the Holocene sea-level trend is constrained by both terrestrial and marine limiting dates. The data illustrate RSL rapidly rising during the early and mid Holocene from an observed elevation of -35.7 ± 1.1 m MSL at 11062-10576 cal a BP to -4.2 m ± 0.4 m MSL at 4240-3592 cal a BP. We restricted comparisons between observations and predictions from the ICE-5G(VM2) with rotational feedback Glacial Isostatic Adjustment (GIA) model to the Late Holocene RSL (last 4000 cal a BP) because of the wealth of sea-level data during this time interval. The ICE-5G(VM2) model predicts significant spatial variations in RSL across North Carolina, thus we subdivided the observations into two regions. The model forecasts an increase in the rate of sea-level rise in Region 1 (Albemarle, Currituck, Roanoke, Croatan, and northern Pamlico sounds) compared to Region 2 (southern Pamlico, Core and Bogue sounds, and farther south to Wilmington). The observations show Late Holocene sea-level rising at 1.14 ± 0.03 mm year-1 and 0.82 ± 0.02 mm year-1 in Regions 1 and 2, respectively. The ICE-5G(VM2) predictions capture the general temporal trend of the observations, although there is an apparent misfit for index points older than 2000 cal a BP. It is presently unknown whether these misfits are caused by possible tectonic uplift associated with the mid-Carolina Platform High or a flaw in the GIA model. A comparison of local tide gauge data with the Late Holocene RSL

  6. Analysis of ISO NE Balancing Requirements: Uncertainty-based Secure Ranges for ISO New England Dynamic Interchange Adjustments

    Etingov, Pavel V.; Makarov, Yuri V.; Wu, Di; Hou, Zhangshuan; Sun, Yannan; Maslennikov, S.; Luo, X.; Zheng, T.; George, S.; Knowland, T.; Litvinov, E.; Weaver, S.; Sanchez, E.

    2013-01-31

    The document describes detailed uncertainty quantification (UQ) methodology developed by PNNL to estimate secure ranges of potential dynamic intra-hour interchange adjustments in the ISO-NE system and provides description of the dynamic interchange adjustment (DINA) tool developed under the same contract. The overall system ramping up and down capability, spinning reserve requirements, interchange schedules, load variations and uncertainties from various sources that are relevant to the ISO-NE system are incorporated into the methodology and the tool. The DINA tool has been tested by PNNL and ISO-NE staff engineers using ISO-NE data.

  7. Adjustments in the Almod 3W2 code models for reproducing the net load trip test in Angra I nuclear power plant

    The recorded traces obtained from the net load trip test at the Angra I NPP yielded the opportunity to make fine adjustments to the ALMOD 3W2 code models. The changes are described and the results are compared against real plant data. (Author)

  8. Event-Based Activity Modeling

    Bækgaard, Lars

    2004-01-01

    We present and discuss a modeling approach that supports event-based modeling of information and activity in information systems. Interacting human actors and IT-actors may carry out such activity. We use events to create meaningful relations between information structures and the related...

  9. Modelling Gesture Based Ubiquitous Applications

    Zacharia, Kurien; Varghese, Surekha Mariam

    2011-01-01

    A cost-effective, gesture-based modelling technique called Virtual Interactive Prototyping (VIP) is described in this paper. Prototyping is implemented by projecting a virtual model of the equipment to be prototyped. Users can interact with the virtual model as with the original working equipment. Image and sound processing techniques are used for capturing and tracking the user interactions with the model. VIP is a flexible and interactive prototyping method that has many applications in ubiquitous computing environments. Various commercial and socio-economic applications of VIP, as well as its extension to interactive advertising, are also discussed.

  10. A Depth-Adjustment Deployment Algorithm Based on Two-Dimensional Convex Hull and Spanning Tree for Underwater Wireless Sensor Networks.

    Jiang, Peng; Liu, Shuai; Liu, Jun; Wu, Feng; Zhang, Le

    2016-01-01

    Most of the existing node depth-adjustment deployment algorithms for underwater wireless sensor networks (UWSNs) consider only how to optimize network coverage and connectivity rate. However, this literature does not discuss full network connectivity, although optimization of network energy efficiency and network reliability are vital topics for UWSN deployment. Therefore, in this study, a depth-adjustment deployment algorithm based on a two-dimensional (2D) convex hull and spanning tree (NDACS) for UWSNs is proposed. First, the proposed algorithm uses the geometric characteristics of a 2D convex hull and an empty circle to find the optimal location of a sleeping node and activate it, minimizes the network coverage overlaps in the 2D plane, and then increases the coverage rate until the first-layer coverage threshold is reached. Second, the sink node acts as the root node of all active nodes on the 2D convex hull, gradually forming a small spanning tree. Finally, a depth-adjustment strategy based on a time marker is used to achieve the three-dimensional overall network deployment. Compared with existing depth-adjustment deployment algorithms, the simulation results show that the NDACS algorithm can maintain full network connectivity with a high network coverage rate, as well as an improved network average node degree, thus increasing network reliability. PMID:27428970
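
    The 2D convex hull step can be illustrated with scipy; the sketch below is a crude heuristic stand-in (midpoint of the longest hull edge as a wake-up spot), not the paper's empty-circle search, and all coordinates are hypothetical.

    import numpy as np
    from scipy.spatial import ConvexHull

    active = np.random.default_rng(3).uniform(0, 100, (12, 2))  # node (x, y)
    hull = ConvexHull(active)

    # Heuristic stand-in for the empty-circle search: activate a sleeping node
    # at the midpoint of the longest hull edge, where coverage is sparsest.
    edges = [(hull.points[i], hull.points[j]) for i, j in hull.simplices]
    a, b = max(edges, key=lambda e: np.linalg.norm(e[0] - e[1]))
    print("wake-up candidate position:", (a + b) / 2.0)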

  12. A Computational Tool for Testing Dose-related Trend Using an Age-adjusted Bootstrap-based Poly-k Test

    Hojin Moon

    2006-08-01

    Full Text Available A computational tool for testing for a dose-related trend and/or a pairwise difference in the incidence of an occult tumor via an age-adjusted bootstrap-based poly-k test and the original poly-k test is presented in this paper. The poly-k test (Bailer and Portier 1988) is a survival-adjusted Cochran-Armitage test, which achieves robustness to effects of differential mortality across dose groups. The original poly-k test is asymptotically standard normal under the null hypothesis. However, the asymptotic normality is not valid if there is a deviation from the tumor onset distribution that is assumed in this test. Our age-adjusted bootstrap-based poly-k test assesses the significance of assumed asymptotic normal tests and investigates an empirical distribution of the original poly-k test statistic using an age-adjusted bootstrap method. A tumor of interest is an occult tumor for which the time to onset is not directly observable. Since most animal carcinogenicity studies are designed with a single terminal sacrifice, the present tool is applicable to rodent tumorigenicity assays that have a single terminal sacrifice. The present tool takes input information simply from a user screen and reports testing results back to the screen through a user interface. The computational tool is implemented in C/C++ and is applied to analyze a real data set as an example. Our tool enables the FDA and the pharmaceutical industry to implement a statistical analysis of tumorigenicity data from animal bioassays via our age-adjusted bootstrap-based poly-k test and the original poly-k test, which has been adopted by the National Toxicology Program as its standard statistical test.
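
    The poly-k idea itself is compact enough to sketch in Python (the tool described above is in C/C++, and the bootstrap age adjustment is not reproduced here): animals dying tumor-free before terminal sacrifice receive fractional weight (t/T)^k in the group denominators, and a Cochran-Armitage trend statistic is computed on the adjusted counts. Data below are simulated, and k=3 is the conventional default.

    import numpy as np

    def poly_k_trend(dose, tumor, death_time, study_end, k=3):
        """Survival-adjusted Cochran-Armitage trend test in the poly-k spirit;
        approximately N(0,1) under the no-trend null (a sketch only)."""
        groups = np.unique(dose)
        y = np.array([tumor[dose == g].sum() for g in groups])        # tumors
        w = np.where(tumor == 1, 1.0, (death_time / study_end) ** k)  # weights
        n = np.array([w[dose == g].sum() for g in groups])            # adj. sizes
        p = y.sum() / n.sum()
        dbar = (n * groups).sum() / n.sum()
        return ((y - n * p) * (groups - dbar)).sum() / np.sqrt(
            p * (1 - p) * (n * (groups - dbar) ** 2).sum())

    dose = np.repeat([0.0, 1.0, 2.0], 50)
    rng = np.random.default_rng(7)
    tumor = (rng.uniform(size=150) < 0.1 + 0.08 * dose).astype(int)
    death = np.where(tumor == 1, 104.0, rng.uniform(40.0, 104.0, 150))
    print(poly_k_trend(dose, tumor, death, study_end=104.0))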

  13. Content-based network model with duplication and divergence

    Şengün, Yasemin; Erzan, Ayşe

    2006-06-01

    We construct a minimal content-based realization of the duplication and divergence model of genomic networks introduced by Wagner [Proc. Natl. Acad. Sci. 91 (1994) 4387] and investigate the scaling properties of the directed degree distribution and clustering coefficient. We find that the content-based network exhibits crossover between two scaling regimes, with log-periodic oscillations for large degrees. These features are not present in the original gene duplication model, but inherent in the content-based model of Balcan and Erzan. The scaling form of the degree distribution of the content-based model turns out to be robust under duplication and divergence, with some re-adjustment of the scaling exponents, while the out-clustering coefficient goes over from a weak power-law dependence on the degree, to an exponential decay under mutations which include splitting and merging of strings.

  14. STL Triangular Mesh Generation Based on SAT Model

    Yuwei Zhang

    2013-06-01

    Full Text Available Mesh generation is a fundamental technique in multiple domains. In this study, an STL triangular mesh generation method based on the SAT model is proposed. Two novel triangulation methods, the constrained Delaunay algorithm and the grid subtraction algorithm, are employed on the multi-loop planar regions and the curved surfaces, respectively. Through node adjustment, the mesh nodes on the surface boundaries are strictly matched, so that no cracks are created at the joints of model surfaces. Experiments show that the proposed solution works effectively and that a high-quality mesh model is achieved.

  15. Adjustment computations spatial data analysis

    Ghilani, Charles D

    2011-01-01

    The complete guide to adjusting for measurement error, expanded and updated. No measurement is ever exact. Adjustment Computations updates a classic, definitive text on surveying with the latest methodologies and tools for analyzing and adjusting errors, with a focus on least squares adjustments, the most rigorous methodology available and the one on which accuracy standards for surveys are based. This extensively updated Fifth Edition shares new information on advances in modern software and GNSS-acquired data. Expanded sections offer a greater number of computable problems and their worked solu

  16. Similar words analysis based on POS-CBOW language model

    Dongru RUAN

    2015-10-01

    Full Text Available Similar words analysis is one of the important tasks in the field of natural language processing, and it has important research and application value in text classification, machine translation and information recommendation. Focusing on the features of Sina Weibo's short texts, this paper presents a language model named POS-CBOW, a continuous bag-of-words language model with a filtering layer and a part-of-speech tagging layer. The proposed approach can adjust word-vector similarity according to cosine similarity and the word vectors' part-of-speech metrics. It can also filter the similar-words set on the basis of a statistical analysis model. The experimental results show that the similar words analysis algorithm based on the proposed POS-CBOW language model is better than that based on the traditional CBOW language model.
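
    The cosine-similarity lookup that such similar-words analysis builds on is sketched below in Python; the POS filtering and weighting layers of the paper are not reproduced, and the vocabulary and random embeddings are stand-ins.

    import numpy as np

    def most_similar(word, vectors, vocab, top_k=5):
        """Nearest neighbours by cosine similarity in an embedding matrix."""
        V = vectors / (np.linalg.norm(vectors, axis=1, keepdims=True) + 1e-12)
        sims = V @ V[vocab[word]]
        order = np.argsort(sims)[::-1]
        inv = {i: w for w, i in vocab.items()}
        return [(inv[i], float(sims[i])) for i in order if inv[i] != word][:top_k]

    vocab = {"good": 0, "great": 1, "bad": 2, "movie": 3}
    vectors = np.random.default_rng(0).normal(size=(4, 50))  # stand-in embeddings
    print(most_similar("good", vectors, vocab, top_k=2))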

  17. Sketch-based geologic modeling

    Rood, M. P.; Jackson, M.; Hampson, G.; Brazil, E. V.; de Carvalho, F.; Coda, C.; Sousa, M. C.; Zhang, Z.; Geiger, S.

    2015-12-01

    Two-dimensional (2D) maps and cross-sections, and 3D conceptual models, are fundamental tools for understanding, communicating and modeling geology. Yet geologists lack dedicated and intuitive tools that allow rapid creation of such figures and models. Standard drawing packages produce only 2D figures that are not suitable for quantitative analysis. Geologic modeling packages can produce 3D models and are widely used in the groundwater and petroleum communities, but are often slow and non-intuitive to use, requiring the creation of a grid early in the modeling workflow and the use of geostatistical methods to populate the grid blocks with geologic information. We present an alternative approach to rapidly create figures and models using sketch-based interface and modelling (SBIM). We leverage methods widely adopted in other industries to prototype complex geometries and designs. The SBIM tool contains built-in geologic rules that constrain how sketched lines and surfaces interact. These rules are based on the logic of superposition and cross-cutting relationships that follow from rock-forming processes, including deposition, deformation, intrusion and modification by diagenesis or metamorphism. The approach allows rapid creation of multiple geologically realistic figures and models in 2D and 3D using a simple, intuitive interface. The user can sketch in plan- or cross-section view. Geologic rules are used to extrapolate sketched lines in real time to create 3D surfaces. Quantitative analysis can be carried out directly on the models. Alternatively, they can be output as simple figures or imported directly into other modeling tools. The software runs on a tablet PC and can be used in a variety of settings including the office, classroom and field. The speed and ease of use of SBIM enable multiple interpretations to be developed from limited data, uncertainty to be readily appraised, and figures and models to be rapidly updated to incorporate new data or concepts.

  18. How Households Adjust their Consumption and Investment Plans under Longevity Risk: An Experimental Approach-based Study in Taiwan

    Joseph J Tien; Jerry C Y Miao

    2013-01-01

    Longevity risk may be defined as the risk of outliving one’s accumulated wealth. Although many theoretical studies have suggested that individuals will increase their precautionary saving in order to mitigate longevity risk, only a few of such studies have used empirical data to test people’s decision-making behaviour in response to longevity risk. The main purpose of this paper is to investigate how households adjust their consumption and investment plans in response to longevity risk. We fi...

  19. LSL: a logarithmic least-squares adjustment method

    To meet regulatory requirements, spectral unfolding codes must not only provide reliable estimates for spectral parameters, but must also be able to determine the uncertainties associated with these parameters. The newer codes, which are more appropriately called adjustment codes, use the least squares principle to determine estimates and uncertainties. The principle is simple and straightforward, but there are several different mathematical models to describe the unfolding problem. In addition to a sound mathematical model, ease of use and range of options are important considerations in the construction of adjustment codes. Based on these considerations, a least squares adjustment code for neutron spectrum unfolding has been constructed some time ago and tentatively named LSL
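
    The least-squares principle that such adjustment codes share can be written in the generalized form below; this is a standard statement rather than LSL's specific formulation, which applies it to the logarithms of the spectral parameters:

        $$\hat{x}=x_{0}+C_{0}A^{\mathsf T}\bigl(AC_{0}A^{\mathsf T}+C_{m}\bigr)^{-1}\bigl(m-Ax_{0}\bigr),\qquad C_{\hat{x}}=C_{0}-C_{0}A^{\mathsf T}\bigl(AC_{0}A^{\mathsf T}+C_{m}\bigr)^{-1}AC_{0},$$

    where x0 and C0 are the prior parameter estimates and their covariance, m and Cm the measurements and their covariance, and A the sensitivity (response) matrix.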

  20. HMM-based Trust Model

    ElSalamouny, Ehab; Nielsen, Mogens; Sassone, Vladimiro

    2010-01-01

    Probabilistic trust has been adopted as an approach to taking security-sensitive decisions in modern global computing environments. Existing probabilistic trust frameworks either assume fixed behaviour for the principals or incorporate the notion of 'decay' as an ad hoc approach to cope with thei...... the major limitation of the existing Beta trust model. We show the consistency of the HMM-based trust model and contrast it against the well-known Beta trust model with the decay principle in terms of estimation precision....

  1. Model-based design approach to reducing mechanical vibrations

    P. Czop

    2013-09-01

    Full Text Available Purpose: The paper presents a sensitivity analysis method based on a first-principle model in order to reduce mechanical vibrations of a hydraulic damper. Design/methodology/approach: The first-principle model is formulated using a system of continuous ordinary differential equations capturing the usually nonlinear relations among variables of the hydraulic damper model. The model applies three categories of parameters: geometrical, physical and phenomenological. Geometrical and physical parameters are deduced from construction and operational documentation. The phenomenological parameters are the adjustable ones, which are estimated or adjusted based on their roughly known values, e.g. friction/damping coefficients. Findings: The sensitivity analysis method identifies the major contributors to vibration and their magnitudes. Research limitations/implications: The method's accuracy is limited by the model accuracy and inherent nonlinear effects. Practical implications: The proposed model-based sensitivity method can be used to optimize prototypes of hydraulic dampers. Originality/value: The proposed sensitivity-analysis method minimizes the risk that a hydraulic damper does not meet the customer specification.
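
    A minimal sketch of such model-based parameter sensitivity is shown below, assuming a toy single-degree-of-freedom damper ODE in place of the paper's hydraulic model; the metric, parameter values and finite-difference scheme are all illustrative assumptions.

    import numpy as np
    from scipy.integrate import solve_ivp

    def damper(t, y, c, k, m):
        """Toy mass-spring-damper ODE with adjustable damping coefficient c."""
        x, v = y
        return [v, -(c * v + k * x) / m]

    def peak_response(c, k=2.0e4, m=10.0):
        sol = solve_ivp(damper, (0, 2), [0.01, 0.0], args=(c, k, m), max_step=1e-3)
        return np.abs(sol.y[0]).max()        # vibration metric: peak displacement

    def sensitivity(c, rel_step=1e-3):
        # central finite-difference sensitivity of the metric w.r.t. c
        h = rel_step * c
        return (peak_response(c + h) - peak_response(c - h)) / (2 * h)

    print(sensitivity(c=150.0))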

  2. Image segmentation based on adaptive mixture model

    As an important research field, image segmentation has attracted considerable attention. The classical geodesic active contour (GAC) model tends to produce fake edges in smooth regions, while the Chan–Vese (CV) model cannot effectively detect images with holes or obtain precise boundaries. To address these issues, this paper proposes an adaptive mixture model synthesizing the GAC model and the CV model through a weight function. According to image characteristics, the proposed model can adaptively adjust the weight function. In this way, the model exploits the advantages of the GAC model in regions with rich textures or edges, while exploiting the advantages of the CV model in smooth local regions. Moreover, the proposed model is extended to vector-valued images. Through experiments, it is verified that the proposed model obtains better results than the traditional models.
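
    A plausible generic form of such a weighted hybrid energy (the paper's specific weight function is not reproduced here) is

        $$E(C)=\omega(x)\,E_{\mathrm{GAC}}(C)+\bigl(1-\omega(x)\bigr)\,E_{\mathrm{CV}}(C),\qquad \omega(x)\in[0,1],$$

    where ω(x) would be driven by a local edge or texture indicator, so that the GAC term dominates near strong gradients and the CV term dominates in smooth regions.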

  3. Application of Fuzzy Self-Optimizing Control Based on Differential Evolution Algorithm for the Ratio of Wind to Coal Adjustment of Boiler in the Thermal Power Plant

    Ting Hou; Liping Zhang; Yuchen Chen

    2014-01-01

    The types of coal used in domestic small and medium-sized boilers are diverse and their compositions unstable, so maintaining the wind (air) and coal supply in a fixed proportion does not always ensure the most economical boiler combustion; the key to optimizing combustion is to adjust the proportion of wind to coal online. In this paper, a kind of fuzzy self-optimizing control based on a differential evolution algorithm is proposed, which appli...
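
    The differential evolution step can be illustrated with scipy; the loss function below is a hypothetical smooth stand-in for combustion losses versus the air-to-coal ratio (too little air leaves unburned fuel, too much carries heat away in flue gas), whereas the real objective would come from plant measurements.

    import numpy as np
    from scipy.optimize import differential_evolution

    def combustion_loss(x):
        """Hypothetical boiler-loss surrogate as a function of the ratio x[0]."""
        ratio = x[0]
        return (ratio - 6.2) ** 2 + 0.3 * np.sin(3.0 * ratio)

    result = differential_evolution(combustion_loss, bounds=[(4.0, 9.0)], seed=1)
    print("near-optimal air/coal ratio:", result.x[0])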

  4. A model-based display

    A model-based display is identified, discussed, and illustrated. The model used in the display is based upon the Rankine Cycle, a heat engine cycle. Plant process data from the loss of main and auxiliary feedwater event at the Davis-Besse Plant on June 9, 1985 is used to illustrate the display. The model used in the display fuses individual process variables into process functions. It also serves as a medium to communicate status of the process to human users. The human users may evaluate the goals of operation from the displayed process functions. Because of these display features, the user's cognitive workload is minimized. The opinions expressed herein are the author's personal ones and do not necessarily reflect criteria, requirements, and guidelines of the U.S. Nuclear Regulatory Commission.

  5. Model-based sensor diagnosis

    Running a nuclear power plant involves monitoring data provided by the installation's sensors. Operators and computerized systems then use these data to establish a diagnostic of the plant. However, the instrumentation system is complex, and is not immune to faults and failures. This paper presents a system for detecting sensor failures using a topological description of the installation and a set of component models. This model of the plant implicitly contains relations between sensor data. These relations must always be checked if all the components are functioning correctly. The failure detection task thus consists of checking these constraints. The constraints are extracted in two stages. Firstly, a qualitative model of their existence is built using structural analysis. Secondly, the models are formally handled according to the results of the structural analysis, in order to establish the constraints on the sensor data. This work constitutes an initial step in extending model-based diagnosis, as the information on which it is based is suspect. This work will be followed by surveillance of the detection system. When the instrumentation is assumed to be sound, the unverified constraints indicate errors on the plant model. (authors). 8 refs., 4 figs

  6. Adjustments and Depression

    Full Text Available How do I deal with depression and adjustment to my SCI? Understanding adjustment and depression: Adjustment to paralysis is a process of changing ...

  7. Modeling and identification for the adjustable control of generation processes; Modelado e identificacion para el control autoajustable de procesos de generacion

    Ricano Castillo, Juan Manuel; Palomares Gonzalez, Daniel [Instituto de Investigaciones Electricas, Cuernavaca (Mexico)

    1989-12-31

    The recursive least-squares technique is employed to obtain a multivariable model of the autoregressive moving-average type, needed for the design of a self-adjusting multivariable controller. This article describes the technique employed and the results obtained, with the characterization of the model structure and the parametric estimation. Convergence curves of the estimates toward the parameters' numerical values are observed.

  8. Adjustment of Land Use Structure Based on County-wide Economic Development: A Case Study of Changfeng County

    Zeping HE; Zhongxiang YU

    2015-01-01

    On the basis of a survey of the current land use situation in Changfeng County, this paper analyzed the land use structure, its trends, and the environment for structural adjustment. It also explored a county-wide land use structure mode, namely "division + indicator + policy". Through reasonable allocation, this mode is expected to build a proper land use structure framework, avoid passively relying on policy alone, and bring into play the stabilizing function of the land use structure, so as to promote county-wide economic development and give play to the leverage function of rural transition areas.

  9. Dynamic effects of anticipated and temporary tax changes in a R&D-based growth model

    Takao, Kizuku

    2014-01-01

    Tax changes are often announced before the implementations and are not permanent but only temporary. R&D firms will optimally adjust their investment decision to a tax schedule accordingly. This paper analyzes how anticipated and temporary tax changes dynamically affect the innovation activities. For the purpose, we consider adjustment costs for the investment process and allow firms to make a forward looking investment decision in the framework of an R&D-based endogenous growth model. Calibr...

  10. An Improved ZMP-Based CPG Model of Bipedal Robot Walking Searched by SaDE

    Yu, H. F.; Fung, E. H. K.; Jing, X. J.

    2014-01-01

    This paper proposed a method to improve the walking behavior of bipedal robot with adjustable step length. Objectives of this paper are threefold. (1) Genetic Algorithm Optimized Fourier Series Formulation (GAOFSF) is modified to improve its performance. (2) Self-adaptive Differential Evolutionary Algorithm (SaDE) is applied to search feasible walking gait. (3) An efficient method is proposed for adjusting step length based on the modified central pattern generator (CPG) model. The GAOFSF is ...

  11. Heat transfer simulation and retort program adjustment for thermal processing of wheat based Haleem in semi-rigid aluminum containers.

    Vatankhah, Hamed; Zamindar, Nafiseh; Shahedi Baghekhandan, Mohammad

    2015-10-01

    A mixed computational strategy was used to simulate and optimize the thermal processing of Haleem, an ancient eastern food, in semi-rigid aluminum containers. Average temperature values from the experiments showed no significant difference (α = 0.05) from the predicted temperatures at the same positions. According to the model, the slowest heating zone was located at the geometrical center of the container, where F0 was estimated to be 23.8 min. A 19 min decrease in the holding time was estimated to optimize the heating operation, since the preferred F0 of some starch- or meat-based fluid foods is about 4.8-7.5 min. PMID:26396432

  12. Human physiologically based pharmacokinetic model for propofol

    Schnider Thomas W

    2005-04-01

    Full Text Available Abstract Background Propofol is widely used for both short-term anesthesia and long-term sedation. It has unusual pharmacokinetics because of its high lipid solubility. The standard approach to describing the pharmacokinetics is by a multi-compartmental model. This paper presents the first detailed human physiologically based pharmacokinetic (PBPK) model for propofol. Methods PKQuest, a freely distributed software routine (http://www.pkquest.com), was used for all the calculations. The "standard human" PBPK parameters developed in previous applications are used. It is assumed that the blood and tissue binding is determined by simple partition into the tissue lipid, which is characterized by two previously determined sets of parameters: (1) the value of the propofol oil/water partition coefficient; (2) the lipid fraction in the blood and tissues. The model was fit to the individual experimental data of Schnider et al., Anesthesiology, 1998; 88:1170, in which an initial bolus dose was followed 60 minutes later by a one hour constant infusion. Results The PBPK model provides a good description of the experimental data over a large range of input dosage, subject age and fat fraction. Only one adjustable parameter (the liver clearance) is required to describe the constant infusion phase for each individual subject. In order to fit the bolus injection phase, for 10 of the 24 subjects it was necessary to assume that a fraction of the bolus dose was sequestered and then slowly released from the lungs (characterized by two additional parameters). The average weighted residual error (WRE) of the PBPK model fit to both the bolus and infusion phases was 15%; similar to the WRE for just the constant infusion phase obtained by Schnider et al. using a 6-parameter NONMEM compartmental model. Conclusion A PBPK model using standard human parameters and a simple description of tissue binding provides a good description of human propofol kinetics. The major advantage of a

  13. Bayes linear covariance matrix adjustment

    Wilkinson, Darren J

    1995-01-01

    In this thesis, a Bayes linear methodology for the adjustment of covariance matrices is presented and discussed. A geometric framework for quantifying uncertainties about covariance matrices is set up, and an inner-product for spaces of random matrices is motivated and constructed. The inner-product on this space captures aspects of our beliefs about the relationship between covariance matrices of interest to us, providing a structure rich enough for us to adjust beliefs about unknown matrices in the light of data such as sample covariance matrices, exploiting second-order exchangeability and related specifications to obtain representations allowing analysis. Adjustment is associated with orthogonal projection, and illustrated with examples of adjustments for some common problems. The problem of adjusting the covariance matrices underlying exchangeable random vectors is tackled and discussed. Learning about the covariance matrices associated with multivariate time series dynamic linear models is shown to be a...
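
    The adjustment-as-projection described here has a closed form: the adjusted expectation is E_D(X) = E(X) + Cov(X,D) Var(D)^{-1} (D - E(D)). A minimal numpy sketch; the second-order moment specifications are illustrative, not taken from the thesis:

```python
import numpy as np

# Sketch: Bayes linear adjustment of beliefs about X by data D.
# E_D(X)   = E(X) + Cov(X,D) Var(D)^-1 (D - E(D))
# Var_D(X) = Var(X) - Cov(X,D) Var(D)^-1 Cov(D,X)
# All belief specifications below are illustrative.

def bayes_linear_adjust(e_x, var_x, e_d, var_d, cov_xd, d_obs):
    gain = cov_xd @ np.linalg.inv(var_d)
    return e_x + gain @ (d_obs - e_d), var_x - gain @ cov_xd.T

e_x, var_x = np.zeros(2), np.eye(2)
e_d, var_d = np.zeros(2), np.array([[2.0, 0.5], [0.5, 2.0]])
cov_xd = np.array([[1.0, 0.2], [0.2, 1.0]])

e_adj, var_adj = bayes_linear_adjust(e_x, var_x, e_d, var_d, cov_xd, np.array([1.0, -0.5]))
print(e_adj)    # adjusted expectation
print(var_adj)  # adjusted variance (uncertainty resolved by the data)
```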

  14. Model-based requirements engineering

    Holt, Jon

    2012-01-01

    This book provides a hands-on introduction to model-based requirements engineering and management by describing a set of views that form the basis for the approach. These views take into account each individual requirement in terms of its description, but then also provide each requirement with meaning by putting it into the correct 'context'. A requirement that has been put into a context is known as a 'use case' and may be based upon either stakeholders or levels of hierarchy in a system. Each use case must then be analysed and validated by defining a combination of scenarios and formal mathematica

  15. The FiR 1 photon beam model adjustment according to in-air spectrum measurements with the Mg(Ar) ionization chamber

    The mixed neutron–photon beam of the FiR 1 reactor is used for boron neutron capture therapy (BNCT) in Finland. A beam model has been defined for patient treatment planning and dosimetric calculations. The neutron beam model has been validated with activation foil measurements. The photon beam model has not been thoroughly validated against measurements, because the beam photon dose rate is low, at most only 2% of the total weighted patient dose at FiR 1. However, improving the photon dose detection accuracy is worthwhile, since the beam photon dose is of concern in the beam dosimetry. In this study, we have performed ionization chamber measurements with multiple build-up caps of different thickness to adjust the calculated photon spectrum of the FiR 1 beam model. - Highlights: • In order to verify the photon beam model of the FiR 1 BNCT facility in Finland, the photon beam spectrum is determined experimentally through ionization chamber measurements with multiple build-up caps of different thickness. • A large discrepancy is found between free-in-air measurements with the ionization chamber and the Monte Carlo calculations, indicating that the initial beam model needs to be adjusted. • The unfolded spectrum and photon beam intensity may be applicable as a more correct photon beam model

  16. Model-based tomographic reconstruction

    Chambers, David H.; Lehman, Sean K.; Goodman, Dennis M.

    2012-06-26

    A model-based approach to estimating wall positions for a building is developed and tested using simulated data. It borrows two techniques from geophysical inversion problems, layer stripping and stacking, and combines them with a model-based estimation algorithm that minimizes the mean-square error between the predicted signal and the data. The technique is designed to process multiple looks from an ultra wideband radar array. The processed signal is time-gated and each section processed to detect the presence of a wall and estimate its position, thickness, and material parameters. The floor plan of a building is determined by moving the array around the outside of the building. In this paper we describe how the stacking and layer stripping algorithms are combined and show the results from a simple numerical example of three parallel walls.

  17. Differential Geometry Based Multiscale Models

    Wei, Guo-Wei

    2010-01-01

    Large chemical and biological systems such as fuel cells, ion channels, molecular motors, and viruses are of great importance to the scientific community and public health. Typically, these complex systems in conjunction with their aquatic environment pose a fabulous challenge to theoretical description, simulation, and prediction. In this work, we propose a differential geometry based multiscale paradigm to model complex macromolecular systems, and to put macroscopic and microscopic descript...

  18. Initial Pre-stress Finding and Structural Behaviors Analysis of Cable Net Based on Linear Adjustment Theory

    REN Tao; CHEN Wu-jun; FU Gong-yi

    2008-01-01

    The tensile cable-strut structure is a self-equilibrated pre-stressed system, and the initial pre-stress calculation is fundamental to its structural analysis. A new numerical procedure was developed. The force density method is the cornerstone of the analytical formulation and is introduced into linear adjustment theory; the least-squares, least-norm solution, which is the optimized initial pre-stress, is then obtained. The initial pre-stress and structural performance of a particular single-layer saddle-shaped cable-net structure were analyzed with the developed method, which proved to be efficient and correct. Modal analyses were performed with respect to various pre-stress levels. Finally, the structural performance was investigated comprehensively.
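
    In force density form the nodal equilibrium conditions are linear in the member force densities, so a least-squares, least-norm solution can be read off with a pseudoinverse, and self-stress modes come from the null space. A toy sketch; the equilibrium matrix below is an illustrative rank-deficient stand-in, not the saddle-shaped net from the paper:

```python
import numpy as np

# Sketch: least-squares, least-norm pre-stress from an equilibrium system A q = p,
# where q holds member force densities and p the nodal loads (zero for a pure
# self-stress state). A is illustrative, not a real cable-net geometry.

A = np.array([[1.0, -1.0, 0.0],
              [0.0, 1.0, -1.0]])
p = np.zeros(2)                        # self-equilibrated: no external loads

q_min_norm = np.linalg.pinv(A) @ p     # minimum-norm least-squares solution
_, _, vt = np.linalg.svd(A)
q_selfstress = vt[-1]                  # null-space vector: a self-stress mode

print(q_min_norm, q_selfstress / np.abs(q_selfstress).max())
```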

  19. MODELING OF THE HEAT PUMP STATION ADJUSTABLE LOOP OF AN INTERMEDIATE HEAT-TRANSFER AGENT (Part I

    Sit B.

    2009-08-01

    Full Text Available Equations of the dynamics and statics of an adjustable intermediate loop of a carbon dioxide heat pump station are examined in this paper. The heat pump station is part of a combined heat supply system. Control of the thermal capacity transferred from the low-potential heat source is realized by changing the speed of circulation of the liquid in the loop and by changing the area of the heat-transmitting surface, both in the evaporator and in the intermediate heat exchanger, depending on the operating parameters, for example external air temperature and wind speed.

  1. Zone modeling of radiative heat transfer in industrial furnaces using adjusted Monte-Carlo integral method for direct exchange area calculation

    This paper proposes the Monte-Carlo Integral method for the direct exchange area calculation in the zone method for the first time. The method is simple and able to handle complex zone geometries and the self-zone radiation problem. The Monte-Carlo Integral method is adjusted to improve its efficiency, so that acceptable accuracy can be achieved within a reasonable computation time. The zone method with the adjusted Monte-Carlo Integral method is used for modeling and simulating radiation transfer in an industrial furnace. The simulation result is compared with industrial data and shows good agreement. It also shows that the high-temperature flue gas heats the furnace wall, which reflects radiant heat back to the reactor tubes. The highest temperature of the flue gas and the side wall appears at roughly one third of the furnace height from the bottom, which corresponds with the industrial measurement data. The simulation results indicate that the zone method is comprehensive and easy to implement for radiative phenomena in the furnace. - Highlights: • The Monte Carlo Integral method for evaluating direct exchange areas. • Adjustment from the MCI method to the AMCI method for efficiency. • Examination of the performance of the MCI and AMCI methods. • Development of the 3D zone model with the AMCI method. • The simulation results show good agreement with the industrial data
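
    The direct exchange area evaluated here is the double surface integral ss_ij = ∫∫ cosθ_i cosθ_j e^(-kr) / (π r²) dA_i dA_j, which Monte-Carlo integration estimates by averaging the integrand over random point pairs. A sketch for two parallel unit-square zones; the geometry, absorption coefficient and sample count are illustrative, and this is the plain MCI estimate, not the paper's adjusted (AMCI) variant:

```python
import numpy as np

rng = np.random.default_rng(0)

# Sketch: Monte-Carlo estimate of the direct exchange area between two parallel
# unit squares separated by distance h, in a gray gas with absorption coeff. k.
# ss_ij ~ (A_i * A_j / N) * sum( cos_i * cos_j * exp(-k r) / (pi r^2) )

def direct_exchange_area(h=1.0, k=0.1, n=200_000):
    p1 = rng.random((n, 2))                   # sample points on zone i (z = 0)
    p2 = rng.random((n, 2))                   # sample points on zone j (z = h)
    dx, dy = p2[:, 0] - p1[:, 0], p2[:, 1] - p1[:, 1]
    r2 = dx**2 + dy**2 + h**2
    r = np.sqrt(r2)
    cos_i = cos_j = h / r                     # both surface normals along z
    integrand = cos_i * cos_j * np.exp(-k * r) / (np.pi * r2)
    return integrand.mean()                   # A_i = A_j = 1, so no area factor

print(direct_exchange_area())
```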

  2. Accrual valuation and mark to market adjustment

    Bakshaev, Alexey

    2016-01-01

    This paper provides intuition on the relationship between accrual and mark-to-market valuation for cash and forward interest rate trades. Discounted cashflow valuation is compared to spread-based valuation for forward trades, which explains the trader's view on valuation. This is followed by a Taylor series approximation for cash trades, uncovering the simple intuition behind accrual valuation and mark-to-market adjustment, and then by a PnL example modelled in R. Within the Taylor approximation...

  3. Psychosocial adjustment to ALS: a longitudinal study

    Tamara eMatuz

    2015-09-01

    Full Text Available For the current study, the Lazarus stress-coping theory and the appendant model of psychosocial adjustment to chronic illness and disabilities (Pakenham, 1999) shaped the foundation for identifying determinants of adjustment to ALS. We aimed to investigate the evolution of psychosocial adjustment to ALS and to determine its long-term predictors. A longitudinal study design with four measurement time points was therefore used to assess patients' quality of life, depression, and stress-coping-model-related aspects, such as illness characteristics, social support, cognitive appraisals and coping strategies, during a period of two years. Regression analyses revealed that 55% of the variance in severity of depressive symptoms and 47% of the variance in quality of life at T2 was accounted for by all the T1 predictor variables taken together. On the level of individual contributions, protective buffering and appraisal of one's own coping potential accounted for a significant percentage of the variance in severity of depressive symptoms, whereas problem-management coping strategies explained variance in quality of life scores. Illness characteristics at T2 did not explain any variance in either adjustment outcome. Overall, the pattern of the longitudinal results indicated stable depressive symptoms and quality of life indices, reflecting a successful adjustment to the disease across the four measurement time points during a period of about two years. Empirical evidence is provided for the predictive value of social support, cognitive appraisals, and coping strategies, but not of illness parameters such as severity and duration, for adaptation to ALS. The current study contributes to a better conceptualization of adjustment, allowing us to provide evidence-based support beyond medical and physical intervention for people with ALS.

  4. Tropical cyclone losses in the USA and the impact of climate change - A trend analysis based on data from a new approach to adjusting storm losses

    Economic losses caused by tropical cyclones have increased dramatically. Historical changes in losses are a result of meteorological factors (changes in the incidence of severe cyclones, whether due to natural climate variability or as a result of human activity) and socio-economic factors (increased prosperity and a greater tendency for people to settle in exposed areas). This paper aims to isolate the socio-economic effects and ascertain the potential impact of climate change on this trend. Storm losses for the period 1950-2005 have been adjusted to the value of capital stock in 2005 so that any remaining trend cannot be ascribed to socio-economic developments. For this, we introduce a new approach to adjusting losses based on the change in capital stock at risk. Storm losses are mainly determined by the intensity of the storm and the material assets, such as property and infrastructure, located in the region affected. We therefore adjust the losses to exclude increases in the capital stock of the affected region. No trend is found for the period 1950-2005 as a whole. In the period 1971-2005, since the beginning of a trend towards increased intense cyclone activity, losses excluding socio-economic effects show an increase of 4% per annum. This increase must therefore be at least partly due to the impact of natural climate variability but, more likely than not, also due to anthropogenic forcings.
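
    The adjustment described amounts to rescaling each nominal loss by the ratio of 2005 capital stock at risk to the capital stock of the loss year. A sketch of that normalization with made-up figures:

```python
# Sketch: normalize historical storm losses to 2005 capital stock at risk.
# adjusted_loss = nominal_loss * (capital_stock_2005 / capital_stock_year)
# All figures below are illustrative, not the paper's data.

capital_stock = {1970: 120.0, 1990: 310.0, 2005: 520.0}   # index units
losses = [(1970, 1.5), (1990, 4.0), (2005, 9.0)]          # (year, $bn nominal)

adjusted = [(year, loss * capital_stock[2005] / capital_stock[year])
            for year, loss in losses]
print(adjusted)   # losses expressed in 2005 exposure terms
```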

  5. A Teaching Attitude Adjusting Model for College Teachers Based on Attitude Theories

    Zhang, Linying; Han, Zhijun

    2008-01-01

    Teaching attitude of teachers influences the accomplishment of teaching objectives, the quality of teaching and students' learning outcomes and is the main variable in the process of higher education teaching. By referring to attitude theories and group characteristics of college teachers, this paper analyzes the elements that influence teaching…

  6. Development of a three dimensional homogeneous calculation model for the BFS-62 critical experiment. Preparation of adjusted equivalent measured values for sodium void reactivity values. Final report

    The BFS-62 critical experiments are currently used as a 'benchmark' for verification of IPPE codes and nuclear data that have been used in the study of loading a significant amount of Pu in fast reactors. The BFS-62 experiments were performed at the BFS-2 critical facility of IPPE (Obninsk). The experimental program was arranged in such a way that the effect of replacing the uranium dioxide blanket by the steel reflector, as well as the effect of replacing UOX by MOX, on the main characteristics of the reactor model was studied. A wide experimental program, including measurements of criticality (keff), spectral indices, radial and axial fission rate distributions, control rod mock-up worth, the sodium void reactivity effect (SVRE) and some other important nuclear physics parameters, was fulfilled in the core. A series of four BFS-62 critical assemblies was designed for studying the changes in BN-600 reactor physics from the existing state to a hybrid core. All the assemblies model the reactor state prior to refueling, i.e. with all control rod mock-ups withdrawn from the core. The following items are chosen for analysis in this report: description of the critical assembly BFS-62-3A as the third assembly in a series of four BFS critical assemblies studying the BN-600 reactor with a MOX-UOX hybrid zone and steel reflector; development of a 3D homogeneous calculation model for the BFS-62-3A critical experiment as the mock-up of the BN-600 reactor with hybrid zone and steel reflector; evaluation of the measured nuclear physics parameters keff and SVRE (sodium void reactivity effect); preparation of adjusted equivalent measured values for keff and SVRE. The main series of calculations is performed using the 3D HEX-Z diffusion code TRIGEX in 26 groups, with the ABBN-93 cross-section set. In addition, precise calculations are made, in 299 groups and Ps-approximation in scattering, by the Monte-Carlo code MMKKENO and the discrete ordinate code TWODANT. All calculations are based on the common system

  7. Crowdsourcing Based 3d Modeling

    Somogyi, A.; Barsi, A.; Molnar, B.; Lovas, T.

    2016-06-01

    Web-based photo albums that support organizing and viewing the users' images are widely used. These services provide a convenient solution for storing, editing and sharing images. In many cases, users attach geotags to the images in order to enable using them, e.g. in location-based applications on social networks. Our paper discusses a procedure that collects open-access images from a site frequently visited by tourists. Geotagged pictures showing a sight or tourist attraction are selected and processed in photogrammetric processing software that produces a 3D model of the captured object. For this investigation we selected three attractions in Budapest. To assess the geometric accuracy, we used a laser scanner and DSLR as well as smartphone photography to derive reference values for verifying the spatial model obtained from the web-album images. The investigation shows how detailed and accurate models can be derived with photogrammetric processing software, simply by using community images, without visiting the site.

  8. An Agent Based Classification Model

    Gu, Feng; Greensmith, Julie

    2009-01-01

    The major function of this model is to access the UCI Wisconsin Breast Cancer data-set [1] and classify the data items into two categories, which are normal and anomalous. This kind of classification can be referred to as anomaly detection, which discriminates anomalous behaviour from normal behaviour in computer systems. One popular solution for anomaly detection is Artificial Immune Systems (AIS). AIS are adaptive systems inspired by theoretical immunology and observed immune functions, principles and models which are applied to problem solving. The Dendritic Cell Algorithm (DCA) [2] is an AIS algorithm that is developed specifically for anomaly detection. It has been successfully applied to intrusion detection in computer security. It is believed that agent-based modelling is an ideal approach for implementing AIS, as intelligent agents could be the perfect representations of immune entities in AIS. This model evaluates the feasibility of re-implementing the DCA in an agent-based simulation environ...

  9. Model-based Utility Functions

    Hibbard, Bill

    2012-05-01

    Orseau and Ring, as well as Dewey, have recently described problems, including self-delusion, with the behavior of agents using various definitions of utility functions. An agent's utility function is defined in terms of the agent's history of interactions with its environment. This paper argues, via two examples, that the behavior problems can be avoided by formulating the utility function in two steps: 1) inferring a model of the environment from interactions, and 2) computing utility as a function of the environment model. Basing a utility function on a model that the agent must learn implies that the utility function must initially be expressed in terms of specifications to be matched to structures in the learned model. These specifications constitute prior assumptions about the environment so this approach will not work with arbitrary environments. But the approach should work for agents designed by humans to act in the physical world. The paper also addresses the issue of self-modifying agents and shows that if provided with the possibility to modify their utility functions agents will not choose to do so, under some usual assumptions.

  10. A comparison of two sleep spindle detection methods based on all night averages: individually adjusted versus fixed frequencies

    Péter Przemyslaw Ujma

    2015-02-01

    Full Text Available Sleep spindles are frequently studied for their relationship with state and trait cognitive variables, and they are thought to play an important role in sleep-related memory consolidation. Due to their frequent occurrence in NREM sleep, the detection of sleep spindles is only feasible using automatic algorithms, of which a large number are available. We compared subject averages of the spindle parameters computed by a fixed-frequency (11-13 Hz for slow spindles, 13-15 Hz for fast spindles) automatic detection algorithm and the individual adjustment method (IAM), which uses individual frequency bands for sleep spindle detection. Fast spindle duration and amplitude are strongly correlated between the two algorithms, but there is little overlap in fast spindle density and slow spindle parameters in general. The agreement between fixed and manually determined sleep spindle frequencies is limited, especially in the case of slow spindles. This is the most likely reason for the poor agreement between the two detection methods for slow spindle parameters. Our results suggest that while various algorithms may reliably detect fast spindles, a more sophisticated algorithm primed to individual spindle frequencies is necessary for the detection of slow spindles, as well as of individual variations in the number of spindles in general.
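
    The essential difference between the two detectors is how the spindle frequency band is chosen: fixed (11-13 / 13-15 Hz) versus centred on the individual's spectral peak. A sketch of the band-selection step; the peak-finding rule and the ±1 Hz half-width are illustrative assumptions about the IAM, not its published definition:

```python
import numpy as np

# Sketch: fixed vs. individually adjusted spindle frequency bands.
# The individual band is centred on the subject's NREM spectral peak;
# the +/- 1 Hz half-width and 9-16 Hz search range are illustrative choices.

FIXED_SLOW, FIXED_FAST = (11.0, 13.0), (13.0, 15.0)

def individual_band(freqs, power, lo=9.0, hi=16.0, half_width=1.0):
    mask = (freqs >= lo) & (freqs <= hi)
    peak = freqs[mask][np.argmax(power[mask])]
    return peak - half_width, peak + half_width

# Illustrative average NREM spectrum with a fast-spindle peak near 13.6 Hz:
freqs = np.linspace(5, 20, 301)
power = np.exp(-((freqs - 13.6) / 0.7) ** 2) + 0.05
print(individual_band(freqs, power), "vs fixed", FIXED_FAST)
```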

  11. Zebrafish collective behaviour in heterogeneous environment modeled by a stochastic model based on visual perception

    Collignon, Bertrand; Halloy, José

    2015-01-01

    Collective motion is one of the most ubiquitous behaviours displayed by social organisms and has led to the development of numerous models. Recent advances in the understanding of sensory systems and information processing by animals impel a revision of the classical assumptions made in decisional algorithms. In this context, we present a new model describing the three-dimensional visual sensory system of fish, which adjust their trajectory according to their perception field. Furthermore, we introduce a new stochastic process based on a probability distribution function to move in targeted directions, rather than on a summation of influential vectors as is classically assumed by most models. We show that this model can spontaneously transition from consensus to choice. In parallel, we present experimental results of zebrafish (alone or in groups of 10) swimming in both homogeneous and heterogeneous environments. We use these experimental data to set the parameter values of our model and show that this perception-based a...

  12. A Novel Design for Adjustable Stiffness Artificial Tendon for the Ankle Joint of a Bipedal Robot: Modeling & Simulation

    Aiman Omer

    2015-12-01

    Full Text Available Bipedal humanoid robots are expected to play a major role in the future. Performing bipedal locomotion requires high energy due to the high torque that needs to be provided by the legs' joints. Taking the WABIAN-2R as an example, it uses harmonic gears in its joints to increase the torque. However, using such a mechanism increases the weight of the legs and therefore increases energy consumption. Therefore, the idea of developing a mechanism with adjustable stiffness to be connected to the leg joint is introduced here. The proposed mechanism would have the ability to provide passive and active motion, and would be attached to the ankle pitch joint as an artificial tendon. Using computer simulations, the dynamic performance of the mechanism is analytically evaluated.

  13. Thread Pool Size Adaptive Adjusting Algorithm Based on Segmentation

    孙旭东; 韩江洪; 刘征宇; 解新胜

    2011-01-01

    This paper presents a thread pool size adaptive adjusting algorithm based on segmentation. The user request volume is divided into rising, balanced and dropping segments, and the algorithm adaptively adjusts the thread pool size according to the current number of user requests and threads, so as to meet user demand. Experimental results show that, compared with an adjusting algorithm based on the average value, this algorithm handles concurrent user requests better and achieves a shorter response time.
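
    A minimal sketch of the segmentation idea described above: classify the trend of the request volume into rising, balanced or dropping, then resize the pool accordingly. The thresholds and resize rules are illustrative assumptions; the paper's exact rules are not given in the abstract:

```python
# Sketch: classify the request load into rising / balanced / dropping segments
# and resize the thread pool accordingly. Thresholds and factors are illustrative.

def classify_segment(prev_requests, curr_requests, tol=0.05):
    change = (curr_requests - prev_requests) / max(prev_requests, 1)
    if change > tol:
        return "rising"
    if change < -tol:
        return "dropping"
    return "balanced"

def adjust_pool_size(size, segment, min_size=4, max_size=256):
    if segment == "rising":
        size = min(max_size, size * 2)       # grow aggressively while load climbs
    elif segment == "dropping":
        size = max(min_size, size // 2)      # shrink to free resources
    return size                              # balanced: leave the pool alone

size = 16
for prev, curr in [(100, 180), (180, 185), (185, 90)]:
    size = adjust_pool_size(size, classify_segment(prev, curr))
    print(size)
```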

  14. Sliding Adjustment for 3D Video Representation

    Galpin Franck

    2002-01-01

    Full Text Available This paper deals with video coding of static scenes viewed by a moving camera. We propose an automatic way to encode such video sequences using several 3D models. Contrary to prior art in model-based coding, where the 3D models have to be known, the 3D models are automatically computed from the original video sequence. We show that several independent 3D models provide the same functionalities as one single 3D model, and avoid some drawbacks of the previous approaches. To achieve this goal we propose a novel algorithm of sliding adjustment, which ensures consistency of successive 3D models. The paper presents a method to automatically extract the set of 3D models and associated camera positions. The obtained representation can be used for reconstructing the original sequence, or virtual ones. It also enables 3D functionalities such as synthetic object insertion, lighting modification, or stereoscopic visualization. Results on real video sequences are presented.

  15. Trace-Based Code Generation for Model-Based Testing

    Kanstrén, T.; Piel, E.; Gross, H.-G.

    2009-01-01

    Paper Submitted for review at the Eighth International Conference on Generative Programming and Component Engineering. Model-based testing can be a powerful means to generate test cases for the system under test. However, creating a useful model for model-based testing requires expertise in the (formal) modeling language of the used tool and the general concept of modeling the system under test for effective test generation. A commonly used modeling notation is to describe the model through a...

  16. Adjusting Population Risk for Functional Health Status.

    Fuller, Richard L; Hughes, John S; Goldfield, Norbert I

    2016-04-01

    Risk adjustment accounts for differences in population mix by reducing the likelihood of enrollee selection by managed care plans and providing a correction to otherwise biased reporting of provider or plan performance. Functional health status is not routinely included within risk-adjustment methods, but is believed by many to be a significant enhancement to risk adjustment for complex enrollees and patients. In this analysis a standardized measure of functional health was created using 3 different source functional assessment instruments submitted to the Medicare program on condition of payment. The authors use a 5% development sample of Medicare claims from 2006 and 2007, including functional health assessments, and develop a model of functional health classification comprising 9 groups defined by the interaction of self-care, mobility, incontinence, and cognitive impairment. The 9 functional groups were used to augment Clinical Risk Groups, a diagnosis-based patient classification system, and, using a validation set of 100% of Medicare data for 2010 and 2011, this study found that the functional health module improved the fit to observed enrollee cost, measured by the R² statistic, by 5% across all Medicare enrollees. The authors observed complex nonlinear interactions across functional health domains when constructing the model and caution that functional health status needs careful handling when used for risk adjustment. The addition of functional health status within existing risk-adjustment models has the potential to improve equitable resource allocation in the financing of care costs for more complex enrollees if handled appropriately. (Population Health Management 2016;19:136-144). PMID:26348621

  17. Labour Adjustment Costs and British Footwear Protection

    Winters, L. Alan

    1990-01-01

    Import protection is frequently advocated as a means of preserving jobs and avoiding labour adjustment costs. Defining adjustment costs in terms of output forgone during the process of adjustment and ignoring any general equilibrium repercussions, we estimate that quantitative restrictions on British footwear imports during 1979 protected about 1,000 jobs and avoided once-and-for-all adjustment costs of only around 1 million pounds sterling. The result is based on new data which reveal high r...

  18. Improved Based on "Self-Adaptive Turning Rate" Model Algorithm

    Xiuling He

    2013-06-01

    Full Text Available Traditional interactive multiple-model self-adaptive filter algorithms are usually adopted for tracking nonlinear, highly maneuvering targets, and the turning-rate estimate is very important in such models. However, the performance of the turning-rate algorithm in the model is not satisfactory. Thus, a new average-value turning-rate algorithm based on a self-adaptive turning model is proposed. Instead of an additional device for turning-rate estimation in the turning model, the parameters α and β are introduced to adjust the roughness of the turning rate. For targets in constant turning movement with unequal orthogonal turning-rate estimates, the average-value model is used for the turning rate to reduce the influence of noise and error. Simulation results show that the proposed algorithm is better suited to tracking nonlinear, highly maneuvering targets, can remarkably reduce the required samples, and thus achieves much better tracking performance.

  19. Tests of the Structure-Based Models of Proteins

    The structure-based models of proteins are defined through the condition that their ground state coincides with the native structure of the proteins. There are many variants of such models and they yield different properties. Optimal variants can be selected by making comparisons to experimental data on single-molecule stretching. Here, we discuss the 15 best performing variants and focus on fine tuning the selection process by adjusting the velocity of stretching to match the experimental conditions. The very best variant is found to correspond to the 10-12 potential in the native contacts with the energies modulated by the Miyazawa-Jernigan statistical potential and variable length parameters. The second best model incorporates the Lennard-Jones potential with uniform amplitudes. We then make a detailed comparison of the two models in which theoretical surveys of stretching properties of 7510 proteins were made previously. (authors)

  20. Dynamic model based on Bayesian method for energy security assessment

    Highlights: • Methodology for dynamic indicator model construction and forecasting of indicators. • Application of the dynamic indicator model to energy system development scenarios. • Expert judgement involvement using the Bayesian method. - Abstract: The methodology for dynamic indicator model construction and forecasting of indicators for the assessment of the energy security level is presented in this article. An indicator is a special index which provides numerical values of important factors for the investigated area. In real life, models of different processes take into account various factors that are time-dependent and dependent on each other. Thus, it is advisable to construct a dynamic model in order to describe these dependences. The energy security indicators are used as factors in the dynamic model. Usually, the values of indicators are obtained from statistical data. The developed dynamic model enables forecasting of indicator variation, taking into account changes in the system configuration. Energy system development is usually based on the construction of new objects. Since the parameters of the changes introduced by a new system are not exactly known, information about their influence on the indicators cannot be incorporated into the model by deterministic methods. Thus, the dynamic indicator model based on historical data is adjusted by a probabilistic model of the influence of new factors on the indicators, using the Bayesian method

  1. Kinematic synthesis of adjustable robotic mechanisms

    Chuenchom, Thatchai

    1993-01-01

    Conventional hard automation, such as a linkage-based or a cam-driven system, provides high speed capability and repeatability but not the flexibility required in many industrial applications. The conventional mechanisms, which are typically single-degree-of-freedom systems, are being increasingly replaced by multi-degree-of-freedom multi-actuators driven by logic controllers. Although this new trend in sophistication provides greatly enhanced flexibility, there are many instances where the flexibility needs are exaggerated and the associated complexity is unnecessary. Traditional mechanism-based hard automation, on the other hand, can neither fulfill multi-task requirements nor be cost-effective, mainly due to a lack of methods and tools for designing in flexibility. This dissertation attempts to bridge this technological gap by developing Adjustable Robotic Mechanisms (ARMs) or 'programmable mechanisms' as a middle ground between high-speed hard automation and expensive serial jointed-arm robots. This research introduces the concept of adjustable robotic mechanisms towards cost-effective manufacturing automation. A generalized analytical synthesis technique has been developed to support the computational design of ARMs, laying the theoretical foundation for the synthesis of adjustable mechanisms. The synthesis method developed in this dissertation, called generalized adjustable dyad and triad synthesis, advances the well-known Burmester theory in kinematics to a new level. While this method provides planar solutions, a novel patented scheme is utilized for converting prescribed three-dimensional motion specifications into sets of planar projections. This provides an analytical and a computational tool for designing adjustable mechanisms that satisfy multiple sets of three-dimensional motion specifications. Several design issues were addressed, including adjustable parameter identification, branching defect, and mechanical errors. An efficient mathematical scheme for

  2. Business value modeling based on BPMN models

    Masoumigoudarzi, Farahnaz

    2014-01-01

    In this study we try to clarify the modeling and measurement of 'business values', as defined in a business context, in the business processes of a company, and we introduce different methods and select the one best suited to modeling the company's business values. These methods have been used by researchers in business analytics and by senior managers of many companies. The focus of this project is business value detection and modeling. The basis of this research is on BPM...

  3. Bundle block adjustment of large-scale remote sensing data with Block-based Sparse Matrix Compression combined with Preconditioned Conjugate Gradient

    Zheng, Maoteng; Zhang, Yongjun; Zhou, Shunping; Zhu, Junfeng; Xiong, Xiaodong

    2016-07-01

    In recent years, new platforms and sensors in the photogrammetry, remote sensing and computer vision areas have become available, such as Unmanned Aerial Vehicles (UAV), oblique camera systems, common digital cameras and even mobile phone cameras. Images collected by all these kinds of sensors can be used as remote sensing data sources. These sensors can obtain large-scale remote sensing data consisting of a great number of images. Bundle block adjustment of large-scale data with the conventional algorithm is very time- and memory-consuming due to the extremely large normal matrix arising from such data. In this paper, an efficient Block-based Sparse Matrix Compression (BSMC) method combined with the Preconditioned Conjugate Gradient (PCG) algorithm is chosen to develop a stable and efficient bundle block adjustment system in order to deal with large-scale remote sensing data. The main contribution of this work is the BSMC-based PCG algorithm, which is more efficient in time and memory than the traditional algorithm without compromising accuracy. In total, 8 real datasets are used to test the proposed method. Preliminary results show that the BSMC method can efficiently decrease the time and memory requirements of large-scale data.
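
    The PCG side of such a system solves the normal equations N x = b iteratively, so the huge matrix N only needs to be applied, never inverted densely. A sketch using SciPy's conjugate gradient with a Jacobi (diagonal) preconditioner; the sparse SPD matrix below is a random stand-in for a real photogrammetric normal matrix, and the BSMC compression itself is not reproduced here:

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import cg, LinearOperator

# Sketch: Preconditioned Conjugate Gradient on a sparse SPD system N x = b,
# standing in for the normal equations of a bundle block adjustment.
# The random matrix is an illustrative stand-in, not real photogrammetric data.

n = 2000
A = sp.random(n, n, density=0.002, format="csr", random_state=0)
N = A @ A.T + sp.identity(n) * (0.01 * n)     # make it symmetric positive definite
b = np.ones(n)

inv_diag = 1.0 / N.diagonal()                 # Jacobi preconditioner: M^-1 = diag(N)^-1
M = LinearOperator((n, n), matvec=lambda v: inv_diag * v)

x, info = cg(N, b, M=M, maxiter=500)
print(info, np.linalg.norm(N @ x - b))        # info == 0 means PCG converged
```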

  4. Differences among skeletal muscle mass indices derived from height-, weight-, and body mass index-adjusted models in assessing sarcopenia

    Kim, Kyoung Min; Jang, Hak Chul; Lim, Soo

    2016-01-01

    Aging processes are inevitably accompanied by structural and functional changes in vital organs. Skeletal muscle, which accounts for 40% of total body weight, deteriorates quantitatively and qualitatively with aging. Skeletal muscle is known to play diverse crucial physical and metabolic roles in humans. Sarcopenia is a condition characterized by significant loss of muscle mass and strength. It is related to subsequent frailty and instability in the elderly population. Because muscle tissue is involved in multiple functions, sarcopenia is closely related to various adverse health outcomes. Along with increasing recognition of the clinical importance of sarcopenia, several international study groups have recently released their consensus on the definition and diagnosis of sarcopenia. In practical terms, various skeletal muscle mass indices have been suggested for assessing sarcopenia: appendicular skeletal muscle mass adjusted for height squared, weight, or body mass index. A different prevalence and different clinical implications of sarcopenia are highlighted by each definition. The discordances among these indices have emerged as an issue in defining sarcopenia, and a unifying definition for sarcopenia has not yet been attained. This review aims to compare these three operational definitions and to introduce an optimal skeletal muscle mass index that reflects the clinical implications of sarcopenia from a metabolic perspective. PMID:27334763
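
    The three operational indices being compared are simple ratios of appendicular skeletal muscle mass (ASM) to height squared, weight, or BMI. A sketch of the arithmetic; the example values are illustrative, and the consensus cutoff values are deliberately omitted since they differ between study groups:

```python
# Sketch: the three skeletal muscle mass indices discussed in the review.
# ASM = appendicular skeletal muscle mass (kg). Diagnostic cutoffs are
# consensus-specific and intentionally omitted here.

def muscle_indices(asm_kg, height_m, weight_kg):
    bmi = weight_kg / height_m**2
    return {
        "ASM/ht^2 (kg/m^2)": asm_kg / height_m**2,
        "ASM/weight (%)": 100 * asm_kg / weight_kg,
        "ASM/BMI (m^2)": asm_kg / bmi,
    }

print(muscle_indices(asm_kg=18.5, height_m=1.70, weight_kg=72.0))
```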

  7. Annual Adjustment Factors

    Department of Housing and Urban Development — The Department of Housing and Urban Development establishes the rent adjustment factors - called Annual Adjustment Factors (AAFs) - on the basis of Consumer Price...

  9. Optimal pricing decision model based on activity-based costing

    王福胜; 常庆芳

    2003-01-01

    In order to find out the applicability of the optimal pricing decision model based on the conventional cost behavior model after activity-based costing has given a strong shock to the conventional cost behavior model and its assumptions, detailed analyses have been made using the activity-based cost behavior and cost-volume-profit analysis model. It is concluded from these analyses that the theory behind the construction of the optimal pricing decision model is still tenable under activity-based costing, but the conventional optimal pricing decision model must be modified as appropriate to the activity-based-costing-based cost behavior model and cost-volume-profit analysis model; an optimal pricing decision model is really a product pricing decision model constructed by following the economic principle of maximizing profit.

  10. Sensor-based interior modeling

    Robots and remote systems will play crucial roles in future decontamination and decommissioning (D&D) of nuclear facilities. Many of these facilities, such as uranium enrichment plants, weapons assembly plants, research and production reactors, and fuel recycling facilities, are dormant; there is also an increasing number of commercial reactors whose useful lifetime is nearly over. To reduce worker exposure to radiation, occupational and other hazards associated with D&D tasks, robots will execute much of the work agenda. Traditional teleoperated systems rely on human understanding (based on information gathered by remote viewing cameras) of the work environment to safely control the remote equipment. However, removing the operator from the work site substantially reduces his efficiency and effectiveness. To approach the productivity of a human worker, tasks will be performed telerobotically, in which many aspects of task execution are delegated to robot controllers and other software. This paper describes a system that semi-automatically builds a virtual world for remote D&D operations by constructing 3-D models of a robot's work environment. Planar and quadric surface representations of objects typically found in nuclear facilities are generated from laser rangefinder data with a minimum of human interaction. The surface representations are then incorporated into a task space model that can be viewed and analyzed by the operator, accessed by motion planning and robot safeguarding algorithms, and ultimately used by the operator to instruct the robot at a level much higher than teleoperation.

  11. DEVELOPMENT OF INDIVIDUAL TREE GROWTH MODELS BASED ON DIFFERENTIAL EQUATIONS

    Breno Rodrigues Mendes

    2006-09-01

    Full Text Available This study generated individual-tree non-linear models from differential equations and evaluated their goodness of fit in expressing basal area growth. The database comes from the continuous forest inventory of clonal Eucalyptus spp. plantations provided by the Aracruz Cellulose Company, located in the Brazilian coastal region, in Bahia and Espirito Santo states. The model precision was verified by the likelihood ratio test, by the mean square error (MSE) and by graphical residual analysis. The results showed that the complete model with 3 parameters, developed from the original model with one regressor, was superior to the other models due to the inclusion of stand-based variables such as clone, total height (HT), dominant height (HD), quadratic diameter (Dg), basal area (G), site index (IS) and density (N), generating a new model called Complete Model III. The improvement in precision was highly significant when compared to the other models. Consequently, this model provides information with a high degree of precision and accuracy for forest companies' planning.

  12. COMPARISON OF ADJUSTABLE HIGH-PHASE ORDER INDUCTION MOTORS’ MERITS

    V.S. Petrushin

    2016-03-01

    Full Text Available Purpose. Development of mathematical models of adjustable electrical drives with high-phase-order induction motors for the analysis of their merits in static and dynamic modes. Methodology. In the mathematical modeling, the main kinds of physical processes taking place in high-phase-order induction motors are considered: electromagnetic, electromechanical, energetic, thermal, mechanical and vibroacoustic ones. Besides, functional as well as mass, frame and value indicators of frequency converters are taken into account, which permits consideration of the technical and economical aspects of adjustable induction electrical drives. Creation of high-phase-order induction motor modifications is possible on the basis of stock 3-phase motors of basic design. Polyphase supply of induction motors is provided by a number of the adjustable electrical drives' power circuits. Results. Modelling of a number of adjustable electrical drives with induction motors of different phase numbers, working on the same load in terms of its character, value and required adjustment range, is carried out. Using families of characteristics, including mechanical ones at different adjustment parameters on which the loading mechanism's characteristics are superimposed, regulation curves are obtained representing the dependences of electrical, energetic, thermal, mechanical and vibroacoustic quantities on the motors' speed. Originality. The proposed complex models of adjustable electrical drives with high-phase-order induction motors make it possible to carry out a grounded choice of an acceptable drive variant. Besides, they can be used as design models in the development of adjustable high-phase-order induction motors. Practical value. The investigation of vibroacoustic indicators in static and dynamic modes has shown a decrease of these indicators in drives with more than three phases.

  13. On isocentre adjustment and quality control in linear accelerator based radiosurgery with circular collimators and room lasers

    Treuer, H.; Hoevels, M.; Luyken, K.; Gierich, A.; Kocher, M.; Müller, R.-P.; Sturm, V.

    2000-08-01

    We have developed a densitometric method for measuring the isocentric accuracy and the accuracy of marking the isocentre position for linear accelerator based radiosurgery with circular collimators and room lasers. Isocentric shots are used to determine the accuracy of marking the isocentre position with room lasers and star shots are used to determine the wobble of the gantry and table rotation movement, the effect of gantry sag, the stereotactic collimator alignment, and the minimal distance between gantry and table rotation axes. Since the method is based on densitometric measurements, beam spot stability is implicitly tested. The method developed is also suitable for quality assurance and has proved to be useful in optimizing isocentric accuracy. The method is simple to perform and only requires a film box and film scanner for instrumentation. Thus, the method has the potential to become widely available and may therefore be useful in standardizing the description of linear accelerator based radiosurgical systems.

  14. Memristor model based on fuzzy window function

    Abdel-Kader, Rabab Farouk; Abuelenin, Sherif M.

    2016-01-01

    Memristor (memory-resistor) is the fourth passive circuit element. We introduce a memristor model based on a fuzzy logic window function. Fuzzy models are flexible, which enables the capture of the pinched hysteresis behavior of the memristor. The introduced fuzzy model avoids common problems associated with window-function-based memristor models, such as the terminal-state problem and symmetry issues. The model captures the memristor behavior with a simple rule-base which gives an insig...
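
    Window-function memristor models scale the state drift dx/dt by a window f(x) that vanishes at the state boundaries; the paper replaces the usual polynomial window with a fuzzy one. As the fuzzy rule-base is not given in the abstract, this sketch uses the classical Joglekar polynomial window f(x) = 1 - (2x - 1)^(2p) purely for illustration, with arbitrary parameter values:

```python
import numpy as np

# Sketch: generic window-function memristor model (HP-style state equation).
# dx/dt = k * i(t) * f(x), with the Joglekar window f(x) = 1 - (2x - 1)^(2p)
# standing in for the paper's fuzzy window; all parameters are illustrative.

def simulate(p=2, k=1e4, r_on=100.0, r_off=16e3, dt=1e-5, n=20000):
    x, vs, cur = 0.1, [], []
    for step in range(n):
        v = np.sin(2 * np.pi * 50 * step * dt)          # 50 Hz drive voltage
        r = r_on * x + r_off * (1 - x)                  # memristance from state x
        i = v / r
        x += dt * k * i * (1 - (2 * x - 1) ** (2 * p))  # windowed state drift
        x = min(max(x, 0.0), 1.0)                       # keep state in [0, 1]
        vs.append(v)
        cur.append(i)
    return vs, cur

v, i = simulate()
print(min(i), max(i))   # pinched hysteresis appears when i is plotted against v
```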

  15. Multivariate Models of Parent-Late Adolescent Gender Dyads: The Importance of Parenting Processes in Predicting Adjustment

    McKinney, Cliff; Renk, Kimberly

    2008-01-01

    Although parent-adolescent interactions have been examined, relevant variables have not been integrated into a multivariate model. As a result, this study examined a multivariate model of parent-late adolescent gender dyads in an attempt to capture important predictors in late adolescents' important and unique transition to adulthood. The sample…

  16. The rise and fall of divorce - a sociological adjustment of becker’s model of the marriage market

    Andersen, Signe Hald; Hansen, Lars Gårn

    Despite the strong and persistent influence of Gary Becker’s marriage model, the model does not completely explain the observed correlation between married women’s labor market participation and overall divorce rates. In this paper we show how a simple sociologically inspired extension of the model realigns the model’s predictions with the observed trends. The extension builds on Becker’s own claim that partners match on preference for partner specialization and, as a novelty, on additional sociological theory claiming that preference coordination tends to happen subconsciously. When we incorporate this aspect into Becker’s model, the model provides predictions of divorce rates and causes that fit more closely with empirical observations. (JEL: J1)

  17. Red list of Czech spiders: 3rd edition, adjusted according to evidence-based national conservation priorities

    Řezáč, M.; Kůrka, A.; Růžička, Vlastimil; Heneberg, P.

    2015-01-01

    Vol. 70, No. 5 (2015), pp. 645-666. ISSN 0006-3088. Other grant: MZe(CZ) RO0415. Institutional support: RVO:60077344. Keywords: evidence-based conservation * extinction risk * invertebrate surveys. Subject RIV: EG - Zoology. Impact factor: 0.827, year: 2014

  18. How inputs of an hydrologic model have to be adjusted to its underlying physical hypothesis? Case study on the Lez hydrodynamic modeling (Southern France)

    Siou, L. Kong A.; Fleury, P.; Johannet, A.; Borrell Estupina, V.; Dörfliger, N.; Pistre, S.

    2012-04-01

    Karst aquifers are famous for their high heterogeneity and non-linearity and are still poorly understood, although they are a major issue for both flood forecasting and water resources. Conceptual models, for example based on the reservoir concept, are often used in order to simulate their behavior (1). Nevertheless, reservoir models are sensitive to their initial conditions, which are often difficult to measure because of the heterogeneity. Consequently a lot of research is devoted to black-box modeling, particularly neural networks, which can be viewed as an interesting way to deal with non-linearity without any measurement acquisition beyond the system inputs and outputs (2). In Mediterranean regions, due to the variability of rainfall during the hydrologic cycle, the availability of water during summer poses a difficulty for stakeholders. Consequently, the conurbation of Montpellier (400 000 inhabitants), Southeast France, is investigating pumping through boreholes in the drain of the Lez spring (3), the major outlet of the Lez karstic aquifer, which has been studied for more than 40 years and is emblematic of the complexity of karst aquifers. Indeed, the heterogeneity of both the karst system, due to geologic complexity, and of the rainfall, joined with the emptying of the spring by pumping, contributes to the modeling difficulties. It therefore seems relevant to use neural networks, as non-linear machine-learning models, in order to manage the lack of knowledge about the Lez system. The aim of the modeling approach was to simulate the level of water in the drain of the Lez spring in order to better appreciate the level of emptying during summer, just before refilling by the autumn rainfalls. To this end the multilayer perceptron was used thanks to its two main properties: universal approximation and parsimony relative to other non-linear statistical models. In particular, the role of evapotranspiration is not well defined or estimated for karst aquifers (4,5) whereas it is of major importance for water
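
    As a sketch of the black-box approach described above, a multilayer perceptron can be trained to map recent rainfall to the water level in the drain. The data below are synthetic stand-ins for the Lez records, and scikit-learn's MLPRegressor stands in for the authors' network:

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)

# Sketch: MLP as a lumped rainfall-to-level model, in the spirit of the
# neural-network modeling described above. Synthetic data, not Lez records:
# the "level" is a noisy exponential filter of daily rainfall.

n, lags = 2000, 5
rain = rng.gamma(0.3, 8.0, size=n)                       # daily rainfall (mm)
level = np.convolve(rain, np.exp(-np.arange(30) / 7.0))[:n] + rng.normal(0, 0.5, n)

# Features: the 5 rainfall values preceding each day; target: that day's level.
X = np.column_stack([rain[lags - j - 1: n - j - 1] for j in range(lags)])
y = level[lags:]

model = MLPRegressor(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)
model.fit(X[:1500], y[:1500])
print("test R^2:", model.score(X[1500:], y[1500:]))
```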

  19. Guide to APA-Based Models

    Robins, Robert E.; Delisi, Donald P.

    2008-01-01

    In Robins and Delisi (2008), a linear decay model, a new IGE model by Sarpkaya (2006), and a series of APA-Based models were scored using data from three airports. This report is a guide to the APA-based models.

  20. CEAI: CCM based Email Authorship Identification Model

    Nizamani, Sarwat; Memon, Nasrullah

    2013-01-01

    reveal that the proposed CCM-based email authorship identification model, along with the proposed feature set, outperforms the state-of-the-art support vector machine (SVM)-based models, as well as the models proposed by Iqbal et al. [1, 2]. The proposed model attains an accuracy rate of 94% for 10...

  1. Model Adjustment and Intelligent Controller Design for Synchronous Generator of a PWR for CHASNUPP through Simulation Experiments

    A mathematical model of the synchronous generator of a PWR is developed in state-space form using a first-principles engineering approach. System matrices are generated using the original machine parameters in MATLAB. Internal stability of the model is analyzed using eigenvalue analysis. Two closed-loop simulation models are developed in the MATLAB Simulink environment, using a PI controller and an NN NARMA neuro-controller. Simulation results are analyzed and compared, and it is found that the performance of the intelligent controller is better in both transient and steady-state conditions. The proposed intelligent controller has excellent grid-disturbance rejection and stability. (author)
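
    The internal-stability check mentioned here (eigenvalue analysis of the state matrix) reduces to verifying that all eigenvalues of A have negative real parts. A sketch with an illustrative 2×2 state matrix rather than the actual generator matrices:

```python
import numpy as np

# Sketch: internal stability of a continuous-time state-space model x' = A x + B u
# via the eigenvalues of A: stable iff all real parts are negative.
# The matrix below is illustrative, not the CHASNUPP generator model.

A = np.array([[0.0, 1.0],
              [-5.0, -0.8]])      # a lightly damped oscillatory mode

eigvals = np.linalg.eigvals(A)
print(eigvals)
print("internally stable:", bool(np.all(eigvals.real < 0)))
```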

  2. Range Camera Self-Calibration Based on Integrated Bundle Adjustment via Joint Setup with a 2D Digital Camera

    Mehran Sattari; Mohammad Saadatseresht; Mozhdeh Shahbazi; Saeid Homayouni

    2011-01-01

    Time-of-flight cameras, based on Photonic Mixer Device (PMD) technology, are capable of measuring distances to objects at high frame rates, however, the measured ranges and the intensity data contain systematic errors that need to be corrected. In this paper, a new integrated range camera self-calibration method via joint setup with a digital (RGB) camera is presented. This method can simultaneously estimate the systematic range error parameters as well as the interior and external orientatio...

  3. School-Based Racial and Gender Discrimination among African American Adolescents: Exploring Gender Variation in Frequency and Implications for Adjustment

    Cogburn, Courtney D.; Chavous, Tabbye M.; Griffin, Tiffany M.

    2011-01-01

    The present study examined school-based racial and gender discrimination experiences among African American adolescents in Grade 8 (n = 204 girls; n = 209 boys). A primary goal was exploring gender variation in frequency of both types of discrimination and associations of discrimination with academic and psychological functioning among girls and boys. Girls and boys did not vary in reported racial discrimination frequency, but boys reported more gender discrimination experiences. Multiple reg...

  4. Measuring demand for flat water recreation using a two-stage/disequilibrium travel cost model with adjustment for overdispersion and self-selection

    McKean, John R.; Johnson, Donn; Taylor, R. Garth

    2003-04-01

    An alternate travel cost model is applied to an on-site sample to estimate the value of flat water recreation on the impounded lower Snake River. Four contiguous reservoirs would be eliminated if the dams are breached to protect endangered Pacific salmon and steelhead trout. The empirical method applies truncated negative binomial regression with adjustment for endogenous stratification. The two-stage decision model assumes that recreationists allocate their time among work and leisure prior to deciding among consumer goods. The allocation of time and money among goods in the second stage is conditional on the predetermined work time and income. The second stage is a disequilibrium labor market which also applies if employers set work hours or if recreationists are not in the labor force. When work time is either predetermined, fixed by contract, or nonexistent, recreationists must consider separate prices and budgets for time and money.

  5. Evaluation of iodide deficiency in the lactating rat and pup using a biologically based dose response (BBDR) Model

    A biologically-based dose response (BBDR) model for the hypothalamic-pituitary-thyroid (HPT) axis in the lactating rat and nursing pup was developed to describe the perturbations caused by iodide deficiency on the HPT axis. Model calibrations, carried out by adjusting key model...

  6. Evaluation of iodide deficiency in the lactating rat and pup using a biologically based dose-response model

    A biologically-based dose response (BBDR) model for the hypothalamic-pituitary-thyroid (HPT) axis in the lactating rat and nursing pup was developed to describe the perturbations caused by iodide deficiency on the HPT axis. Model calibrations, carried out by adjusting key model p...

  7. A longitudinal examination of the Adaptation to Poverty-Related Stress Model: predicting child and adolescent adjustment over time.

    Wadsworth, Martha E; Rindlaub, Laura; Hurwich-Reiss, Eliana; Rienks, Shauna; Bianco, Hannah; Markman, Howard J

    2013-01-01

    This study tests key tenets of the Adaptation to Poverty-related Stress Model. This model (Wadsworth, Raviv, Santiago, & Etter, 2011) builds on Conger and Elder's family stress model by proposing that primary control coping and secondary control coping can help reduce the negative effects of economic strain on parental behaviors central to the family stress model, namely, parental depressive symptoms and parent-child interactions, which together can decrease child internalizing and externalizing problems. Two hundred seventy-five co-parenting couples with children between the ages of 1 and 18 participated in an evaluation of a brief family-strengthening intervention aimed at preventing economic strain's negative cascade of influence on parents, and ultimately their children. The longitudinal path model, analyzed at the couple dyad level with mothers and fathers nested within couple, showed very good fit and was not moderated by child gender or ethnicity. Analyses revealed direct positive effects of primary control coping and secondary control coping on mothers' and fathers' depressive symptoms. Decreased economic strain predicted more positive father-child interactions, whereas increased secondary control coping predicted less negative mother-child interactions. Positive parent-child interactions, along with decreased parent depression and economic strain, predicted child internalizing and externalizing over the course of 18 months. Multiple-group models analyzed separately by parent gender revealed, however, that child age moderated father effects. Findings provide support for the Adaptation to Poverty-related Stress Model and suggest that prevention and clinical interventions for families affected by poverty-related stress may be strengthened by including modules that address economic strain and efficacious strategies for coping with strain. PMID:23323863

  8. Fujisaki Model Based Intonation Modeling for Korean TTS System

    Kim, Byeongchang; Lee, Jinsik; Lee, Gary Geunbae

    One of the enduring problems in developing a high-quality TTS (text-to-speech) system is pitch contour generation. Considering language-specific knowledge, an adjusted Fujisaki model for a Korean TTS system is introduced along with refined machine learning features. The results of quantitative and qualitative evaluations show the validity of our system: the accuracy of the phrase command prediction is 0.8928; the correlations of the predicted amplitudes of a phrase command and an accent command are 0.6644 and 0.6002, respectively; our method achieved the level of "fair" naturalness (3.6) on a MOS scale for generated F0 curves.
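
    For readers unfamiliar with the model being adjusted, the standard Fujisaki formulation superposes phrase and accent components on a log-F0 baseline. This is the textbook form, not necessarily the exact Korean-specific variant used in this record:

    \[ \ln F_0(t) = \ln F_b + \sum_{i=1}^{I} A_{p,i}\, G_p(t - T_{0i}) + \sum_{j=1}^{J} A_{a,j} \left[ G_a(t - T_{1j}) - G_a(t - T_{2j}) \right] \]

    where \(G_p(t) = \alpha^2 t\, e^{-\alpha t}\) and \(G_a(t) = \min[1 - (1 + \beta t)e^{-\beta t},\ \gamma]\) for \(t \ge 0\) (both zero otherwise); \(F_b\) is the base frequency, and the \(A_{p,i}\) and \(A_{a,j}\) are the phrase and accent command amplitudes that the system predicts.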

  9. Trace-Based Code Generation for Model-Based Testing

    Kanstrén, T.; Piel, E.; Gross, H.-G.

    2009-01-01

    Paper Submitted for review at the Eighth International Conference on Generative Programming and Component Engineering. Model-based testing can be a powerful means to generate test cases for the system under test. However, creating a useful model for model-based testing requires expertise in the (fo

  10. The effect of adjusting model inputs to achieve mass balance on time-dynamic simulations in a food-web model of Lake Huron

    Langseth, Brian J.; Jones, Michael L.; Riley, Stephen C.

    2014-01-01

    Ecopath with Ecosim (EwE) is a widely used modeling tool in fishery research and management. Ecopath requires a mass-balanced snapshot of a food web at a particular point in time, which Ecosim then uses to simulate changes in biomass over time. Initial inputs to Ecopath, including estimates for biomasses, production to biomass ratios, consumption to biomass ratios, and diets, rarely produce mass balance, and thus ad hoc changes to inputs are required to balance the model. There has been little previous research on whether ad hoc changes to achieve mass balance affect Ecosim simulations. We constructed an EwE model for the offshore community of Lake Huron, and balanced the model using four contrasting but realistic methods. The four balancing methods were based on two contrasting approaches; in the first approach, production of unbalanced groups was increased by increasing either biomass or the production to biomass ratio, while in the second approach, consumption of predators on unbalanced groups was decreased by decreasing either biomass or the consumption to biomass ratio. We compared six simulation scenarios based on three alternative assumptions about the extent to which mortality rates of prey can change in response to changes in predator biomass (i.e., vulnerabilities) under perturbations to either fishing mortality or environmental production. Changes in simulated biomass values over time were used in a principal components analysis to assess the comparative effect of balancing method, vulnerabilities, and perturbation types. Vulnerabilities explained the most variation in biomass, followed by the type of perturbation. Choice of balancing method explained little of the overall variation in biomass. Under scenarios where changes in predator biomass caused large changes in mortality rates of prey (i.e., high vulnerabilities), variation in biomass was greater than when changes in predator biomass caused only small changes in mortality rates of prey (i.e., low
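
    The mass balance that the four methods restore is the Ecopath master equation, quoted here for context in a simplified form (net migration and biomass accumulation terms omitted where zero):

    \[ B_i \left(\frac{P}{B}\right)_i EE_i = \sum_j B_j \left(\frac{Q}{B}\right)_j DC_{ji} + Y_i \]

    where \(B_i\) is biomass, \((P/B)_i\) the production to biomass ratio, \(EE_i\) the ecotrophic efficiency, \((Q/B)_j\) the consumption to biomass ratio, \(DC_{ji}\) the fraction of prey \(i\) in predator \(j\)'s diet, and \(Y_i\) the fishery catch. A group is unbalanced when the predation and catch terms on the right exceed the production available on the left, which is why the balancing methods either raise production (via \(B_i\) or \((P/B)_i\)) or cut consumption (via \(B_j\) or \((Q/B)_j\)).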

  11. Rule-based decision making model

    A rule-based decision making model is designed in the G2 environment. A theoretical and methodological frame for the model is composed and motivated. The rule-based decision making model is based on object-oriented modelling, knowledge engineering and decision theory. The idea of a safety objective tree is utilized. Advanced rule-based methodologies are applied. A general decision making model 'decision element' is constructed. The strategy planning of the decision element is based on e.g. value theory and utility theory. A hypothetical process model is built to give input data for the decision element. The basic principle of the object model in decision making is division into tasks. Probability models are used in characterizing component availabilities. Bayes' theorem is used to recalculate the probability figures when new information is obtained. The model includes simple learning features to save the solution path. A decision analytic interpretation is given to the decision making process. (author)
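
    As a concrete illustration of the Bayesian step described here, the following sketch (assumed for this summary, not taken from the G2 implementation) updates a component availability estimate when a new observation arrives:

        # Minimal sketch: Bayes' theorem recalculates a component's availability
        # when new evidence, e.g. an alarm, is obtained.
        def bayes_update(prior_up, p_obs_given_up, p_obs_given_down):
            """Posterior probability that the component is available."""
            evidence = p_obs_given_up * prior_up + p_obs_given_down * (1.0 - prior_up)
            return p_obs_given_up * prior_up / evidence

        p = 0.95                          # prior availability
        p = bayes_update(p, 0.10, 0.70)   # an alarm is far likelier if the unit is down
        print(f"posterior availability: {p:.3f}")   # ~0.731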

  12. Application of Fuzzy Self-Optimizing Control Based on Differential Evolution Algorithm for the Ratio of Wind to Coal Adjustment of Boiler in the Thermal Power Plant

    Ting Hou

    2014-08-01

    Full Text Available Domestic small and medium-sized boilers burn many types of coal with unstable compositions, so maintaining the wind and coal supply in a fixed proportion does not always ensure the most economical combustion process; the key to optimizing combustion is to adjust the wind-to-coal ratio online to a reasonable value. In this paper, a fuzzy self-optimizing control based on a differential evolution algorithm is proposed and applied to a power plant boiler system, where it improves combustion efficiency significantly over the previous indirect control. Taking a thermal power plant as the research object, the unit efficiency can be increased significantly with this method once the optimum system performance is determined, and important energy-efficiency issues of power plants can be addressed.
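
    A minimal sketch of the differential evolution core follows; the efficiency surface, search bounds and DE constants are invented for illustration, since the record does not specify them.

        import numpy as np

        def eta(r):
            return -(r - 5.2) ** 2          # toy combustion-efficiency surface

        rng = np.random.default_rng(0)
        pop = rng.uniform(3.0, 8.0, size=20)    # candidate wind-to-coal ratios
        F, CR = 0.8, 0.9                        # mutation and crossover factors

        for _ in range(100):
            for i in range(len(pop)):
                a, b, c = pop[rng.choice(len(pop), 3, replace=False)]
                trial = a + F * (b - c) if rng.random() < CR else pop[i]
                if eta(trial) > eta(pop[i]):    # greedy selection
                    pop[i] = trial

        print(f"best ratio found: {pop[np.argmax(eta(pop))]:.2f}")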

  13. Attachment-based classifications of children's family drawings: psychometric properties and relations with children's adjustment in kindergarten.

    Pianta, R C; Longmaid, K; Ferguson, J E

    1999-06-01

    Investigated an attachment-based theoretical framework and classification system, introduced by Kaplan and Main (1986), for interpreting children's family drawings. This study concentrated on the psychometric properties of the system and the relation between drawings classified using this system and teacher ratings of classroom social-emotional and behavioral functioning, controlling for child age, ethnic status, intelligence, and fine motor skills. This nonclinical sample consisted of 200 kindergarten children of diverse racial and socioeconomic status (SES). Limited support for reliability of this classification system was obtained. Kappas for overall classifications of drawings (e.g., secure) exceeded .80 and mean kappa for discrete drawing features (e.g., figures with smiles) was .82. Coders' endorsement of the presence of certain discrete drawing features predicted their overall classification at 82.5% accuracy. Drawing classification was related to teacher ratings of classroom functioning independent of child age, sex, race, SES, intelligence, and fine motor skills (with p values for the multivariate effects ranging from .043-.001). Results are discussed in terms of the psychometric properties of this system for classifying children's representations of family and the limitations of family drawing techniques for young children. PMID:10353083

  14. A Novel Variable Step Size Adjustment Method Based on Autocorrelation of Error Signal for the Constant Modulus Blind Equalization Algorithm

    M. A. Demir

    2012-04-01

    Full Text Available Blind equalization is a technique for adaptive equalization of a communication channel without the use of a training sequence. Although the constant modulus algorithm (CMA) is one of the most popular adaptive blind equalization algorithms, it suffers from a slow convergence rate because it uses a fixed step size. In this paper, a novel enhanced variable step size CMA algorithm (VSS-CMA), based on the autocorrelation of the error signal, is proposed to remedy this weakness of CMA in blind equalization. The new algorithm resolves the conflict between convergence rate and precision in the fixed-step-size conventional CMA algorithm. Computer simulations have been performed to illustrate the performance of the proposed method in simulated frequency-selective Rayleigh fading channels and in experimental real communication channels. The simulation results, obtained using the single carrier (SC) IEEE 802.16-2004 protocol, demonstrate that the proposed VSS-CMA algorithm performs considerably better than conventional CMA, normalized CMA (N-CMA) and the other VSS-CMA algorithms.
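
    The following is an illustrative sketch of a variable step size CMA update in which the step size is driven by a running estimate of the error autocorrelation, so the step stays large while successive errors are correlated (far from convergence) and shrinks near the optimum. The smoothing factor and clipping bounds are assumptions, not the paper's exact rule.

        import numpy as np

        def vss_cma(x, n_taps=11, mu_max=5e-3, mu_min=1e-4, rho=0.97, R2=1.0):
            w = np.zeros(n_taps, dtype=complex)
            w[n_taps // 2] = 1.0                      # centre-tap initialisation
            acorr, e_prev = 0.0, 0.0
            for n in range(n_taps, len(x)):
                u = x[n - n_taps:n][::-1]             # regressor (newest first)
                y = w @ u                             # equalizer output
                e = y * (np.abs(y) ** 2 - R2)         # CMA error term
                acorr = rho * acorr + (1 - rho) * np.real(e * np.conj(e_prev))
                mu = np.clip(mu_max * abs(acorr), mu_min, mu_max)
                w -= mu * e * np.conj(u)              # stochastic-gradient tap update
                e_prev = e
            return w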

  15. Effects of the Oregon Model of Parent Management Training (PMTO) on Marital Adjustment in New Stepfamilies: A Randomized Trial

    Bullard, Lisha; Wachlarowicz, Marissa; DeLeeuw, Jamie; Snyder, James; LOW, SABINA; Forgatch, Marion; DeGarmo, David

    2010-01-01

    Effects of intervention with the Oregon model of Parent Management Training (PMTO™) on marital relationship processes and marital satisfaction in recently married biological mother and stepfather couples were examined. Sixty-seven of the 110 participating families were randomly assigned to PMTO, and 43 families to a non-intervention condition. Intervention had reliable positive indirect effects on marital relationship processes 24 months after baseline which in turn were associated with highe...

  16. Internet Business Valuation : Case Study of two Korean and two Swedish Internet Companies using Adjusted DCF Valuation Model

    Kim Walsgård, Mijung

    2006-01-01

    Internet business valuation is a challenge because internet businesses have unique features, so valuation needs to consider issues beyond the traditional income statement and balance sheet numbers. Furthermore, many internet companies have generated negative accounting income despite extraordinarily high stock prices. Given these circumstances, an advanced valuation model for internet businesses that accounts for their high growth potential is clearly needed. This paper is...

  17. Improvement of Radar Quantitative Precipitation Estimation Based on Real-Time Adjustments to Z-R Relationships and Inverse Distance Weighting Correction Schemes

    WANG Gaili; LIU Liping; DING Yuanyuan

    2012-01-01

    The errors in radar quantitative precipitation estimation consist not only of systematic biases caused by random noise but also of spatially nonuniform biases in radar rainfall at individual rain-gauge stations. In this study, a real-time adjustment to the radar reflectivity-rainfall rate (Z-R) relationship scheme and a gauge-corrected, radar-based estimation scheme with inverse distance weighting interpolation were developed. Based on the characteristics of the two schemes, a two-step correction technique for radar quantitative precipitation estimation is proposed. To minimize the errors between radar quantitative precipitation estimates and rain gauge observations, the real-time adjustment to the Z-R relationship scheme is used to remove systematic bias in the time domain. The gauge-corrected, radar-based estimation scheme is then used to eliminate non-uniform errors in space. Based on radar data and rain gauge observations near the Huaihe River, the two-step correction technique was evaluated using two heavy-precipitation events. The results show that the proposed scheme not only improved the underestimation of rainfall but also reduced the root-mean-square error and the mean relative error of radar-rain gauge pairs.
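
    A hedged sketch of the two-step idea is given below. A single domain-wide multiplicative bias stands in for the real-time Z-R refit (an assumption; the paper adjusts the Z-R coefficients themselves), and the residual gauge-radar differences are then spread in space with inverse distance weighting. Inputs and the distance floor are illustrative.

        import numpy as np

        def two_step_correction(radar_rain, gauge_idx, gauge_rain, power=2.0):
            # radar_rain: 2-D accumulation field; gauge_idx: list of (row, col)
            # grid indices of the gauges; gauge_rain: matching gauge array.
            at_gauges = np.array([radar_rain[ij] for ij in gauge_idx])
            # Step 1: remove the systematic bias in the time domain.
            bias = gauge_rain.sum() / max(at_gauges.sum(), 1e-6)
            adjusted = radar_rain * bias
            # Step 2: IDW interpolation of the remaining gauge-radar residuals.
            residuals = gauge_rain - np.array([adjusted[ij] for ij in gauge_idx])
            corrected = adjusted.copy()
            g = np.array(gauge_idx, dtype=float)
            for ij in np.ndindex(radar_rain.shape):
                d = np.linalg.norm(g - np.array(ij, dtype=float), axis=1)
                w = 1.0 / np.maximum(d, 1.0) ** power   # floor avoids divide-by-zero
                corrected[ij] += (w * residuals).sum() / w.sum()
            return corrected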

  18. Oil Shocks and Macroeconomic Adjustment: a DSGE modeling approach for the Case of Libya, 1970–2007

    Issa Ali

    2011-01-01

    Full Text Available Libya experienced a substantial increase in oil revenue as a result of increased oil prices during the period of the late 1970s and early 1980s, and again after 2000. Recent increases in oil production and the price of oil, and their positive and negative macroeconomic impacts upon key macroeconomic variables, are of considerable contemporary importance to an oil dependent economy such as that of Libya. In this paper a dynamic macroeconomic model is developed for Libya to evaluate the effects of additional oil revenue, arising from positive oil production and oil price shocks, upon key macroeconomic variables, including the real exchange rate. It takes into consideration the impact of oil revenue upon the non-oil trade balance, foreign asset stock, physical capital stock, human capital stock, imported capital stock and non-oil production. Model simulation results indicate that additional oil revenue brings about: an increase in government revenue, increased government spending in the domestic economy, increased foreign asset stocks, increased output and wages in the non-oil sector. However, increased oil revenue may also produce adverse consequences, particularly upon the non-oil trade balance, arising from a loss of competitiveness of non-oil tradable goods induced by an appreciation of the real exchange rate and increased imports stimulated by increased real income. Model simulation results also suggest that investment stimulating policy measures by government produce the most substantive benefits for the economy.

  19. Comparisons of satellite-based models for estimating evapotranspiration fluxes

    Consoli, S.; Vanella, D.

    2014-05-01

    Two different types of remote sensing-based techniques were applied to assess the mass and energy exchange process within the soil-plant-atmosphere continuum of a typical Mediterranean crop. The first approach computes a surface energy balance using the radiometric surface temperature (Ts) for estimating the sensible heat flux (H), and obtains the evapotranspiration fluxes (ET) as a residual of the energy balance. In the paper, the performance of two different surface energy balance approaches (i.e. one-source and two-source (soil + vegetation)) was compared. The second approach uses vegetation indices (VIs), derived from the canopy reflectance, within the FAO-based soil water balance approach to estimate basal crop coefficients to adjust reference ET0 and compute crop ET. Outputs from these models were compared to fluxes of sensible (H) and latent (LE) heat directly measured by the Eddy Covariance method, through a long micrometeorological monitoring campaign carried out in the area of interest. The two-source (2S) model gave the best performance in terms of surface energy fluxes and ET rate estimation, although the overall performance of the three approaches was appreciable. The reflectance-based crop coefficient model has the advantage of not requiring any upscaling of instantaneous ET fluxes from the energy balance models to daily integrated ET. However, its results may be less sensitive for detecting crop water stress conditions than approaches based on radiometric surface temperature detection.

  20. CALORIMETER-BASED ADJUSTMENT OF MULTIPLICITY DETERMINED 240PU EFF KNOWN-A ANALYSIS FOR THE ASSAY OF PLUTONIUM

    Dubose, F.

    2012-02-21

    In nuclear material processing facilities, it is often necessary to balance the competing demands of accuracy and throughput. While passive neutron multiplicity counting is the preferred method for relatively fast assays of plutonium, the presence of low-Z impurities (fluorine, beryllium, etc.) rapidly erodes the assay precision of passive neutron counting techniques, frequently resulting in unacceptably large total measurement uncertainties. Conversely, while calorimeters are immune to these impurity effects, the long count times required for high accuracy can be a hindrance to efficiency. The higher uncertainties in passive neutron measurements of impure material are driven by the resulting large (>>2) α-values, defined as the (α,n):spontaneous fission neutron emission ratio. To counter impurity impacts for high-α materials, a known-α approach may be adopted. In this method, α is determined for a single item using a combination of gamma-ray and calorimetric measurements. Because calorimetry is based on heat output, rather than a statistical distribution of emitted neutrons, an α-value determined in this way is far more accurate than one determined from passive neutron counts. This fixed α value can be used in conventional multiplicity analysis for any plutonium-bearing item having the same chemical composition and isotopic distribution as the original. With the results of a single calorimeter/passive neutron/gamma-ray measurement, these subsequent items can then be assayed with high precision and accuracy in a relatively short time, despite the presence of impurities. A calorimeter-based known-α multiplicity analysis technique is especially useful when requiring rapid, high accuracy, high precision measurements of multiple plutonium bearing items having a common source. The technique has therefore found numerous applications at the Savannah River Site. In each case, a plutonium (or mixed U/Pu) bearing item is divided

  1. Early Parental Adjustment and Bereavement after Childhood Cancer Death

    Barrera, Maru; O'connor, Kathleen; D'Agostino, Norma Mammone; Spencer, Lynlee; Nicholas, David; Jovcevska, Vesna; Tallet, Susan; Schneiderman, Gerald

    2009-01-01

    This study comprehensively explored parental bereavement and adjustment at 6 months post-loss due to childhood cancer. Interviews were conducted with 18 mothers and 13 fathers. Interviews were transcribed verbatim and analyzed based on qualitative methodology. A model describing early parental bereavement and adaptation emerged with 3 domains:…

  2. Exploring Aspects of Coordination by Mutual Adjustment in Fluid Teams:

    Thomsen, Svend Erik

    This chapter applies an agent-based modeling approach to explore some aspects of team coordination by mutual adjustments. The teams considered here are cross functional teams, either co-located or distributed where individuals with specialized knowledge and skills work simultaneously together to...

  3. Adjusted Rasch person-fit statistics.

    Dimitrov, Dimiter M; Smith, Richard M

    2006-01-01

    Two frequently used parametric statistics of person-fit with the dichotomous Rasch model (RM) are adjusted and compared to each other and to their original counterparts in terms of power to detect aberrant response patterns in short tests (10, 20, and 30 items). Specifically, the cube root transformation of the mean square for the unweighted person-fit statistic, t, and the standardized likelihood-based person-fit statistic Z3 were adjusted by estimating the probability for correct item response through the use of symmetric functions in the dichotomous Rasch model. The results for simulated unidimensional Rasch data indicate that t and Z3 are consistently, yet not greatly, outperformed by their adjusted counterparts, denoted t* and Z3*, respectively. The four parametric statistics, t, Z3, t*, and Z3*, were also compared to a non-parametric statistic, HT, identified in recent research as outperforming numerous parametric and non-parametric person-fit statistics. The results show that HT substantially outperforms t, Z3, t*, and Z3* in detecting aberrant response patterns for 20-item and 30-item tests, but not for very short tests of 10 items. The detection power of t, Z3, t*, and Z3*, and HT at two specific levels of Type I error, .10 and .05 (i.e., up to 10% and 5% false alarm rate, respectively), is also reported. PMID:16632900

  4. Electrical Compact Modeling of Graphene Base Transistors

    Sébastien Frégonèse

    2015-11-01

    Full Text Available Following the recent development of the Graphene Base Transistor (GBT), a new electrical compact model for GBT devices is proposed. The transistor model includes the quantum capacitance model to obtain a self-consistent base potential. It also uses a versatile transfer current equation to be compatible with the different possible GBT configurations, and it accounts for high-injection conditions thanks to a transit-time-based charge model. Finally, the developed large signal model has been implemented in Verilog-A code and can be used for simulation in a standard circuit design environment such as Cadence or ADS. This model has been verified using advanced numerical simulation.

  5. A Monte Carlo-adjusted goodness-of-fit test for parametric models describing spatial point patterns

    Dao, Ngocanh

    2014-04-03

    Assessing the goodness-of-fit (GOF) for intricate parametric spatial point process models is important for many application fields. When the probability density of the statistic of the GOF test is intractable, a commonly used procedure is the Monte Carlo GOF test. Additionally, if the data comprise a single dataset, a popular version of the test plugs a parameter estimate in the hypothesized parametric model to generate data for the Monte Carlo GOF test. In this case, the test is invalid because the resulting empirical level does not reach the nominal level. In this article, we propose a method consisting of nested Monte Carlo simulations which has the following advantages: the bias of the resulting empirical level of the test is eliminated, hence the empirical levels can always reach the nominal level, and information about inhomogeneity of the data can be provided. We theoretically justify our testing procedure using Taylor expansions and demonstrate that it is correctly sized through various simulation studies. In our first data application, we discover, in agreement with Illian et al., that Phlebocarya filifolia plants near Perth, Australia, can follow a homogeneous Poisson clustered process that provides insight into the propagation mechanism of these plants. In our second data application, we find, in contrast to Diggle, that a pairwise interaction model provides a good fit to the micro-anatomy data of amacrine cells designed for analyzing the developmental growth of immature retina cells in rabbits. This article has supplementary material online. © 2013 American Statistical Association, Institute of Mathematical Statistics, and Interface Foundation of North America.
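
    To make the plug-in issue concrete, here is a minimal sketch of the usual single-dataset Monte Carlo GOF test, the biased version the article criticises; `fit`, `simulate` and `statistic` are placeholder callables. The article's remedy nests a second simulation layer inside each replicate so that the effect of re-estimating the parameter is mimicked, restoring the nominal level.

        import numpy as np

        def plugin_mc_gof(data, fit, simulate, statistic, n_rep=999, seed=0):
            """Plug-in Monte Carlo GOF p-value (biased, as discussed above)."""
            rng = np.random.default_rng(seed)
            theta_hat = fit(data)                   # estimate from the single dataset
            t_obs = statistic(data, theta_hat)
            t_sim = [statistic(simulate(theta_hat, rng), theta_hat)
                     for _ in range(n_rep)]
            # Rank-based p-value; biased because theta_hat is treated as known.
            return (1 + sum(t >= t_obs for t in t_sim)) / (n_rep + 1)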

  6. Theory of adaptive adjustment

    Weihong Huang

    2000-01-01

    Full Text Available Conventional adaptive expectation as a mechanism of stabilizing an unstable economic process is reexamined through a generalization to an adaptive adjustment framework. The generic structures of equilibria that can be stabilized through an adaptive adjustment mechanism are identified. The generalization can be applied to a broad class of discrete economic processes where the variables of interest can be adjusted or controlled directly by economic agents, such as in cobweb dynamics, Cournot games, oligopoly markets, tatonnement price adjustment, tariff games, population control through immigration, etc.
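
    The adaptive adjustment mechanism referred to here replaces a discrete process \(x_{t+1} = f(x_t)\) with a convex combination of the current state and the map (this generic form is standard; the weight may also be made state-dependent):

    \[ x_{t+1} = (1 - \lambda)\, x_t + \lambda f(x_t), \qquad \lambda \in (0, 1], \]

    so every fixed point of \(f\) is preserved, while the linearization's eigenvalues are shifted toward 1 as \(\lambda\) shrinks; any equilibrium whose Jacobian eigenvalues all have real part less than 1 can therefore be stabilized by a sufficiently small \(\lambda\).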

  7. Modeling crop water use in an irrigated maize cropland using a biophysical process-based model

    Ding, Risheng; Kang, Shaozhong; Du, Taisheng; Hao, Xinmei; Tong, Ling

    2015-10-01

    Accurate modeling of crop water use or evapotranspiration (ET) is needed to understand the hydrologic cycle and improve water use efficiency. Biophysical process-based multilayer models can capture details of the nonlinear interaction between microclimate and physiology within the canopy and thus accurately simulate ET. In this study, we extended a process-based multilayer model, ACASA, which explicitly simulates many of the nonlinear biophysical processes within each of ten crop canopy sublayers and then integrates them to represent the complete crop canopy. Building on the original ACASA model, we made several modifications, including four added modules (C4 crop photosynthesis, water stress response of stomatal conductance, crop morphological changes, and heterogeneous root water uptake) and two adjusted calculation procedures (soil evaporation resistance and hydraulic characteristic parameters). Key processes were parameterized for the improved ACASA model using observations. The simulated canopy ET was validated using eddy covariance measurements over an irrigated maize field in an arid inland region of northwest China. The improved ACASA model predicted maize ET at both half-hourly and daily time-scales. The improved model also predicted the reduction in maize ET under conditions of soil water deficit. Soil evaporation, an important component of maize ET, was also satisfactorily simulated in the improved model. Compared to the original ACASA model, the improved model yielded an improved estimation of maize ET. Using the improved model, we found that maize ET was nonlinearly affected by changes in leaf area index and photosynthetic capacity through canopy conductance. In general, the improved ACASA model, a biophysical process-based multilayer model, can be used to diagnose and predict crop ET, and offers some insights into the nonlinear interactions between the crop canopy and the ambient environment.

  8. EPR-based material modelling of soils

    Faramarzi, Asaad; Alani, Amir M.

    2013-04-01

    In the past few decades, as a result of the rapid developments in computational software and hardware, alternative computer aided pattern recognition approaches have been introduced to modelling many engineering problems, including constitutive modelling of materials. The main idea behind pattern recognition systems is that they learn adaptively from experience and extract various discriminants, each appropriate for its purpose. In this work an approach is presented for developing material models for soils based on evolutionary polynomial regression (EPR). EPR is a recently developed hybrid data mining technique that searches for structured mathematical equations (representing the behaviour of a system) using genetic algorithm and the least squares method. Stress-strain data from triaxial tests are used to train and develop EPR-based material models for soil. The developed models are compared with some of the well-known conventional material models and it is shown that EPR-based models can provide a better prediction for the behaviour of soils. The main benefits of using EPR-based material models are that they provide a unified approach to constitutive modelling of all materials (i.e., all aspects of material behaviour can be implemented within a unified environment of an EPR model), and they do not require any arbitrary choice of constitutive (mathematical) models. In EPR-based material models there are no material parameters to be identified. As the model is trained directly on experimental data, EPR-based material models are the shortest route from experimental research (data) to numerical modelling. Another advantage of EPR-based constitutive models is that as more experimental data become available, the quality of the EPR prediction can be improved by learning from the additional data; therefore, the EPR model can become more effective and robust. The developed EPR-based material models can be incorporated in finite element (FE) analysis.

  9. Model-based DSL frameworks

    Kurtev, I.; Bézivin, J.; Jouault, F.; Valduriez, P.

    2006-01-01

    More than five years ago, the OMG proposed the Model Driven Architecture (MDA™) approach to deal with the separation of platform dependent and independent aspects in information systems. Since then, the initial idea of MDA evolved and Model Driven Engineering (MDE) is being increasingly promoted to

  10. Effects of the Oregon model of Parent Management Training (PMTO) on marital adjustment in new stepfamilies: a randomized trial.

    Bullard, Lisha; Wachlarowicz, Marissa; DeLeeuw, Jamie; Snyder, James; Low, Sabina; Forgatch, Marion; DeGarmo, David

    2010-08-01

    Effects of intervention with the Oregon model of Parent Management Training (PMTO) on marital relationship processes and marital satisfaction in recently married biological mother and stepfather couples were examined. Sixty-seven of the 110 participating families were randomly assigned to PMTO, and 43 families to a non-intervention condition. Intervention had reliable positive indirect effects on marital relationship processes 24 months after baseline which in turn were associated with higher marital satisfaction. These indirect effects were mediated by the impact of PMTO on parenting practices 6 months after baseline. Enhanced parenting practices resulting from PMTO prevented escalation of subsequent child behavior problems at school. Consistent with a family systems perspective and research on challenges to marital quality in stepfamilies, improved co-parenting practices were associated with enhanced marital relationship skills and marital satisfaction as well as with prevention of child behavior problems. PMID:20731495

  11. Non-Nested Models and the Likelihood Ratio Statistic: A Comparison of Simulation and Bootstrap Based Tests

    Kapetanios, George; Weeks, Melvyn J.

    2003-01-01

    We consider an alternative use of simulation in the context of using the Likelihood-Ratio statistic to test non-nested models. To date, simulation has been used to estimate the Kullback-Leibler measure of closeness between two densities, which in turn 'mean adjusts' the Likelihood-Ratio statistic. Given that this adjustment is still based upon asymptotic arguments, an alternative procedure is to utilise bootstrap procedures to construct the empirical density. To our knowledge this study re...

  12. The Culture Based Model: Constructing a Model of Culture

    Young, Patricia A.

    2008-01-01

    Recent trends reveal that models of culture aid in mapping the design and analysis of information and communication technologies. Therefore, models of culture are powerful tools to guide the building of instructional products and services. This research examines the construction of the culture based model (CBM), a model of culture that evolved…

  13. Finite mixture models and model-based clustering

    Volodymyr Melnykov

    2010-01-01

    Full Text Available Finite mixture models have a long history in statistics, having been used to model population heterogeneity, generalize distributional assumptions, and, lately, to provide a convenient yet formal framework for clustering and classification. This paper provides a detailed review of mixture models and model-based clustering. Recent trends as well as open problems in the area are also discussed.
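
    For reference, the finite mixture density at the heart of model-based clustering is:

    \[ f(x) = \sum_{k=1}^{K} \pi_k f_k(x \mid \theta_k), \qquad \pi_k \ge 0, \quad \sum_{k=1}^{K} \pi_k = 1, \]

    and model-based clustering assigns an observation to the component with the largest posterior probability \(\pi_k f_k(x \mid \theta_k) / f(x)\), typically after fitting the parameters with the EM algorithm.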

  14. MCCB warm adjustment testing concept

    Erdei, Z.; Horgos, M.; Grib, A.; Preradović, D. M.; Rodic, V.

    2016-08-01

    This paper presents an experimental investigation into the behavior of the thermal protection device of an MCCB (Molded Case Circuit Breaker). One of the main functions of the circuit breaker is to protect the circuits in which it is mounted against possible overloads. The tripping mechanism for the overload protection is based on a bimetal movement during a specific time frame. This movement needs to be controlled, and as a solution we chose the warm adjustment concept, which is meant to improve process capability control and the final output. The warm adjustment device design creates a unique adjustment of the bimetal position for each individual breaker, determined while the testing current flows through a phase that needs to trip in a certain amount of time. This time is predetermined by calculation for all standard amperage ratings and complies with the IEC 60497 standard requirements.

  15. Probabilistic Model-Based Safety Analysis

    Güdemann, Matthias; 10.4204/EPTCS.28.8

    2010-01-01

    Model-based safety analysis approaches aim at finding critical failure combinations by analysis of models of the whole system (i.e. software, hardware, failure modes and environment). The advantage of these methods compared to traditional approaches is that the analysis of the whole system gives more precise results. Only a few model-based approaches have been applied to answer quantitative questions in safety analysis, often limited to analysis of specific failure propagation models, limited types of failure modes or without system dynamics and behavior, as direct quantitative analysis uses large amounts of computing resources. New achievements in the domain of (probabilistic) model-checking now allow for overcoming this problem. This paper shows how functional models based on synchronous parallel semantics, which can be used for system design, implementation and qualitative safety analysis, can be directly re-used for (model-based) quantitative safety analysis. Accurate modeling of different types of proba...

  16. Design of a cost-effective, hemodynamically adjustable model for resuscitative endovascular balloon occlusion of the aorta (REBOA) simulation.

    Keller, Benjamin A; Salcedo, Edgardo S; Williams, Timothy K; Neff, Lucas P; Carden, Anthony J; Li, Yiran; Gotlib, Oren; Tran, Nam K; Galante, Joseph M

    2016-09-01

    Resuscitative endovascular balloon occlusion of the aorta (REBOA) is an adjunct technique for salvaging patients with noncompressible torso hemorrhage. Current REBOA training paradigms require large animals, virtual reality simulators, or human cadavers for acquisition of skills. These training strategies are expensive and resource intensive, which may prevent widespread dissemination of REBOA. We have developed a low-cost, near-physiologic, pulsatile REBOA simulator by connecting an anatomic vascular circuit constructed out of latex and polyvinyl chloride tubing to a commercially available pump. This pulsatile simulator is capable of generating cardiac outputs ranging from 1.7 to 6.8 L/min with corresponding arterial blood pressures of 54 to 226/14 to 121 mmHg. The simulator accommodates a 12 French introducer sheath and a CODA balloon catheter. Upon balloon inflation, the arterial waveform distal to the occlusion flattens, distal pulsation within the simulator is lost, and systolic blood pressures proximal to the balloon catheter increase by up to 62 mmHg. Further development and validation of this simulator will allow for refinement, reduction, and replacement of large animal models, costly virtual reality simulators, and perfused cadavers for training purposes. This will ultimately facilitate the low-cost, high-fidelity REBOA simulation needed for the widespread dissemination of this life-saving technique. PMID:27270855

  17. P-Graph-based Workflow Modelling

    József Tick

    2007-01-01

    Workflow modelling has been successfully introduced and implemented in several application fields. Therefore, its significance has increased dramatically. Several workflow modelling techniques have been published so far, out of which quite a number are widespread applications. For instance the Petri-Net-based modelling has become popular partly due to its graphical design and partly due to its correct mathematical background. The workflow modelling based on Unified Modelling Language is important...

  18. Stochastic Modelling for Condition Based Maintenance

    Han, Zehan

    2015-01-01

    This Master's thesis covers almost all aspects of Condition Based Maintenance (CBM). All objectives in Chapter 1 are met. The thesis is mainly comprised of three parts. First part introduces the world of CBM to readers. This part presents data acquisition, data processing and databases, which are the foundation to CBM. Then it highlights models which are divided into physics based models, data-driven models and hybrid models, for diagnostic and prognostic use. Three promising diagnostic and p...

  19. Base Flow Model Validation Project

    National Aeronautics and Space Administration — The innovation is the systematic "building-block" validation of CFD/turbulence models employing a GUI driven CFD code (RPFM) and existing as well as new data sets...

  20. Firm Based Trade Models and Turkish Economy

    Nilüfer ARGIN

    2015-12-01

    Full Text Available Among all international trade models, only firm-based trade models explain firms' actions and behavior in world trade. Firm-based trade models focus on the trade behavior of the individual firms that actually conduct intra-industry trade, and they can properly explain the globalization process. These approaches also cover multinational corporations, supply chains and outsourcing. Our paper aims to explain and analyze Turkish exports within the context of firm-based trade models. We use UNCTAD data on exports by SITC Rev 3 categorization to explain total exports and 255 products, and calculate the intensive and extensive margins of Turkish firms.

  1. ADJUSTABLE CHIP HOLDER

    2009-01-01

    An adjustable microchip holder for holding a microchip is provided having a plurality of displaceable interconnection pads for connecting the connection holes of a microchip with one or more external devices or equipment. The adjustable microchip holder can fit different sizes of microchips with ...

  2. Distributed Prognostics Based on Structural Model Decomposition

    National Aeronautics and Space Administration — Within systems health management, prognostics focuses on predicting the remaining useful life of a system. In the model-based prognostics paradigm, physics-based...

  3. Lévy-based growth models

    Jónsdóttir, Kristjana Ýr; Schmiegel, Jürgen; Jensen, Eva Bjørn Vedel

    2008-01-01

    In the present paper, we give a condensed review, for the nonspecialist reader, of a new modelling framework for spatio-temporal processes, based on Lévy theory. We show the potential of the approach in stochastic geometry and spatial statistics by studying Lévy-based growth modelling of planar objects. The growth models considered are spatio-temporal stochastic processes on the circle. As a by-product, flexible new models for space–time covariance functions on the circle are provided. An application of the Lévy-based growth models to tumour growth is discussed.

  4. Self-Adjusting Stack Machines

    Hammer, Matthew A; Chen, Yan; Acar, Umut A

    2011-01-01

    Self-adjusting computation offers a language-based approach to writing programs that automatically respond to dynamically changing data. Recent work made significant progress in developing sound semantics and associated implementations of self-adjusting computation for high-level, functional languages. These techniques, however, do not address issues that arise for low-level languages, i.e., stack-based imperative languages that lack strong type systems and automatic memory management. In this paper, we describe techniques for self-adjusting computation which are suitable for low-level languages. Necessarily, we take a different approach than previous work: instead of starting with a high-level language with additional primitives to support self-adjusting computation, we start with a low-level intermediate language, whose semantics is given by a stack-based abstract machine. We prove that this semantics is sound: it always updates computations in a way that is consistent with full reevaluation. We give a comp...

  5. Traceability in Model-Based Testing

    Mathew George

    2012-11-01

    Full Text Available The growing complexities of software and the demand for shorter time to market are two important challenges that face today’s IT industry. These challenges demand the increase of both productivity and quality of software. Model-based testing is a promising technique for meeting these challenges. Traceability modeling is a key issue and challenge in model-based testing. Relationships between the different models will help to navigate from one model to another, and trace back to the respective requirements and the design model when the test fails. In this paper, we present an approach for bridging the gaps between the different models in model-based testing. We propose relation definition markup language (RDML for defining the relationships between models.

  6. Matching Aerial Images to 3d Building Models Based on Context-Based Geometric Hashing

    Jung, J.; Bang, K.; Sohn, G.; Armenakis, C.

    2016-06-01

    In this paper, a new model-to-image framework to automatically align a single airborne image with existing 3D building models using geometric hashing is proposed. As a prerequisite process for various applications such as data fusion, object tracking, change detection and texture mapping, the proposed registration method is used for determining accurate exterior orientation parameters (EOPs) of a single image. This model-to-image matching process consists of three steps: 1) feature extraction, 2) similarity measure and matching, and 3) adjustment of EOPs of a single image. For feature extraction, we proposed two types of matching cues: edged corner points representing the saliency of building corner points with associated edges, and contextual relations among the edged corner points within an individual roof. These matching features are extracted from both the 3D buildings and a single airborne image. A set of matched corners is found with a given proximity measure through geometric hashing, and optimal matches are then finally determined by maximizing the matching cost encoding contextual similarity between matching candidates. Final matched corners are used for adjusting the EOPs of the single airborne image by the least squares method based on collinearity equations. The results show that acceptable accuracy of the single image's EOPs is achievable with the proposed registration approach, as an alternative to the labour-intensive manual registration process.

  7. MATCHING AERIAL IMAGES TO 3D BUILDING MODELS BASED ON CONTEXT-BASED GEOMETRIC HASHING

    J. Jung

    2016-06-01

    Full Text Available In this paper, a new model-to-image framework to automatically align a single airborne image with existing 3D building models using geometric hashing is proposed. As a prerequisite process for various applications such as data fusion, object tracking, change detection and texture mapping, the proposed registration method is used for determining accurate exterior orientation parameters (EOPs) of a single image. This model-to-image matching process consists of three steps: 1) feature extraction, 2) similarity measure and matching, and 3) adjustment of EOPs of a single image. For feature extraction, we proposed two types of matching cues: edged corner points representing the saliency of building corner points with associated edges, and contextual relations among the edged corner points within an individual roof. These matching features are extracted from both the 3D buildings and a single airborne image. A set of matched corners is found with a given proximity measure through geometric hashing, and optimal matches are then finally determined by maximizing the matching cost encoding contextual similarity between matching candidates. Final matched corners are used for adjusting the EOPs of the single airborne image by the least squares method based on collinearity equations. The results show that acceptable accuracy of the single image's EOPs is achievable with the proposed registration approach, as an alternative to the labour-intensive manual registration process.
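
    The least squares adjustment in step 3 rests on the collinearity equations; in one common notation (rotation-matrix conventions differ between texts, so this is quoted as background rather than as the paper's exact formulation):

    \[ x = x_0 - f\,\frac{r_{11}(X - X_s) + r_{12}(Y - Y_s) + r_{13}(Z - Z_s)}{r_{31}(X - X_s) + r_{32}(Y - Y_s) + r_{33}(Z - Z_s)}, \qquad y = y_0 - f\,\frac{r_{21}(X - X_s) + r_{22}(Y - Y_s) + r_{23}(Z - Z_s)}{r_{31}(X - X_s) + r_{32}(Y - Y_s) + r_{33}(Z - Z_s)}, \]

    where \((x, y)\) are the image coordinates of a matched corner, \((X, Y, Z)\) its object-space coordinates, \((X_s, Y_s, Z_s)\) the projection centre, \(f\) the focal length, and \(r_{ij}\) the elements of the rotation matrix built from the three attitude angles; the six EOPs are solved iteratively from the linearized equations.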

  8. Enhancing Global Land Surface Hydrology Estimates from the NASA MERRA Reanalysis Using Precipitation Observations and Model Parameter Adjustments

    Reichle, Rolf; Koster, Randal; DeLannoy, Gabrielle; Forman, Barton; Liu, Qing; Mahanama, Sarith; Toure, Ally

    2011-01-01

    The Modern-Era Retrospective analysis for Research and Applications (MERRA) is a state-of-the-art reanalysis that provides, in addition to atmospheric fields, global estimates of soil moisture, latent heat flux, snow, and runoff for 1979-present. This study introduces a supplemental and improved set of land surface hydrological fields ('MERRA-Land') generated by replaying a revised version of the land component of the MERRA system. Specifically, the MERRA-Land estimates benefit from corrections to the precipitation forcing with the Global Precipitation Climatology Project pentad product (version 2.1) and from revised parameters in the rainfall interception model, changes that effectively correct for known limitations in the MERRA land surface meteorological forcings. The skill (defined as the correlation coefficient of the anomaly time series) in land surface hydrological fields from MERRA and MERRA-Land is assessed here against observations and compared to the skill of the state-of-the-art ERA-Interim reanalysis. MERRA-Land and ERA-Interim root zone soil moisture skills (against in situ observations at 85 US stations) are comparable and significantly greater than that of MERRA. Throughout the northern hemisphere, MERRA and MERRA-Land agree reasonably well with in situ snow depth measurements (from 583 stations) and with snow water equivalent from an independent analysis. Runoff skill (against naturalized stream flow observations from 15 basins in the western US) of MERRA and MERRA-Land is typically higher than that of ERA-Interim. With a few exceptions, the MERRA-Land data appear more accurate than the original MERRA estimates and are thus recommended for those interested in using MERRA output for land surface hydrological studies.
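
    The skill metric defined above is easy to state in code. The sketch below is an illustration only: it assumes a monthly series whose length is a whole number of years, and correlates the two series after removing each one's mean seasonal cycle.

        import numpy as np

        def anomaly_skill(model, obs, period=12):
            model, obs = np.asarray(model, float), np.asarray(obs, float)
            def anomalies(x):
                # Mean seasonal cycle, then departures from it.
                clim = np.array([x[m::period].mean() for m in range(period)])
                return x - np.tile(clim, len(x) // period)
            return np.corrcoef(anomalies(model), anomalies(obs))[0, 1]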

  9. A train dispatching model based on fuzzy passenger demand forecasting during holidays

    Fei Dou Dou; Jie Xu; Li Wang; Limin Jia

    2013-01-01

    Abstract: Purpose: Train dispatching is a crucial issue in train operation adjustment when passenger flow surges. During holidays, train dispatching aims to meet passenger demand to the greatest extent while ensuring the safety, speed and punctuality of train operation. In this paper, a fuzzy passenger demand forecasting model is proposed, and then a train dispatching optimization model is established based on passenger demand so as to evacuate stranded passengers effectively during...

  10. The Retest of Capital Structure's Adjustment Speed Based on Exclusion of Mechanical Effects

    Gu, Naikang; Deng, Jianlan

    2014-01-01

    Research on the speed of capital structure adjustment has not yet reached a unanimous conclusion, partly because a "mechanical effect" may be present in the conventional partial adjustment model. By decomposing the accounting identity of the asset-liability ratio, this paper eliminates the mechanical effect caused by changes in asset size. Based on panel data for listed companies on the Shenzhen and Shanghai Stock Exchanges from 1998 to 2011 that issue A-shares only, the paper uses a two-stage estimation method to retest the adjustment speed and mean reversion of capital structure induced by corporates' active financing behavior. The empirical results show that the adjustment speed of capital structure is about 13% under the traditional estimation, which uses the change of the asset-liability ratio as the dependent variable; after excluding the mechanical effect caused by changes in assets, however, the adjustment speed drops significantly to around 5%. The results indicate that, once such mechanical effects are excluded, corporate financing behavior still induces mean reversion of the capital structure, but the speed of adjustment is so low that the trend of mean reversion is not obvious.
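
    The conventional partial adjustment model that this literature starts from takes the standard form:

    \[ Lev_{i,t} - Lev_{i,t-1} = \delta \, \left( Lev^{*}_{i,t} - Lev_{i,t-1} \right) + \varepsilon_{i,t}, \]

    where \(Lev^{*}_{i,t}\) is the estimated target leverage and \(\delta\) is the adjustment speed. Because book leverage is debt divided by total assets, a change in assets mechanically moves \(Lev_{i,t}\) even when no financing decision is taken at all; that is the mechanical effect the decomposition strips out before \(\delta\) is re-estimated.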

  11. A "signal on" protection-displacement-hybridization-based electrochemical hepatitis B virus gene sequence sensor with high sensitivity and peculiar adjustable specificity.

    Li, Fengqin; Xu, Yanmei; Yu, Xiang; Yu, Zhigang; He, Xunjun; Ji, Hongrui; Dong, Jinghao; Song, Yongbin; Yan, Hong; Zhang, Guiling

    2016-08-15

    One "signal on" electrochemical sensing strategy was constructed for the detection of a specific hepatitis B virus (HBV) gene sequence based on the protection-displacement-hybridization-based (PDHB) signaling mechanism. This sensing system is composed of three probes, one capturing probe (CP) and one assistant probe (AP) which are co-immobilized on the Au electrode surface, and one 3-methylene blue (MB) modified signaling probe (SP) free in the detection solution. One duplex are formed between AP and SP with the target, a specific HBV gene sequence, hybridizing with CP. This structure can drive the MB labels close to the electrode surface, thereby producing a large detection current. Two electrochemical testing techniques, alternating current voltammetry (ACV) and cyclic voltammetry (CV), were used for characterizing the sensor. Under the optimized conditions, the proposed sensor exhibits a high sensitivity with the detection limit of ∼5fM for the target. When used for the discrimination of point mutation, the sensor also features an outstanding ability and its peculiar high adjustability. PMID:27085953

  12. Asian immigrant settlement and adjustment in Australia.

    Khoo, S; Kee, P; Dang, T; Shu, J

    1994-01-01

    "This article provides a broad assessment of the settlement and adjustment of people born in the many countries of Asia who are resident in Australia, based on recently available data from the 1991 Census of Population and Housing. It examines some indicators of economic adjustment such as performance in the labor market, and some indicators of social adjustment, such as acquisition of English language proficiency." PMID:12289777

  13. Model-based Utility Functions

    Hibbard, Bill

    2011-01-01

    At the recent AGI-11 Conference Orseau and Ring, and Dewey, described problems, including self-delusion, with the behavior of AIXI agents using various definitions of utility functions. An agent's utility function is defined in terms of the agent's history of interactions with its environment. This paper argues that the behavior problems can be avoided by formulating the utility function in two steps: 1) inferring a model of the environment from interactions, and 2) computing utility as a function of the environment model. The paper also argues that agents will not choose to modify their utility functions.

  14. PCA-based lung motion model

    Li, Ruijiang; Jia, Xun; Zhao, Tianyu; Lamb, James; Yang, Deshan; Low, Daniel A; Jiang, Steve B

    2010-01-01

    Organ motion induced by respiration may cause clinically significant targeting errors and greatly degrade the effectiveness of conformal radiotherapy. It is therefore crucial to be able to model respiratory motion accurately. A recently proposed lung motion model based on principal component analysis (PCA) has been shown to be promising on a few patients. However, there is still a need to understand the underlying reason why it works. In this paper, we present a much deeper and more detailed analysis of the PCA-based lung motion model. We provide the theoretical justification of the effectiveness of PCA in modeling lung motion. We also prove that under certain conditions, the PCA motion model is equivalent to the 5D motion model, which is based on the physiology and anatomy of the lung. The modeling power of the PCA model was tested on clinical data and the average 3D error was found to be below 1 mm.
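
    A hedged sketch of the basic construction is given below: displacement vector fields (DVFs) from several breathing phases are stacked and each field is approximated by the mean field plus a few principal components with scalar coefficients. Sizes and data here are placeholders, not clinical values.

        import numpy as np

        rng = np.random.default_rng(0)
        dvfs = rng.standard_normal((10, 3 * 5000))    # 10 phases x (3 coords * N voxels)

        mean = dvfs.mean(axis=0)
        U, s, Vt = np.linalg.svd(dvfs - mean, full_matrices=False)
        k = 2                                          # 2-3 components are typical

        def motion_field(coeffs):
            """DVF for a length-k vector of PCA coefficients."""
            return mean + coeffs @ Vt[:k]

        scores = (dvfs - mean) @ Vt[:k].T              # coefficients of training phases
        err = np.abs(motion_field(scores[0]) - dvfs[0]).max()
        print(f"max reconstruction error with {k} components: {err:.3f}")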

  15. An acoustical model based monitoring network

    Wessels, P.W.; Basten, T.G.H.; Eerden, F.J.M. van der

    2010-01-01

    In this paper the approach for an acoustical model based monitoring network is demonstrated. This network is capable of reconstructing a noise map, based on the combination of measured sound levels and an acoustic model of the area. By pre-calculating the sound attenuation within the network the noi

  16. Model Validation in Ontology Based Transformations

    Jesús M. Almendros-Jiménez

    2012-10-01

    Full Text Available Model Driven Engineering (MDE) is an emerging approach to software engineering. MDE emphasizes the construction of models from which the implementation should be derived by applying model transformations. The Ontology Definition Meta-model (ODM) has been proposed as a profile for UML models of the Web Ontology Language (OWL). In this context, transformations of UML models can be mapped into ODM/OWL transformations. On the other hand, model validation is a crucial task in model transformation. Meta-modeling permits giving a syntactic structure to source and target models. However, semantic requirements have to be imposed on source and target models. A given transformation will be sound when source and target models fulfill the syntactic and semantic requirements. In this paper, we present an approach for model validation in ODM based transformations. Adopting a logic programming based transformational approach, we will show how it is possible to transform and validate models. Properties to be validated range from structural and semantic requirements of models (pre and post conditions) to properties of the transformation (invariants). The approach has been applied to a well-known example of model transformation: the Entity-Relationship (ER) to Relational Model (RM) transformation.

  17. Agent-based pedestrian modelling

    Batty, Michael

    2003-01-01

    When the focus of interest in geographical systems is at the very fine scale, at the level of streets and buildings for example, movement becomes central to simulations of how spatial activities are used and develop. Recent advances in computing power and the acquisition of fine scale digital data now mean that we are able to attempt to understand and predict such phenomena with the focus in spatial modelling changing to dynamic simulations of the individual and collective beha...

  18. An Agent Based Classification Model

    Gu, Feng; Aickelin, Uwe; Greensmith, Julie

    2009-01-01

    The major function of this model is to access the UCI Wisconsin Breast Cancer data-set [1] and classify the data items into two categories, which are normal and anomalous. This kind of classification can be referred to as anomaly detection, which discriminates anomalous behaviour from normal behaviour in computer systems. One popular solution for anomaly detection is Artificial Immune Systems (AIS). AIS are adaptive systems inspired by theoretical immunology and observed immune functions, p...

  19. Model Based Control of Solidification

    Furenes, Beathe

    2009-01-01

    The objective of this thesis is to develop models for use in the control of a solidification process. Solidification is the phase change from liquid to solid, and takes place in many important processes ranging from production engineering to solid-state physics. Often during solidification, undesired effects like e.g. variation of composition, microstructure, etc. occur. The solidification structure and its associated defects often persist throughout the subsequent operations, and thus good co...

  20. Integration of Simulink Models with Component-based Software Models

    Marian, Nicolae

    2008-01-01

    Model based development aims to facilitate the development of embedded control systems by emphasizing the separation of the design level from the implementation level. Model based design involves the use of multiple models that represent different views of a system, having different semantics of ... constructs and process flow, then software code is generated. A Simulink model is a representation of the design or implementation of a physical system that satisfies a set of requirements. A software component-based system aims to organize system architecture and behaviour as a means of computation ... constraints. COMDES (Component-based Design of Software for Distributed Embedded Systems) is such a component-based system framework developed by the software engineering group of Mads Clausen Institute for Product Innovation (MCI), University of Southern Denmark. Once specified, the software model has to be ...

  1. A Multiple Model Approach to Modeling Based on LPF Algorithm

    2001-01-01

    Input-output data fitting methods are often used for unknown-structure nonlinear system modeling. Based on model-on-demand tactics, a multiple model approach to modeling for nonlinear systems is presented. The basic idea is to find, from vast historical system input-output data sets, some data sets matching the current working point, and then to develop a local model using the Local Polynomial Fitting (LPF) algorithm. With the change of working points, multiple local models are built, which realize exact modeling for the global system. Compared to other methods, the simulation results show good performance: the approach is simple, effective and reliable in its estimation.
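
    A rough sketch of the model-on-demand step follows: at each query point, the nearest historical input-output pairs are weighted with a tricube kernel and a local linear model is fitted on demand. The data layout and bandwidth choice are assumptions for the example, not the paper's exact algorithm.

        import numpy as np

        def lpf_predict(X, y, x_query, k=50):
            d = np.linalg.norm(X - x_query, axis=1)
            idx = np.argsort(d)[:k]                   # data matching the working point
            h = d[idx].max() + 1e-12                  # adaptive bandwidth
            w = (1.0 - (d[idx] / h) ** 3) ** 3        # tricube kernel weights
            # Weighted least squares for a local linear model centred at x_query.
            A = np.hstack([np.ones((k, 1)), X[idx] - x_query])
            sw = np.sqrt(w)
            beta, *_ = np.linalg.lstsq(A * sw[:, None], y[idx] * sw, rcond=None)
            return beta[0]                            # local model value at x_query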

  2. Model-based internal wave processing

    Candy, J.V.; Chambers, D.H.

    1995-06-09

    A model-based approach is proposed to solve the oceanic internal wave signal processing problem that is based on state-space representations of the normal-mode vertical velocity and plane wave horizontal velocity propagation models. It is shown that these representations can be utilized to spatially propagate the modal (depth) vertical velocity functions given the basic parameters (wave numbers, Brunt-Vaisala frequency profile etc.) developed from the solution of the associated boundary value problem as well as the horizontal velocity components. Based on this framework, investigations are made of model-based solutions to the signal enhancement problem for internal waves.

  3. Tools for model-based security engineering: models vs. code

    Jürjens, Jan; Yu, Yijun

    2007-01-01

    We present tools to support model-based security engineering on both the model and the code level. In the approach supported by these tools, one firstly specifies the security-critical part of the system (e.g. a crypto protocol) using the UML security extension UMLsec. The models are automatically verified for security properties using automated theorem provers. These are implemented within a framework that supports implementing verification routines, based on XMI output of the diagrams from ...

  4. Modelling a Peroxidase-based Optical Biosensor

    Juozas Kulys; Evelina Gaidamauskait˙e; Romas Baronas

    2007-01-01

    The response of a peroxidase-based optical biosensor was modelled digitally. A mathematical model of the optical biosensor is based on a system of non-linear reaction-diffusion equations. The modelled biosensor comprises two compartments, an enzyme layer and an outer diffusion layer. The digital simulation was carried out using the finite difference technique. The influence of the substrate concentration as well as of the thickness of both the enzyme and diffusion layers on the biosensor respons...
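
    For a feel of the numerical scheme, a minimal 1-D explicit finite-difference sketch follows; the two-layer geometry is collapsed into an enzyme region plus a diffusion region with a single diffusion coefficient, and all constants and boundary conditions are illustrative assumptions rather than the paper's values.

      # Hypothetical 1-D substrate profile: diffusion everywhere, plus
      # Michaelis-Menten consumption inside the enzyme layer.
      import numpy as np

      def simulate(n=100, n_enzyme=50, dx=1e-6, dt=1e-4, steps=20000,
                   D=3e-10, Vmax=1e-3, Km=1e-2, s_bulk=1.0):
          s = np.zeros(n)
          s[-1] = s_bulk                      # bulk substrate at the outer boundary
          for _ in range(steps):
              lap = np.zeros(n)
              lap[1:-1] = (s[2:] - 2 * s[1:-1] + s[:-2]) / dx ** 2
              react = np.zeros(n)
              react[:n_enzyme] = Vmax * s[:n_enzyme] / (Km + s[:n_enzyme])
              s[1:-1] += dt * (D * lap[1:-1] - react[1:-1])
              s[0] = s[1]                     # zero-flux wall on the detector side
              s[-1] = s_bulk
          return s                            # steady-ish concentration profile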

  5. Navigation based on symbolic space models

    Baras, Karolina; Moreira, Adriano; Meneses, Filipe

    2010-01-01

    Existing navigation systems are very appropriate for car navigation, but lack support for convenient pedestrian navigation and cannot be used indoors due to GPS limitations. In addition, the creation and maintenance of the required models are costly and time consuming, and are usually based on proprietary data structures. In this paper we describe a navigation system based on a human-inspired symbolic space model. We argue that symbolic space models are much easier...

  6. P-Graph-based Workflow Modelling

    József Tick

    2007-03-01

    Full Text Available Workflow modelling has been successfully introduced and implemented in several application fields. Therefore, its significance has increased dramatically. Several workflow modelling techniques have been published so far, out of which quite a number are widespread applications. For instance, Petri-Net-based modelling has become popular partly due to its graphical design and partly due to its correct mathematical background. Workflow modelling based on the Unified Modelling Language is important because of its practical usage. This paper introduces and examines the workflow modelling technique based on the Process-graph as a possible new solution next to the already existing modelling techniques.

  7. IP Network Management Model Based on NGOSS

    ZHANG Jin-yu; LI Hong-hui; LIU Feng

    2004-01-01

    This paper addresses a management model for IP networks based on the Next Generation Operation Support System (NGOSS). It bases network management on all the operational actions of the ISP, and provides end-to-end QoS to user services through whole-path Service Level Agreement (SLA) management. Based on web and coordination technology, the paper gives an implementation architecture for this model.

  8. [Improved Response to 5-FU Using Dose Adjustment and Elastomeric Pump Selection Based on Monitoring of the 5-FU Level--A Case Report].

    Muneoka, Katsuki; Shirai, Yoshio; Kanda, Junkichi; Sasaki, Masataka; Wakai, Toshifumi; Wakabayashi, Hiroyuki

    2015-10-01

    A 61-year-old man with unresectable multiple hepatic metastases after resection of a sigmoid colon carcinoma was treated with irinotecan and infused 5-fluorouracil (5-FU) plus Leucovorin (FOLFIRI). Since the levels of tumor markers increased, the 5-FU dose was increased from 2,700 to 3,000 mg/m2 using a Jackson-type pump and an extended infusion time of 53 hours. The blood level of 5-FU was 507 ng/mL 16 hours after starting the infusion. The pump was then changed to a bottle-type pump with the same dose of 3,000 mg/m2. At 16 hours, the 5-FU level was 964.5 ng/mL. The areas under the concentration vs. time curve (AUC) were 21 and 44 mg・h/L for the Jackson- and bottle-type pumps, respectively. Owing to the development of Grade 3 stomatitis and hand-foot syndrome, the 5-FU dose was reduced to 2,700 mg/m2 with a bottle-type pump. The AUC decreased to 27 mg・h/L, but the liver metastases were reduced and the adverse effects subsided to Grade 1. This case shows that individual dose adjustment of 5-FU to the appropriate AUC, based on pharmacokinetic monitoring of the blood 5-FU level, can improve the response, reduce adverse effects, and have a clinical benefit. PMID:26489552
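
    For orientation only, a back-of-the-envelope sketch of the AUC arithmetic follows; it assumes the measured level approximates a steady-state concentration held for the whole infusion, which is a simplification of real 5-FU pharmacokinetic monitoring and will not exactly reproduce the values reported above.

      # Hypothetical steady-state approximation: AUC ~ Css x infusion time.
      def auc_estimate(level_ng_per_ml, infusion_hours):
          css_mg_per_l = level_ng_per_ml / 1000.0   # ng/mL -> mg/L
          return css_mg_per_l * infusion_hours      # mg*h/L

      # e.g. a level of 964.5 ng/mL held over an assumed ~46 h infusion gives
      # roughly 44 mg*h/L, the order of magnitude reported for the bottle pump.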

  9. Gradient-based model calibration with proxy-model assistance

    Burrows, Wesley; Doherty, John

    2016-02-01

    Use of a proxy model in gradient-based calibration and uncertainty analysis of a complex groundwater model with long run times and problematic numerical behaviour is described. The methodology is general and can be used with models of all types. The proxy model is based on a series of analytical functions that link all model outputs used in the calibration process to all parameters requiring estimation. In enforcing history-matching constraints during the calibration and post-calibration uncertainty analysis processes, the proxy model is run for the purposes of populating the Jacobian matrix, while the original model is run when testing parameter upgrades; the latter process is readily parallelized. Use of a proxy model in this fashion dramatically reduces the computational burden of complex model calibration and uncertainty analysis. At the same time, the effect of model numerical misbehaviour on the calculation of local gradients is mitigated, thus allowing access to the benefits of gradient-based analysis where lack of integrity in finite-difference derivative calculation would otherwise have impeded such access. Construction of a proxy model, its subsequent use in calibration of a complex model, and its use in analysing the uncertainties of predictions made by that model, are implemented in the PEST suite.
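
    A conceptual sketch of this division of labour is given below: the cheap proxy supplies finite-difference Jacobians, while the expensive model only evaluates trial upgrades. The functions run_model and run_proxy are placeholders, and the plain Gauss-Newton step is an illustrative simplification of what PEST actually does.

      # Hypothetical proxy-assisted Gauss-Newton calibration loop.
      import numpy as np

      def calibrate(p, observations, run_model, run_proxy, iters=10, step=1e-3):
          for _ in range(iters):
              r = run_model(p) - observations             # residuals (expensive model)
              J = np.column_stack([
                  (run_proxy(p + step * e) - run_proxy(p)) / step
                  for e in np.eye(len(p))
              ])                                          # Jacobian (cheap proxy)
              dp, *_ = np.linalg.lstsq(J, -r, rcond=None) # Gauss-Newton upgrade
              # the expensive model is only run to test the proposed upgrade
              if np.sum((run_model(p + dp) - observations) ** 2) < np.sum(r ** 2):
                  p = p + dp
          return p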

  10. Family Adjustment to Aphasia

    Richard S. was a senior manager ... It also presents a great challenge to the family. There may be tension among family members and ...

  11. Adjustments and Depression

    Full Text Available ... identity and to find a new balance in relationships. The stages of adjustment can include grieving, taking ... treatment options.

  12. Adjustments and Depression

    Full Text Available ... thoughts and feelings and is something that takes time. The goal of adjusting is to rebuild one's ...

  13. Adjustments and Depression

    Full Text Available ... rebuild one's identity and to find a new balance in relationships. The stages of adjustment can include ...

  14. Image contrast enhancement based on a local standard deviation model

    The adaptive contrast enhancement (ACE) algorithm is a widely used image enhancement method, which needs a contrast gain to adjust the high-frequency components of an image. In the literature, the gain is usually either inversely proportional to the local standard deviation (LSD) or a constant. Both choices cause problems in practical applications, namely noise over-enhancement and ringing artifacts. In this paper a new gain is developed, based on Hunt's Gaussian image model, to prevent these two defects. The new gain is a nonlinear function of the LSD and has the desired characteristic of emphasizing the LSD regions in which details are concentrated. We have applied the new ACE algorithm to chest x-ray images, and the simulations show the effectiveness of the proposed algorithm.
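
    To make the gain's role concrete, a minimal sketch follows; the particular nonlinear gain curve below is an illustrative assumption, not Hunt's model from the paper.

      # ACE skeleton: output = local mean + gain(LSD) * (pixel - local mean).
      # This gain peaks at mid-range LSD, so flat (noise-dominated) regions
      # and already high-contrast regions are amplified less.
      import numpy as np
      from scipy.ndimage import uniform_filter

      def ace(img, win=15, k=3.0):
          f = img.astype(float)
          mean = uniform_filter(f, win)
          var = uniform_filter(f ** 2, win) - mean ** 2
          lsd = np.sqrt(np.clip(var, 0.0, None))
          s = lsd / (lsd.mean() + 1e-6)
          gain = 1.0 + k * s / (1.0 + s ** 2)   # nonlinear function of LSD
          return mean + gain * (f - mean)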

  15. Occupational Adjustment of Immigrants

    Zorlu, Aslan

    2011-01-01

    This paper examines the speed of the occupational adjustment of immigrants using Labour Force Surveys 2004 and 2005 from Statistics Netherlands. The analysis provides new evidence that immigrants start with jobs at the lower levels of skill distribution. Their occupational achievement improves significantly with the duration of residence. The extent of this initial disadvantage and the rate of adjustment vary across immigrant groups according to the transferability of skills associated with t...

  16. KVA: Capital Valuation Adjustment

    Andrew Green; Chris Kenyon

    2014-01-01

    Credit (CVA), Debit (DVA) and Funding Valuation Adjustments (FVA) are now familiar valuation adjustments made to the value of a portfolio of derivatives to account for credit risks and funding costs. However, recent changes in the regulatory regime and the increases in regulatory capital requirements have led many banks to include the cost of capital in derivative pricing. This paper formalises the addition of the cost of capital by extending the Burgard-Kjaer (2013) semi-replication approach to C...
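
    As a rough illustration of what a capital valuation adjustment computes, here is a schematic discrete-time sketch: the cost-of-capital charge on an expected regulatory capital profile, discounted back to today. The inputs are placeholders, not the paper's semi-replication model.

      # Hypothetical KVA: sum of (cost of capital x capital held x dt),
      # discounted at a flat short rate.
      def kva(capital_profile, dt, cost_of_capital, short_rate):
          """capital_profile[i]: expected capital held over period i of length dt."""
          value, discount = 0.0, 1.0
          for k in capital_profile:
              discount *= 1.0 / (1.0 + short_rate * dt)
              value += cost_of_capital * k * dt * discount
          return value

      # e.g. kva([1e6] * 40, dt=0.25, cost_of_capital=0.10, short_rate=0.02)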

  17. 10 CFR 905.34 - Adjustment provisions.

    2010-01-01

    ... 10 Energy 4 2010-01-01 Adjustment provisions. 905.34 Section 905.34 Energy DEPARTMENT OF ENERGY ENERGY PLANNING AND MANAGEMENT PROGRAM Power Marketing Initiative § 905.34 Adjustment... continue to take place based on existing contract/marketing criteria principles....

  18. Model-based Abstraction of Data Provenance

    Probst, Christian W.; Hansen, René Rydhof

    2014-01-01

    Identifying provenance of data provides insights to the origin of data and intermediate results, and has recently gained increased interest due to data-centric applications. In this work we extend a data-centric system view with actors handling the data and policies restricting actions. This extension is based on provenance analysis performed on system models. System models have been introduced to model and analyse spatial and organisational aspects of organisations, to identify, e.g., potential insider threats. Both the models and analyses are naturally modular; models can be combined ... to bigger models, and the analyses adapt accordingly. Our approach extends provenance both with the origin of data, the actors and processes involved in the handling of data, and policies applied while doing so. The model and corresponding analyses are based on a formal model of spatial and organisational ...

  19. 77 FR 21775 - Risk Adjustment Meeting-May 7, 2012 and May 8, 2012

    2012-04-11

    ... following topics: the risk adjustment model, calculation of plan average actuarial risk, calculation of payments and ...

  20. Speed of Adjustment to Selected Labour Market and Tax Reforms

    Annabelle Mourougane; Lukas Vogel

    2009-01-01

    This paper examines the length of economic adjustments to selected structural reforms, drawing on simulations with dynamic general equilibrium and macro-economic neo-Keynesian models. Employment adjustment costs appear to have only a limited effect on the pace of adjustment to reforms and the influence of price adjustment costs on output dynamics is found to be marginal. Accommodative monetary policy can speed up the adjustment to a new equilibrium, though to a varying degree in the different...

  1. Agent-based modeling and network dynamics

    Namatame, Akira

    2016-01-01

    The book integrates agent-based modeling and network science. It is divided into three parts, namely, foundations, primary dynamics on and of social networks, and applications. The book begins with the network origin of agent-based models, known as cellular automata, and introduces a number of classic models, such as Schelling's segregation model and Axelrod's spatial game. The essence of the foundation part is the network-based agent-based models in which agents follow network-based decision rules. Under the influence of the substantial progress in network science in the late 1990s, these models have been extended from using lattices to using small-world networks, scale-free networks, etc. The book also shows that modern network science, mainly driven by game-theorists and sociophysicists, has inspired agent-based social scientists to develop alternative formation algorithms, known as agent-based social networks. The book reviews a number of pioneering and representative models in this family. Upon the gi...
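
    As a taste of the classic models the book covers, here is a compact Schelling segregation sketch; the grid size, density, tolerance and move rule are arbitrary choices for illustration.

      # Hypothetical Schelling model: an unhappy agent (too few like
      # neighbours) relocates to a random empty cell.
      import random

      def schelling(n=20, density=0.8, tolerance=0.3, steps=10000, seed=1):
          random.seed(seed)
          grid = [[random.choice([1, 2]) if random.random() < density else 0
                   for _ in range(n)] for _ in range(n)]

          def unhappy(i, j):
              me = grid[i][j]
              nbrs = [grid[(i + di) % n][(j + dj) % n]
                      for di in (-1, 0, 1) for dj in (-1, 0, 1)
                      if (di, dj) != (0, 0)]
              occupied = [v for v in nbrs if v]
              return occupied and sum(v == me for v in occupied) / len(occupied) < tolerance

          for _ in range(steps):
              i, j = random.randrange(n), random.randrange(n)
              if grid[i][j] and unhappy(i, j):
                  empties = [(a, b) for a in range(n) for b in range(n) if not grid[a][b]]
                  if empties:
                      a, b = random.choice(empties)
                      grid[a][b], grid[i][j] = grid[i][j], 0
          return grid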

  2. Model-Based Clustering of Large Networks

    Vu, Duy Quang; Schweinberger, Michael

    2012-01-01

    We describe a network clustering framework, based on finite mixture models, that can be applied to discrete-valued networks with hundreds of thousands of nodes and billions of edge variables. Relative to other recent model-based clustering work for networks, we introduce a more flexible modeling framework, improve the variational-approximation estimation algorithm, discuss and implement standard error estimation via a parametric bootstrap approach, and apply these methods to much larger datasets than those seen elsewhere in the literature. The more flexible modeling framework is achieved through introducing novel parameterizations of the model, giving varying degrees of parsimony, using exponential family models whose structure may be exploited in various theoretical and algorithmic ways. The algorithms, which we show how to adapt to the more complicated optimization requirements introduced by the constraints imposed by the novel parameterizations we propose, are based on variational generalized EM algorithms...

  3. Simulation-based Manufacturing System Modeling

    卫东; 金烨; 范秀敏; 严隽琪

    2003-01-01

    In recent years, computer simulation appears to be a very advantageous technique for researching resource-constrained manufacturing systems. This paper presents an object-oriented simulation modeling method, which combines the merits of traditional methods such as IDEF0 and Petri Nets. In this paper, a four-layer, one-angle hierarchical modeling framework based on OOP is defined, and the modeling description of these layers is expounded, for example hybrid production control modeling and human resource dispatch modeling. To validate the modeling method, a case study of an auto-product line in a motor manufacturing company has been carried out.

  4. CONTEXT BASED ACCESS CONTROL MODEL FOR PROTECTING PERVASIVE ENVIRONMENT

    V. Nirmalrani

    2014-04-01

    Full Text Available In Pervasive Computing, access control is a critical issue which gives many opportunities for users to access and share resources anytime and anywhere in an easier way. Pervasive computing environments are heterogeneous and dynamic sensor-rich environments characterized by frequent and unpredictable changes in users, resources, and environment situations. These environments call for access control solutions that allow dynamic adjustment of access permissions based on information describing the conditions of these entities (context), such as location and time. Some existing models attempt to identify context information, which is used as an optional attribute for limiting the scope of access control permissions. However, these approaches normally exploit identities and roles dynamically assigned to the users in order to grant access permissions, which is an inappropriate solution for open and dynamic environments: those environments cannot assume the existence of predefined roles and user-role associations. Hence the access permissions are claimed and assigned to the users based only on context information, which characterizes the three most important entities of any access control framework: owners, requestors, and resources. Thus, this paper proposes a generalized context-based access control model for making access control decisions completely based on context information, offering seven types of context-based access control policies. The proposed model also takes into account privacy requirements when enforcing access control policies, such as support for purposes and obligations. In addition, this paper proposes the integration of a mechanism to detect and resolve dynamic and static conflicts in context-based access control policies.
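
    A minimal sketch of a purely context-based decision is shown below: permissions are granted from context alone (no roles), and a grant can carry an obligation. The policy format, condition and obligation name are illustrative assumptions, not the paper's seven policy types.

      # Hypothetical context-based access check.
      from datetime import time

      POLICIES = [
          {   # read a sensor only from its own location, during working hours
              "action": "read",
              "resource_type": "sensor",
              "condition": lambda ctx: (ctx["location"] == ctx["resource_location"]
                                        and time(8) <= ctx["time"] <= time(18)),
              "obligation": "log_access",   # privacy obligation enforced on grant
          },
      ]

      def decide(action, resource_type, ctx):
          for p in POLICIES:
              if (p["action"] == action and p["resource_type"] == resource_type
                      and p["condition"](ctx)):
                  return ("PERMIT", p["obligation"])
          return ("DENY", None)

      print(decide("read", "sensor",
                   {"location": "ward-3", "resource_location": "ward-3",
                    "time": time(10, 30)}))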

  5. Energy based prediction models for building acoustics

    Brunskog, Jonas

    2012-01-01

    In order to reach robust and simplified yet accurate prediction models, energy-based principles are commonly used in many fields of acoustics, especially in building acoustics. This includes simple energy flow models, the framework of statistical energy analysis (SEA), as well as more elaborate principles such as wave intensity analysis (WIA). The European standards for building acoustic predictions, the EN 12354 series, are based on energy flow and SEA principles. In the present paper, different energy based prediction models are discussed and critically reviewed. Special attention is placed on...
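
    To illustrate the energy-flow idea, here is a two-subsystem SEA sketch: the steady-state power balance (input power equals internal dissipation plus coupling flows) solved as a linear system for the subsystem energies. The loss factors and input power are illustrative values only.

      # Hypothetical two-subsystem SEA power balance:
      #   omega*(eta1+eta12)*E1 - omega*eta21*E2 = P1
      #  -omega*eta12*E1 + omega*(eta2+eta21)*E2 = P2
      import numpy as np

      def sea_two_subsystems(omega, P_in, eta1, eta2, eta12, eta21):
          A = omega * np.array([[eta1 + eta12, -eta21],
                                [-eta12,        eta2 + eta21]])
          return np.linalg.solve(A, P_in)   # subsystem energies [E1, E2]

      E = sea_two_subsystems(omega=2 * np.pi * 1000,
                             P_in=np.array([1.0, 0.0]),
                             eta1=0.01, eta2=0.02, eta12=0.005, eta21=0.003)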

  6. Model-based reasoning and large-knowledge bases

    In such engineering fields as nuclear power plant engineering, technical information expressed in the form of schematics is frequently used. A new paradigm for model-based reasoning (MBR) and an AI tool called PLEXSYS (plant expert system) using this paradigm has been developed. PLEXSYS and the underlying paradigm are specifically designed to handle schematic drawings, by expressing drawings as models and supporting various sophisticated searches on these models. Two application systems have been constructed with PLEXSYS: one generates PLEXSYS models from existing CAD data files, and the other provides functions for nuclear power plant design support. Since the models can be generated from existing data resources, the design support system automatically has full access to a large-scale model or knowledge base representing actual nuclear power plants. (author)

  7. Finite element fatigue analysis of rectangular clutch spring of automatic slack adjuster

    Xu, Chen-jie; Luo, Zai; Hu, Xiao-feng; Jiang, Wen-song

    2015-02-01

    The failure of the rectangular clutch spring of an automatic slack adjuster directly affects the operation of the automatic slack adjuster. We establish the structural mechanics model of the automatic slack adjuster's rectangular clutch spring based on its working principle and mechanical structure. In addition, we upload this structural mechanics model to the ANSYS Workbench FEA system to predict the fatigue life of the rectangular clutch spring. FEA results show that the fatigue life of the rectangular clutch spring is 2.0403×10^5 cycles under the effect of braking loads. In the meantime, fatigue tests of 20 automatic slack adjusters are carried out on a fatigue test bench to verify the conclusion of the structural mechanics model. The experimental results show that the mean fatigue life of the rectangular clutch spring is 1.9101×10^5 cycles, which agrees with the results of the finite element analysis using the ANSYS Workbench FEA system.

  8. Comparison Method - Preference Of Adjustment Techniques Among Valuers

    Anuar Alias; Noor Hana Asyikin Nor Hanapi

    2010-01-01

    This paper discusses the adjustment techniques applied by valuers in determining the market value of the property. There are several types of adjustment techniques that can be applied in comparison method such as summative percentage, dollar percentage, add and/or subtract percentage, and proper base adjustments. In order to investigate the most preferred adjustment techniques applied by valuers in Malaysia as well as the elements of adjustment, a questionnaire survey is conducted that involv...
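
    As a concrete illustration of one of the techniques named above, the add and/or subtract percentage method, consider the sketch below; the comparables and adjustment table are made-up figures, not data from the survey.

      # Hypothetical percentage adjustment of comparable sale prices toward
      # the subject property, then a simple average of the adjusted values.
      def adjusted_value(comparable_price, adjustments):
          """adjustments: e.g. {'location': +0.05, 'condition': -0.03}."""
          factor = 1.0 + sum(adjustments.values())
          return comparable_price * factor

      comps = [(350_000, {"location": +0.05, "condition": -0.03}),
               (420_000, {"size": -0.10})]
      estimates = [adjusted_value(p, adj) for p, adj in comps]
      market_value = sum(estimates) / len(estimates)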

  9. Ground-Based Telescope Parametric Cost Model

    Stahl, H. Philip; Rowell, Ginger Holmes

    2004-01-01

    A parametric cost model for ground-based telescopes is developed using multi-variable statistical analysis. The model includes both engineering and performance parameters. While diameter continues to be the dominant cost driver, other significant factors include primary mirror radius of curvature and diffraction-limited wavelength. The model includes an explicit factor for primary mirror segmentation and/or duplication (i.e., multi-telescope phased-array systems). Additionally, single-variable models based on aperture diameter are derived. This analysis indicates that recent mirror technology advances have indeed reduced the historical telescope cost curve.
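
    A single-variable model of this kind is typically a power law in aperture diameter, cost ~ a * D^b, fitted in log space. The sketch below shows that fit; the sample numbers are dummy inputs for illustration and bear no relation to the paper's data or exponent.

      # Hypothetical power-law cost fit: log(cost) = log(a) + b*log(D).
      import numpy as np

      D = np.array([2.0, 4.0, 8.0, 10.0])         # aperture diameters (m), dummy
      cost = np.array([5.0, 30.0, 180.0, 400.0])  # costs (arbitrary units), dummy
      b, log_a = np.polyfit(np.log(D), np.log(cost), 1)

      def predict(d):
          return np.exp(log_a) * d ** b           # cost ~ a * D^b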

  10. Focal Adjustment for Star Tracker

    Liu Hai-bo; Jian-kun Yang; Ji-chun Tan; De-zhi Su; Wen-liang Wang; Xiu-jian Li; Hui Jia

    2010-01-01

    A technique for measuring the intensity distribution and size of the spot image has been developed and is discussed; it is especially suitable for defocus adjustment in ground tests of star trackers. A novel approach for choosing a proper defocusing position has been proposed, based on a collimator, a Gaussian surface fitting method, and other ordinary instruments. It proves to be practical and adequate in the development of distant-object tracking devices such as star trackers. Defence Science Journal, 2010, 60(6), pp. 678-...
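
    A minimal sketch of the Gaussian-surface-fitting step is given below: centroid and width of a spot are recovered by linear least squares on log intensities, assuming an isotropic Gaussian. A real star-tracker pipeline would add background removal and outlier handling.

      # Hypothetical spot fit: log I = c0 + cx*x + cy*y + cq*(x^2 + y^2)
      # implies sigma^2 = -1/(2*cq), x0 = cx*sigma^2, y0 = cy*sigma^2.
      import numpy as np

      def fit_gaussian_spot(patch):
          y, x = np.indices(patch.shape)
          I = np.clip(patch.astype(float), 1e-6, None)
          G = np.column_stack([np.ones(I.size), x.ravel(), y.ravel(),
                               (x ** 2 + y ** 2).ravel()])
          (c0, cx, cy, cq), *_ = np.linalg.lstsq(G, np.log(I).ravel(), rcond=None)
          sigma2 = -1.0 / (2.0 * cq)
          return cx * sigma2, cy * sigma2, np.sqrt(sigma2)  # x0, y0, spot width

    The fitted width gives a direct figure of merit for the defocus adjustment: it grows as the detector moves away from the chosen defocusing position.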

  11. Software Testing Method Based on Model Comparison

    XIE Xiao-dong; LU Yan-sheng; MAO Cheng-yin

    2008-01-01

    A model comparison based software testing method (MCST) is proposed. In this method, the requirements and programs of the software under test are transformed into the same form and described in the same model description language (MDL). The requirements are transformed into a specification model and the programs into an implementation model. The elements and structures of the two models are then compared, and the differences between them are obtained. Based on the differences, a test suite is generated. Different MDLs can be chosen for the software under test. The usage of two classical MDLs in MCST, the equivalence classes model and the extended finite state machine (EFSM) model, is described with example applications. The results show that the test suites generated by MCST are more efficient and smaller than those of some other testing methods, such as the path-coverage testing method, the object state diagram testing method, etc.

  12. Sketch-based Interfaces and Modeling

    Jorge, Joaquim

    2011-01-01

    The field of sketch-based interfaces and modeling (SBIM) is concerned with developing methods and techniques to enable users to interact with a computer through sketching - a simple, yet highly expressive medium. SBIM blends concepts from computer graphics, human-computer interaction, artificial intelligence, and machine learning. Recent improvements in hardware, coupled with new machine learning techniques for more accurate recognition, and more robust depth inferencing techniques for sketch-based modeling, have resulted in an explosion of both sketch-based interfaces and pen-based computing

  13. Location-based Modeling and Analysis: Tropos-based Approach

    Ali, Raian; Dalpiaz, Fabiano; Giorgini, Paolo

    2008-01-01

    The continuous growth of interest in mobile applications makes the concept of location essential to design and develop software systems. Location-based software is supposed to be able to monitor the location and choose accordingly the most appropriate behavior. In this paper, we propose a novel conceptual framework to model and analyze location-based software. We mainly focus on the social facets of locations adopting concepts such as social actor, resource, and location-based behavior. Our a...

  14. Adjustment as process and outcome: Measuring adjustment to HIV in Uganda.

    Martin, Faith; Russell, Steve; Seeley, Janet

    2016-05-01

    'Adjustment' in health refers to both processes and outcomes. Its measurement and conceptualisation in African cultures is limited. In total, 263 people living with HIV and receiving anti-retroviral therapy in clinics in Uganda completed a translated Mental Adjustment to HIV Scale, depression items from the Hopkins checklist and demographic questions. Factor analysis revealed four Mental Adjustment to HIV factors of active coping, cognitive-social adjustment, hopelessness and denial/avoidance. Correlations with depression supported the Mental Adjustment to HIV's validity and the importance of active adjustment, while the role of cognitive adjustment was unclear. Factors were process or outcome focussed, suggesting a need for theory-based measures in general. PMID:25030794

  15. Analysis on Model and Effects of Coal Resource Tax Adjustment (煤炭资源税调整测算模型及其效应研究)

    郭菊娥; 钱冬; 吕振东; 熊洁

    2011-01-01

    Coal resource tax adjustment is an important price-regulation instrument for achieving energy saving and emission reduction and for promoting a low-carbon economy; the global financial crisis and China's economic recovery provide a favourable opportunity for such an adjustment. Based on China's actual conditions, this paper constructs an energy CGE model of China and calculates the effects of levying the coal resource tax ad valorem at different rates. The results show that as the coal resource tax rate rises gradually, the CPI climbs only slowly and coal demand falls by more than GDP does, so that coal consumption per unit of GDP is effectively reduced. At the industry level, the industries whose prices are affected most are the production and supply of electric and thermal power and the production and supply of gas; the industries whose output is affected most are, in order, the production and supply of gas, the production and supply of electric and thermal power, the smelting and pressing of metals, and the chemical industry. The industries affected most are all key industries for energy saving and emission reduction. It is suggested that the government merge the current resource taxes and fees into a single resource tax levied ad valorem, raise the tax rate, and earmark the revenue for improving energy efficiency. Based on the calculated results, the paper proposes policy recommendations that combine coal resource tax adjustment with subsidised investment.

  16. Model-based clustered-dot screening

    Kim, Sang Ho

    2006-01-01

    I propose a halftone screen design method based on a human visual system model and the characteristics of the electro-photographic (EP) printer engine. Generally, screen design methods based on human visual models produce dispersed-dot type screens while design methods considering EP printer characteristics generate clustered-dot type screens. In this paper, I propose a cost function balancing the conflicting characteristics of the human visual system and the printer. By minimizing the obtained cost function, I design a model-based clustered-dot screen using a modified direct binary search algorithm. Experimental results demonstrate the superior quality of the model-based clustered-dot screen compared to a conventional clustered-dot screen.

  17. Seasonal adjustment methods and real time trend-cycle estimation

    Bee Dagum, Estela

    2016-01-01

    This book explores widely used seasonal adjustment methods and recent developments in real time trend-cycle estimation. It discusses in detail the properties and limitations of X12ARIMA, TRAMO-SEATS and STAMP - the main seasonal adjustment methods used by statistical agencies. Several real-world cases illustrate each method and real data examples can be followed throughout the text. The trend-cycle estimation is presented using nonparametric techniques based on moving averages, linear filters and reproducing kernel Hilbert spaces, taking recent advances into account. The book provides a systematical treatment of results that to date have been scattered throughout the literature. Seasonal adjustment and real time trend-cycle prediction play an essential part at all levels of activity in modern economies. They are used by governments to counteract cyclical recessions, by central banks to control inflation, by decision makers for better modeling and planning and by hospitals, manufacturers, builders, transportat...

  18. Multiscale agent-based consumer market modeling.

    North, M. J.; Macal, C. M.; St. Aubin, J.; Thimmapuram, P.; Bragen, M.; Hahn, J.; Karr, J.; Brigham, N.; Lacy, M. E.; Hampton, D.; Decision and Information Sciences; Procter & Gamble Co.

    2010-05-01

    Consumer markets have been studied in great depth, and many techniques have been used to represent them. These have included regression-based models, logit models, and theoretical market-level models, such as the NBD-Dirichlet approach. Although many important contributions and insights have resulted from studies that relied on these models, there is still a need for a model that could more holistically represent the interdependencies of the decisions made by consumers, retailers, and manufacturers. This is particularly critical when the need is for a model that can be used repeatedly over time to support decisions in an industrial setting. Although some existing methods can, in principle, represent such complex interdependencies, their capabilities might be outstripped if they had to be used for industrial applications, because of the details this type of modeling requires. However, a complementary method - agent-based modeling - shows promise for addressing these issues. Agent-based models use business-driven rules for individuals (e.g., individual consumer rules for buying items, individual retailer rules for stocking items, or individual firm rules for advertising items) to determine holistic, system-level outcomes (e.g., to determine if brand X's market share is increasing). We applied agent-based modeling to develop a multi-scale consumer market model. We then conducted calibration, verification, and validation tests of this model. The model was successfully applied by Procter & Gamble to several challenging business problems. In these situations, it directly influenced managerial decision making and produced substantial cost savings.

  19. Melatonin adjusts the expression pattern of clock genes in the suprachiasmatic nucleus and induces antidepressant-like effect in a mouse model of seasonal affective disorder.

    Nagy, Andras David; Iwamoto, Ayaka; Kawai, Misato; Goda, Ryosei; Matsuo, Haruka; Otsuka, Tsuyoshi; Nagasawa, Mao; Furuse, Mitsuhiro; Yasuo, Shinobu

    2015-05-01

    Recently, we have shown that C57BL/6J mice exhibit depression-like behavior under short photoperiod and suggested them as an animal model for investigating seasonal affective disorder (SAD). In this study, we tested if manipulations of the circadian clock with melatonin treatment could effectively modify depression-like and anxiety-like behaviors and brain serotonergic system in C57BL/6J mice. Under short photoperiods (8-h light/16-h dark), daily melatonin treatments 2 h before light offset have significantly altered the 24-h patterns of mRNA expression of circadian clock genes (per1, per2, bmal1 and clock) within the suprachiasmatic nuclei (SCN) mostly by increasing amplitude in their expressional rhythms without inducing robust phase shifts in them. Melatonin treatments altered the expression of genes of serotonergic neurotransmission in the dorsal raphe (tph2, sert, vmat2 and 5ht1a) and serotonin contents in the amygdala. Importantly, melatonin treatment reduced the immobility in forced swim test, a depression-like behavior. As a key mechanism of melatonin-induced antidepressant-like effect, the previously proposed phase-advance hypothesis of the circadian clock could not be confirmed under conditions of our experiment. However, our findings of modest adjustments in both the amplitude and phase of the transcriptional oscillators in the SCN as a result of melatonin treatments may be sufficient to associate with the effects seen in the brain serotonergic system and with the improvement in depression-like behavior. Our study confirmed a predictive validity of C57BL/6J mice as a useful model for the molecular analysis of links between the clock and brain serotonergic system, which could greatly accelerate our understanding of the pathogenesis of SAD, as well as the search for new treatments. PMID:25515595

  20. A computationally efficient approach for hidden-Markov model-augmented fingerprint-based positioning

    Roth, John; Tummala, Murali; McEachen, John

    2016-09-01

    This paper presents a computationally efficient approach for mobile subscriber position estimation in wireless networks. A method of data scaling assisted by timing adjust is introduced in fingerprint-based location estimation under a framework which allows for minimising computational cost. The proposed method maintains a level of accuracy comparable to the traditional case where no data scaling is used, and is evaluated in a simulated environment under varying channel conditions. The proposed scheme is studied when it is augmented by a hidden-Markov model to match the internal parameters to the channel conditions that are present, thus minimising computational cost while maximising accuracy. Furthermore, the timing adjust quantity, available in modern wireless signalling messages, is shown to further reduce computational cost and increase accuracy when available. The results may be seen as a significant step towards integrating advanced position-based modelling with power-sensitive mobile devices.
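
    A minimal fingerprint-matching sketch illustrating the two cost-saving ideas follows: the measured signal vector is scaled before comparison, and the timing-adjust value prunes candidate fingerprints before any distance is computed. The database layout, z-score scaling and pruning threshold are assumptions, not the paper's scheme.

      # Hypothetical scaled nearest-fingerprint positioning with TA pruning.
      import numpy as np

      def locate(rssi, ta_bin, db):
          """db: list of (position, rssi_vector, ta_bin) fingerprint records."""
          z = (rssi - rssi.mean()) / (rssi.std() + 1e-9)      # data scaling
          best, best_d = None, np.inf
          for pos, fp, fp_ta in db:
              if abs(fp_ta - ta_bin) > 1:                     # timing adjust prunes work
                  continue
              fpz = (fp - fp.mean()) / (fp.std() + 1e-9)
              d = np.linalg.norm(z - fpz)
              if d < best_d:
                  best, best_d = pos, d
          return best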