Dynamics of expert adjustment to model-based forecast
Ph.H.B.F. Franses (Philip Hans); R. Legerstee (Rianne)
2007-01-01
Experts often add domain knowledge to model-based forecasts while aiming to reduce forecast errors. Indeed, there is some empirical evidence that expert-adjusted forecasts improve forecast quality. However, surprisingly little is known about what experts actually do. Based on a large and …
Adjustment Criterion and Algorithm in Adjustment Model with Uncertain
SONG Yingchun
2015-02-01
Uncertainty often exists in the process of obtaining measurement data, which affects the reliability of parameter estimation. This paper establishes a new adjustment model in which uncertainty is incorporated into the functional model as a parameter. A new adjustment criterion and its iterative algorithm are given, based on the uncertainty propagation law in the residual error, in which the maximum possible uncertainty is minimized. This paper also analyzes, with examples, the different adjustment criteria and the features of the optimal solutions of least-squares adjustment, uncertainty adjustment, and total least-squares adjustment. Existing error theory is thereby extended with a new method for processing observational data subject to uncertainty.
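For readers comparing the criteria named above, a minimal sketch of the classical parametric least-squares adjustment that the uncertainty criterion extends: observations l = Ax + v with weights P, solved via the normal equations x = (AᵀPA)⁻¹AᵀPl. The two-parameter line fit and all numbers are illustrative, not from the paper.

```python
# Classical weighted least-squares adjustment, plain Python, 2 parameters only.
# Names (A, P, l) are the generic adjustment-calculus symbols, not the paper's.

def lsq_adjust(A, P, l):
    """Solve the 2-parameter weighted least-squares adjustment via normal equations."""
    m = len(l)
    # Normal matrix N = A^T P A and right-hand side u = A^T P l (P diagonal).
    n00 = sum(P[i] * A[i][0] * A[i][0] for i in range(m))
    n01 = sum(P[i] * A[i][0] * A[i][1] for i in range(m))
    n11 = sum(P[i] * A[i][1] * A[i][1] for i in range(m))
    u0 = sum(P[i] * A[i][0] * l[i] for i in range(m))
    u1 = sum(P[i] * A[i][1] * l[i] for i in range(m))
    # Invert the 2x2 normal matrix by Cramer's rule.
    det = n00 * n11 - n01 * n01
    x0 = (n11 * u0 - n01 * u1) / det
    x1 = (n00 * u1 - n01 * u0) / det
    return x0, x1

# Fit a line y = a + b*t to slightly noisy observations.
t = [0.0, 1.0, 2.0, 3.0]
obs = [1.0, 3.1, 4.9, 7.0]
A = [[1.0, ti] for ti in t]
P = [1.0] * len(obs)          # equal weights
a, b = lsq_adjust(A, P, obs)
```

The uncertainty criterion of the paper replaces the quadratic objective vᵀPv with a minimax criterion on the propagated uncertainty; the normal-equation baseline above is only the point of comparison.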
Stephen Medeiros
2015-03-01
Digital elevation models (DEMs) derived from airborne lidar are traditionally unreliable in coastal salt marshes due to the inability of the laser to penetrate the dense grasses and reach the underlying soil. To that end, we present a novel processing methodology that uses ASTER Band 2 (visible red), an interferometric SAR (IfSAR) digital surface model, and lidar-derived canopy height to classify biomass density using both a three-class scheme (high, medium, and low) and a two-class scheme (high and low). Elevation adjustments associated with these classes, using both median and quartile approaches, were applied to bring lidar-derived elevation values closer to the true bare-earth elevation. The performance of the method was tested on 229 elevation points in the lower Apalachicola River Marsh. The two-class quartile-based adjusted DEM produced the best results, reducing the RMS error in elevation from 0.65 m to 0.40 m, a 38% improvement. The raw mean errors for the lidar DEM and the adjusted DEM were 0.61 ± 0.24 m and 0.32 ± 0.24 m, respectively, thereby reducing the high bias by approximately 49%.
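The class-based correction described above can be sketched as follows: points are grouped by biomass-density class, and a per-class statistic (here the median) of the lidar-minus-survey error is subtracted from the lidar elevations. The class labels and error values below are hypothetical, not the study's calibration data.

```python
# Sketch of a per-class median bias removal for lidar DEM elevations.
# All numbers are illustrative; the study also used quartile-based variants.

def median(vals):
    """Median of a list of numbers."""
    s = sorted(vals)
    n = len(s)
    mid = n // 2
    return s[mid] if n % 2 else 0.5 * (s[mid - 1] + s[mid])

def adjust_dem(points, errors_by_class):
    """points: list of (class_label, lidar_z); errors_by_class: label -> list of
    lidar-minus-survey errors (m). Returns bias-corrected elevations."""
    bias = {c: median(e) for c, e in errors_by_class.items()}
    return [z - bias[c] for c, z in points]

# Hypothetical calibration errors (lidar minus surveyed bare earth, metres).
errors = {"high": [0.8, 0.9, 1.0], "low": [0.2, 0.3, 0.4]}
pts = [("high", 2.5), ("low", 1.1)]
corrected = adjust_dem(pts, errors)   # removes 0.9 m and 0.3 m biases
```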
Economic analysis of coal price-electricity price adjustment in China based on the CGE model
Y.X. He; S.L. Zhang; L.Y. Yang; Y.J. Wang; J. Wang [North China Electric Power University, Beijing (China). School of Business Administration
2010-11-15
In recent years, the coal price has risen rapidly, which has also brought a sharp increase in the expenditures of thermal power plants in China. Meanwhile, the power production price and power retail price have not been adjusted accordingly, and a large number of thermal power plants have incurred losses. The power industry is a key industry in the national economy. As such, a thorough analysis and evaluation of the economic influence of the electricity price should be conducted before any electricity price adjustment is carried out. This paper analyses the influence of coal price adjustment on the electric power industry, and the influence of electricity price adjustment on the macroeconomy in China, based on computable general equilibrium models. The conclusions are as follows: (1) a coal price increase raises the costs of the electric power industry, but the influence gradually diminishes as the coal price increases; and (2) an electricity price increase has an adverse influence on total output, Gross Domestic Product (GDP), and the Consumer Price Index (CPI). Electricity price increases have a contractionary effect on economic development; consequently, electricity price policy making must consider all factors to minimize their adverse influence. 16 refs., 3 figs., 7 tabs.
Multi-Period Model of Portfolio Investment and Adjustment Based on Hybrid Genetic Algorithm
RONG Ximin; LU Meiping; DENG Lin
2009-01-01
This paper proposes a multi-period portfolio investment model with class constraints, transaction costs, and indivisible securities. When an investor joins the securities market for the first time, he should decide on a portfolio investment based on the practical conditions of the securities market. In addition, investors should adjust the portfolio according to market changes, whether or not the category of risky securities changes. The Markowitz mean-variance approach is applied to the multi-period portfolio selection problems. Because the sub-models are mixed integer programs whose objective function is not unimodal and whose feasible set has a particular structure, traditional optimization methods usually fail to find a globally optimal solution, so this paper employs a hybrid genetic algorithm to solve the problem. Investment policies that accord with the finance market and are easy for investors to operate are put forward, with an illustrative application.
ADJUSTMENT OF MORPHOMETRIC PARAMETERS OF WATER BASINS BASED ON DIGITAL TERRAIN MODELS
Krasil'nikov Vitaliy Mikhaylovich; Sobol' Il'ya Stanislavovich
2012-10-01
The authors argue that effective use of water resources requires accurate morphometric characteristics of water basins. Accurate parameters are needed to analyze their condition and to assure their appropriate control and operation. Today, multiple water basins need their morphometric characteristics to be adjusted and properly stored. The procedure employed so far is based on plane geometric horizontals depicted on topographic maps, as described in the procedural guidelines issued in respect of the «Application of water resource regulations governing the operation of waterworks facilities of power plants». The technology described there is obsolete, given the availability of specialized software. The computer technique is based on a digital terrain model. The authors provide an overview of the technique as implemented at the Rybinsk and Gorkiy water basins. The digital terrain model generated from field data is used at the Gorkiy water basin, while the model based on maps and charts is applied at the Rybinsk water basin. The authors believe that, based on the analysis and comparison of the morphometric characteristics of these two water basins, the software technique can be applied to any other water basin.
Zeeshan Ahmad; Meng Jun; Muhammad Abdullah; Mazhar Nadeem Ishaq; Majid Lateef; Imran Khan
2015-01-01
This paper used the modern evaluation method of DEA (Data Envelopment Analysis) to assess the comparative efficiency of multiple candidate schemes and, on that basis, to choose the optimal scheme of agricultural production structure adjustment. Based on the results of the DEA model, we analysed the scale advantages of each candidate scheme and examined in depth the underlying reasons why some schemes were not DEA-efficient, which clarified how these candidate plans could be improved. Finally, a further method was proposed to rank the schemes and select the optimal one. The research is useful for guiding practice when the adjustment of the agricultural production structure is carried out.
In this Letter, a novel model, called the generalized Takagi-Sugeno (T-S) fuzzy model, is first developed by extending the conventional T-S fuzzy model. Then, a simple but efficient method to control fractional-order chaotic systems is proposed using the generalized T-S fuzzy model and an adaptive adjustment mechanism (AAM). Sufficient conditions are derived to guarantee chaos control from the stability criterion of linear fractional-order systems. The proposed approach offers a systematic design procedure for stabilizing a large class of fractional-order chaotic systems from the literature on chaos research. The effectiveness of the approach is tested on the fractional-order Roessler system and the fractional-order Lorenz system.
Bazzazian, S; Besharat, M A
2012-01-01
The aim of this study was to develop and test a model of adjustment to type I diabetes. Three hundred young adults (172 females and 128 males) with type I diabetes were asked to complete the Adult Attachment Inventory (AAI), the Brief Illness Perception Questionnaire (Brief IPQ), the task-oriented subscale of the Coping Inventory for Stressful Situations (CISS), the D-39, and the well-being subscale of the Mental Health Inventory (MHI). HbA1c was obtained from laboratory examination. Results from structural equation analysis partly supported the hypothesized model. Secure and avoidant attachment styles were found to have effects on illness perception, whereas the ambivalent attachment style did not. All three attachment styles had significant effects on the task-oriented coping strategy, and avoidant attachment also had a negative direct effect on adjustment. The regression effects of illness perception and the task-oriented coping strategy on adjustment were positive; therefore, a positive illness perception and greater use of the task-oriented coping strategy predict better adjustment to diabetes. The results thus confirm the theoretical bases and empirical evidence for the role of attachment styles in adjustment to chronic disease, and can be helpful in devising preventive policies, identifying high-risk maladjusted patients, and planning specialized psychological treatment. PMID:21678193
Experts' adjustment to model-based forecasts: Does the forecast horizon matter?
Ph.H.B.F. Franses (Philip Hans); R. Legerstee (Rianne)
2007-01-01
Experts may have domain-specific knowledge that is not included in a statistical model and that can improve forecasts. While one-step-ahead forecasts address the conditional mean of the variable, model-based forecasts for longer horizons have a tendency to convert to the unconditional mean…
Does experts' adjustment to model-based forecasts contribute to forecast quality?
Ph.H.B.F. Franses (Philip Hans); R. Legerstee (Rianne)
2007-01-01
We perform a large-scale empirical analysis of the question whether model-based forecasts can be improved by adding expert knowledge. We consider a huge database of a pharmaceutical company where the head office uses a statistical model to generate monthly sales forecasts at various horizons…
Adjustment or updating of models
D J Ewins
2000-06-01
In this paper, a review of the terminology used in model adjustment or updating is first presented. This is followed by an outline of the major updating algorithms currently available, together with a discussion of the advantages and disadvantages of each, and of the current state of the art of this important application area within optimum design technology.
Bulk Density Adjustment of Resin-Based Equivalent Material for Geomechanical Model Test
Pengxian Fan; Haozhe Xing; Linjian Ma; Kaifeng Jiang; Mingyang Wang; Zechen Yan; Xiang Fang
2015-01-01
An equivalent material is of significance to the simulation of prototype rock in geomechanical model tests. Researchers attempt to ensure that the bulk density of the equivalent material is equal to that of the prototype rock. In this work, barite sand was used to increase the bulk density of a resin-based equivalent material. The variation law of the bulk density was revealed in the simulation of prototype rocks of different bulk densities. Over 300 specimens were made for uniaxial compression tests…
Overpaying morbidity adjusters in risk equalization models.
van Kleef, R C; van Vliet, R C J A; van de Ven, W P M M
2016-09-01
Most competitive social health insurance markets include risk equalization to compensate insurers for predictable variation in healthcare expenses. The empirical literature shows that even the most sophisticated risk equalization models, with advanced morbidity adjusters, substantially undercompensate insurers for selected groups of high-risk individuals. In the presence of premium regulation, these undercompensations confront consumers and insurers with incentives for risk selection. An important reason for the undercompensations is that not all information with predictive value regarding healthcare expenses is appropriate for use as a morbidity adjuster. To reduce incentives for selection regarding specific groups, we propose overpaying morbidity adjusters that are already included in the risk equalization model. This paper illustrates the idea of overpaying by merging data on morbidity adjusters and healthcare expenses with health survey information, and derives three preconditions for meaningful application. Given these preconditions, we think overpaying may be particularly useful for pharmacy-based cost groups. PMID:26420555
Anchoring Adjusted Capital Asset Pricing Model
Hammad, Siddiqi
2015-01-01
An anchoring-adjusted Capital Asset Pricing Model (ACAPM) is developed in which the payoff volatilities of well-established stocks are used as starting points that are adjusted to form volatility judgments about other stocks. The anchoring heuristic implies that such adjustments are typically insufficient. ACAPM converges to CAPM with correct adjustment, so CAPM is a special case of ACAPM. The model provides a unified explanation for the size, value, and momentum effects in the stock market. A ke…
R. F. Marcolla
2009-03-01
In this work, a strategy is presented for the temperature control of the batch suspension polymerization reaction of styrene. A three-layer feedforward Artificial Neural Network was trained off-line on a set of patterns taken from the experimental system and applied in recurrent form (RNN) within a Nonlinear Model Predictive Controller (NMPC). This controller gave results clearly superior to those of the classical PID controller in maintaining the temperature. Furthermore, to improve the performance of the model used by the NMPC (the RNN), which can deviate from the system due to the dead time involved in the control actions, the nonlinear characteristics of the system, and its variable dynamics, an on-line methodology for adjusting the parameters of the output layer of the network is implemented, yielding superior results and handling these difficulties in temperature control satisfactorily. All the results presented are obtained from a real system.
Effect of Flux Adjustments on Temperature Variability in Climate Models
It has been suggested that "flux adjustments" in climate models suppress simulated temperature variability. If true, this might invalidate the conclusion that at least some of the observed temperature increases since 1860 are anthropogenic, since this conclusion is based in part on estimates of natural temperature variability derived from flux-adjusted models. We assess the variability of surface air temperatures in 17 simulations of internal temperature variability submitted to the Coupled Model Intercomparison Project. By comparing variability in flux-adjusted vs. non-flux-adjusted simulations, we find no evidence that flux adjustments suppress temperature variability in climate models; other, largely unknown, factors are much more important in determining simulated temperature variability. Therefore, the conclusion that at least some of the observed temperature increases are anthropogenic cannot be questioned on the grounds that it is based in part on results of flux-adjusted models. Also, reducing or eliminating flux adjustments would probably do little to improve simulations of temperature variability.
Borup, Morten; Grum, Morten; Linde, Jens Jørgen; Mikkelsen, Peter Steen
2016-08-01
Numerous studies have shown that radar rainfall estimates need to be adjusted against rain gauge measurements in order to be useful for hydrological modelling. In the current study we investigate if adjustment can improve radar rainfall estimates to the point where they can be used for modelling overflows from urban drainage systems, and we furthermore investigate the importance of the aggregation period of the adjustment scheme. This is done by continuously adjusting X-band radar data based on the previous 5-30 min of rain data recorded by multiple rain gauges and propagating the rainfall estimates through a hydraulic urban drainage model. The model is built entirely from physical data, without any calibration, to avoid bias towards any specific type of rainfall estimate. The performance is assessed by comparing measured and modelled water levels at a weir downstream of a highly impermeable, well defined, 64 ha urban catchment, for nine overflow generating rain events. The dynamically adjusted radar data perform best when the aggregation period is as small as 10-20 min, in which case it performs much better than static adjusted radar data and data from rain gauges situated 2-3 km away.
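The dynamic adjustment described above can be sketched as a mean-field bias correction computed over the preceding aggregation window: the ratio of gauge-accumulated to radar-accumulated rain scales the current radar estimate. The window contents and gauge values below are illustrative; the study's actual scheme may differ in detail.

```python
# Sketch of a mean-field bias adjustment of radar rainfall against rain gauges.
# Gauge and radar values are hypothetical, not data from the study.

def mean_field_bias(gauge_window, radar_window):
    """Ratio of gauge to radar rainfall accumulated over the adjustment window."""
    g, r = sum(gauge_window), sum(radar_window)
    return g / r if r > 0 else 1.0   # fall back to no adjustment without radar rain

def adjust_radar(radar_now, gauge_window, radar_window):
    """Scale the current radar estimate by the window's mean-field bias."""
    return radar_now * mean_field_bias(gauge_window, radar_window)

# Gauges saw 20% more rain than the radar over the previous 10 min window:
gauges = [1.2, 1.1, 1.3]   # mm accumulated at three gauges
radar = [1.0, 1.0, 1.0]    # radar accumulation at the corresponding pixels
adjusted = adjust_radar(2.0, gauges, radar)   # 2.0 mm raw radar estimate, scaled up
```

A short window (10-20 min, as the study found best) makes the bias factor track the current event; a long window smooths it toward a static adjustment.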
XIA Shenzhen; KE Changqing; ZHOU Xiaobing; ZHANG Jie
2016-01-01
The in situ sea surface salinity (SSS) measurements from a scientific cruise to the western zone of the southeast Indian Ocean, covering 30°-60°S, 80°-120°E, are used to assess the SSS retrieved from Aquarius (Aquarius SSS). Wind speed and sea surface temperature (SST) affect SSS estimates based on passive microwave radiation within the mid- to low-latitude southeast Indian Ocean. The relationships among the in situ SSS, the Aquarius SSS, and wind-SST corrections are used to adjust the Aquarius SSS. The adjusted Aquarius SSS is compared with the SSS data from the MyOcean model. The results show that: (1) before adjustment, compared with MyOcean SSS, the Aquarius SSS is higher in most sea areas, but lower in the low-temperature areas located south of 55°S and west of 98°E; the Aquarius SSS is generally higher by 0.42 on average for the southeast Indian Ocean. (2) After adjustment, the correction greatly counteracts the impact of high wind speeds and improves the overall accuracy of the retrieved salinity (the mean absolute error of the zonal mean is improved by 0.06, and the mean error is -0.05 compared with MyOcean SSS). Near latitude 42°S, the adjusted SSS is highly consistent with MyOcean, with a difference of approximately 0.004.
BUNDLE ADJUSTMENTS CCD CAMERA CALIBRATION BASED ON COLLINEARITY EQUATION
Liu Changying; Yu Zhijing; Che Rensheng; Ye Dong; Huang Qingcheng; Yang Dingning
2004-01-01
A solid-template CCD camera calibration method using bundle adjustment based on the collinearity equation is presented, addressing the characteristics of large-dimension on-line space measurement. In the method, a comprehensive camera model is adopted, based on the pinhole model extended with distortion corrections. In the calibration process, calibration precision is improved by imaging at different locations throughout the measurement space, taking multiple images at the same location, and bundle adjustment optimization. A calibration experiment proves that the method fulfils the calibration requirements of CCD cameras applied to vision measurement.
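The collinearity (pinhole) projection at the core of such a bundle-adjustment calibration can be sketched as follows, with the distortion terms omitted; the pose and focal length are illustrative values, not calibration results from the paper.

```python
# Collinearity equations for a pinhole camera: a world point is rotated and
# translated into the camera frame, then projected with focal length f.
# Bundle adjustment would optimize R, t, f (and distortion) over many images.

def project(point, R, t, f):
    """Image coordinates of a 3-D point for camera pose (R, t), focal length f."""
    # Camera-frame coordinates X_c = R @ X + t (R given row-major as 3 lists of 3).
    xc = [sum(R[i][j] * point[j] for j in range(3)) + t[i] for i in range(3)]
    # Perspective division onto the image plane.
    return (f * xc[0] / xc[2], f * xc[1] / xc[2])

# Identity rotation, camera at the origin, point 2 m in front of the lens.
I3 = [[1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]]
u, v = project([0.2, -0.1, 2.0], I3, [0.0, 0.0, 0.0], f=50.0)
```

In a full calibration, the residuals between such projected coordinates and the measured image coordinates of the template targets are what the bundle adjustment minimizes.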
Adjustment method of parameters intended for first-principle models
P. Czop
2012-12-01
Purpose: This paper demonstrates a process of estimating the phenomenological parameters of a first-principle nonlinear model, based on a hydraulic damper system. Design/methodology/approach: First-principle (FP) models are formulated using a system of continuous ordinary differential equations capturing usually nonlinear relations among the variables of the model. The model considered here uses three categories of parameters: geometrical, physical, and phenomenological. Geometrical and physical parameters are deduced from construction or operational documentation. The phenomenological parameters are the adjustable ones, which are estimated or adjusted based on their roughly known values, e.g. friction/damping coefficients. Findings: A phenomenological parameter, the friction coefficient, was successfully estimated based on the experimental data; the error between the model response and the experimental data is not greater than 10%. Research limitations/implications: Adjusting a model to data is, in most cases, a non-convex optimization problem, and the criterion function may have several local minima; this is the case when multiple parameters are estimated simultaneously. Practical implications: First-principle models are fundamental tools for understanding, optimizing, designing, and diagnosing technical systems, since they can be updated using operational measurements. Originality/value: First-principle models are frequently adjusted by trial and error, which can lead to non-optimal results. To avoid the deficiencies of the trial-and-error approach, a formalized mathematical method using optimization techniques to minimize the error criterion and find optimal values of the tunable model parameters is proposed and demonstrated in this work.
OPEC model: adjustment or new model
Since the early eighties, the international oil industry has gone through major changes: new financial markets, reintegration, opening of the upstream, liberalization of investments, and privatization. This article addresses two major questions: what are the reasons for these changes, and do these changes announce the replacement of the OPEC model by a new model in which state intervention is weaker and national companies are more autonomous? The latter would imply a profound change in the political and institutional systems of oil-producing countries. (Author)
R.M. Solow Adjusted Model of Economic Growth
Ion Gh. Rosca
2007-05-01
Besides the models of M. Keynes, R.F. Harrod, E. Domar, D. Romer, Ramsey-Cass-Koopmans, etc., the R.M. Solow model belongs to the category of models which characterize economic growth. The paper proposes a study of the R.M. Solow adjusted model of economic growth, the adjustment consisting in adapting the model to the characteristics of the Romanian economy. The article is the first in a three-paper series dedicated to macroeconomic modelling using the R.M. Solow model, followed by "Measurement of the economic growth and extensions of the R.M. Solow adjusted model" and "Evolution scenarios at the Romanian economy level using the R.M. Solow adjusted model". The analysis of the model is based on the study of equilibrium in the continuous case, with some interpretations of the discrete one, using the state diagram. The optimization problem at the economy level is also treated; it is built up from a specified number of representative consumers and firms in order to reveal the interaction between these elements.
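The equilibrium analysis mentioned above has a simple discrete-time counterpart that can be sketched numerically: capital per effective worker follows k' = (s·k^a + (1-d)·k)/(1+n) and converges to the steady state k* = (s/(n+d))^(1/(1-a)). The parameter values below are illustrative, not the paper's Romanian calibration.

```python
# Discrete-time Solow dynamics: iterate the law of motion of capital per
# effective worker and compare with the closed-form steady state.
# s: saving rate, a: capital share, d: depreciation, n: population growth.

def solow_step(k, s=0.25, a=0.33, d=0.05, n=0.01):
    """One period of the Solow law of motion for capital per effective worker."""
    return (s * k ** a + (1.0 - d) * k) / (1.0 + n)

def steady_state(s=0.25, a=0.33, d=0.05, n=0.01):
    """Fixed point of solow_step: k^(1-a) = s/(n+d)."""
    return (s / (n + d)) ** (1.0 / (1.0 - a))

k = 1.0
for _ in range(2000):       # iterate to convergence from a low initial stock
    k = solow_step(k)
```

The monotone convergence of the iteration to k* is the discrete analogue of the stability of the continuous-case equilibrium studied in the paper.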
Ting-gui JIA; Shu-gang WANG; Guo-na QU; Jian LIU
2013-01-01
Ventilation characteristic parameters are the basis of the ventilation network solution; however, they are apt to be affected by operating errors, reading errors, airflow stability, and other factors, and it is difficult to obtain accurate results. In order to check the ventilation characteristic parameters of mines more accurately, an integrated circuit-and-path method is adopted to overcome the drawbacks of the traditional path method or circuit method in the digital debugging process of a ventilation system. This can reduce the large local errors, or the inconsistency between the computed airflow direction and the actual situation, caused by inaccurate ventilation characteristic parameters in the ventilation network solution. The results show that this method can effectively reduce the local error and prevent the pseudo-airflow-reversal phenomenon; in addition, the solution results are consistent with the actual situation of the mines, and the effect is obvious.
An adjustment cost model of distributional dynamics.
Getachew, Yoseph; Basu, Parantap
2012-01-01
We analyze the distributional effects of adjustment cost in an environment with an incomplete capital market. We find that a higher adjustment cost for human capital acquisition slows down intergenerational mobility and results in a persistent inequality across generations. A low depreciation cost of human capital contributes to a longer life of the capital, which could elevate this adjustment cost and hence contribute to this inequality persistence. A lower total factor productivity could hurt…
Rackl, Robert; Weston, Adam
2005-01-01
The literature on turbulent boundary layer pressure fluctuations provides several empirical models, which were compared to the measured TU-144 data. The Efimtsov model showed the best agreement. Adjustments were made to improve its agreement further, consisting of the addition of a broadband peak in the mid frequencies and a minor modification to the high-frequency rolloff. The adjusted Efimtsov predictions and measured results are compared for both subsonic and supersonic flight conditions. Measurements from the forward and middle portions of the fuselage agree better with the model than those from the aft portion. For High Speed Civil Transport supersonic cruise, interior levels predicted with this model are expected to increase by 1-3 dB due to the adjustments to the Efimtsov model. The space-time cross-correlations and cross-spectra of the fluctuating surface pressure were also investigated. This analysis is an important ingredient in structural acoustic models of aircraft interior noise. Once again, the measured data were compared to the levels predicted by the Efimtsov model.
Capital Asset Pricing Model Adjusted for Anchoring
Hammad, Siddiqi
2015-01-01
I show that adjusting CAPM for anchoring provides a unified explanation for the size, value, and momentum effects. Anchoring adjusted CAPM (ACAPM) predicts that stock splits are associated with positive abnormal returns and an increase in return volatility, whereas the reverse stock-splits are associated with negative abnormal returns and a fall in return volatility. Existing empirical evidence strongly supports these predictions. Anchoring has the effect of pushing up the equity premium, a ...
Nadia Stoicuţa; Ana Maria Giurgiulescu; Olimpiu Stoicuţa
2009-01-01
The paper presents a model of the economic adjustment of Romania's GDP using an econometric model whose input is the budget and whose output is Romania's GDP. The adjustment is based on a discrete-time linear-quadratic regulator.
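The discrete-time linear-quadratic regulator mentioned in the abstract can be sketched, in the scalar case, by iterating the Riccati recursion to a stationary solution. The system x' = a·x + b·u and all numbers below are illustrative, not the paper's GDP model.

```python
# Scalar discrete-time LQR: for x' = a*x + b*u with cost sum(q*x^2 + r*u^2),
# iterate the Riccati recursion to a stationary P, then the feedback gain is
# K = (b*P*a) / (r + b*P*b) and the control law is u = -K*x.

def dlqr_scalar(a, b, q, r, iters=500):
    """Stationary LQR gain via value iteration on the Riccati recursion."""
    P = q
    for _ in range(iters):
        K = (b * P * a) / (r + b * P * b)
        P = q + a * P * (a - b * K)   # Riccati update with current gain
    return K

# An unstable open loop (|a| > 1) is stabilized by the LQR feedback.
K = dlqr_scalar(a=1.05, b=0.5, q=1.0, r=1.0)
stable = abs(1.05 - 0.5 * K) < 1.0   # closed-loop pole inside the unit circle
```

In the paper's setting the state would be the GDP deviation and the control the budgetary input, with matrix-valued a, b, q, r; the scalar recursion above is the same fixed-point computation in miniature.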
Simon, K. M.; James, T. S.; Henton, J. A.; Dyke, A. S.
2016-03-01
The thickness and equivalent global sea-level contribution of an improved model of the central and northern Laurentide Ice Sheet is constrained by 24 relative sea-level histories and 18 present-day GPS-measured vertical land motion rates. The final model, termed Laur16, is derived from the ICE-5G model by holding the timing history constant and iteratively adjusting the thickness history in four regions of northern Canada. In the final model, the last glacial maximum (LGM) thickness of the Laurentide Ice Sheet west of Hudson Bay was ˜3.4-3.6 km. Conversely, east of Hudson Bay, peak ice thicknesses reached ˜4 km. The ice model thicknesses inferred for these two regions represent, respectively, a ˜30% decrease and an average ˜20-25% increase to the load thickness relative to the ICE-5G reconstruction, which is generally consistent with other recent studies that have focussed on Laurentide Ice Sheet history. The final model also features peak ice thicknesses of 1.2-1.3 km in the Baffin Island region, a modest reduction relative to ICE-5G, and unchanged thicknesses for a region in the central Canadian Arctic Archipelago west of Baffin Island. Vertical land motion predictions of the final model fit observed crustal uplift rates well, after an adjustment is made for the elastic crustal response to present-day ice mass changes of regional ice cover. The new Laur16 model provides more than a factor of two improvement of the fit to the RSL data (χ2 measure of misfit) and a factor of nine improvement to the fit of the GPS data (mean squared error measure of fit), compared to the ICE-5G starting model. Laur16 also fits the regional RSL data better by a factor of two and gives a slightly better fit to GPS uplift rates than the recent ICE-6G model. The volume history of the Laur16 reconstruction corresponds to an up to 8 m reduction in global sea-level equivalent compared to ICE-5G at LGM.
Mortensen, Martin B; Afzal, Shoaib; Nordestgaard, Børge G;
2015-01-01
AIMS: Recent European guidelines recommend including high-density lipoprotein (HDL) cholesterol in risk assessment for primary prevention of cardiovascular disease (CVD), using a SCORE-based risk model (SCORE-HDL). We compared the predictive performance of SCORE-HDL with SCORE in an independent...... with SCORE, but deteriorated risk classification based on NRI. Future guidelines should consider lower decision thresholds and prioritize CVD morbidity and people above age 65....
Bayes linear covariance matrix adjustment for multivariate dynamic linear models
Wilkinson, Darren J
2008-01-01
A methodology is developed for the adjustment of the covariance matrices underlying a multivariate constant time series dynamic linear model. The covariance matrices are embedded in a distribution-free inner-product space of matrix objects which facilitates such adjustment. This approach helps to make the analysis simple, tractable and robust. To illustrate the methods, a simple model is developed for a time series representing sales of certain brands of a product from a cash-and-carry depot. The covariance structure underlying the model is revised, and the benefits of this revision on first order inferences are then examined.
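The general Bayes linear adjustment that underlies such covariance revisions can be sketched in a few lines. This is the generic adjustment formula with illustrative numbers, not Wilkinson's matrix-object construction:

```python
import numpy as np

# Bayes linear adjustment of beliefs about X given observed data d:
#   E_D(X)   = E(X) + Cov(X, D) Var(D)^{-1} (d - E(D))
#   Var_D(X) = Var(X) - Cov(X, D) Var(D)^{-1} Cov(D, X)
def bayes_linear_adjust(EX, VX, ED, VD, CXD, d):
    K = CXD @ np.linalg.inv(VD)          # "gain" matrix
    EX_adj = EX + K @ (d - ED)           # adjusted expectation
    VX_adj = VX - K @ CXD.T              # adjusted variance
    return EX_adj, VX_adj

# toy example: scalar X observed through noisy D = X + e, Var(e) = 1
EX = np.array([0.0]); VX = np.array([[4.0]])
ED = np.array([0.0]); VD = np.array([[5.0]])   # Var(D) = Var(X) + Var(e)
CXD = np.array([[4.0]])                        # Cov(X, D) = Var(X)
EX_adj, VX_adj = bayes_linear_adjust(EX, VX, ED, VD, CXD, np.array([2.5]))
```

Observing d = 2.5 pulls the expectation of X toward the data and shrinks its variance, exactly the first-order revision the abstract refers to.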
Constructing stochastic models from deterministic process equations by propensity adjustment
Wu Jialiang
2011-11-01
Background Gillespie's stochastic simulation algorithm (SSA) for chemical reactions admits three kinds of elementary processes, namely, mass action reactions of 0th, 1st or 2nd order. All other types of reaction processes, for instance those containing non-integer kinetic orders or following other types of kinetic laws, are assumed to be convertible to one of the three elementary kinds, so that SSA can validly be applied. However, the conversion to elementary reactions is often difficult, if not impossible. Within deterministic contexts, a strategy of model reduction is often used. Such a reduction simplifies the actual system of reactions by merging or approximating intermediate steps and omitting reactants such as transient complexes. It would be valuable to adopt a similar reduction strategy in stochastic modelling. Indeed, efforts have been devoted to manipulating the chemical master equation (CME) in order to achieve a proper propensity function for a reduced stochastic system. However, manipulations of the CME are almost always complicated, and successes have been limited to relatively simple cases. Results We propose a rather general strategy for converting a deterministic process model into a corresponding stochastic model and characterize the mathematical connections between the two. The deterministic framework is assumed to be a generalized mass action system and the stochastic analogue is in the format of the chemical master equation. The analysis identifies situations where a direct conversion is valid; where internal noise affecting the system needs to be taken into account; and where the propensity function must be mathematically adjusted. The conversion from deterministic to stochastic models is illustrated with several representative examples, including reversible reactions with feedback controls, Michaelis-Menten enzyme kinetics, a genetic regulatory motif, and stochastic focusing. Conclusions The construction of a stochastic
Phenomenological Quark Mass Matrix Model with Two Adjustable Parameters
Koide, Yoshio
1993-01-01
A phenomenological quark mass matrix model which includes only two adjustable parameters is proposed from the point of view of the unification of quark and lepton mass matrices. The model can provide reasonable values of quark mass ratios and Kobayashi-Maskawa matrix parameters.
Storm Water Management Model Climate Adjustment Tool (SWMM-CAT)
The US EPA’s newest tool, the Stormwater Management Model (SWMM) – Climate Adjustment Tool (CAT), is meant to help municipal stormwater utilities better address potential climate change impacts affecting their operations. SWMM, first released in 1971, models hydrology and hydrauli...
Svensson, Elin M.; Aweeka, Francesca; Park, Jeong-Gun; Marzan, Florence; Dooley, Kelly E; Karlsson, Mats O
2013-01-01
Safe, effective concomitant treatment regimens for tuberculosis (TB) and HIV infection are urgently needed. Bedaquiline (BDQ) is a promising new anti-TB drug, and efavirenz (EFV) is a commonly used antiretroviral. Due to EFV's induction of cytochrome P450 3A4, the metabolic enzyme responsible for BDQ biotransformation, the drugs are expected to interact. Based on data from a phase I, single-dose pharmacokinetic study, a nonlinear mixed-effects model characterizing BDQ pharmacokinetics and int...
PERMINTAAN BERAS DI PROVINSI JAMBI (Penerapan Partial Adjustment Model
Wasi Riyanto
2013-07-01
The purpose of this study is to determine the effect of the price of rice, the price of flour, population, income of the population, and the previous year's rice demand on current rice demand, as well as the elasticity of rice demand and a rice demand prediction for Jambi Province. This study uses secondary data, including time series data for the 22 years from 1988 to 2009. The variables are rice demand (Qdt), the price of rice (Hb), the price of wheat flour (Hg), population (Jp), the income of the population (PDRB), and rice demand in the previous year (Qdt-1). The methods of this study are multiple regression and dynamic analysis with a Partial Adjustment Model, where rice demand is the dependent variable and the price of rice, the price of flour, population, income of the population, and last year's rice demand are the independent variables. The Partial Adjustment Model analysis showed that the effects of changes in the prices of rice and flour on rice demand are not significant. Population and the previous year's rice demand have a positive and significant impact on rice demand, while the income of the population has a negative and significant impact. Rice demand is inelastic with respect to the price of rice, the income of the population, and the price of flour, because rice is not a normal good but a necessity, so there is no substitution (replacement of rice with other commodities) in Jambi Province. Based on the analysis, it is recommended that the government control the rate of population increase, given that population is one of the factors that affect rice demand. It is also expected that the government promote the consumption of non-rice foods to control the growth in rice demand. Finally, it is suggested that the government develop a diversification of staple foods other than rice.
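The partial adjustment mechanism applied in the study can be sketched on synthetic data; the series and coefficients below are illustrative, not the Jambi data:

```python
import numpy as np

# Partial adjustment model: demand depends on price and on last year's
# demand, Qd_t = a + b*P_t + lam*Qd_{t-1} + e_t, so the long-run price
# effect is b / (1 - lam).
rng = np.random.default_rng(0)
T = 200
P = rng.uniform(1, 2, T)                  # hypothetical price series
Qd = np.empty(T); Qd[0] = 10.0
for t in range(1, T):
    Qd[t] = 2.0 - 0.5 * P[t] + 0.6 * Qd[t - 1] + rng.normal(0, 0.05)

# OLS with the lagged dependent variable as a regressor
X = np.column_stack([np.ones(T - 1), P[1:], Qd[:-1]])
a, b, lam = np.linalg.lstsq(X, Qd[1:], rcond=None)[0]

# long-run (equilibrium) price effect implied by partial adjustment
long_run_b = b / (1 - lam)
```

The coefficient `lam` on lagged demand is the adjustment speed; dividing the short-run price coefficient by `1 - lam` recovers the long-run elasticity the abstract alludes to.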
An adjusted location model for SuperDARN backscatter echoes
E. X. Liu
2012-12-01
The radars that form the Super Dual Auroral Radar Network (SuperDARN) receive scatter from ionospheric irregularities in both the E- and F-regions, as well as from the Earth's surface, either ground or sea. For ionospheric scatter, the current SuperDARN standard software assumes straight-line propagation from the radar to the scattering zone, with an altitude assigned by a standard height model. Knowledge of the group delay to a scatter volume is not sufficient for an exact determination of the location of the irregularities. In this study, the difference between the locations of backscatter echoes determined by the SuperDARN standard software and by ray tracing has been evaluated using ionosonde data collected at Sodankylä, which is in the field of view of the Hankasalmi SuperDARN radar. By studying elevation angle information of backscattered echoes in the 2008 data sets of the Hankasalmi radar, we propose an adjusted fitting location model determined by slant range and elevation angle. To test the reliability of the adjusted model, an independent data set from 2009 was selected. The results show that the difference between the adjusted model and ray tracing is significantly reduced and that the adjusted model provides a more accurate location for backscatter targets.
Contrast Enhancement Algorithm Based on Gap Adjustment for Histogram Equalization
Chiu, Chung-Cheng; Ting, Chih-Chung
2016-01-01
Image enhancement methods have been widely used to improve the visual effects of images. Owing to its simplicity and effectiveness, histogram equalization (HE) is one of the methods used for enhancing image contrast. However, HE may result in over-enhancement and feature loss problems that lead to an unnatural look and loss of details in the processed images. Researchers have proposed various HE-based methods to solve the over-enhancement problem; however, they have largely ignored the feature loss problem. Therefore, a contrast enhancement algorithm based on gap adjustment for histogram equalization (CegaHE) is proposed. It builds on a visual contrast enhancement algorithm based on histogram equalization (VCEA), which generates visually pleasing enhanced images, and improves its enhancement effects. CegaHE adjusts the gaps between two gray values based on an adjustment equation, which takes the properties of human visual perception into consideration, to solve the over-enhancement problem. It also alleviates the feature loss problem and further enhances the textures in the dark regions of the images to improve the quality of the processed images for human visual perception. Experimental results demonstrate that CegaHE is a reliable method for contrast enhancement and that it significantly outperforms VCEA and other methods. PMID:27338412
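For orientation, plain histogram equalization, the baseline that CegaHE builds on, can be sketched as follows; CegaHE's gap-adjustment step itself is not reproduced here:

```python
import numpy as np

# Plain histogram equalization for an 8-bit grayscale image: the CDF of
# the gray levels is rescaled to span the full [0, 255] output range.
def histogram_equalize(img):
    hist = np.bincount(img.ravel(), minlength=256)
    cdf = hist.cumsum()
    cdf_min = cdf[np.nonzero(cdf)[0][0]]       # first nonzero CDF value
    n = img.size
    # standard HE mapping; the max() guards against a constant image
    lut = np.clip(np.round((cdf - cdf_min) / max(n - cdf_min, 1) * 255),
                  0, 255).astype(np.uint8)
    return lut[img]

img = np.array([[52, 55, 61], [59, 79, 61], [85, 61, 52]], dtype=np.uint8)
out = histogram_equalize(img)
```

This mapping stretches the occupied gray levels over the whole range, which is exactly where the over-enhancement problem that CegaHE addresses comes from.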
Automatic Adjustment of Wide-Base Google Street View Panoramas
Boussias-Alexakis, E.; Tsironisa, V.; Petsa, E.; Karras, G.
2016-06-01
This paper focuses on the issue of sparse matching in cases of extremely wide-base panoramic images such as those acquired by Google Street View in narrow urban streets. In order to effectively use affine point operators for bundle adjustment, panoramas must be suitably rectified to simulate affinity. To this end, a custom piecewise planar projection (triangular prism projection) is applied. On the assumption that the image baselines run parallel to the street façades, the estimated locations of the vanishing lines of the façade plane allow effectively removing projectivity and applying the ASIFT point operator on panorama pairs. Results from comparisons with multi-panorama adjustment, based on manually measured image points, and ground truth indicate that such an approach, if further elaborated, may well provide a realistic answer to the matching problem in the case of demanding panorama configurations.
Thickness and Shape Synthetical Adjustment for DC Mill Based on Dynamic Nerve-Fuzzy Control
JIA Chun-yu; WANG Ying-rui; ZHOU Hui-feng
2004-01-01
Due to the complexity of the thickness and shape synthetical adjustment system and the difficulty of building a mathematical model, a thickness and shape synthetical adjustment scheme for a DC mill based on dynamic neuro-fuzzy control was put forward, and a self-organizing fuzzy control model was established. The structure of the network can be optimized dynamically: in the course of learning, the network automatically adjusts its structure to the specific problem until the structure is optimal. The inputs and outputs of the network are fuzzy sets, and the trained network performs the composite relation, i.e., the fuzzy inference. To decrease the off-line training time of the BP network, the fuzzy sets are encoded. The simulation results indicate that self-organizing fuzzy control based on a dynamic neural network outperforms traditional decoupling PID control.
孟生旺; 李政宵
2015-01-01
In classification ratemaking of general insurance, the insurance company mainly focuses on predicting the aggregate claim losses of policies. The main method of predicting aggregate loss is to establish a Tweedie regression model. However, the Tweedie regression model may produce large deviations when predicting zero claims, as a zero claim has a very high probability, far greater than the probability at zero in the Tweedie distribution. Based on the assumption that aggregate insurance loss follows a zero-adjusted distribution, a zero-adjusted regression model can be established. If a categorical variable that contains too many levels is treated as a random effect, and a quadratic function of the continuous variables is introduced into the regression, the accuracy of prediction can be further improved. Based on an empirical study of motor third-party liability insurance losses, several regression models with random or fixed effects are compared under different distributions; the empirical results show that zero-adjusted random-effect regression models are superior in predicting insurance losses.
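The zero-adjusted idea of modeling the zero mass and the positive losses separately can be sketched with a minimal hurdle calculation on synthetic data (no random effects or Tweedie comparison):

```python
import numpy as np

# Minimal zero-adjusted ("hurdle") loss model: fit the claim probability
# and the positive-loss mean separately; the pure premium is their
# product, E[loss] = p * m. Data below are synthetic, not insurance data.
rng = np.random.default_rng(1)
n = 10_000
claim = rng.random(n) < 0.1                   # 10% of policies have a claim
loss = np.where(claim, rng.gamma(2.0, 500.0, n), 0.0)

p_hat = claim.mean()                          # zero part: claim probability
m_hat = loss[claim].mean()                    # positive part: mean severity
pure_premium = p_hat * m_hat                  # expected aggregate loss
```

Because the zero mass is modeled explicitly, the fitted probability at zero matches the data by construction, which is the deviation a single Tweedie fit can struggle with.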
Variance-based fingerprint distance adjustment algorithm for indoor localization
Xiaolong Xu; Yu Tang; Xinheng Wang; Yun Zhang
2015-01-01
The multipath effect and movements of people in indoor environments lead to inaccurate localization. Through tests, calculation and analysis of the received signal strength indication (RSSI) and the variance of RSSI, we propose a novel variance-based fingerprint distance adjustment algorithm (VFDA). Based on the rule that variance decreases with the increase of the RSSI mean, VFDA calculates the RSSI variance from the mean value of received RSSIs. From this we obtain a correction weight, which VFDA uses to adjust the fingerprint distances. In addition, a threshold value is applied to VFDA to improve its performance further. VFDA and VFDA with the threshold value were applied in two typical real indoor environments deployed with several Wi-Fi access points: a quadrate lab room, and a long and narrow corridor of a building. Experimental results and performance analysis show that in indoor environments, both VFDA and VFDA with the threshold have better positioning accuracy and environmental adaptability than the current typical positioning methods based on the k-nearest neighbor algorithm and the weighted k-nearest neighbor algorithm, with similar computational costs.
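The flavor of variance-based distance adjustment can be sketched as follows; the variance model and the weighting rule below are stand-ins, not VFDA's fitted rule:

```python
import numpy as np

# Variance-adjusted Wi-Fi fingerprint matching: because RSSI variance
# tends to shrink as mean RSSI grows, fingerprints with a stronger mean
# signal are trusted more, i.e. their distances are down-weighted less.
# The linear variance model below is an assumption for illustration.
def locate(rssi, fingerprints, positions, k=2):
    d = np.linalg.norm(fingerprints - rssi, axis=1)   # raw distances
    mean_rssi = fingerprints.mean(axis=1)
    var_est = np.maximum(1.0, -0.5 * mean_rssi)       # assumed variance model
    d_adj = d / np.sqrt(var_est)                      # adjusted distances
    idx = np.argsort(d_adj)[:k]                       # weighted k-NN estimate
    w = 1.0 / (d_adj[idx] + 1e-9)
    return (positions[idx] * (w / w.sum())[:, None]).sum(axis=0)

# two reference fingerprints (RSSI from 3 APs) at known positions
fingerprints = np.array([[-40., -45., -50.], [-70., -75., -80.]])
positions = np.array([[0., 0.], [10., 10.]])
est = locate(np.array([-42., -44., -51.]), fingerprints, positions, k=1)
```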
Eighty years of observations on the adjusted monetary base: 1918-1997
Richard G. Anderson; Robert H. Rasche
1999-01-01
Recent trends in empirical macroeconomic research - embedding long-run relationships in models via cointegration, modeling the correlation between seasonal cycles and business cycles, building endogenous growth models, and the interest of policymakers in inflation targeting - have increased the importance of long-time series of macroeconomic data. Among the more important of such data are quantitative measures of monetary policy, such as the adjusted monetary base. Previously published data f...
Siccardi, Marco; Olagunju, Adeniyi; Seden, Kay; Ebrahimjee, Farid; Rannard, Steve; Back, David; Owen, Andrew
2013-01-01
Purpose To treat malaria, HIV-infected patients normally receive artemether (80 mg twice daily) concurrently with antiretroviral therapy and drug-drug interactions can potentially occur. Artemether is a substrate of CYP3A4 and CYP2B6, antiretrovirals such as efavirenz induce these enzymes and have the potential to reduce artemether pharmacokinetic exposure. The aim of this study was to develop an in vitro in vivo extrapolation (IVIVE) approach to model the interaction between efavirenz and ar...
Capacitance-Based Frequency Adjustment of Micro Piezoelectric Vibration Generator
Xinhua Mao
2014-01-01
Micro piezoelectric vibration generators have wide application in the field of microelectronics, and their natural frequency is fixed once manufactured. However, resonance cannot occur when the natural frequency of a piezoelectric generator and the frequency of the vibration source are not consistent; the output voltage of the piezoelectric generator then declines sharply and it cannot supply power normally to electronic devices. In order to make the natural frequency of the generator approach the frequency of the vibration source, capacitance FM technology is adopted in this paper. Different capacitance FM schemes are designed for different locations of the adjustment layer, and the corresponding capacitance FM models have been established. The characteristics and effect of capacitance FM have been simulated with the FM model. Experimental results show that the natural frequency of the generator varies from 46.5 Hz to 42.4 Hz as the bypass capacitance increases from 0 nF to 30 nF, so the natural frequency of a piezoelectric vibration generator can be continuously adjusted by this method.
A Four-Part Model of Autonomy during Emerging Adulthood: Associations with Adjustment
Lamborn, Susie D.; Groh, Kelly
2009-01-01
We found support for a four-part model of autonomy that links connectedness, separation, detachment, and agency to adjustment during emerging adulthood. Based on self-report surveys of 285 American college students, expected associations among the autonomy variables were found. In addition, agency, as measured by self-reliance, predicted lower…
Attar-Schwartz, Shalhevet
2015-09-01
Warm and emotionally close relationships with parents and grandparents have been found in previous studies to be linked with better adolescent adjustment. The present study, informed by Family Systems Theory and Intergenerational Solidarity Theory, uses a moderated mediation model analyzing the contribution of the dynamics of these intergenerational relationships to adolescent adjustment. Specifically, it examines the mediating role of emotional closeness to the closest grandparent in the relationship between emotional closeness to a parent (the offspring of the closest grandparent) and adolescent adjustment difficulties. The model also examines the moderating role of emotional closeness to parents in the relationship between emotional closeness to grandparents and adjustment difficulties. The study was based on a sample of 1,405 Jewish Israeli secondary school students (ages 12-18) who completed a structured questionnaire. It was found that emotional closeness to the closest grandparent was more strongly associated with reduced adjustment difficulties among adolescents with higher levels of emotional closeness to their parents. In addition, the association between emotional closeness to parents and adolescent adjustment was partially mediated by emotional closeness to grandparents. Examining the family conditions under which adolescents' relationships with grandparents are stronger and more beneficial for them can help elucidate variations in grandparent-grandchild ties and expand our understanding of the mechanisms that shape child outcomes. PMID:26237053
Adjusting a cancer mortality-prediction model for disease status-related eligibility criteria
Kimmel Marek
2011-05-01
Background Volunteering participants in disease studies tend to be healthier than the general population partially due to specific enrollment criteria. Using modeling to accurately predict outcomes of cohort studies enrolling volunteers requires adjusting for the bias introduced in this way. Here we propose a new method to account for the effect of a specific form of healthy volunteer bias resulting from imposing disease status-related eligibility criteria, on disease-specific mortality, by explicitly modeling the length of the time interval between the moment when the subject becomes ineligible for the study, and the outcome. Methods Using survival time data from 1190 newly diagnosed lung cancer patients at MD Anderson Cancer Center, we model the time from clinical lung cancer diagnosis to death using an exponential distribution to approximate the length of this interval for a study where lung cancer death serves as the outcome. Incorporating this interval into our previously developed lung cancer risk model, we adjust for the effect of disease status-related eligibility criteria in predicting the number of lung cancer deaths in the control arm of CARET. The effect of the adjustment using the MD Anderson-derived approximation is compared to that based on SEER data. Results Using the adjustment developed in conjunction with our existing lung cancer model, we are able to accurately predict the number of lung cancer deaths observed in the control arm of CARET. Conclusions The resulting adjustment was accurate in predicting the lower rates of disease observed in the early years while still maintaining reasonable prediction ability in the later years of the trial. This method could be used to adjust for, or predict the duration and relative effect of any possible biases related to disease-specific eligibility criteria in modeling studies of volunteer-based cohorts.
黄日胜; 黄锡波
2015-01-01
We present an improved particle swarm optimisation (PSO) algorithm in which the acceleration parameters are adjusted according to individual fitness values, used to solve the multimodal premature-convergence problem of a logistics distribution optimisation model. First, from the perspectives of algorithm behaviour analysis and vector analysis, we design a simple and practical self-adjustment strategy for the acceleration parameters based on the current particle fitness and the population's optimal fitness value. Second, through theoretical and numerical analysis we obtain the global convergence conditions of the algorithm, providing a theoretical basis for its practical application. Finally, we study the logistics distribution model in combination with the improved PSO algorithm. Experiments show that the fitness-based acceleration-parameter self-adaptation strategy balances well the two important evolutionary processes of PSO, "deep exploitation" and "global exploration". The improvement is simple, does not increase the algorithm's time complexity, and can effectively optimise the logistics distribution model.
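A fitness-adaptive acceleration scheme of this general kind can be sketched as follows; the adjustment rule and test function are illustrative, not the paper's:

```python
import numpy as np

# PSO with fitness-adaptive acceleration coefficients (assumed rule): a
# particle far from the population-best fitness leans on the social term
# c2 (exploration), while one close to it leans on its own memory c1.
def pso(f, dim=2, n=30, iters=200, seed=0):
    rng = np.random.default_rng(seed)
    x = rng.uniform(-5, 5, (n, dim)); v = np.zeros((n, dim))
    pbest, pfit = x.copy(), np.apply_along_axis(f, 1, x)
    for _ in range(iters):
        fit = np.apply_along_axis(f, 1, x)
        better = fit < pfit
        pbest[better], pfit[better] = x[better], fit[better]
        g = pbest[pfit.argmin()]                  # global best position
        # normalized fitness gap in [0, 1] drives the coefficients
        gap = (fit - pfit.min()) / (fit.max() - pfit.min() + 1e-12)
        c1, c2 = 2.0 * (1.0 - gap), 2.0 * gap     # assumed adjustment rule
        r1, r2 = rng.random((n, dim)), rng.random((n, dim))
        v = 0.7 * v + c1[:, None] * r1 * (pbest - x) + c2[:, None] * r2 * (g - x)
        x = x + v
    return g, f(g)

best, val = pso(lambda z: float((z ** 2).sum()))  # minimize the sphere function
```

Because c1 + c2 stays constant, the adjustment shifts the balance between exploitation and exploration per particle without changing the overall step magnitude or the algorithm's time complexity.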
Health-Based Capitation Risk Adjustment in Minnesota Public Health Care Programs
Gifford, Gregory A.; Edwards, Kevan R.; Knutson, David J.
2004-01-01
This article documents the history and implementation of health-based capitation risk adjustment in Minnesota public health care programs, and identifies key implementation issues. Capitation payments in these programs are risk adjusted using an historical, health plan risk score, based on concurrent risk assessment. Phased implementation of capitation risk adjustment for these programs began January 1, 2000. Minnesota's experience with capitation risk adjustment suggests that: (1) implementa...
Incremental Training for SVM-Based Classification with Keyword Adjusting
SUN Jin-wen; YANG Jian-wu; LU Bin; XIAO Jian-guo
2004-01-01
This paper analyzed the theory of incremental learning for SVMs (support vector machines) and pointed out a shortcoming of present research on SVM incremental learning: only support vector optimization is considered. Given the significance of keywords in training, a new incremental training method that incorporates keyword adjusting was proposed, which eliminates the difference between incremental learning and batch learning through the keyword adjusting. The experimental results show that the improved method outperforms the method without keyword adjusting and achieves the same precision as the batch method.
Yi Gong; Jilin Cheng
2014-01-01
A decomposition-dynamic programming aggregation method based on experimental optimization of subsystems was proposed to solve the mathematical model of optimal operation for a single pumping station with adjustable blades and variable speed. Taking minimal daily electricity cost as the objective function and the water quantity pumped by the units as the coordinating variable, this model was decomposed into several submodels of daily optimal operation with adjustable blades and variable speed for single pump units which w...
Adjustable wideband reflective converter based on cut-wire metasurface
Zhang, Linbo; Zhou, Peiheng; Chen, Haiyan; Lu, Haipeng; Xie, Jianliang; Deng, Longjiang
2015-10-01
We present the design, analysis, and measurement of a broadband reflective converter using a cut-wire (CW) metasurface. Based on the characteristics of LC resonances, the proposed reflective converter can rotate a linearly polarized (LP) wave into its cross-polarized wave at three resonance frequencies, or convert the LP wave to a circularly polarized (CP) wave at two other resonance frequencies. Furthermore, the broad-band properties of the polarization conversion can be sustained when the incident wave is a CP wave. The polarization states can be adjusted easily by changing the length and width of the CW. The measured results show that a polarization conversion ratio (PCR) over 85% can be achieved from 6.16 GHz to 16.56 GHz for both LP and CP incident waves. The origin of the polarization conversion is interpreted by the theory of microwave antennas, with equivalent impedance and electromagnetic (EM) field distributions. With its simple geometry and multiple broad frequency bands, the proposed converter has potential applications in the area of selective polarization control.
时文龙; 曹臣
2014-01-01
Analyzing empirically the relationship between financial development and industrial structure adjustment, based on indicator data from 1978 to 2013 and a VAR model, we found that the relationship is a bidirectional causality and that financial development has a clear positive effect on industrial restructuring in the long run. To give full play to their mutually promoting roles, the government should deepen the financial system, improve corporate governance structures, enhance the ability of financial institutions to withstand risks, and gradually improve their overall strength. Meanwhile, the government should speed up equity-structure reforms, enable businesses to carry out mergers and acquisitions by means of shareholding and acquisition, and increase financial support for technological transformation and product structure adjustment.
Bias-adjusted satellite-based rainfall estimates for predicting floods: Narayani Basin
Shrestha, M.S.; Artan, G.A.; Bajracharya, S.R.; Gautam, D.K.; Tokar, S.A.
2011-01-01
In Nepal, the spatial distribution of rain gauges is not sufficient to capture the highly varied spatial nature of rainfall, so satellite-based rainfall estimates provide an opportunity for timely estimation. This paper presents flood prediction for the Narayani Basin at the Devghat hydrometric station (32,000 km²) using bias-adjusted satellite rainfall estimates and the Geospatial Stream Flow Model (GeoSFM), a spatially distributed, physically based hydrologic model. GeoSFM was calibrated on 2003 and validated on 2004 using gridded gauge-observed rainfall obtained by kriging interpolation, with both years yielding a Nash-Sutcliffe efficiency above 0.7. With the National Oceanic and Atmospheric Administration Climate Prediction Center's rainfall estimates (CPC-RFE2.0) and the same calibrated parameters, model performance for 2003 deteriorated, but it improved after recalibration with CPC-RFE2.0, indicating the need to recalibrate the model for satellite-based rainfall estimates. Adjusting CPC-RFE2.0 by seasonal, monthly and 7-day moving-average ratios improved model performance further. Furthermore, a new gauge-satellite merged rainfall estimate, obtained by ingesting local rain gauge data, resulted in a significant improvement in flood predictability. The results indicate the applicability of satellite-based rainfall estimates in flood prediction with appropriate bias correction. © 2011 The Authors. Journal of Flood Risk Management © 2011 The Chartered Institution of Water and Environmental Management.
Setting of Agricultural Insurance Premium Rate and the Adjustment Model
HUANG Ya-lin
2012-01-01
First, using the law of large numbers, I analyze the setting principle of the agricultural insurance premium rate, taking the setting of the adult sow premium rate as a case study, and conclude that with the continuous promotion of agricultural insurance, the increase in the types of agricultural insurance and the growing number of the insured, the premium rate should also be adjusted opportunely. Then, on the basis of Bayes' theorem, I adjust and calibrate the claim frequency and the average claim in order to correctly adjust the agricultural insurance premium rate, taking forest insurance as a case for premium rate adjustment analysis. In setting and adjusting the agricultural insurance premium rate, in order to bring the expected results close to the real results, it is necessary to apply probability estimates over a large number of risk units, and to focus on the establishment of an agricultural risk database so as to adjust the agricultural insurance premium rate in a timely manner.
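The Bayes-theorem calibration of claim frequency described above can be illustrated with a standard Poisson-gamma (conjugate) credibility update. The prior parameters and exposure figures below are hypothetical, and this is a generic sketch of the technique, not the authors' exact procedure:

```python
# Poisson-gamma credibility update for a claim-frequency rate.
# Prior: claims per policy-year ~ Poisson(lam), lam ~ Gamma(alpha, beta)
# (shape/rate parametrization). After observing `claims` total claims
# over `exposure` policy-years, the posterior is
# Gamma(alpha + claims, beta + exposure).

def adjusted_claim_rate(alpha, beta, claims, exposure):
    """Posterior mean claim frequency after new claims experience."""
    return (alpha + claims) / (beta + exposure)

# Hypothetical prior belief: 0.05 claims per policy-year (alpha=5, beta=100).
prior_rate = 5 / 100
# Hypothetical new experience: 12 claims over 150 policy-years.
post_rate = adjusted_claim_rate(5, 100, 12, 150)
print(round(prior_rate, 4), round(post_rate, 4))  # rate drifts up toward data
```

The posterior mean is a weighted blend of the prior rate and the observed rate, which is exactly the kind of gradual, data-driven premium adjustment the abstract advocates.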
R.M. Solow Adjusted Model of Economic Growth
Ion Gh. Rosca
2007-05-01
The analysis part of the model is based on the study of the equilibrium in the continuous case, with some interpretations of the discrete one by using the state diagram. The optimization problem at the economy-wide level is also considered; it is built up of a specified number of representative consumers and firms in order to reveal the interaction between these elements.
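The continuous-case equilibrium in the Solow model rests on the accumulation equation k' = s·k^a − (n + d)·k. A minimal discrete iteration (Cobb-Douglas technology assumed, parameter values purely illustrative) converges to the closed-form steady state:

```python
# Minimal Solow growth model per period: k_next = k + s*k**a - (n + d)*k.
# The steady state solves s*k**a = (n + d)*k, i.e.
# k* = (s / (n + d)) ** (1 / (1 - a)).

def solow_steady_state(s, n, d, a):
    return (s / (n + d)) ** (1 / (1 - a))

def iterate(k, s, n, d, a, steps):
    for _ in range(steps):
        k = k + s * k**a - (n + d) * k
    return k

# Illustrative parameters: saving rate, population growth, depreciation,
# capital share.
s, n, d, a = 0.2, 0.01, 0.05, 0.3
k_star = solow_steady_state(s, n, d, a)
k_sim = iterate(1.0, s, n, d, a, 500)
print(abs(k_sim - k_star) < 1e-6)  # iteration reaches the analytic steady state
```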
Detailed Theoretical Model for Adjustable Gain-Clamped Semiconductor Optical Amplifier
Lin Liu
2012-01-01
The adjustable gain-clamped semiconductor optical amplifier (AGC-SOA) uses two SOAs in a ring-cavity topology: one to amplify the signal and the other to control the gain. The device was designed to maximize the saturated output power while adjusting gain to regulate power differences between packets without loss of linearity. This type of subsystem can be used for power equalisation and linear amplification in packet-based dynamic systems such as passive optical networks (PONs). A detailed theoretical model is presented in this paper to simulate the operation of the AGC-SOA, which gives a better understanding of the underlying gain-clamping mechanism. Simulations and comparisons with steady-state and dynamic gain modulation experimental performance are given which validate the model.
Lithium-ion Open Circuit Voltage (OCV) curve modelling and its ageing adjustment
Lavigne, L.; Sabatier, J.; Francisco, J. Mbala; Guillemard, F.; Noury, A.
2016-08-01
This paper is a contribution to lithium-ion battery modelling that takes ageing effects into account. It first analyses the impact of ageing on electrode stoichiometry and then on the lithium-ion cell Open Circuit Voltage (OCV) curve. Through some hypotheses and an appropriate definition of the cell state of charge, it shows that each electrode equilibrium potential, and indeed the whole cell equilibrium potential, can be modelled by a polynomial that requires only one adjustment parameter during ageing. An adjustment algorithm is then proposed, based on the idea that for two fixed OCVs, the state of charge between these two equilibrium states is unique for a given ageing level. Its efficiency is evaluated on a battery pack consisting of four cells.
Radar adjusted data versus modelled precipitation: a case study over Cyprus
M. Casaioli
2006-01-01
In the framework of the European VOLTAIRE project (Fifth Framework Programme), simulations of relatively heavy precipitation events over the island of Cyprus were performed by means of numerical atmospheric models. One of the aims of the project was the comparison of modelled rainfall fields with multi-sensor observations. Thus, for the 5 March 2003 event, the 24-h accumulated precipitation forecast of the BOlogna Limited Area Model (BOLAM) was compared with the available observations reconstructed from ground-based radar data and estimated from rain gauge data. Since radar data may be affected by errors depending on the distance from the radar, these data can be range-adjusted by using other sensors. In this case, the Precipitation Radar aboard the Tropical Rainfall Measuring Mission (TRMM) satellite was used to adjust the ground-based radar data with a two-parameter scheme. Thus, in this work, two observational fields were employed: the rain gauge gridded analysis and the observational analysis obtained by merging the range-adjusted radar and rain gauge fields. In order to verify the modelled precipitation, both non-parametric skill scores and the contiguous rain area (CRA) analysis were applied. Skill score results show some differences when using the two observational fields. CRA results are instead in good agreement, showing that in general a 0.27° eastward shift optimizes the forecast with respect to both observational analyses. This result is also supported by a subjective inspection of the shifted forecast field, whose gross features agree with the analysis pattern better than those of the non-shifted forecast. However, some questions, especially regarding the effect of other range-adjustment techniques, remain open and need to be addressed in future work.
A finite element model updating technique for adjustment of parameters near boundaries
Gwinn, Allen Fort, Jr.
Even though there have been many advances in research related to methods of updating finite element models based on measured normal mode vibration characteristics, there is yet to be a widely accepted method that works reliably with a wide range of problems. This dissertation focuses on the specific class of problems having to do with changes in stiffness near the clamped boundary of plate structures. This class of problems is especially important as it relates to the performance of turbine engine blades, where a change in stiffness at the base of the blade can be indicative of structural damage. The method that is presented herein is a new technique for resolving the differences between the physical structure and the finite element model. It is a semi-iterative technique that incorporates a "physical expansion" of the measured eigenvectors along with appropriate scaling of these expanded eigenvectors into an iterative loop that uses the Engel's model modification method to then calculate adjusted stiffness parameters for the finite element model. Three example problems are presented that use eigenvalues and mass normalized eigenvectors that have been calculated from experimentally obtained accelerometer readings. The test articles that were used were all thin plates with one edge fully clamped. They each had a cantilevered length of 8.5 inches and a width of 4 inches. The three plates differed from one another in thickness from 0.100 inches to 0.188 inches. These dimensions were selected in order to approximate a gas turbine engine blade. The semi-iterative modification technique is shown to do an excellent job of calculating the necessary adjustments to the finite element model so that the analytically determined eigenvalues and eigenvectors for the adjusted model match the corresponding values from the experimental data with good agreement. Furthermore, the semi-iterative method is quite robust. For the examples presented here, the method consistently converged
Adjustable ultraviolet sensitive detectors based on amorphous silicon
TOPIC, M; Stiebig, H.; Krause, M.; Wagner, H.
2001-01-01
Thin-film detectors made of hydrogenated amorphous silicon (a-Si:H) and amorphous silicon carbide (a-SiC:H) with adjustable sensitivity in the ultraviolet (UV) spectrum were developed. Thin PIN diodes deposited on glass substrates in an N-I-P layer sequence, with a total thickness down to 33 nm and a semitransparent Ag front contact, were fabricated. The optimized diodes with a 10 nm Ag contact exhibit spectral response values above 80 mA/W in the wavelength range from 295 to 395 nm with a max...
FLC based adjustable speed drives for power quality enhancement
Sukumar Darly
2010-01-01
This study describes a new approach based on a fuzzy algorithm to suppress the current harmonic content in the output of an inverter. Inverter systems using fuzzy controllers provide ride-through capability during voltage sags, reduce harmonics, improve the power factor, and offer high reliability, low electromagnetic interference noise, low common-mode noise and an extended output voltage range. A feasibility test was implemented by building a model of a three-phase impedance-source inverter, designed and controlled on the basis of the proposed considerations. It is verified from the practical point of view that these new approaches are more effective and acceptable for minimizing harmonic distortion and improving power quality. Due to the complexity of the algorithm, its realization often calls for a compromise between cost and performance. The proposed optimizing strategies may be applied in variable-frequency DC-AC inverters, UPSs, and AC drives.
Processing Approach of Non-linear Adjustment Models in the Space of Non-linear Models
LI Chaokui; ZHU Qing; SONG Chengfang
2003-01-01
This paper investigates the mathematical features of non-linear models and discusses the processing of the non-linear factors which contribute to the non-linearity of a non-linear model. On the basis of the error definition, the paper puts forward a new adjustment criterion, SGPE. Finally, it investigates the solution of a non-linear regression model in the non-linear model space and compares the estimated values in the non-linear model space with those in the linear model space.
Meer, van der P.J.; Jorritsma, I.T.M.; Kramer, K.
2002-01-01
The sensitivity of forest development to climate change is assessed using a gap model. Process descriptions in the gap model of growth, phenology, and seed production were adjusted for climate change effects using a detailed process-based growth model and a regression analysis. Simulation runs over 4
Wagner, Brandie D.; Zerbe, Gary O; Mexal, Sharon; Leonard, Sherry S.
2008-01-01
The aim of this paper is to generalize permutation methods for multiple testing adjustment of significant partial regression coefficients in a linear regression model used for microarray data. Using a permutation method outlined by Anderson and Legendre [1999] and the permutation P-value adjustment from Simon et al. [2004], the significance of disease related gene expression will be determined and adjusted after accounting for the effects of covariates, which are not restricted to be categori...
Müller, Marc F.; Thompson, Sally E.
2013-10-01
Estimating precipitation over large spatial areas remains a challenging problem for hydrologists. Sparse ground-based gauge networks do not provide a robust basis for interpolation, and the reliability of remote sensing products, although improving, is still imperfect. Current techniques to estimate precipitation rely on combining these different kinds of measurements to correct the bias in the satellite observations. We propose a novel procedure that, unlike existing techniques, (i) allows correcting the possibly confounding effects of different sources of errors in satellite estimates, (ii) explicitly accounts for the spatial heterogeneity of the biases and (iii) allows the use of non-overlapping historical observations. The proposed method spatially aggregates and interpolates gauge data at the satellite grid resolution by focusing on parameters that describe the frequency and intensity of the rainfall observed at the gauges. The resulting gridded parameters can then be used to adjust the probability density function of satellite rainfall observations at each grid cell, accounting for spatial heterogeneity. Unlike alternate methods, we explicitly adjust biases on rainfall frequency in addition to its intensity. Adjusted rainfall distributions can then readily be applied as input in stochastic rainfall generators or frequency-domain hydrological models. Finally, we also provide a procedure to use them to correct remotely sensed rainfall time series. We apply the method to adjust the distributions of daily rainfall observed by the TRMM satellite in Nepal, which exemplifies the challenges associated with a sparse gauge network and large biases due to complex topography. In a cross-validation analysis on daily rainfall from TRMM 3B42 v6, we find that using a small subset of the available gauges, the proposed method outperforms local rainfall estimations using the complete network of available gauges to directly interpolate local rainfall or correct TRMM by adjusting
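Adjusting the probability density function of satellite rainfall at each grid cell is in the spirit of quantile mapping. A simplified empirical-CDF sketch (a generic stand-in for the authors' parameter-based procedure; the gamma reference data and bias model are invented for illustration):

```python
# Empirical quantile mapping: map each satellite value to the gauge
# value at the same empirical quantile, a common distribution-based
# bias correction for remotely sensed rainfall.
import numpy as np

def quantile_map(sat_values, gauge_ref, sat_ref):
    """Adjust sat_values using reference gauge and satellite samples."""
    sat_ref = np.sort(sat_ref)
    gauge_ref = np.sort(gauge_ref)
    # Empirical quantile of each value within the satellite reference sample.
    q = np.searchsorted(sat_ref, sat_values) / len(sat_ref)
    q = np.clip(q, 0.0, 1.0)
    # Read the gauge distribution at the same quantiles.
    return np.quantile(gauge_ref, q)

rng = np.random.default_rng(0)
gauge = rng.gamma(2.0, 5.0, 1000)   # synthetic "true" daily rainfall
sat = gauge * 0.7 + 1.0             # synthetic biased satellite record
adjusted = quantile_map(sat, gauge, sat)
print(abs(adjusted.mean() - gauge.mean()) < 0.5)  # bias largely removed
```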
Brix, H.; Menemenlis, D.; Hill, C.; Dutkiewicz, S.; Jahn, O.; Wang, D.; Bowman, K.; Zhang, H.
2015-11-01
The NASA Carbon Monitoring System (CMS) Flux Project aims to attribute changes in the atmospheric accumulation of carbon dioxide to spatially resolved fluxes by utilizing the full suite of NASA data, models, and assimilation capabilities. For the oceanic part of this project, we introduce ECCO2-Darwin, a new ocean biogeochemistry general circulation model based on combining the following pre-existing components: (i) a full-depth, eddying, global-ocean configuration of the Massachusetts Institute of Technology general circulation model (MITgcm), (ii) an adjoint-method-based estimate of ocean circulation from the Estimating the Circulation and Climate of the Ocean, Phase II (ECCO2) project, (iii) the MIT ecosystem model "Darwin", and (iv) a marine carbon chemistry model. Air-sea gas exchange coefficients and initial conditions of dissolved inorganic carbon, alkalinity, and oxygen are adjusted using a Green's Functions approach in order to optimize modeled air-sea CO2 fluxes. Data constraints include observations of carbon dioxide partial pressure (pCO2) for 2009-2010, global air-sea CO2 flux estimates, and the seasonal cycle of the Takahashi et al. (2009) Atlas. The model sensitivity experiments (or Green's Functions) include simulations that start from different initial conditions as well as experiments that perturb air-sea gas exchange parameters and the ratio of particulate inorganic to organic carbon. The Green's Functions approach yields a linear combination of these sensitivity experiments that minimizes model-data differences. The resulting initial conditions and gas exchange coefficients are then used to integrate the ECCO2-Darwin model forward. Despite the small number (six) of control parameters, the adjusted simulation is significantly closer to the data constraints (37% cost function reduction, i.e., reduction in the model-data difference, relative to the baseline simulation) and to independent observations (e.g., alkalinity). The adjusted air-sea gas
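The Green's Functions step described above reduces to a linear least-squares problem: find the combination of sensitivity runs that best explains the model-data misfit. A toy version with synthetic data (the matrix sizes and weights are illustrative, not the CMS configuration):

```python
# Green's functions calibration sketch: solve for weights on a small set
# of sensitivity experiments so that the weighted sum of their responses
# best matches the observed model-data misfit (ordinary least squares).
import numpy as np

rng = np.random.default_rng(1)
n_obs, n_runs = 50, 6
# Each column is the response of the observed quantity to one
# perturbation experiment (e.g. a gas-exchange parameter change).
G = rng.normal(size=(n_obs, n_runs))
true_w = np.array([0.5, -0.2, 0.1, 0.0, 0.3, -0.1])
misfit = G @ true_w + rng.normal(scale=0.01, size=n_obs)  # noisy "data"

# Linear combination minimizing the model-data difference.
w, *_ = np.linalg.lstsq(G, misfit, rcond=None)
print(np.allclose(w, true_w, atol=0.05))  # recovers the generating weights
```

The recovered weights play the role of the adjusted initial conditions and gas-exchange coefficients used to re-integrate the forward model.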
The Optimal Solution of the Model with Physical and Human Capital Adjustment Costs
RAO Lan-lan; CAI Dong-han
2004-01-01
We prove that the model with physical and human capital adjustment costs has an optimal solution when the production function exhibits increasing returns, and that the structure of the vector fields of the model changes substantially when the production function turns from decreasing to increasing returns. It is also shown that the economy improves when the coefficients of the adjustment costs become small.
Study on Posture Adjusting System of Spacecraft Based on Stewart Mechanism
Gao, Feng; Feng, Wei; Dai, Wei-Bing; Yi, Wang-Min; Liu, Guang-Tong; Zheng, Sheng-Yu
In this paper, the design principles of posture-adjusting parallel mechanisms are introduced, covering the mechanical subsystem, the control subsystem and the software subsystem. According to these design principles, key technologies for posture-adjusting parallel mechanism systems are analyzed. Finally, design specifications for such systems are proposed based on the requirements of spacecraft integration, applicable to cabin docking, solar array panel docking and camera docking.
Elizur, Y; Ziv, M
2001-01-01
While heterosexist family undermining has been demonstrated to be a developmental risk factor in the life of persons with same-gender orientation, the issue of protective family factors is both controversial and relatively neglected. In this study of Israeli gay males (N = 114), we focused on the interrelations of family support, family acceptance and family knowledge of gay orientation, and gay male identity formation, and their effects on mental health and self-esteem. A path model was proposed based on the hypotheses that family support, family acceptance, family knowledge, and gay identity formation have an impact on psychological adjustment, and that family support has an effect on gay identity formation that is mediated by family acceptance. The assessment of gay identity formation was based on an established stage model that was streamlined for cross-cultural practice by defining three basic processes of same-gender identity formation: self-definition, self-acceptance, and disclosure (Elizur & Mintzer, 2001). The testing of our conceptual path model demonstrated an excellent fit with the data. An alternative model that hypothesized effects of gay male identity on family acceptance and family knowledge did not fit the data. Interpreting these results, we propose that the main effect of family support/acceptance on gay identity is related to the process of disclosure, and that both general family support and family acceptance of same-gender orientation play a significant role in the psychological adjustment of gay men. PMID:11444052
Demography-adjusted tests of neutrality based on genome-wide SNP data
Rafajlović, Marina
2014-08-01
Tests of the neutral evolution hypothesis are usually built on the standard model which assumes that mutations are neutral and the population size remains constant over time. However, it is unclear how such tests are affected if the last assumption is dropped. Here, we extend the unifying framework for tests based on the site frequency spectrum, introduced by Achaz and Ferretti, to populations of varying size. Key ingredients are the first two moments of the site frequency spectrum. We show how these moments can be computed analytically if a population has experienced two instantaneous size changes in the past. We apply our method to data from ten human populations gathered in the 1000 genomes project, estimate their demographies and define demography-adjusted versions of Tajima's D, Fay & Wu's H, and Zeng's E. Our results show that demography-adjusted test statistics facilitate the direct comparison between populations and that most of the differences among populations seen in the original unadjusted tests can be explained by their underlying demographies. Upon carrying out whole-genome screens for deviations from neutrality, we identify candidate regions of recent positive selection. We provide track files with values of the adjusted and unadjusted tests for upload to the UCSC genome browser. © 2014 Elsevier Inc.
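The baseline (non-demography-adjusted) statistics are computed directly from the site frequency spectrum. A sketch of the standard Tajima's D under the usual constant-size formulas, the quantity the authors then adjust for varying population size:

```python
# Tajima's D from an unfolded site frequency spectrum (SFS) for n
# sampled sequences, using the standard constant-size variance formulas.
# xi[i-1] = number of sites where the derived allele appears i times.

def tajimas_d(xi, n):
    S = sum(xi)                                     # segregating sites
    a1 = sum(1.0 / i for i in range(1, n))
    a2 = sum(1.0 / i**2 for i in range(1, n))
    # Pairwise diversity (pi) and Watterson's estimator.
    pi = sum(c * i * (n - i) for i, c in enumerate(xi, 1)) / (n * (n - 1) / 2)
    theta_w = S / a1
    # Tajima (1989) variance constants.
    b1 = (n + 1) / (3.0 * (n - 1))
    b2 = 2.0 * (n**2 + n + 3) / (9.0 * n * (n - 1))
    c1 = b1 - 1.0 / a1
    c2 = b2 - (n + 2) / (a1 * n) + a2 / a1**2
    var = (c1 / a1) * S + (c2 / (a1**2 + a2)) * S * (S - 1)
    return (pi - theta_w) / var**0.5

# Under neutrality with constant size, E[xi_i] is proportional to 1/i,
# so a 1/i-shaped SFS should give D close to zero.
n = 10
xi = [round(1000 / i) for i in range(1, n)]
print(abs(tajimas_d(xi, n)) < 0.1)
```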
Bundle Block Adjustment of Airborne Three-Line Array Imagery Based on Rotation Angles
Yongjun Zhang
2014-05-01
Amid rapid developments in electronic instruments and remote sensing technologies, airborne three-line array sensors and their applications are being widely promoted, and plentiful research related to data processing and high-precision geo-referencing technologies is under way. The exterior orientation parameters (EOPs), which are measured by the integrated positioning and orientation system (POS) of airborne three-line sensors, however, have inevitable systematic errors, so the precision of direct geo-referencing is not sufficient for surveying and mapping applications. Consequently, a few ground control points are necessary to refine the exterior orientation parameters, and this paper discusses bundle block adjustment models based on systematic error compensation and the orientation image, considering the principle of the image sensor and the characteristics of the integrated POS. Unlike the models available in the literature, which mainly use a quaternion to represent the rotation matrix of exterior orientation, three rotation angles are used directly in order to effectively model and eliminate the systematic errors of the POS observations. Very good experimental results have been achieved with several real datasets, verifying the correctness and effectiveness of the proposed adjustment models.
宋金波; 宋丹荣; 富怡雯; 戴大双
2012-01-01
Criteria for concession period adjustment under four conditions (shortening, extending, no adjustment and invalid adjustment) are proposed in order to make the government and the project company share risk in infrastructure BOT projects. If the NPVR (net present value ratio) exceeds the upper limit, a decision model for shortening the concession period is built with the target of maximizing social welfare. If the NPVR is below the lower limit, a decision model for extending the concession period is built with the target of maximizing the project company's benefit. Monte Carlo simulation is applied to a case study for solution. The cumulative probability of realizing the expected return is calculated and compared under different discount rates, which proves the effectiveness of sharing risk by means of concession period adjustment when a single price-adjustment method is unsuitable.
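The Monte Carlo step can be sketched as follows; the cash-flow figures, discount rate and revenue distribution below are purely illustrative, not taken from the case study:

```python
# Monte Carlo check of a BOT concession length: estimate the cumulative
# probability that the project NPV meets a target return when annual
# revenues are random. All figures are hypothetical.
import random

def prob_meets_target(invest, years, rate, rev_mean, rev_sd,
                      target_npv=0.0, trials=20000, seed=42):
    random.seed(seed)
    hits = 0
    for _ in range(trials):
        npv = -invest
        for t in range(1, years + 1):
            rev = random.gauss(rev_mean, rev_sd)   # random annual revenue
            npv += rev / (1 + rate) ** t           # discount to present
        if npv >= target_npv:
            hits += 1
    return hits / trials

# A longer concession period raises the probability of reaching the
# expected return, which is the lever the decision models adjust.
p20 = prob_meets_target(100.0, 20, 0.08, 12.0, 3.0)
p25 = prob_meets_target(100.0, 25, 0.08, 12.0, 3.0)
print(p20 < p25)
```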
Azzam, Azzeddine M.; Turner, Michael S.
1991-01-01
This paper uses the Nerlovian partial adjustment model to test the hypothesis that the rate of a cooperative's adjustment to a desired financial position is partially determined by its management practices. The results indicate that management practices that are board responsibilities are not contributing to the speed of adjustment in reaching the desired financial performance, which is the responsibility of the board of directors. But management, when independently pursuing management's resp...
Cummings, E. Mark; Merrilees, Christine E.; Schermerhorn, Alice C.; Goeke-Morey, Marcie C.; Shirlow, Peter; Cairns, Ed
2010-01-01
Relations between political violence and child adjustment are matters of international concern. Past research demonstrates the significance of community, family and child psychological processes in child adjustment, supporting study of inter-relations between multiple social ecological factors and child adjustment in contexts of political violence. Testing a social ecological model, 300 mothers and their children (M= 12.28 years, SD = 1.77) from Catholic and Protestant working class neighborh...
Rank-Defect Adjustment Model for Survey-Line Systematic Errors in Marine Survey Net
Anonymous
2002-01-01
In this paper, the structure of systematic and random errors in a marine survey net is discussed in detail, and the adjustment method for observations of a marine survey net is studied, in which the rank-defect characteristic is identified for the first time. On the basis of the survey-line systematic error model, the formulae of the rank-defect adjustment model are deduced according to modern adjustment theory. An example calculation with real observed data is carried out to demonstrate the efficiency of this adjustment model. Moreover, it is proved that the semi-systematic error correction method currently used in marine gravimetry in China is a special case of the adjustment model presented in this paper.
Structural Adjustment Policy Experiments: The Use of Philippine CGE Models
Cororaton, Caesar B.
1994-01-01
This paper reviews the general structure of the following computable general equilibrium (CGE) models: the APEX model, Habito's second version of the PhilCGE model, Cororaton's CGE model and Bautista's first CGE model. These models are chosen as they represent the range of recently constructed CGE models of the Philippine economy. They also represent two schools of thought in CGE modeling: the well-defined neoclassical, Walrasian, general equilibrium school where the market-clearing variable...
RGB-D SLAM Based on Extended Bundle Adjustment with 2D and 3D Information.
Di, Kaichang; Zhao, Qiang; Wan, Wenhui; Wang, Yexin; Gao, Yunjun
2016-01-01
In the study of SLAM problem using an RGB-D camera, depth information and visual information as two types of primary measurement data are rarely tightly coupled during refinement of camera pose estimation. In this paper, a new method of RGB-D camera SLAM is proposed based on extended bundle adjustment with integrated 2D and 3D information on the basis of a new projection model. First, the geometric relationship between the image plane coordinates and the depth values is constructed through RGB-D camera calibration. Then, 2D and 3D feature points are automatically extracted and matched between consecutive frames to build a continuous image network. Finally, extended bundle adjustment based on the new projection model, which takes both image and depth measurements into consideration, is applied to the image network for high-precision pose estimation. Field experiments show that the proposed method has a notably better performance than the traditional method, and the experimental results demonstrate the effectiveness of the proposed method in improving localization accuracy. PMID:27529256
Fission-product cross section evaluation, integral tests and adjustment based on integral data
Recent activities of the Fission-Product Nuclear Data Working Group in JNDC are briefly reviewed. This review consists of the following three parts. 1. The JENDL-2 fission-product data file was recently completed (Ref. 1); it contains neutron cross sections for 100 nuclides from Kr to Tb. The evaluation was made using the latest data on capture cross sections and resonance parameters. The optical model parameters and level density parameters were re-evaluated. The results of the previous integral tests using the STEK sample reactivity and CFRMF sample activation data were also reflected in the evaluations. Details are reported in Refs. (2-4). 2. The integral test of the JENDL-2 fission-product cross sections is now in progress using the EBR-II sample irradiation data and the STEK and CFRMF data. The 70-group constants were generated by the MINX code with self-shielding factor tables. The values of the normal and adjoint fluxes and their uncertainties necessary for the 70-group evaluation were obtained by spline-fitting interpolation using the values of Ref. (14). 3. The adjustment of evaluated cross sections based on the integral data is also in progress using the Bayesian least-squares method. The data adjustment is applied especially to (1) nuclides for which only integral data are available (e.g., Xe-131, 132, 134, Pm-147, Eu-152, 154) and (2) those for which the differential and integral data are mutually inconsistent (e.g., Tc-99, Ag-109, Eu-151, 153). The cross-section covariances are generated by the "strength function model", taking into account the statistical model uncertainty (Ref. 5). The uncertainties of the neutron spectra and adjoint spectra were also taken into account as "method uncertainties". Interim results of the integral test and adjustment are presented and discussed. The near-future scope of the work and the plan for JENDL-3 are briefly described. (author)
Impacts of parameters adjustment of relativistic mean field model on neutron star properties
An analysis of the effects of parameter adjustment in the isovector as well as the isoscalar sector of the effective-field-based relativistic mean field (E-RMF) model on symmetric nuclear matter and neutron-rich matter properties has been performed. The impacts of the adjustment on slowly rotating neutron stars are systematically investigated. It is found that the mass-radius relation obtained from the adjusted parameter set G2** is compatible not only with the neutron star masses of 4U 0614+09 and 4U 1636-536, but also with those from the thermal radiation measurement of RX J1856 and with the radius range of the canonical neutron star of X7 in 47 Tuc. It is also found that the moment of inertia of PSR J0737-3039A and the strain amplitude of the gravitational wave in the Earth's vicinity from PSR J0437-4715, as predicted by the E-RMF parameter sets used, are in reasonable agreement with the constraints on these observations extracted from isospin diffusion data. (author)
霍艳芳; 付叶; 杨立向
2013-01-01
In view of the problems hindering the widespread adoption of logistics liability insurance in China, such as high premiums and an unscientific calculation basis, we propose a solution at the microscopic level: introducing the NCD (no-claim discount) dynamic charging model into the adjustment of logistics liability insurance premiums. In light of the characteristics of the third-party logistics industry, and comprehensively considering the influence of the number of claims, the amounts claimed, and the total coverage, we rearrange the transition rules and improve the original NCD model. Finally, a case analysis demonstrates that the improved NCD model makes the charging basis more reasonable and yields premiums that better reflect the risk level of third-party logistics enterprises.
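The core of any NCD scheme is a set of premium classes and a rule for moving between them based on claims. A minimal sketch of that mechanism follows; the discount levels and the "one step down per claim-free year, two steps up per claim" rule are purely illustrative, not the calibration developed in the paper.

```python
# Hypothetical NCD (no-claim discount) scheme: premium multipliers per
# class, and a transition rule driven by the number of claims in a period.
LEVELS = [0.6, 0.8, 1.0, 1.25, 1.5]  # best (cheapest) to worst class

def next_level(level: int, claims: int) -> int:
    """Move one class down after a claim-free period, two classes up per
    claim otherwise; clamp to the valid class range."""
    if claims == 0:
        level -= 1
    else:
        level += 2 * claims
    return max(0, min(level, len(LEVELS) - 1))

def premium(base: float, level: int) -> float:
    """Period premium = base premium times the class multiplier."""
    return base * LEVELS[level]

# Example: an insured starting in the neutral class (level 2) with a
# claim-free year moves to level 1 and pays 80% of the base premium.
level = next_level(2, claims=0)
p = premium(1000.0, level)
```

The paper's improvement amounts to making the transition rule depend not only on the claim count but also on claim amounts and total coverage; that would replace the `claims == 0` test with a richer state.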
A Dynamic Flexible Partial-Adjustment Model of International Diffusion of the Internet
Lee, Minkyu; Heshmati, Almas
2006-01-01
The paper introduces a dynamic, flexible partial-adjustment model and uses it to analyze the diffusion of Internet connectivity. It specifies and estimates desired levels of Internet diffusion and the speed at which countries achieve the target levels. The target levels and speed of adjustment are both country and time specific. Factors affecting Internet diffusion across countries are identified, and, using nonlinear least squares, the Gompertz growth model is generalized and estimated using...
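Fitting a Gompertz diffusion curve by nonlinear least squares, as the abstract describes, can be sketched as follows. The functional form y(t) = K·exp(−b·exp(−c·t)) and the synthetic "diffusion" data are assumptions for illustration; the paper generalizes this form with country- and time-specific parameters.

```python
import numpy as np
from scipy.optimize import curve_fit

def gompertz(t, K, b, c):
    """Gompertz diffusion curve: saturation K, displacement b, growth rate c."""
    return K * np.exp(-b * np.exp(-c * t))

# Synthetic diffusion data (illustrative only, not the paper's panel data)
t = np.arange(0, 20, dtype=float)
rng = np.random.default_rng(0)
y = gompertz(t, 80.0, 4.0, 0.3) + rng.normal(0, 0.5, t.size)

# Nonlinear least squares estimation of (K, b, c)
params, _ = curve_fit(gompertz, t, y, p0=[70.0, 3.0, 0.2])
K_hat, b_hat, c_hat = params
```

In the paper's partial-adjustment setting, K would itself be modelled as a function of country characteristics (the "desired level"), with c governing the speed of adjustment toward it.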
Braun, Danielle
2013-01-01
The first part of this dissertation focuses on methods to adjust for measurement error in risk prediction models. In chapter one, we propose a nonparametric adjustment for measurement error in time to event data. Measurement error in time to event data used as a predictor will lead to inaccurate predictions. This arises in the context of self-reported family history, a time to event covariate often measured with error, used in Mendelian risk prediction models. Using validation data, we propos...
Dynamic Air-Route Adjustments - Model, Algorithm, and Sensitivity Analysis
GENG Rui; CHENG Peng; CUI Deguang
2009-01-01
Dynamic airspace management (DAM) is an important approach to extend limited airspace resources by using them more efficiently and flexibly. This paper analyzes the use of the dynamic air-route adjustment (DARA) method as a core procedure in DAM systems. The DARA method makes dynamic decisions on when and how to adjust the current air-route network with minimum cost. This model differs from the air traffic flow management (ATFM) problem because it considers dynamic opening and closing of air-route segments, instead of only arranging flights on a given air traffic network, and it takes into account several new constraints, such as the shortest opening time constraint. The DARA problem is solved using a two-step heuristic algorithm. The sensitivities of important coefficients in the model are analyzed to determine proper values for them. Computational results based on practical data from the Beijing ATC region show that the two-step heuristic algorithm gives results as good as CPLEX in less or equal time in most cases.
Simulation of γ spectrum-shifting based on the parameter adjustment of Gaussian function space
Based on the statistical characteristics of the energy spectrum and the features of spectrum shifting in spectrometry, the parameter adjustment method of Gaussian function space was applied to the simulation of spectrum shifting. The transient characteristics of the energy spectrum are described by the Gaussian function space, which is then transformed by the parameter adjustment method; in this way, the spectrum shifting observed in energy spectrum measurements is simulated. An applied example shows that the parameters can be adjusted flexibly by this method to meet various requirements in simulating energy spectrum shifting. The method is a parameterized simulation approach that performs well in practical applications. (authors)
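One plausible reading of the method is: represent the spectrum as a superposition of Gaussian peaks, then shift it by transforming the peak parameters. The linear gain/offset drift of the centroids below is an assumption for illustration; the actual transform used in the paper is not specified here.

```python
import numpy as np

def gaussian(x, amp, mu, sigma):
    """Single Gaussian peak."""
    return amp * np.exp(-0.5 * ((x - mu) / sigma) ** 2)

def spectrum(x, peaks):
    """Energy spectrum as a superposition of Gaussian peaks (amp, mu, sigma)."""
    return sum(gaussian(x, a, m, s) for a, m, s in peaks)

def shift_spectrum(peaks, gain_drift=1.0, offset_drift=0.0):
    """Simulate spectrum shifting by transforming each peak centroid:
    mu -> gain_drift * mu + offset_drift (hypothetical linear drift model)."""
    return [(a, gain_drift * m + offset_drift, s) for a, m, s in peaks]

x = np.linspace(0, 1024, 1025)
peaks = [(100.0, 300.0, 10.0), (60.0, 662.0, 15.0)]   # illustrative peaks
shifted = shift_spectrum(peaks, gain_drift=1.02, offset_drift=-5.0)
y_shifted = spectrum(x, shifted)
```

Because the spectrum lives entirely in the peak parameters, any drift scenario (gain, offset, resolution broadening) reduces to a parameter adjustment, which is the flexibility the abstract claims.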
LQR self-adjusting based control for the planar double inverted pendulum
Zhang, Jiao-long; Zhang, Wei
Firstly, the mathematical model of the planar double inverted pendulum was established by means of the analytical dynamics method. Based on linear quadratic optimal theory, an LQR self-adjusting controller with an optimizing factor was presented. The output of the LQR controller is refined through the optimizing factor, which is a function of the states of the planar pendulum; on account of that, the control action exerted on the pendulum is improved. Simulation results together with a pilot-scale experiment verify the efficacy of the suggested scheme. The results show that the designed controller is simple and performs well in real time in the laboratory; moreover, it ensures fast response, good stability and robustness under different operating conditions.
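The standard LQR step underlying such a controller — solve the continuous-time algebraic Riccati equation, then form the gain K = R⁻¹BᵀP — can be sketched as follows. The linearized system here is a generic single inverted pendulum about its upright equilibrium with illustrative numbers, not the paper's planar double pendulum model.

```python
import numpy as np
from scipy.linalg import solve_continuous_are

def lqr(A, B, Q, R):
    """Continuous-time LQR gain: solve the Riccati equation for P,
    then K = R^-1 B^T P."""
    P = solve_continuous_are(A, B, Q, R)
    return np.linalg.solve(R, B.T @ P)

# Illustrative linearized pendulum (state: [theta, theta_dot] about upright)
A = np.array([[0.0, 1.0], [9.81, 0.0]])   # open-loop unstable
B = np.array([[0.0], [1.0]])
Q = np.diag([10.0, 1.0])                  # state weighting
R = np.array([[1.0]])                     # control weighting
K = lqr(A, B, Q, R)

# Closed-loop matrix A - B K should be stable (negative real-part eigenvalues)
eig = np.linalg.eigvals(A - B @ K)
```

The paper's "self-adjusting" refinement would scale this fixed gain by an optimizing factor computed from the current states, rather than recomputing the Riccati solution online.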
A Robust PCT Method Based on the Complex Least Squares Adjustment Method
Haiqiang, F.; Jianjun, Z.; Changcheng, W.; Qinghua, X.; Rong, Z.
2013-07-01
The Polarization Coherence Tomography (PCT) method performs well in deriving vegetation vertical structure. However, errors caused by temporal decorrelation, vegetation height and ground phase always propagate into the data analysis and contaminate the results. To overcome this disadvantage, we exploit the Complex Least Squares Adjustment Method to compute vegetation height and ground phase based on the Random Volume over Ground with Volume Temporal Decorrelation (RVoG + VTD) model. By fusing different polarimetric InSAR data, we can use more observations to obtain more robust estimates of temporal decorrelation and vegetation height, and then introduce them into PCT to acquire a more accurate vegetation vertical structure. Finally, the new approach is validated on E-SAR data of Oberpfaffenhofen, Germany. The results demonstrate that the robust method can greatly improve the estimation of vegetation vertical structure.
Experimental Tuned Mass Damper Based on Eddy Currents Damping Effect and Adjustable Stiffness
LO FEUDO, Stefania; Allani, Anissa; Cumunel, Gwendal; Argoul, Pierre; Bruno, Domenico
2015-01-01
International audience — An experimental Tuned Mass Damper (TMD) is proposed in order to damp vibrations induced by external excitations. This TMD is based on the eddy currents damping effect and is designed in such a way as to allow manual adjustment of its own stiffness and inherent damping. The TMD's modal parameter estimation is therefore carried out by applying the Continuous Wavelet Transform to the signals obtained experimentally. The influence of the manual adjustment of the T...
Sofia Nikolaidou
2015-09-01
Full Text Available The presence of municipal sport organizations indicates the priority given by the local authority to the well-being of citizens. On the other hand, it constitutes the basis upon which sports are built at the national level. Each of these organizations has an organizational structure. The organizational structure is a system of registration of employment and the relations that govern it; its basic dimensions are concentration, complexity and formalization. The purpose of this study is to confirm or contradict the model of organizational structure in municipal sports organizations proposed in the literature. The Sport Commission Organization Structure Survey questionnaire was used to conduct it. The participants were 100 Greek municipal sport organizations. Factor analysis detected four factors: departmentalization, concentration, specialization and formalization. The results partially confirmed the proposed model. Cronbach's α was used to calculate reliability; the factors ranged from .40 to .70. Confirmatory factor analysis was used to determine whether the new model fits the data; based on this analysis, the fit was only marginally acceptable. Finally, although there was a marginal confirmation of the new model, it appears that the questions of this survey require further improvement.
Applicability of the cross section adjustment method based on the random sampling (RS) technique to burnup calculations is investigated. The cross section adjustment method is a technique for reducing prediction uncertainties in reactor core analysis and has been widely applied to fast reactors. As a practical method, the cross section adjustment method based on the RS technique is newly developed for application to light water reactors (LWRs). In this method, covariances among cross sections and neutronics parameters are statistically estimated by the RS technique, and cross sections are adjusted without calculating the sensitivity coefficients of neutronics parameters, which are required in the conventional cross section adjustment method. Since sensitivity coefficients are not used, the RS-based method is expected to be practically applicable to LWR core analysis, in which considerable computational cost is required for the estimation of sensitivity coefficients. Through a simple pin-cell burnup calculation, the applicability of the present method to burnup calculations is investigated. The calculation results indicate that the present method can adequately adjust cross sections, including burnup characteristics. (author)
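The key idea — estimating the cross-section/response covariances from a random ensemble instead of sensitivity coefficients, then applying a Kalman-type adjustment — can be sketched on a toy problem. The forward model, numbers, and the specific update formula below are assumptions standing in for the lattice/burnup calculation and for the paper's exact formulation.

```python
import numpy as np

rng = np.random.default_rng(42)

def forward(sigma):
    """Toy forward model standing in for a core/burnup calculation:
    maps 3 'cross sections' to 2 'neutronics parameters'."""
    return np.array([sigma[0] * sigma[1], sigma[1] + 0.5 * sigma[2] ** 2])

sigma0 = np.array([1.0, 2.0, 0.5])          # prior (evaluated) cross sections
M = np.diag([0.05, 0.10, 0.02]) ** 2        # prior cross-section covariance
Ve = np.diag([0.01, 0.01]) ** 2             # measurement covariance
E = np.array([2.1, 2.15])                   # 'measured' integral parameters

# Random sampling: propagate an ensemble instead of sensitivity coefficients
N = 2000
samples = rng.multivariate_normal(sigma0, M, size=N)
responses = np.array([forward(s) for s in samples])

ds = samples - samples.mean(axis=0)
dr = responses - responses.mean(axis=0)
C_sr = ds.T @ dr / (N - 1)                  # cov(cross sections, responses)
C_rr = dr.T @ dr / (N - 1)                  # cov(responses, responses)

# Kalman-type adjustment: no explicit sensitivity matrix appears anywhere
gain = C_sr @ np.linalg.inv(C_rr + Ve)
sigma_adj = sigma0 + gain @ (E - forward(sigma0))
```

In the conventional method, C_sr would be built as M·Sᵀ from a sensitivity matrix S; here the ensemble statistics replace S entirely, which is what makes the RS variant attractive when sensitivities are expensive.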
Tian Wenbo; Xu Xuan; Yang Zhengming; Xiao Qianhua; Zhang Yapu
2013-01-01
Aimed at ultra-low permeability reservoirs, the recovery performance of an inverted nine-spot equilateral well pattern is studied through large-scale natural sandstone flat model experiments. Two adjustment schemes were proposed based on the original well pattern. This paper puts forward the concept of pressure sweep efficiency for evaluating the driving efficiency. Pressure gradient fields under different drawdown pressures were measured. The seepage area of the model was divided into immobilized area...
Adjustment and validation of a simulation tool for CSP plants based on parabolic trough technology
García-Barberena, Javier; Ubani, Nora
2016-05-01
The present work describes the validation process carried out for a simulation tool especially designed for the energy yield assessment of concentrating solar power (CSP) plants based on parabolic trough (PT) technology. The validation has been carried out by comparing the model estimations with real data collected from a commercial CSP plant. In order to adjust the model parameters used in the simulation, 12 different days were selected from one year of operational data measured at the real plant. The 12 days were simulated and the estimations compared with the measured data, focusing on the most important variables from the simulation point of view: temperatures, pressures and mass flow of the solar field, gross power, parasitic power, and net power delivered by the plant. Based on these 12 days, the key parameters of the model were properly fixed and the simulation of a whole year was performed. The results obtained for a complete year simulation showed very good agreement for the gross and net total electricity production, with a bias of 1.47% and 2.02%, respectively. The results proved that the simulation software describes the real operation of the power plant with great accuracy and correctly reproduces its transient behavior.
A model of the western Laurentide Ice Sheet, using observations of glacial isostatic adjustment
Gowan, Evan J.; Tregoning, Paul; Purcell, Anthony; Montillet, Jean-Philippe; McClusky, Simon
2016-05-01
We present the results of a new numerical model of the late glacial western Laurentide Ice Sheet, constrained by observations of glacial isostatic adjustment (GIA), including relative sea level indicators, uplift rates from permanent GPS stations, contemporary differential lake level change, and postglacial tilt of glacial lake level indicators. The latter two datasets have been underutilized in previous GIA based ice sheet reconstructions. The ice sheet model, called NAICE, is constructed using simple ice physics on the basis of changing margin location and basal shear stress conditions in order to produce ice volumes required to match GIA. The model matches the majority of the observations, while maintaining a relatively realistic ice sheet geometry. Our model has a peak volume at 18,000 yr BP, with a dome located just east of Great Slave Lake with peak thickness of 4000 m, and surface elevation of 3500 m. The modelled ice volume loss between 16,000 and 14,000 yr BP amounts to about 7.5 m of sea level equivalent, which is consistent with the hypothesis that a large portion of Meltwater Pulse 1A was sourced from this part of the ice sheet. The southern part of the ice sheet was thin and had a low elevation profile. This model provides an accurate representation of ice thickness and paleo-topography, and can be used to assess present day uplift and infer past climate.
Moreno y Moreno, A. [Departamento de Apoyo en Ciencias Aplicadas, Benemerita Universidad Autonoma de Puebla, 4 Sur 104, Centro Historico, 72000 Puebla (Mexico); Moreno B, A. [Facultad de Ciencias Quimicas, UNAM, 04510 Mexico D.F. (Mexico)
2002-07-01
This model adjusts experimental thermoluminescence results according to the equation I(T) = Σ_i a_i exp(-(T - c_i)^2 / b_i), where a_i, b_i and c_i are the parameters of the i-th peak, each adjusted to a Gaussian curve. The curve adjustment can be operated manually or analytically using the macro function and the solver.xla add-in previously installed in the computational system. This work presents: 1. Experimental data from a LiF curve obtained from the Physics Institute of UNAM, for which the data adjustment model is operated as a macro. 2. A LiF curve of four peaks obtained from Harshaw information, simulated in Microsoft Excel and discussed in previous works, as a reference not using the macro. (Author)
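The same Gaussian deconvolution that the record performs with Excel's Solver can be sketched in Python with nonlinear least squares. The sum-of-Gaussians form I(T) = Σᵢ aᵢ·exp(−(T − cᵢ)²/bᵢ) and the synthetic two-peak "LiF-like" data are assumptions for illustration.

```python
import numpy as np
from scipy.optimize import curve_fit

def glow_curve(T, *p):
    """Sum of Gaussian peaks; p = (a1, b1, c1, a2, b2, c2, ...),
    I(T) = sum_i a_i * exp(-(T - c_i)**2 / b_i)."""
    I = np.zeros_like(T)
    for a, b, c in zip(p[0::3], p[1::3], p[2::3]):
        I = I + a * np.exp(-((T - c) ** 2) / b)
    return I

# Synthetic two-peak glow curve (illustrative values, not measured LiF data)
T = np.linspace(300, 600, 301)
true_p = (50.0, 400.0, 420.0, 80.0, 600.0, 500.0)
rng = np.random.default_rng(1)
y = glow_curve(T, *true_p) + rng.normal(0, 0.5, T.size)

# Least-squares adjustment of all peak parameters simultaneously
fit_p, _ = curve_fit(glow_curve, T, y, p0=(40, 300, 410, 70, 500, 510))
```

As with the Solver-based approach, the fit is sensitive to starting values: each initial (a, b, c) triple should be placed near one visible peak.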
Designing a model to improve first year student adjustment to university
Nasrin Nikfal Azar; Hamideh Reshadatjoo
2014-01-01
The increase in the number of universities in Iran over the last decade increases the need for higher education institutions to manage their enrollment more effectively. The purpose of this study is to design a model to improve first-year university student adjustment by examining the effects of academic self-efficacy, academic motivation, satisfaction, high school GPA and demographic variables on students' adjustment to university. The study selects a sample of 357 students out of 4585 b...
An Adjusted profile likelihood for non-stationary panel data models with fixed effects
Dhaene, Geert; Jochmans, Koen
2011-01-01
We calculate the bias of the profile score for the autoregressive parameters ρ and covariate slopes in the linear model for N × T panel data with p lags of the dependent variable, exogenous covariates, fixed effects, and unrestricted initial observations. The bias is a vector of multivariate polynomials in ρ with coefficients that depend only on T. We center the profile score and, on integration, obtain an adjusted profile likelihood. When p = 1, the adjusted profile likelihood coincides wi...
Pierre Chaussé
2016-04-01
Full Text Available Covariate adjustment methods are frequently used when baseline covariate information is available for randomized controlled trials. Using a simulation study, we compared the analysis of covariance (ANCOVA) with three nonparametric covariate adjustment methods with respect to point and interval estimation for the difference between means. The three alternative methods were based on important members of the generalized empirical likelihood (GEL) family, specifically on the empirical likelihood (EL) method, the exponential tilting (ET) method, and the continuous updated estimator (CUE) method. Two criteria were considered for the comparison of the four statistical methods: the root mean squared error and the empirical coverage of the nominal 95% confidence intervals for the difference between means. Based on the results of the simulation study, for sensitivity analysis purposes, we recommend the use of ANCOVA (with robust standard errors when heteroscedasticity is present) together with the CUE-based covariate adjustment method.
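The ANCOVA baseline against which the GEL methods are compared amounts to regressing the outcome on treatment plus the baseline covariate. A minimal simulated-trial sketch, with an assumed true treatment effect of 2.0:

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated randomized trial: outcome depends on a baseline covariate x
# and a true treatment effect of 2.0 (all values illustrative)
n = 500
x = rng.normal(0, 1, n)
treat = rng.integers(0, 2, n)
y = 1.0 + 2.0 * treat + 1.5 * x + rng.normal(0, 1, n)

# Unadjusted estimate: simple difference in group means
unadjusted = y[treat == 1].mean() - y[treat == 0].mean()

# ANCOVA: regress y on intercept, treatment indicator, baseline covariate
X = np.column_stack([np.ones(n), treat, x])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
adjusted = beta[1]   # covariate-adjusted treatment effect
```

Both estimators are unbiased under randomization; the point of covariate adjustment (and of the GEL alternatives studied in the paper) is the smaller variance of `adjusted`, since the covariate soaks up part of the outcome variability.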
A Model of Divorce Adjustment for Use in Family Service Agencies.
Faust, Ruth Griffith
1987-01-01
Presents a combined educationally and therapeutically oriented model of treatment to (1) control and lessen disruptive experiences associated with divorce; (2) enable individuals to improve their skill in coping with adjustment reactions to divorce; and (3) modify the pressures and response of single parenthood. Describes the model's four-session…
Community Influences on Adjustment in First Grade: An Examination of an Integrated Process Model
Caughy, Margaret O'Brien; Nettles, Saundra M.; O'Campo, Patricia J.
2007-01-01
We examined the impact of neighborhood characteristics both directly and indirectly as mediated by parent coaching and the parent/child affective relationship on behavioral and school adjustment in a sample of urban dwelling first graders. We used structural equations modeling to assess model fit and estimate direct, indirect, and total effects of…
Nettles, Saundra Murray; Caughy, Margaret O'Brien; O'Campo, Patricia J.
2008-01-01
Examining recent research on neighborhood influences on child development, this review focuses on social influences on school adjustment in the early elementary years. A model to guide community research and intervention is presented. The components of the model of integrated processes are neighborhoods and their effects on academic outcomes and…
Enhancing multiple-point geostatistical modeling: 1. Graph theory and pattern adjustment
Tahmasebi, Pejman; Sahimi, Muhammad
2016-03-01
In recent years, higher-order geostatistical methods have been used for modeling a wide variety of large-scale porous media, such as groundwater aquifers and oil reservoirs. Their popularity stems from their ability to account for qualitative data and the great flexibility that they offer for conditioning the models to hard (quantitative) data, which endows them with the capability of generating realistic realizations of porous formations with very complex channels, as well as features that act mainly as barriers to fluid flow. One group of such models consists of pattern-based methods that use a set of data points for generating stochastic realizations by which the large-scale structure and highly connected features are reproduced accurately. The cross correlation-based simulation (CCSIM) algorithm, proposed previously by the authors, is a member of this group that has been shown to be capable of simulating multimillion-cell models in a matter of a few CPU seconds. The method is, however, sensitive to the patterns' specifications, such as boundaries and the number of replicates. In this paper the original CCSIM algorithm is reconsidered and two significant improvements are proposed for accurately reproducing large-scale patterns of heterogeneity in porous media. First, an effective boundary-correction method based on graph theory is presented, by which one identifies the optimal cutting path/surface for removing the patchiness and discontinuities in the realization of a porous medium. Next, a new pattern adjustment method is proposed that automatically transfers the features in a pattern to one that seamlessly matches the surrounding patterns. The original CCSIM algorithm is then combined with the two methods and tested using various complex two- and three-dimensional examples. It should be emphasized, however, that the methods proposed here are applicable to other pattern-based geostatistical simulation methods.
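The optimal-cut idea — find the cheapest path through the error surface where two patterns overlap, and stitch along it — can be sketched with a simplified dynamic-programming seam (the classic minimum-error boundary cut from image quilting), which stands in for the paper's more general graph-theoretic formulation.

```python
import numpy as np

def min_error_cut(overlap_err):
    """Minimum-cost vertical cutting path through an overlap-error array
    (rows = depth into overlap, cols = overlap width), via dynamic
    programming with moves to the 3 neighbors in the row above."""
    cost = overlap_err.astype(float).copy()
    rows, cols = cost.shape
    for i in range(1, rows):
        for j in range(cols):
            lo, hi = max(0, j - 1), min(cols, j + 2)
            cost[i, j] += cost[i - 1, lo:hi].min()
    # Backtrack the cheapest path from the bottom row to the top
    path = [int(np.argmin(cost[-1]))]
    for i in range(rows - 2, -1, -1):
        j = path[-1]
        lo, hi = max(0, j - 1), min(cols, j + 2)
        path.append(lo + int(np.argmin(cost[i, lo:hi])))
    return path[::-1]   # column index of the cut in each row

# Toy overlap-error array: squared differences between two patterns
err = np.array([[3, 1, 4],
                [2, 0, 5],
                [3, 1, 2]])
seam = min_error_cut(err)
```

Everything left of the seam would be taken from one pattern and everything right of it from the other, so the discontinuity falls where the two patterns already agree best.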
Yang, Zhongwen; Hsu, Kuolin; Sorooshian, Soroosh; Xu, Xinyi; Braithwaite, Dan; Verbist, Koen M. J.
2016-04-01
Satellite-based precipitation estimates (SPEs) are promising alternative precipitation data for climatic and hydrological applications, especially for regions where ground-based observations are limited. However, existing satellite-based rainfall estimations are subject to systematic biases. This study aims to adjust the biases in the Precipitation Estimation from Remotely Sensed Information using Artificial Neural Networks-Cloud Classification System (PERSIANN-CCS) rainfall data over Chile, using gauge observations as reference. A novel bias adjustment framework, termed QM-GW, is proposed based on the nonparametric quantile mapping approach and a Gaussian weighting interpolation scheme. The PERSIANN-CCS precipitation estimates (daily, 0.04°×0.04°) over Chile are adjusted for the period of 2009-2014. The historical data (satellite and gauge) for 2009-2013 are used to calibrate the methodology; nonparametric cumulative distribution functions of satellite and gauge observations are estimated at every 1°×1° box region. One year (2014) of gauge data was used for validation. The results show that the biases of the PERSIANN-CCS precipitation data are effectively reduced. The spatial patterns of adjusted satellite rainfall show high consistency to the gauge observations, with reduced root-mean-square errors and mean biases. The systematic biases of the PERSIANN-CCS precipitation time series, at both monthly and daily scales, are removed. The extended validation also verifies that the proposed approach can be applied to adjust SPEs into the future, without further need for ground-based measurements. This study serves as a valuable reference for the bias adjustment of existing SPEs using gauge observations worldwide.
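The nonparametric quantile-mapping core of such a bias-adjustment framework can be sketched as follows: each new satellite value is mapped to the gauge value at the same empirical quantile of the calibration period. The Gaussian-weighting interpolation step of QM-GW is omitted, and the gamma-distributed rainfall and the 1.5x multiplicative bias are synthetic assumptions.

```python
import numpy as np

def quantile_map(sat, sat_ref, gauge_ref):
    """Nonparametric quantile mapping: replace each satellite value by the
    gauge value at the same empirical quantile of the calibration data."""
    sat_ref = np.sort(sat_ref)
    gauge_ref = np.sort(gauge_ref)
    # empirical CDF position of each new satellite value in the reference
    q = np.searchsorted(sat_ref, sat, side="right") / len(sat_ref)
    q = np.clip(q, 0.0, 1.0)
    return np.quantile(gauge_ref, q)

# Illustrative setup: satellite systematically overestimates by a factor 1.5
rng = np.random.default_rng(0)
gauge_cal = rng.gamma(2.0, 5.0, 5000)        # calibration gauge rainfall
sat_cal = 1.5 * gauge_cal                    # biased satellite, same period
sat_new = 1.5 * rng.gamma(2.0, 5.0, 1000)    # new biased satellite data

adjusted = quantile_map(sat_new, sat_cal, gauge_cal)
```

Because the mapping is built once from the calibration period, it can be applied to future satellite data without further gauge measurements, which is the property the extended validation in the study verifies.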
Jesús Crespo Cuaresma; Anna Orthofer
2010-01-01
Reliable medium-term forecasts are essential for forward-looking monetary policy decision-making. Traditionally, predictions of the exchange rate tend to be linked to the equilibrium concept implied by purchasing power parity (PPP) theory. In particular, the traditional benchmark for exchange rate models is based on a linear adjustment of the exchange rate to the level implied by PPP. In the presence of aggregation effects, transaction costs or uncertainty, however, economic theory predict...
Risk-based surveillance: Estimating the effect of unwarranted confounder adjustment
Willeberg, Preben; Nielsen, Liza Rosenbaum; Salman, Mo
2011-01-01
We estimated the effects of confounder adjustment as a part of the underlying quantitative risk assessments on the performance of a hypothetical example of a risk-based surveillance system, in which a single risk factor would be used to identify high risk sampling units for testing. The differenc...
LC Filter Design for Wide Band Gap Device Based Adjustable Speed Drives
Vadstrup, Casper; Wang, Xiongfei; Blaabjerg, Frede
This paper presents a simple design procedure for LC filters used in wide band gap device based adjustable speed drives. Wide band gap devices offer fast turn-on and turn-off times, thus producing high dV/dt at the motor terminals. The high dV/dt can be harmful to the motor windings and bearings...
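The basic sizing relation behind any such LC filter is its cutoff (resonant) frequency f_c = 1/(2π√(LC)), placed well below the switching frequency so that the high-dV/dt content is attenuated before the motor terminals. A minimal sketch with hypothetical numbers (50 kHz switching, cutoff at one tenth of it, L = 1 mH), not the paper's design values:

```python
import math

def lc_cutoff(L, C):
    """Resonant/cutoff frequency of an LC low-pass filter, in Hz."""
    return 1.0 / (2.0 * math.pi * math.sqrt(L * C))

def filter_C_for_cutoff(L, f_c):
    """Capacitance needed to place the cutoff at f_c for a given inductance."""
    return 1.0 / (L * (2.0 * math.pi * f_c) ** 2)

# Hypothetical drive: 50 kHz switching, cutoff placed at 5 kHz, L = 1 mH
L = 1e-3
C = filter_C_for_cutoff(L, 5e3)
```

A full design procedure would additionally check the filter's damping and the voltage drop across L at the fundamental frequency, which is where the trade-offs discussed in such papers arise.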
H. Lee
2012-01-01
Full Text Available State updating of distributed rainfall-runoff models via streamflow assimilation is subject to overfitting because the large dimensionality of the state space of the model may render the assimilation problem seriously under-determined. To examine the issue in the context of operational hydrology, we carry out a set of real-world experiments in which streamflow data is assimilated into the gridded Sacramento Soil Moisture Accounting (SAC-SMA) and kinematic-wave routing models of the US National Weather Service (NWS) Research Distributed Hydrologic Model (RDHM) with the variational data assimilation technique. Study basins include four basins in Oklahoma and five basins in Texas. To assess the sensitivity of data assimilation performance to dimensionality reduction in the control vector, we used nine different spatiotemporal adjustment scales, where state variables are adjusted in a lumped, semi-distributed, or distributed fashion and biases in precipitation and potential evaporation (PE) are adjusted hourly, 6-hourly, or kept time-invariant. For each adjustment scale, three different streamflow assimilation scenarios are explored, where streamflow observations at basin interior points, at the basin outlet, or at both interior points and the outlet are assimilated. The streamflow assimilation experiments with nine different basins show that the optimum spatiotemporal adjustment scale varies from one basin to another and may be different for streamflow analysis and prediction in all three streamflow assimilation scenarios. The most preferred adjustment scale for seven out of nine basins is found to be the distributed, hourly scale, despite the fact that several independent validation results at this adjustment scale indicated the occurrence of overfitting. Basins with highly correlated interior and outlet flows tend to be less sensitive to the adjustment scale and could benefit more from streamflow assimilation. In comparison to outlet flow assimilation
Understanding property market dynamics: insights from modelling the supply-side adjustment mechanism
Nanda Nanthakumaran; Craig Watkins; Allison Orr
2000-01-01
The volatility of commercial property markets in the United Kingdom has stimulated the development of explanatory models of 'price' determination. These models have tended to focus on the demand side as the driver of change. A corollary of this is that, despite the fact that construction lags are known to exacerbate cyclical fluctuations, the supply-side adjustment mechanism has been subject to relatively little research effort. In this paper the authors develop a new model of commercial prope...
Kiang, Lisa; Witkow, Melissa R; Thompson, Taylor L
2016-07-01
The model minority image is a common and pervasive stereotype that Asian American adolescents must navigate. Using multiwave data from 159 adolescents from Asian American backgrounds (mean age at initial recruitment = 15.03, SD = .92; 60 % female; 74 % US-born), the current study targeted unexplored aspects of the model minority experience in conjunction with more traditionally measured experiences of negative discrimination. When examining normative changes, perceptions of model minority stereotyping increased over the high school years while perceptions of discrimination decreased. Both experiences were not associated with each other, suggesting independent forms of social interactions. Model minority stereotyping generally promoted academic and socioemotional adjustment, whereas discrimination hindered outcomes. Moreover, in terms of academic adjustment, the model minority stereotype appears to protect against the detrimental effect of discrimination. Implications of the complex duality of adolescents' social interactions are discussed. PMID:26251100
A Threshold Model of Social Support, Adjustment, and Distress after Breast Cancer Treatment
Mallinckrodt, Brent; Armer, Jane M.; Heppner, P. Paul
2012-01-01
This study examined a threshold model that proposes that social support exhibits a curvilinear association with adjustment and distress, such that support in excess of a critical threshold level has decreasing incremental benefits. Women diagnosed with a first occurrence of breast cancer (N = 154) completed survey measures of perceived support…
Schmidt, P.; Lund, B; Näslund, J-O.; Fastook, J.
2014-01-01
In this study we compare a recent reconstruction of the Weichselian Ice Sheet as simulated by the University of Maine ice sheet model (UMISM) to two reconstructions commonly used in glacial isostatic adjustment (GIA) modelling: ICE-5G and ANU (Australian National University, also known as RSES). The UMISM reconstruction is carried out on a regional scale based on thermo-mechanical modelling, whereas ANU and ICE-5G are global models based on the sea level equation. The three ...
M. Omidalizarandi
2013-09-01
Full Text Available Sensor fusion combines data from different sensors and sources in order to build a more accurate model. In this research, different sensors (optical speed sensor, Bosch sensor, odometer, XSENS, Silicon and GPS receiver) have been utilized to obtain different kinds of datasets, implementing a multi-sensor system and comparing the accuracy of each sensor with the others. The scope of this research is to estimate the current position and orientation of the van; the van's position can also be estimated by integrating its velocity and direction over time. To make these components work together, an interface is needed that can bridge them in a data acquisition module. The interface in this research has been developed in the LabVIEW software environment. Data have been transferred to the PC via an A/D converter (LabJack) connected to the PC. In order to synchronize all the sensors, the calibration parameters of each sensor are determined in a preparatory step. Each sensor delivers results in a sensor-specific coordinate system with a different location on the object, different definitions of coordinate axes, and different dimensions and units. Different test scenarios (straight-line approach and circle approach) with different algorithms (Kalman filter, least squares adjustment) have been examined and the results of the different approaches are compared.
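A minimal sketch of the Kalman-filter fusion idea — predict the vehicle state from a motion model, then correct it with noisy GPS fixes — reduced to one dimension with a constant-velocity model. All noise values, the 2 m GPS standard deviation, and the true velocity are illustrative assumptions, not the study's sensor characteristics.

```python
import numpy as np

# 1-D constant-velocity Kalman filter: state [position, velocity],
# corrected each second by a noisy GPS position fix.
dt = 1.0
F = np.array([[1.0, dt], [0.0, 1.0]])    # state transition
H = np.array([[1.0, 0.0]])               # GPS measures position only
Q = np.diag([0.01, 0.1])                 # process noise (assumed)
R = np.array([[4.0]])                    # GPS noise variance (2 m std, assumed)

x = np.array([0.0, 0.0])                 # initial state estimate
P = np.eye(2) * 10.0                     # initial state covariance

rng = np.random.default_rng(0)
true_pos, true_vel = 0.0, 2.0            # van moving at 2 m/s
for _ in range(50):
    true_pos += true_vel * dt
    z = true_pos + rng.normal(0, 2.0)    # noisy GPS fix
    # predict step
    x = F @ x
    P = F @ P @ F.T + Q
    # update step
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    x = x + K @ (np.array([z]) - H @ x)
    P = (np.eye(2) - K @ H) @ P
```

In the full multi-sensor system, the odometer and speed sensors would enter as additional measurement rows in H (or as control inputs in the prediction), which is exactly what makes fusion outperform any single sensor.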
Rational Multi-curve Models with Counterparty-risk Valuation Adjustments
Crépey, Stéphane; Macrina, Andrea; Nguyen, Tuyet Mai;
2016-01-01
We develop a multi-curve term structure set-up in which the modelling ingredients are expressed by rational functionals of Markov processes. We calibrate to London Interbank Offer Rate swaptions data and show that a rational two-factor log-normal multi-curve model is sufficient to match market data with accuracy. We elucidate the relationship between the models developed and calibrated under a risk-neutral measure Q and their consistent equivalence class under the real-world probability measure P. The consistent P-pricing models are applied to compute the risk exposures which may be required to comply with regulatory obligations. In order to compute counterparty-risk valuation adjustments, such as credit valuation adjustment, we show how default intensity processes with rational form can be derived. We flesh out our study by applying the results to a basis swap contract.
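A textbook unilateral CVA computation of the kind such models feed into discounts the expected positive exposure at each date, weights it by the marginal default probability, and scales by the loss given default. The exposure profile, hazard rate, discount curve, and 40% recovery below are illustrative assumptions, not outputs of the paper's rational multi-curve machinery.

```python
import math

def cva(expected_exposure, default_prob, discount, recovery=0.4):
    """Unilateral CVA = (1 - R) * sum_i DF(t_i) * EE(t_i) * dPD(t_i)."""
    return (1.0 - recovery) * sum(
        d * ee * pd
        for ee, pd, d in zip(expected_exposure, default_prob, discount)
    )

# Illustrative 3-period profile for a basis swap exposure (in currency units)
ee = [1.0e6, 0.8e6, 0.5e6]                   # expected positive exposure
hazard = 0.02                                # flat hazard rate (assumed)
pd = [math.exp(-hazard * t) - math.exp(-hazard * (t + 1)) for t in range(3)]
df = [math.exp(-0.01 * (t + 1)) for t in range(3)]   # flat 1% discounting
charge = cva(ee, pd, df)
```

The paper's contribution sits upstream of this formula: the rational-form default intensity and multi-curve dynamics determine `ee` and `pd` consistently under Q and P, rather than taking them as given.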
Naipal, V.; Reick, C.; Pongratz, J.; Van Oost, K.
2015-09-01
Large uncertainties exist in the estimated rates and extent of soil erosion by surface runoff on a global scale, which limits our understanding of the global impact that soil erosion might have on agriculture and climate. The Revised Universal Soil Loss Equation (RUSLE) model is, due to its simple structure and empirical basis, a frequently used tool for estimating average annual soil erosion rates at regional to global scales. However, large spatial-scale applications often rely on coarse data input, which is not compatible with the local scale at which the model is parameterized. Our study provides the first steps in improving the global applicability of the RUSLE model in order to derive more accurate global soil erosion rates. We adjusted the topographical and rainfall erosivity factors of the RUSLE model and compared the resulting erosion rates to extensive empirical databases from the USA and Europe. By scaling the slope according to the fractal method to adjust the topographical factor, we managed to improve the topographical detail in a coarse-resolution global digital elevation model. Applying the linear multiple regression method to adjust rainfall erosivity for various climate zones resulted in values that compared well to high-resolution erosivity data for different regions. However, this method needs to be extended to tropical climates, for which erosivity is biased due to the lack of high-resolution erosivity data. After applying the adjusted and unadjusted versions of the RUSLE model on a global scale, we find that the adjusted version shows a higher global mean erosion rate and more variability in the erosion rates. Comparison to empirical data sets from the USA and Europe shows that the adjusted RUSLE model is able to decrease the very high erosion rates in hilly regions that are observed in the unadjusted RUSLE model results. Although there are still some regional differences with the empirical databases, the results indicate that the
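The multiplicative RUSLE structure being adjusted, A = R·K·L·S·C·P, together with a fractal-style slope rescaling, can be sketched as follows. The S-factor uses the standard McCool et al. piecewise form; the fractal exponent D and all input values are hypothetical stand-ins for the calibrated relations in the study.

```python
import math

def rescale_slope(tan_coarse, res_coarse, res_fine, D=1.1):
    """Fractal-style slope steepening when moving from a coarse DEM
    resolution to a finer effective resolution. The exponent D is a
    hypothetical fractal dimension; the study calibrates the actual
    scaling relation."""
    return tan_coarse * (res_coarse / res_fine) ** (D - 1.0)

def s_factor(theta):
    """RUSLE slope-steepness factor (McCool et al. piecewise form):
    S = 10.8 sin(theta) + 0.03 for slope gradients below 9%,
    S = 16.8 sin(theta) - 0.50 otherwise."""
    if math.tan(theta) < 0.09:
        return 10.8 * math.sin(theta) + 0.03
    return 16.8 * math.sin(theta) - 0.50

def rusle(R, K, L, S, C, P):
    """Average annual soil loss A = R*K*L*S*C*P (t ha^-1 yr^-1)."""
    return R * K * L * S * C * P

# Hypothetical grid cell: 1 km DEM slope of 0.03 rescaled to a ~30 m scale
tan_fine = rescale_slope(0.03, 1000.0, 30.0)
A = rusle(R=1500.0, K=0.03, L=1.2,
          S=s_factor(math.atan(tan_fine)), C=0.2, P=1.0)
```

The rescaling steepens the coarse slope, so the adjusted S-factor (and hence A) is larger than the unadjusted one — the mechanism behind the higher global mean erosion rate the study reports.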
Entering the Chinese e-merging market: A single case study of business model adjustment
Byhlin, Hanna; Holm, Emma
2012-01-01
The business model is a concept that has gained increasing attention in recent years. It is seen as a firm’s ticket to success and has been studied by researchers and managers alike in search of the ultimate template for prosperity. Little research has, however, been conducted on the adjustments a business model requires in the case of new market entry. Globalization has inspired companies to grow internationally, firms increasingly look for new markets to capture, and China has become one of ...
Four-dimensional data assimilation applied to photochemical air quality modeling is used to suggest adjustments to the emissions inventory of the Atlanta, Georgia metropolitan area. In this approach, a three-dimensional air quality model, coupled with direct sensitivity analysis, develops spatially and temporally varying concentration and sensitivity fields that account for chemical and physical processing, and receptor analysis is used to adjust source strengths. Proposed changes to domain-wide NOx, volatile organic compound (VOC) and CO emissions from anthropogenic sources and to VOC emissions from biogenic sources were estimated, as well as modifications to sources based on their spatial location (urban vs. rural areas). In general, domain-wide anthropogenic VOC emissions were increased to approximately twice their base-case level to best match observations, domain-wide anthropogenic NOx and biogenic VOC emissions (BEIS2 estimates) remained close to their base-case values, and domain-wide CO emissions were decreased. Adjustments for anthropogenic NOx emissions increased in uncertainty when computed for mobile and area sources (or urban and rural sources) separately, due in part to the poor spatial resolution of the observation field of nitrogen-containing species. Estimated changes to CO emissions also suffer from the poor spatial resolution of the measurements. Results suggest that rural anthropogenic VOC emissions are severely underpredicted. The FDDA approach was also used to investigate the speciation profiles of VOC emissions, and the results warrant revision of these profiles. In general, the results obtained here are consistent with what are viewed as the current deficiencies in emissions inventories as derived by other top-down techniques, such as tunnel studies and analysis of ambient measurements. (Author)
Evolution Scenarios at the Romanian Economy Level, Using the R.M. Solow Adjusted Model
Stelian Stancu
2008-06-01
Full Text Available Besides the models of M. Keynes, R.F. Harrod, E. Domar, D. Romer, the Ramsey-Cass-Koopmans model etc., the R.M. Solow model belongs to the category of models that characterize economic growth. The paper presents the R.M. Solow adjusted model with specific simulation characteristics and an economic growth scenario. On this basis, the values obtained at the economy level from the simulations are presented: the ratio of capital to output volume, the output volume per employee (equal to current labour efficiency), as well as the labour efficiency value.
NKG201xGIA - first results for a new model of glacial isostatic adjustment in Fennoscandia
Steffen, Holger; Barletta, Valentina; Kollo, Karin; Milne, Glenn A.; Nordman, Maaria; Olsson, Per-Anders; Simpson, Matthew J. R.; Tarasov, Lev; Ågren, Jonas
2016-04-01
Glacial isostatic adjustment (GIA) is a dominant process in northern Europe, which is observed with several geodetic and geophysical methods. The observed land uplift due to this process amounts to about 1 cm/year in the northern Gulf of Bothnia. GIA affects the establishment and maintenance of reliable geodetic and gravimetric reference networks in the Nordic countries. To support a high level of accuracy in the determination of position, adequate corrections have to be applied with dedicated models. Currently, there are efforts within a Nordic Geodetic Commission (NKG) activity towards a model of glacial isostatic adjustment for Fennoscandia. The new model, NKG201xGIA, to be developed in the near future will complement the forthcoming empirical NKG land uplift model, which will substitute the currently used empirical land uplift model NKG2005LU (Ågren & Svensson, 2007). Together, the models will be a reference for vertical and horizontal motion, gravity and geoid change and more. NKG201xGIA will also provide uncertainty estimates for each field. Following former investigations, the GIA model is based on a combination of an ice and an earth model. The selected reference ice model, GLAC, for Fennoscandia, the Barents/Kara seas and the British Isles is provided by Lev Tarasov and co-workers. Tests of different ice and earth models will be performed based on the expertise of each involved modeler. This includes studies on high resolution ice sheets, different rheologies, lateral variations in lithosphere and mantle viscosity and more. This will also be done in co-operation with scientists outside NKG who help in the development and testing of the model. References Ågren, J., Svensson, R. (2007): Postglacial Land Uplift Model and System Definition for the New Swedish Height System RH 2000. Reports in Geodesy and Geographical Information Systems Rapportserie, LMV-Rapport 4, Lantmäteriet, Gävle.
Glacial isostatic adjustment model with composite 3-D Earth rheology for Fennoscandia
van der Wal, Wouter; Barnhoorn, Auke; Stocchi, Paolo; Gradmann, Sofie; Wu, Patrick; Drury, Martyn; Vermeersen, Bert
2013-07-01
Models for glacial isostatic adjustment (GIA) can provide constraints on the rheology of the mantle if past ice thickness variations are assumed to be known. The Pleistocene ice loading histories that are used to obtain such constraints are based on an a priori 1-D mantle viscosity profile that assumes a single deformation mechanism for mantle rocks. Such a simplified viscosity profile makes it hard to compare the inferred mantle rheology to inferences from seismology and laboratory experiments. It is unknown what constraints GIA observations can provide on more realistic mantle rheology with an ice history that is not based on an a priori mantle viscosity profile. This paper investigates a model for GIA with a new ice history for Fennoscandia that is constrained by palaeoclimate proxies and glacial sediments. Diffusion and dislocation creep flow law data are taken from a compilation of laboratory measurements on olivine. Upper-mantle temperature data sets down to 400 km depth are derived from surface heat-flow measurements, a petrochemical model for Fennoscandia and seismic velocity anomalies. Creep parameters below 400 km are taken from an earlier study and vary only with depth. The olivine grain size and water content (a wet state, or a dry state) are used as free parameters. The solid Earth response is computed with a global spherical 3-D finite-element model for an incompressible, self-gravitating Earth. We compare predictions to sea level data and GPS uplift rates in Fennoscandia. The objective is to see if the mantle rheology and the ice model are consistent with GIA observations. We also test if the inclusion of dislocation creep gives any improvements over predictions with diffusion creep only, and whether the laterally varying temperatures result in an improved fit compared to a widely used 1-D viscosity profile (VM2). We find that sea level data can be explained with our ice model and with information on mantle rheology from laboratory experiments
Glacial isostatic adjustment in Fennoscandia from GRACE data and comparison with geodynamical models
Steffen, Holger; Denker, Heiner; Müller, Jürgen
2008-10-01
The Earth's gravity field observed by the Gravity Recovery and Climate Experiment (GRACE) satellite mission shows variations due to the integral effect of mass variations in the atmosphere, hydrosphere and geosphere. Several institutions, such as the GeoForschungsZentrum (GFZ) Potsdam, the University of Texas at Austin, Center for Space Research (CSR) and the Jet Propulsion Laboratory (JPL), Pasadena, provide GRACE monthly solutions, which differ slightly due to the application of different reduction models and centre-specific processing schemes. The GRACE data are used to investigate the mass variations in Fennoscandia, an area which is strongly influenced by glacial isostatic adjustment (GIA). Hence the focus is set on the computation of secular trends. Different filters (e.g. isotropic and non-isotropic filters) are discussed for the removal of high frequency noise to permit the extraction of the GIA signal. The resulting GRACE based mass variations are compared to global hydrology models (WGHM, LaDWorld) in order to (a) separate possible hydrological signals and (b) validate the hydrology models with regard to long period and secular components. In addition, a pattern matching algorithm is applied to localise the uplift centre, and finally the GRACE signal is compared with the results from a geodynamical modelling. The GRACE data clearly show temporal gravity variations in Fennoscandia. The secular variations are in good agreement with former studies and other independent data. The uplift centre is located over the Bothnian Bay, and the whole uplift area comprises the Scandinavian Peninsula and Finland. The secular variations derived from the GFZ, CSR and JPL monthly solutions differ up to 20%, which is not statistically significant, and the largest signal of about 1.2 μGal/year is obtained from the GFZ solution. Besides the GIA signal, two peaks with positive trend values of about 0.8 μGal/year exist in central eastern Europe, which are not GIA-induced, and
Adjustable grazing incidence x-ray optics based on thin PZT films
Cotroneo, Vincenzo; Davis, William N.; Marquez, Vanessa; Reid, Paul B.; Schwartz, Daniel A.; Johnson-Wilke, Raegan L.; Trolier-McKinstry, Susan E.; Wilke, Rudeger H. T.
2012-10-01
The direct deposition of piezoelectric thin films on thin substrates offers an appealing technology for the realization of lightweight adjustable mirrors capable of sub-arcsecond resolution. This solution will make it possible to realize X-ray telescopes with both large effective area and exceptional angular resolution and, in particular, will enable the realization of the adjustable optics for the proposed mission Square Meter Arcsecond Resolution X-ray Telescope (SMART-X). In past years we demonstrated for the first time the possibility of depositing a working piezoelectric thin film (1-5 um) made of lead-zirconate-titanate (PZT) on glass. Here we review the recent progress in film deposition and influence function characterization and comparison with finite element models. The suitability of the deposited films is analyzed and some constraints on the piezoelectric film performance are derived. The future steps in the development of the technology are described.
Haijing Niu; Ping Guo; Xiaodong Song; Tianzi Jiang
2008-01-01
The sensitivity of diffuse optical tomography (DOT) imaging decreases exponentially with increasing photon penetration depth, which leads to poor depth resolution for DOT. In this letter, an exponential adjustment method (EAM) based on the maximum singular value of the layered sensitivity is proposed. Optimal depth resolution can be achieved by compensating for the reduced sensitivity in the deep medium. Simulations are performed using a semi-infinite model, and the results show that the EAM can substantially improve the depth resolution of deeply embedded objects in the medium. Consequently, the image quality and the reconstruction accuracy for these objects are greatly improved.
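The compensation idea can be sketched as rescaling each depth layer of the sensitivity (Jacobian) matrix so that its maximum singular value matches that of the shallowest layer. The letter's specific exponential weighting is not reproduced here, so this scheme is an illustrative stand-in, not the authors' exact method.

```python
import numpy as np


def layer_compensation(J: np.ndarray, layer_cols: list) -> np.ndarray:
    """Rescale each depth layer of a DOT sensitivity matrix J so that
    every layer's maximum singular value matches the shallowest layer's,
    compensating the sensitivity decay with depth.

    layer_cols: one list of column indices per depth layer, shallow first.
    Returns the compensated matrix (J is left untouched)."""
    Jc = J.astype(float).copy()
    # reference: largest singular value of the shallowest layer
    ref = np.linalg.svd(J[:, layer_cols[0]], compute_uv=False)[0]
    for cols in layer_cols[1:]:
        smax = np.linalg.svd(J[:, cols], compute_uv=False)[0]
        if smax > 0:
            Jc[:, cols] *= ref / smax
    return Jc
```

With the deep layers amplified in this way, a linear reconstruction weights deep voxels on a par with shallow ones instead of pushing all recovered absorption toward the surface.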
Plasmonic-multimode-interference-based logic circuit with simple phase adjustment
Ota, Masashi; Sumimura, Asahi; Fukuhara, Masashi; Ishii, Yuya; Fukuda, Mitsuo
2016-04-01
All-optical logic circuits using surface plasmon polaritons have a potential for high-speed information processing with high-density integration beyond the diffraction limit of propagating light. However, the number of logic gates that can be cascaded is limited by complicated signal phase adjustment. In this study, we demonstrate a half-adder operation with simple phase adjustment using plasmonic multimode interference (MMI) devices, composed of dielectric stripes on a metal film, which can be fabricated by a complementary metal-oxide-semiconductor (CMOS)-compatible process. Simultaneous operations of XOR and AND gates are also demonstrated experimentally by combining 1 × 1 MMI based phase adjusters and 2 × 2 MMI based intensity modulators. An experimental on-off ratio of at least 4.3 dB is confirmed using scanning near-field optical microscopy. The proposed structure will contribute to high-density plasmonic circuits fabricated by CMOS-compatible processes or printing techniques.
郭金运; 陶华学
2003-01-01
In order to process different kinds of observation data with different precisions, a new solution model for nonlinear dynamic integral least-squares adjustment was put forward that does not depend on derivatives. The partial derivative of each component of the target function is not computed while iteratively solving the problem. This greatly reduces the computing load, especially when the nonlinear target function is complex and the problem is very difficult to solve.
Missing Aggregate Dynamics: On the Slow Convergence of Lumpy Adjustment Models
Caballero, Ricardo J.; Eduardo M.R.A. Engel
2003-01-01
The dynamic response of aggregate variables to shocks is one of the central concerns of applied macroeconomics. The main measurement procedure for these dynamics consists of estimating an ARMA or VAR (VARs, for short). In non- or semi-structural approaches, the characterization of dynamics stops there. In other, more structural approaches, researchers try to uncover underlying adjustment cost parameters from the estimated VARs. Yet in others, such as RBC models, these estimates are used ...
B.E. Carvajal-Gámez
2012-08-01
Full Text Available When color images are processed in different color models to implement steganographic algorithms, it is important to study the quality of the host and retrieved images, since digital filters are typically used and can leave images visibly deformed. When a steganographic algorithm is used, numerical calculations performed by the computer introduce errors and alterations in the test images, so we apply a proposed scaling factor, dependent on the number of bits of the image, to adjust for these errors.
Cost of capital adjusted for governance risk through a multiplicative model of expected returns
Apreda, Rodolfo
2008-01-01
This paper sets forth another contribution to the long-standing debate over the cost of capital, firstly by introducing a multiplicative model that translates the inner structure of the weighted average cost of capital rate and, secondly, by adjusting that rate for governance risk. The conventional wisdom states that the cost of capital may be figured out by means of a weighted average of debt and capital. But this is only a linear approximation, which may bring about miscalculations, whereas the mu...
How Buyers Evaluate Product Bundles: A Model of Anchoring and Adjustment.
Yadav, Manjit S
1994-01-01
Bundling, the joint offering of two or more items, is a common selling strategy, yet little research has been conducted on buyers' evaluation of bundle offers. We developed and tested a model of bundle evaluation in which the buyers anchored their evaluation on the item perceived as most important and then made adjustments on the basis of their evaluations of the remaining bundle items. The results of two computerized laboratory experiments suggested that people tend to examine bundle items i...
Siman-Tov, Ayelet; Kaniel, Shlomo
2011-01-01
The research validates a multivariate model that predicts parental adjustment to coping successfully with an autistic child. The model comprises four elements: parental stress, parental resources, parental adjustment and the child's autism symptoms. A total of 176 parents of children aged 6 to 16 diagnosed with PDD answered several questionnaires…
Optimal fuzzy PID controller with adjustable factors based on flexible polyhedron search algorithm
谭冠政; 肖宏峰; 王越超
2002-01-01
A new kind of optimal fuzzy PID controller is proposed, which contains two parts. One is an on-line fuzzy inference system, and the other is a conventional PID controller. In the fuzzy inference system, three adjustable factors xp, xi, and xd are introduced. Their function is to further modify and optimize the result of the fuzzy inference so as to give the controller the optimal control effect on a given object. The optimal values of these adjustable factors are determined based on the ITAE criterion and Nelder and Mead's flexible polyhedron search algorithm. This optimal fuzzy PID controller has been used to control the executive motor of the intelligent artificial leg designed by the authors. The result of computer simulation indicates that this controller is very effective and can be widely used to control different kinds of objects and processes.
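The ITAE criterion used to rate a candidate factor set can be sketched on a toy control loop. The first-order plant, the gain grid, and the crude grid search (standing in for both the fuzzy inference and the Nelder-Mead flexible polyhedron search) are hypothetical, not the authors' controller.

```python
def itae(gains, dt=0.01, t_end=5.0):
    """ITAE = integral of t*|e(t)| dt for a unit-step reference on a
    hypothetical first-order plant tau*dy/dt = -y + u under PID control."""
    kp, ki, kd = gains
    tau = 0.5
    y = integ = cost = t = 0.0
    e_prev = 1.0
    for _ in range(int(t_end / dt)):
        e = 1.0 - y                       # tracking error
        integ += e * dt
        deriv = (e - e_prev) / dt
        u = kp * e + ki * integ + kd * deriv
        y += dt * (-y + u) / tau          # explicit Euler plant update
        cost += t * abs(e) * dt           # time-weighted absolute error
        e_prev, t = e, t + dt
    return cost


# crude grid search standing in for the flexible polyhedron (Nelder-Mead) search
candidates = [(kp, ki, kd)
              for kp in (0.5, 1.0, 2.0, 4.0)
              for ki in (0.5, 1.0, 2.0)
              for kd in (0.0, 0.1)]
best = min(candidates, key=itae)
```

Because ITAE weights late errors heavily, minimizing it favors gain sets that settle quickly with little residual offset, which is why it suits the tuning of the xp, xi, xd factors.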
Biologically Inspired Visual Model With Preliminary Cognition and Active Attention Adjustment.
Qiao, Hong; Xi, Xuanyang; Li, Yinlin; Wu, Wei; Li, Fengfu
2015-11-01
Recently, many computational models have been proposed to simulate visual cognition process. For example, the hierarchical Max-Pooling (HMAX) model was proposed according to the hierarchical and bottom-up structure of V1 to V4 in the ventral pathway of primate visual cortex, which could achieve position- and scale-tolerant recognition. In our previous work, we have introduced memory and association into the HMAX model to simulate visual cognition process. In this paper, we improve our theoretical framework by mimicking a more elaborate structure and function of the primate visual cortex. We will mainly focus on the new formation of memory and association in visual processing under different circumstances as well as preliminary cognition and active adjustment in the inferior temporal cortex, which are absent in the HMAX model. The main contributions of this paper are: 1) in the memory and association part, we apply deep convolutional neural networks to extract various episodic features of the objects since people use different features for object recognition. Moreover, to achieve a fast and robust recognition in the retrieval and association process, different types of features are stored in separated clusters and the feature binding of the same object is stimulated in a loop discharge manner and 2) in the preliminary cognition and active adjustment part, we introduce preliminary cognition to classify different types of objects since distinct neural circuits in a human brain are used for identification of various types of objects. Furthermore, active cognition adjustment of occlusion and orientation is implemented to the model to mimic the top-down effect in human cognition process. Finally, our model is evaluated on two face databases CAS-PEAL-R1 and AR. The results demonstrate that our model exhibits its efficiency on visual recognition process with much lower memory storage requirement and a better performance compared with the traditional purely computational
Optical design of the focal adjustable flashlight based on a power white-LED
Cai, Jhih-You; Lo, Yi-Chien; Sun, Ching-Cherng
2011-10-01
In this paper, we design a focal adjustable flashlight that provides spotlight and wide-angle illumination in different modes. Most users request these two illumination modes: one gives a light pattern of high energy density, and the other gives a uniform light pattern over a wide field of view. In designing the focal adjustable flashlight, we first build a precise optical model of the high-power LED produced by CREE Inc., verified in the mid-field, to ensure the accuracy of our simulation. Typically, a lens is used as the key component of an adjustable flashlight, but its optical efficiency is low. Here, we introduce the concept of a total internal reflection (TIR) lens into the flashlight design. By defocusing the TIR lens, the flashlight can quickly change the beam size and energy density for various applications. We design two segments on the side of the TIR lens so that they serve the two modes, and the flashlight provides high optical efficiency in each mode. The illuminance at the center of the light pattern at a distance of 2 m from the lamp is also higher than when using a lens, in both the spotlight and wide-angle illumination modes. It provides good lighting functions for users.
Interfacial free energy adjustable phase field crystal model for homogeneous nucleation.
Guo, Can; Wang, Jincheng; Wang, Zhijun; Li, Junjie; Guo, Yaolin; Huang, Yunhao
2016-05-18
To describe the homogeneous nucleation process, an interfacial free energy adjustable phase-field crystal model (IPFC) was proposed by reconstructing the energy functional of the original phase field crystal (PFC) methodology. Compared with the original PFC model, the additional interface term in the IPFC model can effectively adjust the magnitude of the interfacial free energy without affecting the equilibrium phase diagram or the interfacial energy anisotropy. The IPFC model overcomes the limitation that the interfacial free energy of the original PFC model is much lower than theoretical results. Using the IPFC model, we investigated some basic issues in homogeneous nucleation. From the viewpoint of simulation, we carried out an in situ observation of the process of cluster fluctuation and obtained snapshots quite similar to those of colloidal crystallization experiments. We also counted the size distribution of crystal-like clusters and the nucleation rate. Our simulations show that the size distribution is independent of the evolution time, and that the nucleation rate remains constant after a period of relaxation, consistent with experimental observations. The linear relation between the logarithmic nucleation rate and the reciprocal driving force also conforms to steady-state nucleation theory. PMID:27117814
Highlights: • Three-group cross sections are collapsed by WIMS and SN2; the core is calculated by CITATION. • Engineering adjustments are made to generate better few-group cross sections. • Validation is made against JRR-3M measurements and Monte Carlo simulation. - Abstract: The control rod (CR) worth is a key parameter for research reactor (RR) operation and utilization. Control rod worth computation is a challenge for a fully deterministic calculation methodology, covering both few-group cross-section generation and core analysis. The purpose of this work is to describe our code system and its applicability to obtaining reliable CR worth through some engineering adjustments. Cross-section collapsing into three energy groups is performed by the WIMS and SN2 codes, while the core analysis is performed by CITATION. We use these codes for the design, construction, and operation of our research reactor CMRR (China Mianyang Research Reactor). However, due to the intrinsic deficiencies of diffusion theory and the homogenization approximation, the directly obtained results, such as CR worth and neutron flux distributions, are not satisfactory. So two simple adjustments are made to generate the few-group cross sections with the assistance of measurements and auxiliary Monte Carlo runs. The first step is to adjust the fuel cross sections by properly changing the mass of a non-fissile material, such as the mass of the two 0.4 mm Cd wires at both sides of each uranium plate, so that the CITATION core model yields a good eigenvalue when all CRs are completely extracted. The second step is to revise the shim absorber cross section of the CRs by adjusting the hafnium mass, so that the CITATION model reproduces the correct critical rod position. In this manuscript, the JRR-3M (Japan Research Reactor No. 3 Modified) reactor is employed as a demonstration. The final revised results are validated against stochastic simulation and experimental measurement values, including the
Analysis of substitution experiments in ZED-2 with physically realistic model adjustments
Substitution experiments involve several types of reactor simulation. When an experiment on a power reactor is impracticable, such as a loss-of-coolant accident, a simulation of its lattice must be set up in a lattice-testing reactor, such as ZED-2. A full core of such a test lattice may not go critical, because of the size limitation, and/or may be expensive. A substitution experiment simulates such a full core by setting up a few channels of the experimental lattice, surrounded by a 'driving' lattice, to make a critical assembly. A corresponding 'reference' experiment, with a pure driver lattice, permits the characteristics of the experimental lattice to be inferred by comparison of the two experiments. This inference requires mathematical modelling of the experiments. Measurements of the flux distributions should enable refinement of the model. However, previous analyses have required that the model of outer parts of the reactor, such as the graphite reflector, be replaced by arbitrary extrapolation lengths, so that these can be varied to correspondingly adjust the calculated fluxes. This arbitrary replacement may lose more accuracy than the adjustment of the model gains. The FITEXPTS family of substitution experiment simulation programs instead permits the adjustment to consist of variations in the modelling of small, unknown details of the experiment, the best choice of which depends on the experiment. Examples are: the flux depression inside the support structures in the bottom ends of the channels; the effective thicknesses of the irregular graphite reflectors; the reactivity of a ring of 'booster rods', which is sometimes necessary around the periphery of the driver lattice; and the extrapolation length used of necessity at the unreflected top of the core. This flexibility leads to improved accuracy. The paper expands on techniques and testing. (author). 8 refs., 5 figs
A spatial model of bird abundance as adjusted for detection probability
Gorresen, P.M.; Mcmillan, G.P.; Camp, R.J.; Pratt, T.K.
2009-01-01
Modeling the spatial distribution of animals can be complicated by spatial and temporal effects (i.e. spatial autocorrelation and trends in abundance over time) and other factors such as imperfect detection probabilities and observation-related nuisance variables. Recent advances in modeling have demonstrated various approaches that handle most of these factors but which require a degree of sampling effort (e.g. replication) not available to many field studies. We present a two-step approach that addresses these challenges to spatially model species abundance. Habitat, spatial and temporal variables were handled with a Bayesian approach which facilitated modeling hierarchically structured data. Predicted abundance was subsequently adjusted to account for imperfect detection and the area effectively sampled for each species. We provide examples of our modeling approach for two endemic Hawaiian nectarivorous honeycreepers: 'i'iwi Vestiaria coccinea and 'apapane Himatione sanguinea. © 2009 Ecography.
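The second step described above, adjusting predicted abundance for imperfect detection and the area effectively sampled, reduces to a simple correction. The function and example values below are a hypothetical sketch, not the authors' exact estimator.

```python
def adjusted_density(predicted_count, detect_prob, effective_area_ha):
    """Correct a model-predicted count for imperfect detection and the
    area effectively sampled, returning birds per hectare.

    predicted_count:  abundance predicted by the habitat/spatial model
    detect_prob:      per-survey detection probability, in (0, 1]
    effective_area_ha: area effectively sampled, in hectares"""
    if not 0.0 < detect_prob <= 1.0:
        raise ValueError("detection probability must be in (0, 1]")
    return predicted_count / detect_prob / effective_area_ha


# e.g. 12 predicted birds, 60% detection, 2 ha effectively sampled
density = adjusted_density(12, 0.6, 2.0)
```

Dividing by the detection probability inflates the count toward the true number present; dividing by the effective area converts that count into a density comparable across species with different detection radii.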
郭凯
2015-01-01
Considering the characteristics and working principle of ice-storage air conditioning and the summer climate of Xi'an, and based on model-predictive control, this paper presents an optimization scheme for summer cooling at the SAGA shopping mall in Xi'an that exploits time-of-use electricity pricing. The building details of one floor of the mall were modeled with the TRNSYS transient energy simulation software, and system identification techniques were applied to the TRNSYS data to derive a simplified linear thermal model, clarify the degree to which each factor influences indoor temperature, and calculate the indoor cooling demand. Finally, based on linear goal programming, the mall's use of cooling capacity over a day was optimized, achieving clear benefits in both energy saving and electricity cost reduction.
A novel wavelength-adjusting method in InGaN-based light-emitting diodes
Deng, Zhen; Jiang, Yang; Ma, Ziguang; Wang, Wenxin; Jia, Haiqiang; Zhou, Junming; Chen, Hong
2013-01-01
The pursuit of high internal quantum efficiency (IQE) in the green emission spectral regime is referred to as the “green gap” challenge. Researchers now place their hope on InGaN-based materials to develop high-brightness green light-emitting diodes. However, IQE drops fast when the emission wavelength of an InGaN LED is increased by changing the growth temperature or well thickness. In this paper, a new wavelength-adjusting method is proposed and the optical properties of the LED are investigated. By additional pro...
Comparison of Satellite-based Basal and Adjusted Evapotranspiration for Several California Crops
Johnson, L.; Lund, C.; Melton, F. S.
2013-12-01
There is a continuing need to develop new sources of information on agricultural crop water consumption in the arid Western U.S. Pursuant to the California Water Conservation Act of 2009, for instance, the stakeholder community has developed a set of quantitative indicators involving measurement of evapotranspiration (ET) or crop consumptive use (Calif. Dept. Water Resources, 2012). Fraction of reference ET (or, crop coefficients) can be estimated from a biophysical description of the crop canopy involving green fractional cover (Fc) and height as per the FAO-56 practice standard of Allen et al. (1998). The current study involved 19 fields in California's San Joaquin Valley and Central Coast during 2011-12, growing a variety of specialty and commodity crops: lettuce, raisin, tomato, almond, melon, winegrape, garlic, peach, orange, cotton, corn and wheat. Most crops were on surface or subsurface drip, though micro-jet, sprinkler and flood were represented as well. Fc was retrospectively estimated every 8-16 days by optical satellite data and interpolated to a daily timestep. Crop height was derived as a capped linear function of Fc using published guideline maxima. These variables were used to generate daily basal crop coefficients (Kcb) per field through most or all of each respective growth cycle by the density coefficient approach of Allen & Pereira (2009). A soil water balance model for both topsoil and root zone, based on FAO-56 and using on-site measurements of applied irrigation and precipitation, was used to develop daily soil evaporation and crop water stress coefficients (Ke, Ks). Key meteorological variables (wind speed, relative humidity) were extracted from the California Irrigation Management Information System (CIMIS) for climate correction. Basal crop ET (ETcb) was then derived from Kcb using CIMIS reference ET. Adjusted crop ET (ETc_adj) was estimated by the dual coefficient approach involving Kcb, Ke, and incorporating Ks. Cumulative ETc
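The dual-coefficient computation described above follows the FAO-56 form ETc_adj = (Ks·Kcb + Ke)·ETo. A minimal sketch, with hypothetical coefficient values:

```python
def et_crop_basal(kcb, et0):
    """Basal crop ET (mm/day): transpiration without stress or
    soil-evaporation terms, ETcb = Kcb * ETo."""
    return kcb * et0


def et_crop_adjusted(kcb, ke, ks, et0):
    """FAO-56 dual-coefficient crop ET (mm/day): basal transpiration
    reduced by the water-stress coefficient Ks, plus soil evaporation
    Ke:  ETc_adj = (Ks*Kcb + Ke) * ETo."""
    return (ks * kcb + ke) * et0


# hypothetical mid-season day: Kcb=1.0, Ke=0.1, mild stress Ks=0.8, ETo=6 mm
etcb = et_crop_basal(1.0, 6.0)                   # basal demand
etc_adj = et_crop_adjusted(1.0, 0.1, 0.8, 6.0)   # stress-adjusted demand
```

In the study's workflow, Kcb comes from satellite-derived fractional cover via the density-coefficient approach, while Ke and Ks come from the daily topsoil and root-zone water balance.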
Impact of water price adjustment based on an input-output price model
倪红珍; 王浩; 赵博; 马伟
2013-01-01
Using an input-output price model, and taking Beijing as an example, this paper calculates the effect of single and combined water price fluctuations on the prices of goods and services and on the water fee rate in other economic sectors, which can help in designing price policies that effectively relieve the stress on water supply. The results show that, assuming each category of water price is independent and unaffected by other sectors, the price impact caused by water price increases is weak; the impact is more obvious for residents, administrative and public institutions, and some heavily water-dependent services, with education affected the most. The most pronounced influence on the water fee rate comes from fluctuations in industrial and commercial water prices. The results also show that the water fee rate of the recycled-water industry increases more than that of other sectors when the sewage treatment price rises. Even if all water prices doubled, the ratio of total water fees to water consumption expenditure would remain below 0.5% for non-water-supply sectors, except for residents, for whom it would be slightly above 1%. This implies that, if each sector's water consumption remains unchanged, the water price has at least a threefold rising space before the water fee rate reaches 2%, the minimum bearing-capacity standard for users, and that water price increases will not produce a large impact on the economy and society. To vigorously promote water saving, water price reform is a necessary and urgent measure.
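A cost-push input-output price model of the kind used here can be sketched with the water sector treated as exogenous: the relative price changes dp of the endogenous sectors solve dp = A'dp + a_w·dpw. The two-sector coefficients below are hypothetical, not Beijing data.

```python
import numpy as np


def price_impact(A_endo, water_coef, dpw):
    """Relative price changes of endogenous sectors after an exogenous
    water price increase dpw, from the cost-push (Leontief) price model:
        dp = A_endo' dp + water_coef * dpw
        dp = (I - A_endo')^-1 (water_coef * dpw)
    A_endo[i][j] is the input of sector i per unit output of sector j;
    water_coef[j] is sector j's water input coefficient."""
    A = np.asarray(A_endo, dtype=float)
    w = np.asarray(water_coef, dtype=float)
    return np.linalg.solve(np.eye(A.shape[0]) - A.T, w * dpw)


# hypothetical two-sector economy; water price doubles (dpw = 1.0)
dp = price_impact([[0.20, 0.30], [0.10, 0.25]], [0.02, 0.05], 1.0)
```

The Leontief inverse captures the knock-on rounds: a sector's price rises both from its own water bill and from the higher prices of its intermediate inputs, which is why water-intensive sectors show the largest response.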
Cotroneo, Vincenzo; Davis, William N.; Reid, Paul B.; Schwartz, Daniel A.; Trolier-McKinstry, Susan; Wilke, Rudeger H. T.
2011-09-01
The present generation of X-ray telescopes emphasizes either high image quality (e.g. Chandra, with sub-arc-second resolution) or large effective area (e.g. XMM-Newton), while future observatories under consideration (e.g. Athena, AXSIO) aim to greatly enhance the effective area while maintaining moderate (~10 arc-second) image quality. To go beyond the limits of present and planned missions, thin adjustable optics for the control of low-order figure error are needed to obtain the high image quality of precisely figured mirrors together with the large effective area of thin mirrors. The adjustable mirror prototypes under study at the Smithsonian Astrophysical Observatory are based on two different principles and designs: 1) thin-film lead-zirconate-titanate (PZT) piezoelectric actuators deposited directly on the mirror back surface, with the strain direction parallel to the glass surface (for sub-arc-second angular resolution and large effective area), and 2) conventional lead-magnesium-niobate (PMN) electrostrictive actuators with their strain direction perpendicular to the mirror surface (for 3-5 arc-second resolution and moderate effective area). We have built and operated flat test mirrors of these adjustable optics. We present the comparison between theoretical influence functions obtained by finite element analysis and the measured influence functions obtained from the two test configurations.
Červenka, Daniel
2010-01-01
The aim of this thesis is to find an appropriate approach to inventory control for a small e-shop, with the greatest emphasis placed on non-seasonal goods. No existing model was found that respects all the needs of the shop. From a series of models, a stochastic model with continuous demand was chosen as the most applicable. Adjusting the cost function, changing the delivery time from constant to variable, determining the optimal inventory level, and other modifications brought the model closer to real...
Hirozawa, Anne M; Montez-Rath, Maria E; Johnson, Elizabeth C; Solnit, Stephen A; Drennan, Michael J; Katz, Mitchell H; Marx, Rani
2016-01-01
We compared prospective risk adjustment models for adjusting patient panels at the San Francisco Department of Public Health. We used 4 statistical models (linear regression, two-part model, zero-inflated Poisson, and zero-inflated negative binomial) and 4 subsets of predictor variables (age/gender categories, chronic diagnoses, homelessness, and a loss to follow-up indicator) to predict primary care visit frequency. Predicted visit frequency was then used to calculate patient weights and adjusted panel sizes. The two-part model using all predictor variables performed best (R = 0.20). This model, designed specifically for safety net patients, may prove useful for panel adjustment in other public health settings. PMID:27576054
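The panel-weighting step described above (predicted visit frequency → patient weights → adjusted panel size) can be sketched as follows; the predicted visit counts are invented, whereas the study obtained them from a fitted two-part model:

```python
# Hedged sketch of turning predicted visit frequency into panel
# weights. The predicted visits below are made-up numbers; the
# normalization reference (here the panel mean) is an assumption —
# a clinic-wide average could be used instead.

def adjusted_panel(predicted_visits):
    """Weight each patient by predicted visits relative to the
    mean, so heavy utilizers count for more than one panel slot;
    the adjusted size is the sum of weights."""
    mean = sum(predicted_visits) / len(predicted_visits)
    weights = [v / mean for v in predicted_visits]
    return weights, sum(weights)

weights, size = adjusted_panel([0.5, 2.0, 3.5])
```

Normalizing against a clinic-wide rather than per-panel mean would let adjusted panel sizes differ across providers, which is the point of the exercise.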
Tian Wenbo
2013-05-01
Full Text Available Aimed at ultra-low permeability reservoirs, the recovery performance of an inverted nine-spot equilateral well pattern is studied through large-scale natural sandstone flat-model experiments. Two adjustment schemes were proposed based on the original well pattern, and the concept of pressure sweep efficiency is put forward for evaluating driving efficiency. Pressure gradient fields under different drawdown pressures were measured. The seepage area of the model was divided into an immobilized area, a nonlinear seepage area and a quasi-linear seepage area, drawing on nonlinear seepage experiments with small twin cores. The results showed that the ultra-low permeability sandstone flat model is clearly characterized by a nonlinear seepage law and a threshold pressure gradient. For one quarter of the inverted nine-spot equilateral well pattern, the middle region is difficult to develop. The recovery effect can be improved by adjusting production wells or adding injection wells, and the best solution is transforming the corner production well into an injection well.
Cheng, Yongzhi; Nie, Yan; Wang, Xian; Gong, Rongzhou
2014-02-01
In this paper, a magnetic rubber plate absorber (MRPA) and a metamaterial absorber (MA) based on an MRP substrate are proposed and studied numerically and experimentally. Based on the characteristics of L-C resonances, experimental results show that an MA composed of a cross resonator (CR) embedded in a single-layer MRP can be tuned easily by changing the wire length and width of the CR structure and the MRP thickness. Finally, experimental results show that an MA composed of a CR embedded in a two-layer MRP with a total thickness of 2.42 mm exhibits a -10 dB absorption bandwidth from 1.65 GHz to 3.7 GHz, 1.86 times wider than that of an MRPA of the same thickness.
New Strategy for Congestion Control based on Dynamic Adjustment of Congestion Window
Gamal Attiya
2012-03-01
Full Text Available This paper presents a new mechanism for the end-to-end congestion control, called EnewReno. The proposed mechanism is based on the enhancement of both the congestion avoidance and the fast recovery algorithms of the TCP NewReno so as to improve its performance. The basic idea of the proposed mechanism is to adjust the congestion window of the TCP sender dynamically based on the level of congestion in the network so as to allow transferring more packets to the destination. The performance of the proposed mechanism is evaluated and compared with the most recent mechanisms by simulation studies using the well known Network Simulator NS-2 and the realistic topology generator GT-ITM.
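The general idea of congestion-level-driven window adjustment can be sketched as below; since the abstract does not give EnewReno's actual rules, the thresholds and growth/backoff amounts are invented:

```python
# Hedged sketch of dynamic congestion-window adjustment keyed to
# an estimated congestion level (e.g. derived from RTT inflation
# or loss rate). The thresholds and scaling are illustrative and
# are NOT the EnewReno algorithm's actual parameters.

def adjust_cwnd(cwnd, ssthresh, congestion_level):
    """congestion_level in [0, 1]: grow the window quickly when
    the path looks idle, grow conservatively in the mid range,
    and back off multiplicatively when congestion builds."""
    if congestion_level < 0.2:
        return cwnd + 2, ssthresh                # aggressive growth
    if congestion_level < 0.6:
        return cwnd + 1, ssthresh                # NewReno-like growth
    half = max(2, cwnd // 2)
    return half, half                            # multiplicative backoff

cwnd, ssthresh = adjust_cwnd(10, 64, 0.1)
```

The key departure from plain NewReno is that the increment is not fixed but conditioned on the sensed congestion level.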
Adjusted Wald Confidence Interval for a Difference of Binomial Proportions Based on Paired Data
Bonett, Douglas G.; Price, Robert M.
2012-01-01
Adjusted Wald intervals for binomial proportions in one-sample and two-sample designs have been shown to perform about as well as the best available methods. The adjusted Wald intervals are easy to compute and have been incorporated into introductory statistics courses. An adjusted Wald interval for paired binomial proportions is proposed here and…
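One common form of adjusted Wald interval for a paired difference adds 0.5 to each cell of the 2×2 table before applying the Wald formula (the Agresti-Min variant; the Bonett-Price proposal differs in detail). A sketch:

```python
import math

# Hedged sketch of an adjusted Wald CI for p1 - p2 with paired
# binary data. n12 and n21 are the discordant cell counts of the
# 2x2 table, n the number of pairs. The +0.5-per-cell adjustment
# shown is the Agresti-Min style, used here for illustration.

def paired_diff_ci(n12, n21, n, z=1.96, add=0.5):
    nt = n + 4 * add              # 0.5 added to each of the 4 cells
    p12 = (n12 + add) / nt
    p21 = (n21 + add) / nt
    d = p12 - p21                 # estimated p1 - p2
    se = math.sqrt((p12 + p21 - d ** 2) / nt)
    return d - z * se, d + z * se

lo, hi = paired_diff_ci(n12=12, n21=5, n=50)
```

Only the discordant counts enter the variance, which is what distinguishes the paired-data interval from its two-independent-samples counterpart.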
Doo Yong Choi
2016-04-01
Full Text Available Rapid detection of bursts and leaks in water distribution systems (WDSs can reduce the social and economic costs incurred through direct loss of water into the ground, additional energy demand for water supply, and service interruptions. Many real-time burst detection models have been developed in accordance with the use of supervisory control and data acquisition (SCADA systems and the establishment of district meter areas (DMAs. Nonetheless, no consideration has been given to how frequently a flow meter measures and transmits data for predicting breaks and leaks in pipes. This paper analyzes the effect of sampling interval when an adaptive Kalman filter is used for detecting bursts in a WDS. A new sampling algorithm is presented that adjusts the sampling interval depending on the normalized residuals of flow after filtering. The proposed algorithm is applied to a virtual sinusoidal flow curve and real DMA flow data obtained from Jeongeup city in South Korea. The simulation results prove that the self-adjusting algorithm for determining the sampling interval is efficient and maintains reasonable accuracy in burst detection. The proposed sampling method has a significant potential for water utilities to build and operate real-time DMA monitoring systems combined with smart customer metering systems.
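The self-adjusting sampling idea can be sketched with a scalar random-walk Kalman filter: shorten the interval when the normalized innovation grows (possible burst), relax it when the flow tracks the model. The model, gains, thresholds and flow readings below are invented for illustration:

```python
# Hedged sketch of sampling-interval adjustment driven by the
# normalized Kalman-filter residual, in the spirit of the paper.
# All constants are illustrative, not the paper's calibration.

def kalman_step(x, p, z, q=0.01, r=0.1):
    """One predict/update step of a random-walk Kalman filter;
    returns state, covariance and the normalized innovation."""
    p_pred = p + q
    s = p_pred + r                      # innovation variance
    resid = (z - x) / s ** 0.5          # normalized residual
    k = p_pred / s
    return x + k * (z - x), (1 - k) * p_pred, resid

def adjust_interval(dt, resid_norm, dt_min=60, dt_max=900):
    """Halve dt (seconds) on a large residual, double it back
    toward dt_max when the residual is small."""
    if abs(resid_norm) > 3.0:
        return max(dt_min, dt // 2)
    if abs(resid_norm) < 1.0:
        return min(dt_max, dt * 2)
    return dt

x, p, dt = 10.0, 1.0, 300
for z in [10.1, 9.9, 10.0, 14.0]:       # last reading: burst-like jump
    x, p, resid = kalman_step(x, p, z)
    dt = adjust_interval(dt, resid)
```

After the quiet readings the interval relaxes to its maximum; the final burst-like jump drives the residual past the threshold and halves it.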
Improvement for Speech Signal based on Post Wiener Filter and Adjustable Beam-Former
Xiaorong Tong
2013-06-01
Full Text Available In this study, a two-stage filter structure is introduced for speech enhancement. The first stage is an adjustable filter-and-sum beam-former with a four-microphone array, in which the beam-forming filter is controlled by adjusting only a single variable. Unlike an adaptive beam-forming filter, the proposed structure does not introduce adaptive error noise and therefore does not complicate the second stage of speech signal processing. The second stage is a Wiener filter whose signal power spectrum is estimated by cross-correlation between the primary outputs of two adjacent directional beams, under the assumption that the noise outputs of the two beams come from independent noise sources while the speech outputs come from the same speech source. Simulation results show that the proposed algorithm can improve the signal-to-noise ratio (SNR) by about 6 dB.
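The cross-correlation spectrum estimate described above can be sketched with synthetic signals: two beams share a common "speech" tone but carry independent noise, so the frame-averaged cross-spectrum retains the speech power while the noise terms largely cancel, yielding a Wiener-style gain:

```python
import numpy as np

# Hedged sketch: estimate the speech spectrum from the averaged
# cross-spectrum of two beams with independent noise, then form
# the Wiener gain H = |S12| / S11. Signals are synthetic.

rng = np.random.default_rng(0)
frame, nfrm = 256, 64
t = np.arange(frame * nfrm)
speech = np.sin(2 * np.pi * 16 * t / frame)        # tone in bin 16
b1 = (speech + 0.5 * rng.standard_normal(t.size)).reshape(nfrm, frame)
b2 = (speech + 0.5 * rng.standard_normal(t.size)).reshape(nfrm, frame)

F1 = np.fft.rfft(b1, axis=1)
F2 = np.fft.rfft(b2, axis=1)
s11 = np.mean(np.abs(F1) ** 2, axis=0)             # auto-spectrum, beam 1
s12 = np.abs(np.mean(F1 * np.conj(F2), axis=0))    # averaged cross-spectrum
gain = np.clip(s12 / np.maximum(s11, 1e-12), 0.0, 1.0)
enhanced = np.fft.irfft(gain * F1, frame, axis=1).reshape(-1)
```

The gain stays near 1 at the shared tone and collapses in noise-only bins, because the independent noise cross-terms average toward zero over frames.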
Park, Chul-Soon; Shrestha, Vivek Raj; Lee, Sang-Shin; Choi, Duk-Yong
2016-01-01
Trans-reflective color filters, which take advantage of a phase compensated etalon (silver-titania-silver-titania) based nano-resonator, have been demonstrated to feature a variable spectral bandwidth at a constant resonant wavelength. Such adjustment of the bandwidth is presumed to translate into flexible control of the color saturation for the transmissive and reflective output colors produced by the filters. The thickness of the metallic mirror is primarily altered to tailor the bandwidth, which however entails a phase shift associated with the etalon. As a result, the resonant wavelength is inevitably displaced. In order to mitigate this issue, we attempted to compensate for the induced phase shift by introducing a dielectric functional layer on top of the etalon. The phase compensation mediated by the functional layer was meticulously investigated in terms of the thickness of the metallic mirror, from the perspective of the resonance condition. The proposed color filters were capable of providing additive colors of blue, green, and red for the transmission mode while exhibiting subtractive colors of yellow, magenta, and cyan for the reflection mode. The corresponding color saturation was estimated to be efficiently adjusted both in transmission and reflection. PMID:27150979
Measurement of the Economic Growth and Add-on of the R.M. Solow Adjusted Model
Ion Gh. Rosca
2007-08-01
Full Text Available Besides the models of M. Keynes, R.F. Harrod, E. Domar, D. Romer, the Ramsey-Cass-Koopmans model, etc., the R.M. Solow model belongs to the category of models that characterize economic growth. The aim of this paper is the measurement of economic growth and an add-on to the adjusted R.M. Solow model.
Rossetti, Fernanda F; Schneck, Emanuel; Fragneto, Giovanna; Konovalov, Oleg V; Tanaka, Motomu
2015-04-21
To understand the generic role of soft, hydrated biopolymers in adjusting interfacial interactions at biological interfaces, we designed a defined model of the cell-extracellular matrix contacts based on planar lipid membranes deposited on polymer supports (polymer-supported membranes). Highly uniform polymer supports made out of regenerated cellulose allow for the control of film thickness without changing the surface roughness and without osmotic dehydration. The complementary combination of specular neutron reflectivity and high-energy specular X-ray reflectivity yields the equilibrium membrane-substrate distances, which can quantitatively be modeled by computing the interplay of van der Waals interaction, hydration repulsion, and repulsion caused by the thermal undulation of membranes. The obtained results help to understand the role of a biopolymer in the interfacial interactions of cell membranes from a physical point of view and also open a large potential to generally bridge soft, biological matter and hard inorganic materials. PMID:25794040
Performance analysis of adjustable window based FIR filter for noisy ECG Signal Filtering
N. Mahawar
2013-09-01
Full Text Available The recording of the electrical activity associated with heart function is known as the electrocardiogram (ECG). The ECG is a quasi-periodic, rhythmic signal synchronized by the function of the heart, which acts as a generator of bioelectric events. ECG signals are low-level signals that are sensitive to external contamination, and they are often corrupted by noise of electrical or electrophysiological origin. The noise tends to alter the signal morphology, thereby hindering correct diagnosis. In order to remove the unwanted noise, a digital filtering technique based on adjustable windows is proposed in this paper: a low-pass finite impulse response (FIR) filter is designed for the ECG signal using the windowing method. The results obtained with different techniques are compared on the basis of popular signal error measures such as SNR, PRD, PRD1, and MSE.
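The adjustable-window design idea can be sketched with a Kaiser-windowed sinc lowpass, where the window's beta parameter trades stopband depth against transition width; the cutoff, sampling rate and tap count below are illustrative choices for ECG-like filtering, not the paper's design:

```python
import numpy as np

# Hedged sketch of a windowed-sinc lowpass FIR whose behavior is
# "adjusted" through the Kaiser window's beta parameter: larger
# beta -> deeper stopband, wider transition band.

def lowpass_fir(numtaps, cutoff, fs, beta):
    """Kaiser-windowed sinc lowpass with unity DC gain."""
    n = np.arange(numtaps) - (numtaps - 1) / 2
    h = 2 * cutoff / fs * np.sinc(2 * cutoff / fs * n)  # ideal lowpass
    h *= np.kaiser(numtaps, beta)                       # adjustable window
    return h / h.sum()                                  # normalize DC gain

# 40 Hz cutoff at fs = 360 Hz (a common ECG sampling rate)
h = lowpass_fir(numtaps=101, cutoff=40.0, fs=360.0, beta=6.0)
```

Sweeping beta while holding the tap count fixed is the "adjustable window" knob: beta ≈ 6 gives roughly 60 dB of stopband attenuation at the cost of a wider transition band than, say, beta ≈ 3.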
Development of the adjusted nuclear cross-section library based on JENDL-3.2 for large FBR
JNC (and PNC) had developed the adjusted nuclear cross-section library in which the results of the JUPITER experiments were reflected. Using this adjusted library, the distinct improvement of the accuracy in nuclear design of FBR cores had been achieved. As a recent research, JNC develops a database of other integral data in addition to the JUPITER experiments, aiming at further improvement for accuracy and reliability. In 1991, the adjusted library based on JENDL-2, JFS-3-J2 (ADJ91R), was developed, and it has been used on the design research for FBR. As an evaluated nuclear library, however, JENDL-3.2 is recently used. Therefore, the authors developed an adjusted library based on JENDL-3.2 which is called JFS-3-J3.2(ADJ98). It is known that the adjusted library based on JENDL-2 overestimated the sodium void reactivity worth by 10-20%. It is expected that the adjusted library based on JENDL-3.2 solve the problem. The adjusted library JFS-3-J3.2(ADJ98) was produced with the same method as the adjusted library JFS-3-J2(ADJ91R) and used more integral parameters of JUPITER experiments than the adjusted library JFS-3-J2(ADJ91R). This report also describes the design accuracy estimation on a 600 MWe class FBR with the adjusted library JFS-3-J3.2(ADJ98). Its main nuclear design parameters (multiplication factor, burn-up reactivity loss, breeding ratio, etc.) except the sodium void reactivity worth which are calculated with the adjusted library JFS-3-J3.2(ADJ98) are almost the same as those predicted with JFS-3-J2(ADJ91R). As for the sodium void reactivity, the adjusted library JFS-3-J3.2(ADJ98) estimates about 4% smaller than the JFS-3-J2(ADJ91R) because of the change of the basic nuclear library from JENDL-2 to JENDL-3.2. (author)
A novel micro-accelerometer with adjustable sensitivity based on resonant tunnelling diodes
Resonant tunnelling diodes (RTDs) have negative differential resistance effect, and the current-voltage characteristics change as a function of external stress, which is regarded as meso-piezoresistance effect of RTDs. In this paper, a novel micro-accelerometer based on AlAs/GaAs/In0.1Ga0.9As/GaAs/AlAs RTDs is designed and fabricated to be a four-beam-mass structure, and an RTD-Wheatstone bridge measurement system is established to test the basic properties of this novel accelerometer. According to the experimental results, the sensitivity of the RTD based micro-accelerometer is adjustable within a range of 3 orders when the bias voltage of the sensor changes. The largest sensitivity of this RTD based micro-accelerometer is 560.2025 mV/g which is about 10 times larger than that of silicon based micro piezoresistive accelerometer, while the smallest one is 1.49135 mV/g. (condensed matter: electronic structure, electrical, magnetic, and optical properties)
Xiong Ji-Jun; Mao Hai-Yang; Zhang Wen-Dong; Wang Kai-Qun
2009-01-01
Voltage adjusting characteristics in terahertz transmission through Fabry-Pérot-based metamaterials
Jun Luo
2015-10-01
Full Text Available Metallic electric split-ring resonators (SRRs) with feature sizes on the micrometer scale, connected by thin metal wires, are patterned to form a periodically distributed planar array. The arrayed SRRs are fabricated on an n-doped gallium arsenide (n-GaAs) layer grown directly on a semi-insulating gallium arsenide (SI-GaAs) wafer. The patterned metal microstructures and the n-GaAs layer form a Schottky diode, which supports an external voltage applied to modify the device properties. The resulting architectures exhibit typical functional metamaterial behavior and are used to reveal voltage-adjustable characteristics in the transmission of terahertz waves at normal incidence. We also demonstrate the terahertz transmission characteristics of the voltage-controlled Fabry-Pérot-based metamaterial device composed of the arrayed metallic SRRs. To date, many metamaterials developed in earlier work have been used to regulate the transmission amplitude or phase at specific frequencies in the terahertz range, dominated mainly by the inductance-capacitance (LC) resonance mechanism. In our work, by contrast, an externally voltage-controlled metamaterial device is developed, and extraordinary transmission regulation based on both the Fabry-Pérot (FP) resonance and a relatively weak surface plasmon polariton (SPP) resonance is presented over the 0.025-1.5 THz range. Our research therefore shows a potential application of the dual-mode-resonance-based metamaterial for improving terahertz transmission regulation.
Iterative Dense Correspondence Correction Through Bundle Adjustment Feedback-Based Error Detection
Hess-Flores, M A; Duchaineau, M A; Goldman, M J; Joy, K I
2009-11-23
A novel method to detect and correct inaccuracies in a set of unconstrained dense correspondences between two images is presented. Starting with a robust, general-purpose dense correspondence algorithm, an initial pose estimate and dense 3D scene reconstruction are obtained and bundle-adjusted. Reprojection errors are then computed for each correspondence pair, which is used as a metric to distinguish high and low-error correspondences. An affine neighborhood-based coarse-to-fine iterative search algorithm is then applied only on the high-error correspondences to correct their positions. Such an error detection and correction mechanism is novel for unconstrained dense correspondences, for example not obtained through epipolar geometry-based guided matching. Results indicate that correspondences in regions with issues such as occlusions, repetitive patterns and moving objects can be identified and corrected, such that a more accurate set of dense correspondences results from the feedback-based process, as proven by more accurate pose and structure estimates.
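The reprojection-error metric used above to separate high- and low-error correspondences can be sketched as follows, with toy camera matrices and a toy 3-D point:

```python
import numpy as np

# Hedged sketch of the reprojection-error screen: project a
# triangulated 3-D point through each camera and compare with the
# measured correspondence; large errors flag pairs for correction.
# Cameras and points are toy values, not from the paper.

def reproject(P, X):
    """Project homogeneous 3-D point X with a 3x4 camera matrix P."""
    x = P @ X
    return x[:2] / x[2]

def reprojection_error(P1, P2, X, x1, x2):
    """Sum of pixel distances between projections and measurements."""
    return (np.linalg.norm(reproject(P1, X) - x1)
            + np.linalg.norm(reproject(P2, X) - x2))

# Toy rig: canonical camera and one translated 1 unit along x
P1 = np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = np.hstack([np.eye(3), np.array([[-1.0], [0.0], [0.0]])])
X = np.array([0.5, 0.2, 4.0, 1.0])            # scene point at depth 4

x1, x2 = reproject(P1, X), reproject(P2, X)   # perfect correspondence
err_good = reprojection_error(P1, P2, X, x1, x2)
err_bad = reprojection_error(P1, P2, X, x1 + 2.0, x2)  # drifted match
```

Thresholding such errors after bundle adjustment is what routes only the suspect correspondences into the coarse-to-fine correction search.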
Uncertainty study of the PWR pressure vessel fluence. Adjustment of the nuclear data base
A code system has been developed for calculating the sensitivity and uncertainty of the neutron flux and reaction rates computed by transport codes; adjustment of the basic data to experimental results can be performed as well. Various sources of uncertainty can be taken into account, such as uncertainties in the cross-sections, response functions, fission spectrum and spatial distribution of the neutron source, as well as geometry and material composition uncertainties. Both one- and two-dimensional analyses can be performed. Linear perturbation theory is applied. The code system is sufficiently general to be used for various analyses in the fields of fission and fusion. The principal objective of our studies concerns the capsule dosimetry study carried out in the framework of the 900 MWe PWR pressure vessel surveillance program. The analysis indicates that the present calculations, performed with the code TRIPOLI-2 using the ENDF/B-IV based, non-perturbed neutron cross-section library in 315 energy groups, allow estimation of the neutron flux and reaction rates in the surveillance capsules; adjustment to the calculated and measured reaction rates permits a reduction of these uncertainties. The results obtained with the adjusted iron cross-sections, response functions and fission spectrum show that the agreement between calculation and experiment improves to within approximately 10%. The neutron flux deduced from the experiment is then extrapolated from the capsule to the most exposed pressure vessel location using the calculated lead factor. The uncertainty in this factor was estimated to be about 7%. (author). 39 refs., 52 figs., 30 tabs
Roland Gerhards
2013-05-01
Full Text Available Harrowing is often used to reduce weed competition, generally at a constant intensity across a whole field. The efficacy of weed harrowing in wheat and barley can be optimized if site-specific conditions of soil, weed infestation and crop growth stage are taken into account. This study aimed to develop and test an algorithm that automatically adjusts the harrowing intensity by varying the tine angle and number of passes. The field variability of crop leaf cover, weed density and soil density was acquired with geo-referenced sensors to investigate harrowing selectivity and crop recovery. Crop leaf cover and weed density were assessed with bispectral cameras through differential image analysis. The draught force of the soil opposite the direction of travel was measured with an electronic load cell sensor connected to a rigid tine mounted in front of the harrow. Optimal harrowing intensity levels were derived from previously conducted experiments, based on weed control efficacy and yield gain. The assessments of crop leaf cover, weed density and soil density were combined via rules with the aforementioned optimal intensities in a linguistic fuzzy inference system (LFIS). The system was evaluated in two field experiments that compared constant intensities with variable intensities inferred by the system. A higher weed density reduction could be achieved when the harrowing intensity was not kept constant along the cultivated plot. Varying the intensity tended to reduce crop leaf cover while slightly improving crop yield. Real-time intensity adjustment with this system is achievable if the cameras are attached at the front and at the rear or sides of the harrow.
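A single inference step of a linguistic fuzzy system of the kind described can be sketched with triangular memberships and two rules; the breakpoints, rules and output levels below are invented for illustration:

```python
# Hedged sketch of one LFIS-style inference step: fuzzify weed
# density and crop cover, fire two rules, and defuzzify to a
# crisp harrowing-intensity setting. All values are illustrative.

def tri(x, a, b, c):
    """Triangular membership function rising from a, peaking at b,
    falling to zero at c."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def harrow_intensity(weed_density, crop_cover):
    """Crisp intensity in [0, 1] via a weighted average of rule
    outputs (Sugeno-style defuzzification)."""
    w_high = tri(weed_density, 20, 60, 100)   # plants/m^2 judged "high"
    c_low = tri(crop_cover, 0, 10, 40)        # % cover judged "low"
    # Rule 1: high weed density  -> intense harrowing (0.9)
    # Rule 2: low crop leaf cover -> gentle harrowing (0.2)
    num = w_high * 0.9 + c_low * 0.2
    den = w_high + c_low
    return num / den if den else 0.5          # default: medium intensity

intensity = harrow_intensity(weed_density=60, crop_cover=30)
```

The real system additionally folds in soil density (draught force) and maps the crisp output to tine angle and number of passes.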
Liu, Shengqiang; Li, Jie; Du, Chunlei; Yu, Junsheng
2015-07-01
A color tuning index (ICT) parameter for evaluating the color change capability of color-tunable organic light-emitting diodes (CT-OLEDs) was proposed and formulated. And a series of CT-OLEDs, consisting of five different carrier/exciton adjusting interlayers (C/EALs) inserted between two complementary emitting layers, were fabricated and applied to disclose the relationship between ICT and C/EALs. The result showed that the trend of electroluminescence spectra behavior in CT-OLEDs has good accordance with ICT values, indicating that the ICT parameter is feasible for the evaluation of color variation. Meanwhile, by changing energy level and C/EAL thickness, the optimized device with the widest color tuning range was based on N,N'-dicarbazolyl-3,5-benzene C/EAL, exhibiting the highest ICT value of 41.2%. Based on carrier quadratic hopping theory and exciton transfer model, two fitting ICT formulas derived from the highest occupied molecular orbital (HOMO) energy level and triplet energy level were simulated. Finally, a color tuning prediction (CTP) model was developed to deduce the ICT via C/EAL HOMO and triplet energy levels, and verified by the fabricated OLEDs with five different C/EALs. We believe that the CTP model assisted with ICT parameter will be helpful for fabricating high performance CT-OLEDs with a broad range of color tuning.
Diggle, Peter J
2007-01-01
Model-based geostatistics refers to the application of general statistical principles of modeling and inference to geostatistical problems. This volume provides a treatment of model-based geostatistics and emphasizes on statistical methods and applications. It also features analyses of datasets from a range of scientific contexts.
王丽珍; 李秀芳
2012-01-01
Since 2007, capital and stock increases have been a trend among insurance companies: the required capital size is expanding and the frequency of capital increases is rising. Although this phenomenon is inevitable, owing to the costliness and scarcity of capital, both the insurance industry and the China Insurance Regulatory Commission have paid close attention to the capital-raising upsurge. Furthermore, if insurance companies cannot raise capital in time, they cannot operate normally, which endangers social stability and the interests of insurance consumers. It is therefore meaningful to study capital under solvency regulation at this special development stage. Capital serves to absorb unexpected losses; in this sense, the adjustment of capital corresponds to the risk level. Thus, we combine capital with risk to study the development of China's property-liability companies. Following the research paradigm of banking studies on capital structure and portfolio risk, we apply a partial adjustment model to the insurance area. Using panel data on 34 property-liability insurance companies, this paper conducts three-stage least squares (3SLS) estimation of a simultaneous-equations model and examines the determination of capital and portfolio risk under solvency regulation. In addition, we consider other factors within the research framework, such as the structure of business lines, asset scale, the reinsurance ratio, and the return on assets. Robustness tests on five types of subsamples under two broad headings also present consistent results. Our key findings cover four aspects. First of all, we find that well-capitalized insurers increase capital faster than under-capitalized insurers, which is different from the condition in America. This result implies that, on account of the rapid development of insurance industry currently, insurance
Calculation of Credit Valuation Adjustment Based on Least Square Monte Carlo Methods
Qian Liu
2015-01-01
Full Text Available Counterparty credit risk has become one of the highest-profile risks facing participants in the financial markets. Despite this, relatively little is known about how counterparty credit risk is actually priced mathematically. We examine this issue using interest rate swaps. This widely traded financial product allows us to identify well the risk profiles of both institutions and their counterparties. Concretely, the Hull-White model for the interest rate and a mean-reverting model for the default intensity have proven to correspond to reality and to be well suited for financial institutions. Moreover, we find that the least squares Monte Carlo method is quite efficient in the calculation of the credit valuation adjustment (CVA), as it avoids the redundant step of generating inner scenarios and thereby accelerates the convergence of the CVA estimators. In the second part, we propose a new method to calculate bilateral CVA that avoids the double counting found in the existing literature, where several copula functions are adopted to describe the dependence of the two first-to-default times.
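Once expected exposures are available (in the paper, via least squares Monte Carlo under Hull-White dynamics), unilateral CVA reduces to a discounted sum, roughly CVA ≈ (1−R) Σᵢ DF(tᵢ)·EE(tᵢ)·PD(tᵢ₋₁, tᵢ). A sketch with a toy Brownian exposure proxy in place of the LSMC swap exposures:

```python
import numpy as np

# Hedged sketch of the discrete unilateral CVA sum. The exposure
# below is a toy driftless-Brownian proxy for swap value, NOT the
# paper's Hull-White/LSMC exposures; rate, hazard and recovery
# are illustrative constants.

rng = np.random.default_rng(1)
npaths, nsteps, T = 20000, 20, 5.0
dt = T / nsteps
t = np.linspace(dt, T, nsteps)

# Toy exposure: positive part of a Brownian "swap value"
paths = np.cumsum(rng.standard_normal((npaths, nsteps)) * np.sqrt(dt), axis=1)
ee = np.mean(np.maximum(paths, 0.0), axis=0)          # expected exposure

r, lam, recov = 0.02, 0.03, 0.4                       # rate, hazard, recovery
df = np.exp(-r * t)                                   # discount factors
pd_incr = np.exp(-lam * (t - dt)) - np.exp(-lam * t)  # default in (t_{i-1}, t_i]
cva = (1 - recov) * np.sum(df * ee * pd_incr)
```

In the paper's setting, `ee` would come from regressing continuation values path by path (the LSMC step that avoids nested simulation), while `pd_incr` would come from the calibrated mean-reverting intensity.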
Punamäki, R L; Qouta, S; el Sarraj, E
1997-08-01
The relations between traumatic events, perceived parenting styles, children's resources, political activity, and psychological adjustment were examined among 108 Palestinian boys and girls of 11-12 years of age. The results showed that exposure to traumatic events increased psychological adjustment problems directly and via 2 mediating paths. First, the more traumatic events children had experienced, the more negative parenting they experienced. And, the poorer they perceived parenting, the more they suffered from high neuroticism and low self-esteem. Second, the more traumatic events children had experienced, the more political activity they showed, and the more active they were, the more they suffered from psychological adjustment problems. Good perceived parenting protected children's psychological adjustment by making them less vulnerable in two ways. First, traumatic events decreased their intellectual, creative, and cognitive resources, and a lack of resources predicted many psychological adjustment problems in a model excluding perceived parenting. Second, political activity increased psychological adjustment problems in the same model, but not in the model including good parenting. PMID:9306648
Real time adjustment of slow changing flow components in distributed urban runoff models
Borup, Morten; Grum, M.; Mikkelsen, Peter Steen
2011-01-01
model states governing the infiltrating inflow based on downstream flow measurements. The fact that the infiltration processes follow a relatively large time scale is used to estimate the part of the model residuals, at a gauged downstream location, that can be attributed to infiltration processes. This...... improvements for regular simulations as well as for forecasts up to 10 hours ahead. The updating method reduces the impact of non-representative precipitation estimates as well as model structural errors and leads to better overall modelling results....
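A minimal sketch of the idea of attributing the slow part of a downstream residual to infiltration, using exponential smoothing as the low-pass filter; this is an assumption-laden stand-in, not the authors' state-updating scheme.

```python
class SlowInflowUpdater:
    """Attribute the low-frequency part of a downstream flow residual to
    infiltration and use it to correct the model's slow-inflow state.
    Exponential smoothing acts as the low-pass filter; a small alpha
    corresponds to the long time scale of infiltration processes."""

    def __init__(self, alpha=0.05):
        self.alpha = alpha
        self.slow_residual = 0.0  # current estimate of the slow flow error

    def step(self, measured_flow, modeled_flow):
        residual = measured_flow - modeled_flow
        # keep only the slowly varying component of the residual
        self.slow_residual += self.alpha * (residual - self.slow_residual)
        return self.slow_residual  # correction to the modeled infiltration inflow
```

Feeding a persistent offset drives the correction toward that offset, while short-lived spikes (e.g. from rain events) are largely ignored by the slow state.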
A new glacial isostatic adjustment model of the Innuitian Ice Sheet, Arctic Canada
Simon, K. M.; James, T. S.; Dyke, A. S.
2015-07-01
A reconstruction of the Innuitian Ice Sheet (IIS) is developed that incorporates first-order constraints on its spatial extent and history as suggested by regional glacial geology studies. Glacial isostatic adjustment modelling of this ice sheet provides relative sea-level predictions that are in good agreement with measurements of post-glacial sea-level change at 18 locations. The results indicate peak thicknesses of the Innuitian Ice Sheet of approximately 1600 m, up to 400 m thicker than the minimum peak thicknesses estimated from glacial geology studies, but between approximately 1000 to 1500 m thinner than the peak thicknesses present in previous GIA models. The thickness history of the best-fit Innuitian Ice Sheet model developed here, termed SJD15, differs from the ICE-5G reconstruction and provides an improved fit to sea-level measurements from the lowland sector of the ice sheet. Both models provide a similar fit to relative sea-level measurements from the alpine sector. The vertical crustal motion predictions of the best-fit IIS model are in general agreement with limited GPS observations, after correction for a significant elastic crustal response to present-day ice mass change. The new model provides approximately 2.7 m equivalent contribution to global sea-level rise, an increase of +0.6 m compared to the Innuitian portion of ICE-5G. SJD15 is qualitatively more similar to the recent ICE-6G ice sheet reconstruction, which appears to also include more spatially extensive ice cover in the Innuitian region than ICE-5G.
ADJUSTMENT FACTORS AND ADJUSTMENT STRUCTURE
Tao Benzao
2003-01-01
In this paper, the adjustment factors J and R put forward by Professor Zhou Jiangwen are introduced, and the nature of these adjustment factors and their role in evaluating adjustment structure are discussed and proved.
Gabriela Prelipcean
2014-02-01
Full Text Available The recent crisis and turbulence have significantly changed consumer behavior, especially in terms of access possibilities and satisfaction, but also through the new, dynamically flexible adjustment of the supply of goods and services. Access possibilities and consumer satisfaction should be analyzed in the broader context of corporate responsibility, including that of financial institutions. This contribution responds to the current situation in Romania, an emerging country strongly affected by the global crisis. Empowering producers and harmonizing their interests with the interests of consumers really requires a significant revision of the quantitative models used to study long-term consumption-saving behavior, with a new model adapted to the current post-crisis conditions in Romania. Based on the general idea of the model developed by Hai, Krueger, and Postlewaite (2013), we propose a new way of exploiting the results, considering the dynamics of innovative adaptation based on Brownian motion, the integration of the cyclicality concept, Lévy-type stochastic shocks, and extensive interaction with capital markets characterized by higher returns and volatility.
Real-time video fusion based on multistage hashing and hybrid transformation with depth adjustment
Zhao, Hongjian; Xia, Shixiong; Yao, Rui; Niu, Qiang; Zhou, Yong
2015-11-01
Concatenating multicamera videos with differing centers of projection into a single panoramic video is a critical technology for many important applications. We propose a real-time video fusion approach to create wide field-of-view video. To provide a fast and accurate video registration method, we propose multistage hashing to find matched feature-point pairs from coarse to fine. In the first stage of multistage hashing, a short compact binary code is learned from all feature points, and the Hamming distance between each pair of points is calculated to find candidate matched points. In the second stage, a long binary code is obtained by remapping the candidate points for fine matching. To tackle the distortion and scene-depth variation of multiview frames in videos, we build a hybrid transformation with depth adjustment. The depth compensation between two adjacent frames is extended to multiple frames in an iterative model for successive video frames. We conduct several experiments with different dynamic scenes and camera numbers to verify the performance of the proposed real-time video fusion approach.
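The coarse-to-fine matching idea can be sketched as follows, with integer bitstrings standing in for the learned binary codes; the thresholds and the best-candidate rule are illustrative assumptions, not the paper's learned hashing functions.

```python
def hamming(a, b):
    """Hamming distance between two equal-length binary codes stored as ints."""
    return bin(a ^ b).count("1")

def match_points(codes_short_a, codes_long_a, codes_short_b, codes_long_b,
                 t_short=2, t_long=8):
    """Two-stage matching: short compact codes prune candidates cheaply,
    long codes confirm the match on the surviving candidates."""
    matches = []
    for i, (sa, la) in enumerate(zip(codes_short_a, codes_long_a)):
        # stage 1: cheap Hamming test on the short code
        cands = [j for j, sb in enumerate(codes_short_b)
                 if hamming(sa, sb) <= t_short]
        # stage 2: fine Hamming test on the long code, keep the best candidate
        best = None
        for j in cands:
            d = hamming(la, codes_long_b[j])
            if d <= t_long and (best is None or d < best[1]):
                best = (j, d)
        if best:
            matches.append((i, best[0]))
    return matches
```

The first stage keeps the per-point cost low because most pairs fail the short-code test before the longer code is ever compared.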
Malin, Jane T.; Schreckenghost, Debra K.
2001-01-01
The Adjustable Autonomy Testbed (AAT) is a simulation-based testbed located in the Intelligent Systems Laboratory in the Automation, Robotics and Simulation Division at NASA Johnson Space Center. The purpose of the testbed is to support evaluation and validation of prototypes of adjustable autonomous agent software for control and fault management for complex systems. The AAT project has developed prototype adjustable autonomous agent software and human interfaces for cooperative fault management. This software builds on current autonomous agent technology by altering the architecture, components and interfaces for effective teamwork between autonomous systems and human experts. Autonomous agents include a planner, flexible executive, low-level control and deductive model-based fault isolation. Adjustable autonomy is intended to increase the flexibility and effectiveness of fault management with an autonomous system. The test domain for this work is control of advanced life support systems for habitats for planetary exploration. The CONFIG hybrid discrete event simulation environment provides flexible and dynamically reconfigurable models of the behavior of components and fluids in the life support systems. Both discrete event and continuous (discrete time) simulation are supported, and flows and pressures are computed globally. This provides fast dynamic simulations of interacting hardware systems in closed loops that can be reconfigured during operations scenarios, producing complex cascading effects of operations and failures. Current object-oriented model libraries support modeling of fluid systems, and models have been developed of physico-chemical and biological subsystems for processing advanced life support gases. In FY01, water recovery system models will be developed.
Based on Motivation, Adjusting Strategy
杨洪
2012-01-01
There exists a supply-and-demand relationship between tourism operators and tourists; therefore, tour operators must understand tourist behavior and motivation well in order to adjust their business strategy to meet tourists' needs. In this paper, based on as thorough an understanding as possible of both old and new tourist motivations, the author conducts a targeted study of the operating strategies of tourism operators.
Thermo-adjustable mechanical properties of polymer, lipid-based complex fluids
Firestone, Millicent; Lee, Sungwon
2012-02-01
Combined rheology (oscillatory and steady shear) and SAXS studies reveal details of the temperature-dependent, reversible mechanical properties of nonionic polymer, lipid-based complex fluids. Compositions prepared by introduction of the polymer as either a lipid conjugate or a triblock copolymer form an elastic gel as the temperature is increased to 18 °C. The network is produced from PEO chain entanglement and physical crosslinks confined within the intervening aqueous layers of a multilamellar structured lyotropic mesophase. Although the complex fluids are weak gels, tuning of the gel strength can be achieved by temperature adjustment. The sol state formed at reduced temperature arises as a consequence of the well-solvated PEO chains adopting a non-interacting conformational state. Complex fluids prepared with the triblock copolymers exhibit greater tunability in viscoelasticity than those containing the PEGylated-lipid conjugate because of the temperature-dependent water solubility of the central PPO block. The water solubility of PPO at reduced temperatures results in the polymer being expelled from the self-assembled amphiphilic bilayer, causing collapse of the swollen lamellar structure and loss of the PEO network. At elevated temperatures, the triblock reinserts into the bilayer, producing an elastic gel. These studies identify macromolecular architectures for the facile preparation of dynamic soft materials possessing a range of mechanical properties that can be tuned by modest temperature control.
Soares, Ana Paula; Guisande, M Adelina; Diniz, António M; Almeida, Leandro S
2006-05-01
This article presents a model of the interaction of personal and contextual variables in the prediction of academic performance and psychosocial development of Portuguese college students. The sample consists of 560 first-year college students at the University of Minho. The path analysis results suggest that students' initial expectations of involvement in academic life were an effective predictor of their involvement during the first year, and that the social climate of the classroom influenced their involvement, well-being, and the levels of satisfaction obtained. However, these relationships were not strong enough to influence the criterion variables in the model (academic performance and psychosocial development). Academic performance was predicted by high school grades and college entrance examination scores, and the level of psychosocial development was determined by the level of development shown at the time students entered college. Though more research is needed, these results point to the importance of students' pre-college characteristics when considering the quality of their college adjustment process. PMID:17296040
Kendall, W.L.; Hines, J.E.; Nichols, J.D.
2003-01-01
Matrix population models are important tools for research and management of populations. Estimating the parameters of these models is an important step in applying them to real populations. Multistate capture-recapture methods have provided a useful means for estimating survival and parameters of transition between locations or life history states but have mostly relied on the assumption that the state occupied by each detected animal is known with certainty. Nevertheless, in some cases animals can be misclassified. Using multiple capture sessions within each period of interest, we developed a method that adjusts estimates of transition probabilities for bias due to misclassification. We applied this method to 10 years of sighting data for a population of Florida manatees (Trichechus manatus latirostris) in order to estimate the annual probability of transition from nonbreeding to breeding status. Some sighted females were unequivocally classified as breeders because they were clearly accompanied by a first-year calf. The remainder were classified, sometimes erroneously, as nonbreeders because an attendant first-year calf was not observed or was classified as more than one year old. We estimated a conditional breeding probability of 0.31 ± 0.04 (estimate ± 1 SE) when we ignored misclassification bias, and 0.61 ± 0.09 when we accounted for misclassification.
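A toy illustration of the bias, not the paper's multistate capture-recapture estimator: if each true breeder is classified correctly only with probability c, the naive breeding probability is biased low by roughly that factor, and dividing by c debiases it. The classification probability of 0.5 is an assumption chosen so the numbers echo the 0.31 versus 0.61 estimates above.

```python
import random

def simulate_observed_breeding(p_breed=0.61, c_classify=0.5, n=100000, seed=7):
    """Fraction of females recorded as breeders when a true breeder is
    correctly classified (calf observed) only with probability c_classify;
    nonbreeders are never misrecorded as breeders."""
    random.seed(seed)
    hits = 0
    for _ in range(n):
        if random.random() < p_breed and random.random() < c_classify:
            hits += 1
    return hits / n

p_obs = simulate_observed_breeding()  # biased low: roughly p_breed * c_classify
p_adj = p_obs / 0.5                   # divide by the classification probability
```

The simulated naive estimate lands near 0.31 and the corrected one near 0.61, mirroring the magnitude of the misclassification bias reported in the abstract.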
Hu, Yun-peng; Li, Ke-bo; Xu, Wei; Chen, Lei; Huang, Jian-yu
2016-08-01
The space-based visible (SBV) program has proven to have a large advantage in observing geosynchronous earth orbit (GEO) objects. Since SBV observations started in 1996, many strategies have emerged for observing GEO objects more efficiently. However, it is a big challenge to visit all GEO objects in a relatively short time because of the distribution characteristics of the GEO belt and the limited field of view (FOV) of the sensor. It is also difficult to maintain high daily coverage of the GEO belt throughout a whole year. In this paper, a space-based observation strategy for GEO objects is designed based on the characteristics of the GEO belt. The mathematical formula of the GEO belt is deduced and the evolution of GEO objects is illustrated. There are basically two kinds of orientation strategies for most observation satellites, i.e., earth-oriented and inertially directed. The influence of each strategy on its observation region is analyzed and the two are compared. A passive optical instrument with a daily attitude-adjusting strategy is proposed to increase the daily coverage rate of GEO objects over a whole year. Furthermore, in order to observe more GEO objects in a relatively short time, a strategy using a satellite with multiple sensors is proposed. With the installation parameters between different sensors optimized, more than 98% of GEO satellites can be observed every day, and almost all GEO satellites can be observed every two days, with 3 sensors (FOV: 6° × 6°) on the satellite under the daily pointing-adjustment strategy.
Belehaki Anna
2012-12-01
Full Text Available Validation results for the latest version of the TaD model (TaDv2) show realistic reconstruction of electron density profiles (EDPs), with an average error of 3 TECU, similar to the error obtained from GNSS-TEC-calculated parameters. The work presented here aims to further improve the accuracy of the TaD topside reconstruction by adjusting the TEC parameter calculated from the TaD model with the TEC parameter calculated from RINEX files provided by GNSS receivers co-located with the Digisondes. The performance of the new version is tested during a storm period, demonstrating further improvements with respect to the previous version. Statistical comparison of modeled and observed TEC confirms the validity of the proposed adjustment. A significant benefit of the proposed upgrade is that it facilitates the real-time implementation of TaD. The model needs a reliable measure of the scale height at the peak height, which is supposed to be provided by the Digisondes. Often, the automatic scaling software fails to correctly calculate the scale height at the peak, Hm, due to interference in the received signal. Consequently, the model-estimated topside scale height is calculated wrongly, leading to unrealistic results for the modeled EDP. The proposed TEC adjustment forces the model to correctly reproduce the topside scale height despite the inaccurate values of Hm. This adjustment is very important for the application of TaD in an operational environment.
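A simplified stand-in for the TEC adjustment: TaD itself adjusts the topside scale height, whereas the sketch below simply rescales the whole modeled profile so its integrated TEC matches the co-located GNSS value. Function names and the toy profile are assumptions for illustration.

```python
def tec_tecu(heights_km, density_el_m3):
    """Vertical TEC of an electron-density profile by the trapezoidal rule,
    converted to TEC units (1 TECU = 1e16 el/m^2)."""
    total = 0.0
    for k in range(1, len(heights_km)):
        total += 0.5 * (density_el_m3[k] + density_el_m3[k - 1]) \
                 * (heights_km[k] - heights_km[k - 1]) * 1e3  # km -> m
    return total / 1e16

def adjust_profile(heights_km, density_el_m3, tec_gnss_tecu):
    """Rescale the modeled profile so its integrated TEC equals the
    co-located GNSS TEC value."""
    factor = tec_gnss_tecu / tec_tecu(heights_km, density_el_m3)
    return [d * factor for d in density_el_m3]
```

Because the integral is linear in the density, the adjusted profile reproduces the GNSS TEC exactly, which is the constraint the real adjustment enforces through the topside scale height.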
Dynamic capacity adjustment for virtual-path based networks using neuro-dynamic programming
Şahin, Cem
2003-01-01
Cataloged from PDF version of article. Dynamic capacity adjustment is the process of updating the capacity reservation of a virtual path via signaling in the network. There are two important issues to consider: bandwidth (resource) utilization and signaling traffic. Changing the capacity too frequently will lead to efficient usage of resources but has the disadvantage of increasing signaling traffic among the network elements. On the other hand, if the capacity is adjust...
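The utilization-versus-signaling trade-off can be sketched with a simple hysteresis rule; the thresholds and headroom below are illustrative assumptions, not the thesis' neuro-dynamic programming policy.

```python
def capacity_updates(demand, initial_capacity, headroom=1.2, low=0.5):
    """Resize the virtual-path reservation only when utilisation leaves the
    band [low*cap, cap]; each resize counts as one signalling event, so a
    wider band trades bandwidth efficiency for less signalling traffic."""
    cap = initial_capacity
    signals = 0
    history = []
    for d in demand:
        if d > cap or d < low * cap:
            cap = headroom * d  # resize with some headroom above demand
            signals += 1
        history.append(cap)
    return history, signals
```

With a narrow band the reservation tracks demand closely (good utilization, many signalling events); widening the band reverses the trade-off, which is exactly the tension the adjustment policy must optimize.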
An assessment of the ICE6G_C(VM5a) glacial isostatic adjustment model
Purcell, A.; Tregoning, P.; Dehecq, A.
2016-05-01
The recent release of the next-generation global ice history model, ICE6G_C(VM5a), is likely to be of interest to a wide range of disciplines including oceanography (sea level studies), space gravity (mass balance studies), glaciology, and, of course, geodynamics (Earth rheology studies). In this paper we make an assessment of some aspects of the ICE6G_C(VM5a) model and show that the published present-day radial uplift rates are too high along the eastern side of the Antarctic Peninsula (by ˜8.6 mm/yr) and beneath the Ross Ice Shelf (by ˜5 mm/yr). Furthermore, the published spherical harmonic coefficients—which are meant to represent the dimensionless present-day changes due to glacial isostatic adjustment (GIA)—contain excessive power for degree ≥90, do not agree with physical expectations, and do not accurately represent the ICE6G_C(VM5a) model. We show that the excessive power in the high-degree terms produces erroneous uplift rates when the empirical relationship of Purcell et al. (2011) is applied, but when correct Stokes coefficients are used, the empirical relationship produces excellent agreement with the fully rigorous computation of the radial velocity field, subject to the caveats first noted by Purcell et al. (2011). Using the Australian National University (ANU) group's CALSEA software package, we recompute the present-day GIA signal for the ice thickness history and Earth rheology used by Peltier et al. (2015) and provide dimensionless Stokes coefficients that can be used to correct satellite altimetry observations for GIA over oceans and by the space gravity community to separate GIA and present-day mass balance change signals. We denote the new data sets as ICE6G_ANU.
吕明元; 李彦超; 宫璐一
2014-01-01
Since reform and opening up, the economy of Tianjin has developed rapidly, but its model of economic growth has been inefficient. In July 2010, Tianjin was designated one of the low-carbon cities in China. Therefore, Tianjin must adjust its industrial structure and put a low-carbon transition strategy into practice. This article first establishes a multi-objective linear programming model and determines the parameters of the economic development objectives and the carbon emission targets. By solving the model, it predicts the 2015 carbon emissions of Tianjin's three industries and concludes that the city's energy-saving and emission-reduction targets can be achieved. Finally, according to the results of the model, it offers suggestions on how to adjust the industrial structure of Tianjin and enact supporting policy.
Assessment of an adjustment factor to model radar range dependent error
Sebastianelli, S.; Russo, F.; Napolitano, F.; Baldini, L.
2012-09-01
Quantitative radar precipitation estimates are affected by errors from many causes, such as radar miscalibration, range degradation, attenuation, ground clutter, variability of the Z-R relation, variability of the drop size distribution, vertical air motion, anomalous propagation, and beam blocking. Range degradation (including beam broadening and sampling of precipitation at increasing altitude) and signal attenuation determine a range-dependent behavior of the error. The aim of this work is to model the range-dependent error through an adjustment factor derived from the trend of the G/R ratio against range, where G and R are the corresponding rain gauge and radar rainfall amounts computed at each rain gauge location. Since range degradation and signal attenuation effects are negligible close to the radar, results show that within 40 km of the radar the overall range error is independent of the distance from Polar 55C and no range correction is needed. Nevertheless, up to this distance, the G/R ratio can show a concave trend with range, due to interception of the melting layer by the radar beam during stratiform events.
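The adjustment-factor idea can be sketched as a log-linear fit of G/R against range beyond a breakpoint; the synthetic inputs and fit form below are assumptions, while in practice the trend would be estimated from gauge-radar pairs around the radar.

```python
import math

def range_adjustment(ranges_km, g_over_r, breakpoint_km=40.0):
    """Least-squares fit of log(G/R) against range beyond the breakpoint;
    within it the error is treated as range-independent (factor 1),
    mirroring the 40 km finding reported above."""
    xs = [r for r in ranges_km if r > breakpoint_km]
    ys = [math.log(f) for r, f in zip(ranges_km, g_over_r) if r > breakpoint_km]
    mx = sum(xs) / len(xs)
    my = sum(ys) / len(ys)
    b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) \
        / sum((x - mx) ** 2 for x in xs)
    a = my - b * mx

    def factor(r):
        # multiply radar rainfall at range r by this factor to correct the error
        return 1.0 if r <= breakpoint_km else math.exp(a + b * r)

    return factor
```

Working in log space keeps the fitted factor multiplicative and positive, which matches how G/R ratios are normally applied to radar rainfall fields.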
Agliari, Anna [Dipartimento di Scienze Economiche e Sociali, Universita Cattolica del Sacro Cuore, Via Emilia Parmense, 84, 29100 Piacenza (Italy)]. E-mail: anna.agliari@unicatt.it
2006-08-15
In this paper we study some global bifurcations arising in the Puu's oligopoly model when we assume that the producers do not adjust to the best reply but use an adaptive process to obtain at each step the new production. Such bifurcations cause the appearance of a pair of closed invariant curves, one attracting and one repelling, this latter being involved in the subcritical Neimark bifurcation of the Cournot equilibrium point. The aim of the paper is to highlight the relationship between the global bifurcations causing the appearance/disappearance of two invariant closed curves and the homoclinic connections of some saddle cycle, already conjectured in [Agliari A, Gardini L, Puu T. Some global bifurcations related to the appearance of closed invariant curves. Comput Math Simul 2005;68:201-19]. We refine the results obtained in such a paper, showing that the appearance/disappearance of closed invariant curves is not necessarily related to the existence of an attracting cycle. The characterization of the periodicity tongues (i.e. a region of the parameter space in which an attracting cycle exists) associated with a subcritical Neimark bifurcation is also discussed.
Modeling and Dynamic Simulation of the Adjust and Control System Mechanism for Reactor CAREM-25
The adjust and control system mechanism, MSAC, is an advanced, and in some senses unique, hydromechanical device. The efforts in modeling this mechanism are aimed to: get a deep understanding of the physical phenomena involved; identify the set of parameters relevant to the dynamics of the system; allow the numerical simulation of the system; predict the behavior of the mechanism in conditions other than those obtainable within the range of operation of the experimental setup (CEM); and help in defining the design of the CAPEM (loop for testing the mechanism under high-pressure/high-temperature conditions). Thanks to the close interaction between the mechanics, the experimenters, and the modelists that compose the MSAC task force, it has been possible to suggest improvements not only in the design of the mechanism, but also in the design and operation of the pulse generator (GDP) and the rest of the CEM. This effort has led to a design mature enough to be tested in a high-pressure loop.
Addor, Nans; Rohrer, Marco; Furrer, Reinhard; Seibert, Jan
2016-03-01
Bias adjustment methods usually do not account for the origins of biases in climate models and instead perform empirical adjustments. Biases in the synoptic circulation are for instance often overlooked when postprocessing regional climate model (RCM) simulations driven by general circulation models (GCMs). Yet considering atmospheric circulation helps to establish links between the synoptic and the regional scale, and thereby provides insights into the physical processes leading to RCM biases. Here we investigate how synoptic circulation biases impact regional climate simulations and influence our ability to mitigate biases in precipitation and temperature using quantile mapping. We considered 20 GCM-RCM combinations from the ENSEMBLES project and characterized the dominant atmospheric flow over the Alpine domain using circulation types. We report in particular a systematic overestimation of the frequency of westerly flow in winter. We show that it contributes to the generalized overestimation of winter precipitation over Switzerland, and this wet regional bias can be reduced by improving the simulation of synoptic circulation. We also demonstrate that statistical bias adjustment relying on quantile mapping is sensitive to circulation biases, which leads to residual errors in the postprocessed time series. Overall, decomposing GCM-RCM time series using circulation types reveals connections missed by analyses relying on monthly or seasonal values. Our results underscore the necessity to better diagnose process misrepresentation in climate models to progress with bias adjustment and impact modeling.
Huh, In; Cheon, Woo Young; Choi, Woo Young
2016-04-01
A subthreshold-swing-adjustable tunneling-field-effect-transistor-based random-access memory (SAT RAM) has been proposed and fabricated for low-power nonvolatile memory applications. The proposed SAT RAM cell demonstrates adjustable subthreshold swing (SS) depending on the stored information: small SS in the erase state ("1" state) and large SS in the program state ("0" state). Thus, SAT RAM cells can achieve low read voltage (Vread) with a large memory window, in addition to effective suppression of ambipolar behavior. These unique features of the SAT RAM originate from the locally stored charge, which modulates the tunneling barrier width (Wtun) of the source-to-channel tunneling junction.
Liu, Shengqiang; Li, Jie; Yu, Junsheng, E-mail: jsyu@uestc.edu.cn [State Key Laboratory of Electronic Thin Films and Integrated Devices, School of Optoelectronic Information, University of Electronic Science and Technology of China (UESTC), Chengdu 610054 (China); Du, Chunlei [Chengdu Institute of Optics and Electronics, Chinese Academy of Sciences, Chengdu 610209 (China)
2015-07-27
A color tuning index (I_CT) parameter for evaluating the color change capability of color-tunable organic light-emitting diodes (CT-OLEDs) was proposed and formulated. A series of CT-OLEDs, consisting of five different carrier/exciton adjusting interlayers (C/EALs) inserted between two complementary emitting layers, was fabricated and used to disclose the relationship between I_CT and the C/EALs. The results showed that the trend of the electroluminescence spectra of the CT-OLEDs accords well with the I_CT values, indicating that the I_CT parameter is feasible for evaluating color variation. Meanwhile, by changing the energy level and C/EAL thickness, the optimized device with the widest color tuning range was based on an N,N′-dicarbazolyl-3,5-benzene C/EAL, exhibiting the highest I_CT value of 41.2%. Based on carrier quadratic hopping theory and an exciton transfer model, two fitting I_CT formulas derived from the highest occupied molecular orbital (HOMO) energy level and the triplet energy level were simulated. Finally, a color tuning prediction (CTP) model was developed to deduce I_CT via the C/EAL HOMO and triplet energy levels, and was verified with the fabricated OLEDs with five different C/EALs. We believe that the CTP model, assisted by the I_CT parameter, will be helpful for fabricating high-performance CT-OLEDs with a broad color tuning range.
Use and Impact of Covariance Data in the Japanese Latest Adjusted Library ADJ2010 Based on JENDL-4.0
Yokoyama, K., E-mail: yokoyama.kenji09@jaea.go.jp; Ishikawa, M.
2015-01-15
The current status of covariance applications to fast reactor analysis and design in Japan is summarized. In Japan, the covariance data are mainly used for three purposes: (1) to quantify the uncertainty of nuclear core parameters, (2) to identify important nuclides, reactions and energy ranges which are dominant to the uncertainty of core parameters, and (3) to improve the accuracy of core design values by adopting integral data such as critical experiments and power reactor operation data. For the last purpose, cross section adjustment based on the Bayesian theorem is used. After the release of JENDL-4.0, a development project for the new adjusted group-constant set ADJ2010 was started in 2010 and completed in 2013. In the present paper, the final results of ADJ2010 are briefly summarized. In addition, the adjustment results of ADJ2010 are discussed from the viewpoint of the use and impact of nuclear data covariances, focusing on 239Pu capture cross section alterations. For this purpose three kinds of indices, called "degree of mobility," "adjustment motive force," and "adjustment potential," are proposed.
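The Bayesian cross-section adjustment can be sketched as one generalized-least-squares update, T' = T + M Gᵀ (G M Gᵀ + V)⁻¹ (E − G T), where T is the prior cross-section vector with covariance M, E the integral measurements with covariance V, and G the sensitivity matrix. The 2×2 numbers below are a toy illustration, not ADJ2010 data.

```python
def mat_mul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def mat_vec(A, v):
    return [sum(A[i][k] * v[k] for k in range(2)) for i in range(2)]

def mat_T(A):
    return [[A[j][i] for j in range(2)] for i in range(2)]

def mat_inv(A):
    det = A[0][0] * A[1][1] - A[0][1] * A[1][0]
    return [[A[1][1] / det, -A[0][1] / det],
            [-A[1][0] / det, A[0][0] / det]]

def adjust_cross_sections(T, M, G, E, V):
    """One GLS/Bayesian update T' = T + M Gt (G M Gt + V)^-1 (E - G T)
    for two cross-section parameters and two integral experiments."""
    Gt = mat_T(G)
    S = mat_mul(mat_mul(G, M), Gt)                   # G M Gt
    S = [[S[i][j] + V[i][j] for j in range(2)] for i in range(2)]
    K = mat_mul(mat_mul(M, Gt), mat_inv(S))          # gain matrix
    d = [E[i] - mat_vec(G, T)[i] for i in range(2)]  # integral residual
    return [T[i] + mat_vec(K, d)[i] for i in range(2)]
```

When the experimental covariance V is small relative to G M Gᵀ, the adjusted cross sections reproduce the integral measurements almost exactly, which is the limiting behavior the indices above are designed to diagnose.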
Model for U.S. Farm Financial Adjustment Analysis of Alternative Public Policies
Doye, Damona G.; Robert W Jolly
1987-01-01
As the agricultural sector adjusts to financial stress and constantly changing national and international policies, additional structural changes are expected. The capacity for adjustment through existing agricultural asset markets depends on both the extent of farm restructuring and the resiliency of the markets and agricultural institutions. Research is needed to estimate farm financial restructuring needs and the expected duration of the restructuring process. Projecting the magnitude of c...
Micro-econometric models for analysing capital adjustment on Dutch pig farms
Gardebroek, C.
2001-01-01
Farmers operate their business in a dynamic environment. Fluctuating prices, evolving agricultural and environmental policies, technological change and increasing consumer demands for product quality (e.g. with respect to environmentally friendly production methods, animal welfare and food safety) frequently require adjustment of production and input levels on individual farms. Quantities of variable production factors like animal feed or pesticides can usually be adjusted easily together with ...
Improvement for Speech Signal based on Post Wiener Filter and Adjustable Beam-Former
Xiaorong Tong; Xiangfeng Meng
2013-01-01
In this study, a two-stage filter structure is introduced for speech enhancement. The first stage is an adjustable filter-and-sum beam-former with a four-microphone array. The beam-forming filter is controlled by adjusting only a single variable. Unlike an adaptive beam-forming filter, the proposed filter structure does not introduce any adaptive error noise, and thus does not cause trouble for the second stage of speech signal processing. The second stage o...
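The single-control-variable idea can be sketched with a narrowband delay-and-sum (phase-steering) beam-former, where the steering angle is the one adjustable parameter. This is an illustrative model under plane-wave and uniform-linear-array assumptions, not the study's filter design.

```python
import cmath
import math

def steering_vector(theta, n_mics=4, d_over_lambda=0.5):
    """Phase delays of a narrowband plane wave from direction theta
    (radians from broadside) at a uniform linear array."""
    return [cmath.exp(-2j * math.pi * d_over_lambda * m * math.sin(theta))
            for m in range(n_mics)]

def beam_response(steer_theta, source_theta, n_mics=4):
    """Normalized delay-and-sum response: weights are the conjugate of the
    steering vector for steer_theta, the single control variable."""
    w = steering_vector(steer_theta, n_mics)
    a = steering_vector(source_theta, n_mics)
    return abs(sum(wi.conjugate() * ai for wi, ai in zip(w, a))) / n_mics
```

Steering at the source direction gives unit response, while off-axis sources are attenuated; sweeping the one control variable steers the look direction without any adaptive filtering and hence without adaptation noise.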
Belousov, V. I.; Ezhela, V. V.; Kuyanov, Yu. V.; Tkachenko, N. P.
2015-12-01
Experience with using the dynamic atlas of experimental data and the mathematical models describing them is presented, in the context of adjusting parametric models of observables that depend on kinematic variables. The capability to display large sets of experimental data together with the models describing them is illustrated with data and models for observables determined by the amplitudes of elastic hadron scattering. The Internet implementation of the interactive tool DaMoScope, and its interface with the experimental data and with the codes of the adjusted parametric models at the best-fit parameters, are shown schematically. The DaMoScope codes are freely available.
Du, Yan; Xie, Shang-Ping
2008-04-01
The tropical Indian Ocean has been warming steadily since the 1950s, a trend simulated by a large ensemble of climate models. In the models, changes in net surface heat flux are small and the warming is confined to the top 125 m. Analysis of the model output suggests the following quasi-equilibrium adjustments among the surface heat flux components. The warming is triggered by the greenhouse gas-induced increase in downward longwave radiation, amplified by the water vapor feedback and by atmospheric adjustments, such as weakened winds, that act to suppress turbulent heat flux from the ocean. The sea surface temperature dependence of evaporation is the major damping mechanism. The simulated changes in surface solar radiation vary considerably among models and are highly correlated with inter-model variability in the SST trend, illustrating the need to reduce uncertainties in cloud simulation.
Linear identification and model adjustment of a PEM fuel cell stack
Kunusch, C.; Puleston, P.F.; More, J.J. [LEICI, Departamento de Electrotecnia, Universidad Nacional de La Plata, calle 1 esq. 47 s/n, 1900 La Plata (Argentina); Consejo de Investigaciones Cientificas y Tecnicas (CONICET) (Argentina); Husar, A. [Institut de Robotica i Informatica Industrial (CSIC-UPC), c/ Llorens i Artigas 4-6, 08028 Barcelona (Spain); Mayosky, M.A. [LEICI, Departamento de Electrotecnia, Universidad Nacional de La Plata, calle 1 esq. 47 s/n, 1900 La Plata (Argentina); Comision de Investigaciones Cientificas (CIC), Provincia de Buenos Aires (Argentina)
2008-07-15
In the context of fuel cell stack control, a major challenge is modeling the interdependence of various complex subsystem dynamics. In many cases, the interaction between states is modeled through several look-up tables, decision blocks and piecewise continuous functions. Many internal variables are inaccessible to measurement and cannot be used in control algorithms. To make significant contributions in this area, it is necessary to develop reliable models for control and design purposes. In this paper, a linear model based on experimental identification of a 7-cell stack was developed. The procedure followed to obtain a linear model of the system consisted in performing spectroscopy tests on four different single-input single-output subsystems. The inputs considered for the tests were the stack current and the cathode oxygen flow rate, while the measured outputs were the stack voltage and the cathode total pressure. The resulting model can be used either for model-based control design or for on-line analysis and error detection.
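Experimental identification of a linear model can be illustrated with a time-domain least-squares fit. The first-order ARX sketch below is a hypothetical stand-in: the paper itself identifies the stack from frequency-response (spectroscopy) tests, and all signals and coefficients here are synthetic.

```python
import numpy as np

# First-order ARX identification sketch: y[k] = a*y[k-1] + b*u[k-1].
# Simulated signals stand in for stack measurements; the paper's model
# is obtained from frequency-response tests, which this time-domain
# sketch only approximates.
a_true, b_true = 0.9, 0.5
rng = np.random.default_rng(1)
u = rng.normal(size=200)            # excitation input
y = np.zeros(200)
for k in range(1, 200):
    y[k] = a_true * y[k - 1] + b_true * u[k - 1]

# Least-squares estimate from the regressors [y[k-1], u[k-1]].
X = np.column_stack([y[:-1], u[:-1]])
a_hat, b_hat = np.linalg.lstsq(X, y[1:], rcond=None)[0]
```

With noise-free data the estimates recover the true coefficients exactly; with measurement noise they remain the least-squares optimum.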
M. Gaspar, Raquel; Murgoci, Agatha
2010-01-01
A convexity adjustment (or convexity correction) in fixed income markets arises when one uses prices of standard (plain vanilla) products plus an adjustment to price nonstandard products. We explain the basic and appealing idea behind the use of convexity adjustments and focus on the situations o...
Hawkins, Amy L.; Haskett, Mary E.
2014-01-01
Background: Abused children's internal working models (IWM) of relationships are known to relate to their socioemotional adjustment, but mechanisms through which negative representations increase vulnerability to maladjustment have not been explored. We sought to expand the understanding of individual differences in IWM of abused children and…
Architectures and Evaluation for Adjustable Control Autonomy for Space-Based Life Support Systems
Malin, Jane T.; Schreckenghost, Debra K.
2001-01-01
In the past five years, a number of automation applications for control of crew life support systems have been developed and evaluated in the Adjustable Autonomy Testbed at NASA's Johnson Space Center. This paper surveys progress on an adjustable autonomous control architecture for situations where software and human operators work together to manage anomalies and other system problems. When problems occur, the level of control autonomy can be adjusted, so that operators and software agents can work together on diagnosis and recovery. In 1997 adjustable autonomy software was developed to manage gas transfer and storage in a closed life support test. Four crewmembers lived and worked in a chamber for 91 days, with both air and water recycling. CO2 was converted to O2 by gas processing systems and wheat crops. With the automation software, significantly fewer hours were spent monitoring operations. System-level validation testing of the software by interactive hybrid simulation revealed problems both in software requirements and implementation. Since that time, we have been developing multi-agent approaches for automation software and human operators, to cooperatively control systems and manage problems. Each new capability has been tested and demonstrated in realistic dynamic anomaly scenarios, using the hybrid simulation tool.
Seat Adjustment Design of an Intelligent Robotic Wheelchair Based on the Stewart Platform
Po Er Hsu
2013-03-01
A wheelchair user makes direct contact with the wheelchair seat, which serves as the interface between the user and the wheelchair, for much of any given day. Seat adjustment design is therefore of crucial importance in providing proper seating posture and comfort. This paper presents a multiple-DOF (degrees of freedom) seat adjustment mechanism, which is intended to increase the independence of the wheelchair user while maintaining a concise structure, light weight, and an intuitive control interface. This four-axis Stewart platform is capable of heaving, pitching, and swaying to provide seat elevation, tilt-in-space, and sideways movement functions. The geometry and the types of joints of this mechanism are carefully arranged so that only one actuator needs to be controlled, enabling the wheelchair user to adjust the seat by simply pressing a button. The seat is also equipped with soft pressure-sensing pads that provide pressure management by adjusting the seat mechanism once continuous, concentrated pressure is detected. Finally, compared with a manual wheelchair, the proposed mechanism demonstrated easier and more convenient operation, requiring less effort for transfer assistance.
Adjustable Robust Optimizations with Decision Rules Based on Inexact Revealed Data
de Ruiter, F.J.C.T.; Ben-Tal, A.; Brekelmans, R.C.M.; den Hertog, D.
2014-01-01
Adjustable robust optimization (ARO) is a technique to solve dynamic (multistage) optimization problems. In ARO, the decision in each stage is a function of the information accumulated from the previous periods on the values of the uncertain parameters. This information, however, is often
V. Naipal
2015-03-01
Large uncertainties exist in the estimated rates and extent of soil erosion by surface runoff on a global scale, and this limits our understanding of the global impact that soil erosion might have on agriculture and climate. The Revised Universal Soil Loss Equation (RUSLE) model is, owing to its simple structure and empirical basis, a frequently used tool for estimating average annual soil erosion rates at regional to global scales. However, large spatial-scale applications often rely on coarse data input, which is not compatible with the local scale at which the model is parameterized. This study aimed at providing the first steps in improving the global applicability of the RUSLE model in order to derive more accurate global soil erosion rates. We adjusted the topographical and rainfall erosivity factors of the RUSLE model and compared the resulting soil erosion rates to extensive empirical databases on soil erosion from the USA and Europe. Adjusting the topographical factor required scaling of slope according to the fractal method, which resulted in improved topographical detail in a coarse-resolution global digital elevation model. Applying the linear multiple regression method to adjust rainfall erosivity for various climate zones resulted in values that are in good agreement with high-resolution erosivity data for different regions. However, this method needs to be extended to tropical climates, for which erosivity is biased due to the lack of high-resolution erosivity data. After applying the adjusted and the unadjusted versions of the RUSLE model on a global scale, we find that the adjusted RUSLE model not only shows a globally higher mean soil erosion rate but also more variability in the soil erosion rates. Comparison to empirical datasets of the USA and Europe shows that the adjusted RUSLE model is able to decrease the very high erosion rates in hilly regions that are observed in the unadjusted RUSLE model results. Although there are still
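For reference, RUSLE itself is a simple product of six empirical factors; the adjustments discussed above refine how the topographic (L, S) and erosivity (R) inputs are derived at coarse resolution. The values below are illustrative placeholders, not calibrated inputs:

```python
# Minimal RUSLE sketch: average annual soil loss as the product of six
# empirical factors.  All factor values here are illustrative
# placeholders, not calibrated model inputs.
def rusle(R, K, L, S, C, P):
    """A = R * K * L * S * C * P  (e.g. t ha^-1 yr^-1)."""
    return R * K * L * S * C * P

# R: rainfall erosivity, K: soil erodibility, L: slope length,
# S: slope steepness, C: cover-management, P: support practice.
A = rusle(R=1500.0, K=0.03, L=1.2, S=0.8, C=0.2, P=1.0)
```

The paper's adjustments enter through L, S (slope scaled by the fractal method) and R (regression-adjusted erosivity), while the multiplicative structure stays the same.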
Modeling and simulation of M/M/c queuing pharmacy system with adjustable parameters
Rashida, A. R.; Fadzli, Mohammad; Ibrahim, Safwati; Goh, Siti Rohana
2016-02-01
This paper studies a discrete event simulation (DES) as a computer-based modelling approach that imitates the real system of a pharmacy unit. M/M/c queuing theory is used to model and analyse the characteristics of the queuing system at the pharmacy unit of Hospital Tuanku Fauziah, Kangar in Perlis, Malaysia. The input of this model is based on statistical data collected over 20 working days in June 2014. Currently, patient waiting time at the pharmacy unit is more than 15 minutes. The actual operation of the pharmacy unit is a mixed queuing server with an M/M/2 queuing model, where the pharmacists are the servers. The DES approach and ProModel simulation software are used to simulate the queuing model and to propose improvements for the queuing system at this pharmacy. Waiting time for each server is analysed, and Counters 3 and 4 are found to have the highest waiting times, 16.98 and 16.73 minutes, respectively. Three scenarios, M/M/3, M/M/4 and M/M/5, are simulated, and waiting times for the actual and experimental queuing models are compared. The simulation results show that adding a server (pharmacist) reduces patient waiting time appreciably: average patient waiting time is reduced by almost 50% when one pharmacist is added to the counter. However, it is not necessary to fully utilize all counters, because even though M/M/4 and M/M/5 produce further reductions in patient waiting time, they are ineffective since Counter 5 is rarely used.
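The M/M/c comparisons above can also be reproduced analytically. A standard route to the mean queue waiting time is the Erlang C formula; the sketch below is generic and uses illustrative rates, not the hospital's measured arrival and service rates:

```python
from math import factorial

def mmc_wait(lam, mu, c):
    """Mean waiting time in queue, Wq, for an M/M/c system (Erlang C).
    lam: arrival rate, mu: service rate per server, c: servers.
    Requires lam < c*mu for stability."""
    a = lam / mu                      # offered load (Erlangs)
    rho = a / c                       # server utilization
    assert rho < 1, "unstable system"
    # Erlang C probability that an arriving patient must wait.
    top = a**c / factorial(c)
    bottom = (1 - rho) * sum(a**k / factorial(k) for k in range(c)) + top
    erlang_c = top / bottom
    return erlang_c / (c * mu - lam)
```

Evaluating `mmc_wait` for c = 2, 3, 4, 5 with the measured rates would mirror the scenario comparison in the paper, including the diminishing returns reported for the fourth and fifth counters.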
Koo, Jae-Eun; Lee, Gwang-Uk
2015-06-01
The purpose of this study is to identify the effect of participation in physical activity-based recreation programs on children's optimism, humor styles, and school life adjustment. To achieve this purpose, 190 senior elementary school students who participated in physical activity-based recreation in metropolitan areas as of 2014 were selected as subjects. Questionnaires were used, and reliability analysis, factor analysis, correlation analysis, and multiple regression analysis were conducted with SPSS 18.0 after the data were entered into the computer. The results are as follows. First, regarding the effect of participation in physical activity-based recreation programs on optimism, participation frequency and participation intensity had an effect on optimism, while participation period had a significant effect on positivity among the sub-factors of optimism. Second, participation in physical activity-based recreation programs had a significant effect on humor styles. Third, regarding the effect of participation on school life adjustment, participation period and participation intensity had a significant effect on school life adjustment, while participation frequency had a significant effect on regulation-observance and school life satisfaction. PMID:26171384
Data Assimilation and Adjusted Spherical Harmonic Model of VTEC Map over Thailand
Klinngam, Somjai; Maruyama, Takashi; Tsugawa, Takuya; Ishii, Mamoru; Supnithi, Pornchai; Chiablaem, Athiwat
2016-07-01
The global navigation satellite system (GNSS) and high frequency (HF) communication are vulnerable to ionospheric irregularities, especially when the signal travels through the low-latitude region and around the magnetic equator, known as the equatorial ionization anomaly (EIA) region. In order to study the ionospheric effects on communications performance in this region, a regional map of the observed total electron content (TEC) can show the characteristics and irregularities of the ionosphere. In this work, we develop a two-dimensional (2D) map of vertical TEC (VTEC) over Thailand using the adjusted spherical harmonic model (ASHM) and a data assimilation technique. We calculate the VTEC from the receiver independent exchange (RINEX) files recorded by dual-frequency global positioning system (GPS) receivers on July 8th, 2012 (a quiet day) at 12 stations around Thailand: latitudes 0° to 25°N and longitudes 95° to 110°E. These stations are managed by the Department of Public Works and Town & Country Planning (DPT), Thailand, and the South East Asia Low-latitude Ionospheric Network (SEALION) project operated by the National Institute of Information and Communications Technology (NICT), Japan, and King Mongkut's Institute of Technology Ladkrabang (KMITL). We compute the median observed VTEC (OBS-VTEC) in grids with a spatial resolution of 2.5° × 5° in latitude and longitude and a time resolution of 2 hours. We assimilate the OBS-VTEC with the VTEC estimated from the International Reference Ionosphere model (IRI-VTEC) as well as from the ionosphere map exchange (IONEX) files provided by the International GNSS Service (IGS-VTEC). The results show that the estimation of the 15-degree ASHM can be improved when both IRI-VTEC and IGS-VTEC are weighted by latitude-dependent factors before being assimilated with the OBS-VTEC. However, IRI-VTEC assimilation improves the ASHM estimation more than IGS-VTEC assimilation does. Acknowledgment: This work is partially funded by the
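The latitude-weighted assimilation step can be sketched as a weighted average of observed and background VTEC at each grid point. The weight profile below is a hypothetical illustration; the paper's latitude-dependent factors are determined empirically:

```python
import numpy as np

# Hypothetical weighted assimilation of observed VTEC with a background
# model (e.g. IRI) on a small latitude grid.  The latitude-dependent
# weight profile is an assumption for illustration, not the paper's values.
lat = np.array([5.0, 10.0, 15.0, 20.0])      # degrees N
obs = np.array([42.0, 55.0, 60.0, 48.0])     # observed VTEC (TECU)
model = np.array([38.0, 50.0, 52.0, 45.0])   # background model VTEC (TECU)

w_obs = np.ones_like(lat)                     # trust observations equally
w_model = 0.5 * np.cos(np.radians(lat))       # hypothetical latitude weighting

# Weighted blend; the result stays between the model and the observations.
assim = (w_obs * obs + w_model * model) / (w_obs + w_model)
```

In the paper, the blended values then feed the least-squares fit of the adjusted spherical harmonic model rather than being used directly.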
Ali P. Yunus; Ram Avtar; Steven Kraines; Masumi Yamamuro; Fredrik Lindberg; C. S. B. Grimmond
2016-01-01
Sea-level rise (SLR) from global warming may have severe consequences for coastal cities, particularly when combined with predicted increases in the strength of tidal surges. Predicting the regional impact of SLR flooding is strongly dependent on the modelling approach and accuracy of topographic data. Here, the areas under risk of sea water flooding for London boroughs were quantified based on the projected SLR scenarios reported in Intergovernmental Panel on Climate Change (IPCC) fifth asse...
Telzer, Eva H; Yuen, Cynthia; Gonzales, Nancy; Fuligni, Andrew J
2016-07-01
The acculturation gap-distress model purports that immigrant children acculturate faster than do their parents, resulting in an acculturation gap that leads to family and youth maladjustment. However, empirical support for the acculturation gap-distress model has been inconclusive. In the current study, 428 Mexican-American adolescents (50.2 % female) and their primary caregivers independently completed questionnaires assessing their levels of American and Mexican cultural orientation, family functioning, and youth adjustment. Contrary to the acculturation gap-distress model, acculturation gaps were not associated with poorer family or youth functioning. Rather, adolescents with higher levels of Mexican cultural orientations showed positive outcomes, regardless of their parents' orientations to either American or Mexican cultures. Findings suggest that youths' heritage cultural maintenance may be most important for their adjustment. PMID:26759225
Daniel Mizak; Anthony Stair; John Neral
2007-01-01
This paper introduces an index called the adjusted churn, designed to measure competitive balance in sports leagues based on changes in team standings over time. This is a simple yet powerful index that varies between zero and one. A value of zero indicates no change in league standings from year to year and therefore minimal competitive balance. A value of one indicates the maximum possible turnover in league standings from year to year and therefore a high level of competitive balance. Appl...
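A churn-style index can be sketched as the mean absolute change in team standings, normalized by the maximum displacement that a full reversal of the table would produce, so the result lies in [0, 1]. The normalization below follows that full-reversal bound; the paper's exact definition may differ in detail:

```python
def adjusted_churn(prev_ranks, curr_ranks):
    """Mean absolute change in standings, normalized to [0, 1] by the
    displacement of a full reversal.  Ranks are dicts: team -> standing
    (1 = first place).  Normalization is an assumption for illustration."""
    n = len(prev_ranks)
    churn = sum(abs(curr_ranks[t] - prev_ranks[t]) for t in prev_ranks) / n
    # Maximum mean displacement: complete reversal of the standings.
    max_churn = n / 2 if n % 2 == 0 else (n * n - 1) / (2 * n)
    return churn / max_churn

prev = {"A": 1, "B": 2, "C": 3, "D": 4}
full_reversal = {"A": 4, "B": 3, "C": 2, "D": 1}
```

Identical standings give 0 (minimal competitive balance), a complete reversal gives 1 (maximal turnover), matching the interpretation in the abstract.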
The self-adjusting file (SAF) system: An evidence-based update
Zvi Metzger
2014-01-01
Current rotary file systems are effective tools. Nevertheless, they have two main shortcomings: (1) they are unable to effectively clean and shape oval canals and depend too much on the irrigant to do the cleaning, which is an unrealistic illusion; and (2) they may jeopardize the long-term survival of the tooth through unnecessary, excessive removal of sound dentin and the creation of micro-cracks in the remaining root dentin. The new self-adjusting file (SAF) technology uses a hollow, compressible NiTi fi...
He, Peng; Eriksson, Frank; Scheike, Thomas H.; Zhang, Mei Jie
2016-01-01
With competing risks data, one often needs to assess the treatment and covariate effects on the cumulative incidence function. Fine and Gray proposed a proportional hazards regression model for the subdistribution of a competing risk under the assumption that the censoring distribution and the covariates are independent. Covariate-dependent censoring sometimes occurs in medical studies. In this paper, we study the proportional hazards regression model for the subdistribution of a competing risk with proper adjustments for covariate-dependent censoring. We consider a covariate-adjusted weight… The approach works well for the variance estimator as well. We illustrate our methods with bone marrow transplant data from the Center for International Blood and Marrow Transplant Research. Here, cancer relapse and death in complete remission are the two competing risks.
Zhu, Jianjun; Fan, Donghao; Zhou, Cui; Zhou, Jinghong
2015-01-01
The process of super resolution image reconstruction is one in which multiple observations are taken of the same target to obtain low resolution images, and the low resolution images are then used to reconstruct the real image of the target, namely the high resolution image. This process is similar to that in the field of surveying and mapping, in which the same target is observed repeatedly and the optimal values are calculated with surveying adjustment methods. In this paper, the method of su...
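The surveying-adjustment analogy rests on the weighted least-squares estimator x̂ = (AᵀPA)⁻¹AᵀPL. A minimal sketch with a single unknown and illustrative weights:

```python
import numpy as np

# Weighted least-squares adjustment of redundant observations:
#   x_hat = (A^T P A)^{-1} A^T P L
# Here: three repeated measurements of one distance x, with weights
# inversely proportional to each observation's variance (illustrative).
A = np.array([[1.0], [1.0], [1.0]])          # design matrix: each obs = x
L = np.array([10.02, 9.98, 10.06])           # observations
P = np.diag([1.0, 1.0, 0.25])                # third observation is less precise

x_hat = np.linalg.solve(A.T @ P @ A, A.T @ P @ L)
residuals = L - A @ x_hat
```

The down-weighted third observation pulls the estimate less than a plain mean would, which is exactly the behavior super-resolution reconstruction exploits when some low-resolution frames are noisier than others.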
Sergio Ulgiati; Xinshi Zhang; Baohua Yu; Xiaobin Dong; Yufang Zhang; Weijia Cui; Bin Xun
2011-01-01
The emergy concept, integrated with a multi-objective linear programming method, was used to model the agricultural structure of Xinjiang Uygur Autonomous Region under the consideration of the need to develop a low-carbon economy. The emergy indices before and after the structural optimization were evaluated. In the reconstructed model, the proportions of agriculture, forestry and artificial grassland should be adjusted from 19:2:1 to 5.2:1:2.5; the Emergy Yield Ratio (1.48) was higher than t...
Zijp, W.L.; Nolthenius, H.J.; Szondi, E.J.; Verhaag, G.C.; Zsolnay, E.M.
1984-11-01
The aim of the interlaboratory REAL-80 exercise, organized by the International Atomic Energy Agency, was to determine the state of the art in 1981 of the capabilities of laboratories to adjust neutron spectrum information on the basis of a set of experimental activation rates, and to subsequently predict the number of displacements in steel, together with its uncertainty. The input information distributed to participating laboratories comprised values, variances, and covariances for a set of input fluence rates, for a set of activation and damage cross-section data, and for a set of experimentally measured reaction rates. The exercise dealt with two clearly different spectra: the thermal Oak Ridge Research Reactor (ORR) spectrum and the fast YAYOI spectrum. Out of 30 laboratories asked to participate, 13 laboratories contributed 33 solutions for ORR and 35 solutions for YAYOI. The spectral shapes of the solution spectra showed considerable spread, both for the ORR and YAYOI spectra. When the series of predicted activation rates in nickel and the predicted displacement rates in steel derived for all solutions is considered, one cannot observe significant differences due to the adjustment algorithm used. The largest deviations seem to be due to effects related to group structure and/or changes in the input data. When comparing the predicted activation rate in nickel with its available measured value, the authors observe that the predicted value (averaged over all solutions) is lower than the measured value.
Flor-Serra Ferran
2009-06-01
Background: The main objective of this study is to measure the relationship between morbidity, direct health care costs and the degree of clinical effectiveness (resolution) of health centres and health professionals by the retrospective application of Adjusted Clinical Groups in a Spanish population setting. The secondary objectives are to determine the factors behind inadequate correlations and the opinion of health professionals on these instruments. Methods/Design: We will carry out a multi-centre, retrospective study using patient records from 15 primary health care centres and population databases. The main measurements will be: general variables (age and sex, centre, service [family medicine, paediatrics], and medical unit), dependent variables (mean number of visits, episodes and direct costs), co-morbidity (Johns Hopkins University Adjusted Clinical Groups Case-Mix System) and effectiveness. The totality of centres/patients will be considered as the standard for comparison. The efficiency index for visits, tests (laboratory, radiology, others), referrals, pharmaceutical prescriptions and in total will be calculated as the ratio: observed variables/variables expected by indirect standardization. The model of cost/patient/year will differentiate the fixed/semi-fixed (visits) costs from the variable costs for each patient attended/year (N = 350,000 inhabitants). The mean relative weights of the cost of care will be obtained. Effectiveness will be measured using a set of 50 indicators of process, efficiency and/or health results, and an adjusted synthetic index will be constructed (method: 50th percentile). The correlation between the efficiency (relative weights) and synthetic (by centre and physician) indices will be established using the coefficient of determination. The opinion/degree of acceptance of physicians (N = 1,000) will be measured using a structured questionnaire including various dimensions. Statistical analysis: multiple regression
One aim of this invention is the fabrication of a storage tube with a base adjustable in height by remote handling, for the in-pool storage of irradiated nuclear fuels. The device has the following features with respect to the mechanism for placing the base in the supporting position: use of rotation without rectilinear friction or gears; no possibility for dust to accumulate on the mechanism; possible control by a handling pole; and simplicity and low mass-production cost. These features can of course be used to advantage for tubes storing elements of various lengths irrespective of the nuclear energy
Dynamic Online Bandwidth Adjustment Scheme Based on Kalai-Smorodinsky Bargaining Solution
Kim, Sungwook
Virtual Private Network (VPN) is a cost-effective method of providing integrated multimedia services. Heterogeneous multimedia data can usually be categorized into different types according to the required Quality of Service (QoS); therefore, a VPN should support prioritization among different services. In order to support multiple types of services with different QoS requirements, efficient bandwidth management algorithms are an important issue. In this paper, I employ the Kalai-Smorodinsky Bargaining Solution (KSBS) to develop an adaptive bandwidth adjustment algorithm. In addition, to manage bandwidth in VPNs effectively, the proposed control paradigm is realized in a dynamic online approach, which is practical for real network operations. Simulations show that the proposed scheme can significantly improve system performance.
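For intuition, the Kalai-Smorodinsky solution picks the point on the Pareto frontier where every user's achieved-to-ideal utility ratio is equal. The sketch below assumes linear utilities and a zero disagreement point, a simplification of the paper's setting:

```python
def ksbs_allocation(capacity, demands):
    """Kalai-Smorodinsky split of link bandwidth with linear utilities
    and a zero disagreement point (simplified sketch).  Each user's
    ideal point is min(demand, capacity); the KS solution equalizes the
    achieved/ideal ratio across users on the efficient frontier."""
    ideals = [min(d, capacity) for d in demands]
    if sum(demands) <= capacity:
        return list(demands)          # no conflict: everyone gets their demand
    k = capacity / sum(ideals)        # common achieved/ideal ratio
    return [k * ideal for ideal in ideals]
```

Under congestion, users with larger feasible demands receive proportionally more bandwidth while all users sacrifice the same fraction of their ideal, which is the fairness property the KSBS brings to the adjustment scheme.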
Adjustment and Prediction of X-Ray Machine Factors Based on Neural Artificial Intelligence
Since the discovery of X-rays, their use in examinations has become an integral part of medical diagnostic radiology. X-rays are harmful to human beings, but recent technological advances and regulatory constraints have made medical X-ray machines much safer than they were at the beginning of the 20th century. However, the potential benefits of the engineered safety features cannot be fully realized unless the operators are aware of them. The aim of this work is to adjust and predict X-ray machine factors (current and voltage) using an artificial neural network, in order to obtain an effective dose within the range of the dose limitation system and to assure radiological safety.
Heimann, Tobias; Delingette, Hervé
2011-01-01
This chapter starts with a brief introduction into model-based segmentation, explaining the basic concepts and different approaches. Subsequently, two segmentation approaches are presented in more detail: First, the method of deformable simplex meshes is described, explaining the special properties of the simplex mesh and the formulation of the internal forces. Common choices for image forces are presented, and how to evolve the mesh to adapt to certain structures. Second, the method of point...
Repatriation Adjustment: Literature Review
Gamze Arman
2009-12-01
Expatriation is a widely studied area of research in work and organizational psychology. After expatriates accomplish their missions in host countries, they return to their home countries, a process called repatriation. Adjustment constitutes a crucial part of repatriation research. In the present literature review, research on repatriation adjustment is reviewed with the aim of presenting a complete picture of this phenomenon. The research is classified on the basis of a theoretical model of repatriation adjustment, whose basic frame consists of antecedents, adjustment, and outcomes as main variables, with personal characteristics/coping strategies and organizational strategies as moderating variables.
Constructing seasonally adjusted data with time-varying confidence intervals
Koopman, Siem Jan; Franses, Philip Hans
2001-01-01
Seasonal adjustment methods transform observed time series data into estimated data, where these estimated data are constructed such that they show no or almost no seasonal variation. An advantage of model-based methods is that these can provide confidence intervals around the seasonally adjusted data. One particularly useful time series model for seasonal adjustment is the basic structural time series [BSM] model. The usual premise of the BSM is that the variance of each of the c...
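As a fixed-coefficient stand-in for the basic structural time series model, one can regress the series on a trend plus seasonal dummies and subtract the estimated seasonal component. This sketch only illustrates the idea; the BSM additionally lets the components vary over time and yields the confidence intervals discussed above:

```python
import numpy as np

# Simplified seasonal adjustment: regress y on a linear trend plus
# seasonal dummies and subtract the estimated seasonal component.
# (The BSM generalizes this with time-varying components and provides
# confidence intervals; this fixed-coefficient sketch is illustrative.)
period = 4
t = np.arange(24)
seasonal_true = np.array([2.0, -1.0, -2.0, 1.0])       # sums to zero
y = 0.5 * t + seasonal_true[t % period]                # synthetic series

# Design matrix: trend column plus one dummy per season
# (the dummies jointly absorb the intercept).
X = np.column_stack([t] + [(t % period == s).astype(float) for s in range(period)])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)

seas = beta[1:]                       # estimated per-season levels
seas_centered = seas - seas.mean()    # identify seasonal part (zero mean)
y_adjusted = y - seas_centered[t % period]
```

On this noise-free synthetic series the adjusted values recover the trend exactly; on real data the same construction removes the deterministic part of the seasonal variation.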
A Model of Mother-Child Adjustment in Arab Muslim Immigrants to the US
Hough, Edythe S.; Templin, Thomas N.; Kulwicki, Anahid; Ramaswamy, Vidya; Katz, Anne
2009-01-01
We examined mother-child adjustment and child behavior problems in Arab Muslim immigrant families residing in the U.S.A. The sample of 635 mother-child dyads comprised mothers who had emigrated in 1989 or later and had at least one early-adolescent child between 11 and 15 years old who was also willing to participate. Arabic-speaking research assistants collected the data from the mothers and children using established measures of maternal and child stressors, coping, and ...
The Design of Fiscal Adjustments
Alesina, Alberto Francesco; Ardagna, Silvia
2013-01-01
This paper offers three results. First, in line with the previous literature, we confirm that fiscal adjustments based mostly on the spending side are less likely to be reversed. Second, spending based fiscal adjustments have caused smaller recessions than tax based fiscal adjustments. Finally, certain combinations of policies have made it possible for spending based fiscal adjustments to be associated with growth in the economy even on impact rather than with a recession. Thus, expansionary ...
American Psychiatric Association. Diagnostic and statistical manual of mental disorders . 5th ed. Arlington, VA: American Psychiatric Publishing. 2013. Powell AD. Grief, bereavement, and adjustment disorders. In: Stern TA, Fava ...
Brehler, Michael; Görres, Joseph; Wolf, Ivo; Franke, Jochen; von Recum, Jan; Grützner, Paul A.; Meinzer, Hans-Peter; Nabers, Diana
2014-03-01
Intraarticular fractures of the calcaneus are routinely treated by open reduction and internal fixation, followed by intraoperative imaging to validate the repositioning of bone fragments. C-Arm CT offers surgeons the possibility to directly verify the alignment of the fracture parts in 3D. Although the device provides more mobility, it supplies insufficient information about the device-to-patient orientation for standard plane reconstruction. Hence, physicians have to manually align the image planes into a position that intersects the articular surfaces. This can be a time-consuming step, and imprecise adjustments lead to diagnostic errors. We address this issue by introducing novel semi-automatic and automatic methods for adjusting the standard planes on mobile C-Arm CT images. With the semi-automatic method, physicians can quickly adjust the planes by setting six points based on anatomical landmarks. The automatic method reconstructs the standard planes in two steps: first, SURF keypoints (2D and newly introduced pseudo-3D) are generated for each image slice; second, these features are registered to an atlas point set and the parameters of the image planes are transformed accordingly. The accuracy of our method was evaluated on 51 mobile C-Arm CT images from clinical routine, with standard planes manually adjusted by three physicians of different expertise. The average time of the experts (46 s) deviated from that of the intermediate user (55 s) by 9 seconds. Using 2D SURF keypoints, 88% of the articular surfaces were intersected correctly by the transformed standard planes, with a calculation time of 10 seconds. The pseudo-3D features performed even better, at 91% and 8 seconds.
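The atlas-registration step of the automatic method can be sketched with the Kabsch (SVD) algorithm for rigid point-set alignment, assuming point correspondences are already known (in the paper's pipeline they would come from matching SURF descriptors; all names and values here are illustrative):

```python
import numpy as np

def kabsch(P, Q):
    """Rigid transform (R, t) minimizing ||R @ P + t - Q|| for paired 3xN points.

    Assumes correspondences are known; the paper's pipeline would obtain
    them by matching SURF descriptors against the atlas first.
    """
    p_mean = P.mean(axis=1, keepdims=True)
    q_mean = Q.mean(axis=1, keepdims=True)
    H = (P - p_mean) @ (Q - q_mean).T
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))       # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = q_mean - R @ p_mean
    return R, t

# Usage: recover a known rotation/translation from noiseless points
rng = np.random.default_rng(1)
P = rng.normal(size=(3, 20))
angle = 0.4
R_true = np.array([[np.cos(angle), -np.sin(angle), 0.0],
                   [np.sin(angle),  np.cos(angle), 0.0],
                   [0.0, 0.0, 1.0]])
t_true = np.array([[1.0], [2.0], [3.0]])
R_est, t_est = kabsch(P, R_true @ P + t_true)
```

The recovered rigid transform is exactly what would then be applied to the atlas-defined plane parameters.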
Rowe, Sidney E.
2010-01-01
In September 2007, the Engineering Directorate at the Marshall Space Flight Center (MSFC) created the Design System Focus Team (DSFT). MSFC was responsible for the in-house design and development of the Ares 1 Upper Stage, and the Engineering Directorate was preparing to deploy a new electronic Configuration Management and Data Management System with the Design Data Management System (DDMS), based upon a Commercial Off The Shelf (COTS) Product Data Management (PDM) System. The DSFT was to establish standardized CAD practices and a new data life cycle for design data. Of special interest here, the design teams were to implement Model Based Definition (MBD) in support of the Upper Stage manufacturing contract. Note that this MBD implementation still uses partially dimensioned drawings as auxiliary information to the model. The design data lifecycle implemented several new release states, used prior to formal release, that allowed the models to move through a flow of progressive maturity. The DSFT identified some 17 Lessons Learned as outcomes of the standards development, pathfinder deployments and initial application to the Upper Stage design completion. Some of the high-value examples are reviewed.
Tynes, Brendesha M.; Rose, Chad A.; Hiss, Sophia; Umaña-Taylor, Adriana J.; Mitchell, Kimberly; Williams, David
2015-01-01
Given the recent rise in online hate activity and the increased amount of time adolescents spend with media, more research is needed on their experiences with racial discrimination in virtual environments. This cross-sectional study examines the association between amount of time spent online, traditional and online racial discrimination, and adolescent adjustment, including depressive symptoms, anxiety and externalizing behaviors. The study also explores the role that social identities, including race and gender, play in these associations. Online surveys were administered to 627 sixth through twelfth graders in K-8, middle and high schools. Multiple regression results revealed that discrimination online was associated with all three outcome variables. Additionally, a significant interaction between online discrimination and time online was found for externalizing behaviors, indicating that increased time online and higher levels of online discrimination are associated with more problem behavior. This study highlights the need for clinicians, educational professionals and researchers to attend to race-related experiences online as well as in traditional environments. PMID:27134698
Research on Gear-box Fault Diagnosis Method Based on Adjusting-learning-rate PSO Neural Network
PAN Hong-xia; MA Qing-feng
2006-01-01
Based on research into Particle Swarm Optimization (PSO) learning rates, the two learning rates are varied linearly as the velocity formula evolves, in order to adjust the proportions of the social and cognitive components; the method is then applied to BP neural network training, which greatly accelerates convergence and avoids locally optimal solutions. Signals from actual data of a two-stage compound gearbox in a vibration lab were analyzed and their characteristic values extracted. Applying the trained BP neural networks to gearbox fault diagnosis indicates that the method is effective.
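A minimal sketch of PSO with linearly time-varying acceleration coefficients, in the spirit of the adjustable learning rates described above; the schedule endpoints (2.5 to 0.5) and the sphere test function are illustrative assumptions, and the BP-network training stage is omitted:

```python
import numpy as np

def pso_tvac(f, dim=5, n_particles=30, iters=200, seed=0):
    """PSO where the cognitive weight c1 decays and the social weight c2
    grows linearly over the run (time-varying acceleration coefficients)."""
    rng = np.random.default_rng(seed)
    x = rng.uniform(-5, 5, (n_particles, dim))
    v = np.zeros_like(x)
    pbest = x.copy()
    pbest_f = np.apply_along_axis(f, 1, x)
    g = pbest[pbest_f.argmin()].copy()
    for k in range(iters):
        frac = k / iters
        c1 = 2.5 - 2.0 * frac            # cognitive: 2.5 -> 0.5
        c2 = 0.5 + 2.0 * frac            # social:    0.5 -> 2.5
        w = 0.9 - 0.5 * frac             # inertia also decays
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = x + v
        fx = np.apply_along_axis(f, 1, x)
        improved = fx < pbest_f
        pbest[improved], pbest_f[improved] = x[improved], fx[improved]
        g = pbest[pbest_f.argmin()].copy()
    return g, pbest_f.min()

# Usage: minimize the sphere function as a stand-in for the BP training loss
best_x, best_f = pso_tvac(lambda z: float(np.sum(z ** 2)))
```

Early in the run the large cognitive weight favors exploration of each particle's own history; late in the run the large social weight pulls the swarm toward the global best, which is the proportion-adjustment idea in the abstract.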
Diego Pagung Ambrosini
2014-07-01
The objective of this study was to evaluate genotype-environment interaction (GEI) in body weight adjusted to 550 days of age (W550) of Polled Nellore cattle raised in Northeastern Brazil, using reaction norm (RN) models. Hierarchical RN models included fixed effects for age of cow (linear and quadratic) and random effects for contemporary groups (CG) and for the additive genetic RN level and slope. Four hierarchical RN models (RNHM) were used: the RNHM2S, which takes the contemporary-group solutions estimated by the standard animal model (AM) as the environmental level for predicting the reaction norms, and the RNHM1S, which jointly estimates these two sets of unknowns. Two versions were considered for both models, one with homogeneous (Hm) and another with heterogeneous (He) residual variance. The one-step homogeneous residual variance model (RNHM1SHm) offered the best adjustment to the data. For this model, estimates of additive genetic variance and heritability increased with environment improvement (260.75±75.80 kg² to 4298.39±356.56 kg², and 0.22±0.05 to 0.82±0.01, for low- and high-performance environments, respectively). The high correlation (0.97±0.01) between the intercept and the slope of the RN shows that animals with higher genetic values respond better to environment improvement. In the evaluation of sires with higher genetic values across environments using Spearman's correlation, values between 0 and 0.98 were observed, pointing to substantial reranking, especially between genetic values obtained by the animal model and those obtained via RNHM1SHm. The existence of GEI is confirmed, and so is the need for specific evaluations for low-, medium- and high-level production environments.
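A simplified two-step reaction-norm sketch in the spirit of the RNHM2S variant: simulate animals with correlated genetic intercepts and slopes, fit per-animal norms by ordinary least squares, and measure reranking between low- and high-performance environments with Spearman's correlation (all numbers are illustrative, not the paper's, and the joint hierarchical estimation is omitted):

```python
import numpy as np

def spearman(a, b):
    """Spearman rank correlation for tie-free continuous data."""
    ra = np.argsort(np.argsort(a))
    rb = np.argsort(np.argsort(b))
    return np.corrcoef(ra, rb)[0, 1]

rng = np.random.default_rng(2)
n_animals, n_env = 200, 15
env = np.linspace(-1, 1, n_env)                 # standardized environment level
# Correlated genetic intercept/slope, mimicking a strong level-slope correlation
cov = np.array([[1.0, 0.9], [0.9, 1.0]])
intercept, slope = rng.multivariate_normal([0, 0], cov, n_animals).T
pheno = intercept[:, None] + slope[:, None] * env + rng.normal(0, 0.5, (n_animals, n_env))

# Per-animal reaction norms by OLS (two-step stand-in for the hierarchical fit)
X = np.vstack([np.ones(n_env), env]).T
coef = np.linalg.lstsq(X, pheno.T, rcond=None)[0]   # shape (2, n_animals)
ebv_low = coef[0] + coef[1] * env[0]                # breeding value, worst env
ebv_high = coef[0] + coef[1] * env[-1]              # breeding value, best env
rerank = spearman(ebv_low, ebv_high)
```

Even with a high intercept-slope correlation, rankings in the worst and best environments can diverge sharply, which is the reranking phenomenon the paper reports as evidence of GEI.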
Lucyna Brzozowska
2013-04-01
The computational efficiency of a model is a key factor determining its practical application. The paper presents algorithms that ensure the computational efficiency of a model of the air velocity field. The main step in modelling the air velocity field by means of the diagnostic model is the procedure of adjusting the initial field. The initial wind field is computed by interpolation of data from meteorological stations. The goal of adjusting the initial field is to ensure that the air velocity field satisfies the continuity equation in an area with a complex landform; the task reduces to solving the Poisson equation. Finite difference methods with equidistant and non-equidistant nodes are applied. The discretisation net must be sufficiently dense for a complex terrain. For an equidistant net this extends the computing time, and a numerical simulation might not be efficient. This problem can be reduced by using a non-equidistant mesh in which the nodes are condensed near the places where a significant change in the air velocity is expected. In this paper the non-equidistant net is adapted to an example of terrain with an isolated hill, and a hybrid approach is proposed: a parabolic function for the node distribution is used in the horizontal direction, while Chebyshev nodes are applied in the vertical direction. The results of the numerical analysis show the usefulness of a non-equidistant net in terms of accuracy and computational effectiveness.
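The two node distributions mentioned above can be sketched as follows; the abstract does not specify the exact parabolic mapping, so the s² form condensing nodes near one end is an assumption:

```python
import numpy as np

def chebyshev_nodes(a, b, n):
    """Chebyshev (Gauss-Lobatto) nodes on [a, b], ascending:
    clustered near both endpoints, sparse in the middle."""
    k = np.arange(n)
    nodes = 0.5 * (a + b) + 0.5 * (b - a) * np.cos(np.pi * k / (n - 1))
    return nodes[::-1]

def parabolic_nodes(a, b, n):
    """Nodes condensed near `a` by mapping a uniform grid through s**2
    (one plausible reading of the 'parabolic function' in the abstract)."""
    s = np.linspace(0.0, 1.0, n)
    return a + (b - a) * s ** 2

# Usage: horizontal nodes condensed near x = 0, vertical nodes condensed
# near the ground and the domain top (illustrative domain sizes in metres)
x = parabolic_nodes(0.0, 1000.0, 21)
z = chebyshev_nodes(0.0, 500.0, 21)
```

Either mapping concentrates resolution where strong velocity gradients are expected, so the Poisson solve for the adjusted field stays accurate with far fewer nodes than a uniformly fine grid.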
João Pedro Velho
2014-09-01
The present work, using whole-plant corn silage at different stages of maturity, aimed to evaluate the Exponential, France, Gompertz and Logistic mathematical models for describing the kinetics of gas production in in vitro incubations of 24 and 48 hours. A semi-automated in vitro gas production technique was used, with measurements at 1, 3, 6, 8, 10, 12, 14, 16, 22, 24, 31, 36, 42 and 48 hours of incubation. Model adjustment was evaluated by means of the mean square of error, mean bias, root mean square prediction error and residual error. The Gompertz model provided the best adjustment for describing the gas production kinetics of the maize silages, regardless of incubation period, while the France model was not adequate for incubation periods of 48 hours or less. The in vitro gas production technique was efficient in detecting differences in the nutritional value of maize silages from different growth stages. Twenty-four-hour in vitro incubation periods do not mask treatment effects, whereas 48-hour periods are inadequate to measure silage digestibility.
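A hedged sketch of fitting a Gompertz curve to gas-production data at the measurement times listed above; the parameterization and all values are illustrative, since the paper's exact model form is not given in the abstract:

```python
import numpy as np
from scipy.optimize import curve_fit

def gompertz(t, a, b, c):
    """Gompertz curve: a = asymptotic gas volume, b and c shape/rate
    parameters (one common parameterization; assumed, not the paper's)."""
    return a * np.exp(-b * np.exp(-c * t))

# Usage: fit simulated 48 h in vitro gas-production data
t = np.array([1, 3, 6, 8, 10, 12, 14, 16, 22, 24, 31, 36, 42, 48], float)
rng = np.random.default_rng(3)
y = gompertz(t, 60.0, 3.0, 0.12) + rng.normal(0, 0.5, t.size)

popt, _ = curve_fit(gompertz, t, y, p0=(50.0, 2.0, 0.1))
rmse = float(np.sqrt(np.mean((gompertz(t, *popt) - y) ** 2)))
```

The root mean square error computed here corresponds to one of the adjustment criteria the paper uses to compare the candidate models.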
Ratios as a size adjustment in morphometrics.
Albrecht, G H; Gelvin, B R; Hartman, S E
1993-08-01
Simple ratios in which a measurement variable is divided by a size variable are commonly used but known to be inadequate for eliminating size correlations from morphometric data. Deficiencies in the simple ratio can be alleviated by incorporating regression coefficients describing the bivariate relationship between the measurement and size variables. Recommendations have included: 1) subtracting the regression intercept to force the bivariate relationship through the origin (intercept-adjusted ratios); 2) exponentiating either the measurement or the size variable using an allometry coefficient to achieve linearity (allometrically adjusted ratios); or 3) both subtracting the intercept and exponentiating (fully adjusted ratios). These three strategies for deriving size-adjusted ratios imply different data models for describing the bivariate relationship between the measurement and size variables (i.e., the linear, simple allometric, and full allometric models, respectively). Algebraic rearrangement of the equation associated with each data model leads to a correctly formulated adjusted ratio whose expected value is constant (i.e., size correlation is eliminated). Alternatively, simple algebra can be used to derive an expected value function for assessing whether any proposed ratio formula is effective in eliminating size correlations. Some published ratio adjustments were incorrectly formulated as indicated by expected values that remain a function of size after ratio transformation. Regression coefficients incorporated into adjusted ratios must be estimated using least-squares regression of the measurement variable on the size variable. Use of parameters estimated by any other regression technique (e.g., major axis or reduced major axis) results in residual correlations between size and the adjusted measurement variable. Correctly formulated adjusted ratios, whose parameters are estimated by least-squares methods, do control for size correlations. The size-adjusted
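Under the linear data model, the intercept-adjusted ratio can be illustrated directly; the allometrically and fully adjusted ratios follow the same pattern with exponentiation added (simulated data; least-squares estimation, as the text requires):

```python
import numpy as np

# Simulate measurements following the linear data model y = a + b*x + e.
rng = np.random.default_rng(4)
x = rng.uniform(5, 50, 300)                      # size variable
y = 10.0 + 2.0 * x + rng.normal(0, 1.0, 300)     # measurement variable

# Least-squares regression of measurement on size, per the text.
b_hat, a_hat = np.polyfit(x, y, 1)

raw_ratio = y / x                                # simple ratio: still size-correlated
adj_ratio = (y - a_hat) / x                      # intercept-adjusted ratio

corr_raw = np.corrcoef(raw_ratio, x)[0, 1]
corr_adj = np.corrcoef(adj_ratio, x)[0, 1]
```

Algebraically, raw_ratio = b + a/x + e/x, so its expected value varies with size through the a/x term; subtracting the fitted intercept removes that term and leaves an adjusted ratio whose expected value is the constant b.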
Pervasive Computing Location-aware Model Based on Ontology
PU Fang; CAI Hai-bin; CAO Qi-ying; SUN Dao-qing; LI Tong
2008-01-01
In order to integrate heterogeneous location-aware systems into a pervasive computing environment, a novel pervasive computing location-aware model based on ontology is presented, and a location-aware model ontology (LMO) is constructed. The location-aware model is capable of sharing knowledge, reasoning, and dynamically adjusting the usage policies of services through a unified semantic representation of location. Finally, the workflow of the proposed location-aware model is illustrated by an application scenario.
Mehran, Ali [Univ. of California, Irvine, CA (United States). Dept. of Civil and Environmental Engineering; AghaKouchak, Amir [Univ. of California, Irvine, CA (United States). Dept. of Civil and Environmental Engineering; Phillips, Thomas J. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)
2014-02-25
Numerous studies have emphasized that climate simulations are subject to various biases and uncertainties. The objective of this study is to cross-validate 34 Coupled Model Intercomparison Project Phase 5 (CMIP5) historical simulations of precipitation against the Global Precipitation Climatology Project (GPCP) data, quantifying model pattern discrepancies and biases for both entire data distributions and their upper tails. The results of the Volumetric Hit Index (VHI) analysis of the total monthly precipitation amounts show that most CMIP5 simulations are in good agreement with GPCP patterns in many areas, but that their replication of observed precipitation over arid regions and certain sub-continental regions (e.g., northern Eurasia, eastern Russia, central Australia) is problematical. Overall, the VHI of the multi-model ensemble mean and median also are superior to that of the individual CMIP5 models. However, at high quantiles of reference data (e.g., the 75th and 90th percentiles), all climate models display low skill in simulating precipitation, except over North America, the Amazon, and central Africa. Analyses of total bias (B) in CMIP5 simulations reveal that most models overestimate precipitation over regions of complex topography (e.g. western North and South America and southern Africa and Asia), while underestimating it over arid regions. Also, while most climate model simulations show low biases over Europe, inter-model variations in bias over Australia and Amazonia are considerable. The Quantile Bias (QB) analyses indicate that CMIP5 simulations are even more biased at high quantiles of precipitation. Lastly, we found that a simple mean-field bias removal improves the overall B and VHI values, but does not make a significant improvement in these model performance metrics at high quantiles of precipitation.
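The VHI and quantile-bias diagnostics can be sketched as follows; these are common formulations and may differ in detail from the paper's exact definitions:

```python
import numpy as np

def vhi(sim, obs, thresh=0.0):
    """Volumetric Hit Index: simulated precipitation volume correctly
    detected above `thresh`, over that volume plus the observed volume
    the model missed (one common formulation; assumed, not verbatim)."""
    hit = (sim > thresh) & (obs > thresh)
    miss = (sim <= thresh) & (obs > thresh)
    denom = sim[hit].sum() + obs[miss].sum()
    return sim[hit].sum() / denom if denom > 0 else np.nan

def quantile_bias(sim, obs, q=0.75):
    """Bias ratio restricted to values at or above the q-th quantile of
    the observations; 1.0 means unbiased at the upper tail."""
    t = np.quantile(obs, q)
    return sim[obs >= t].sum() / obs[obs >= t].sum()

# Usage: a model identical to the observations scores perfectly
obs = np.array([1.0, 2.0, 3.0, 0.0, 5.0])
sim = obs.copy()
```

Computing `quantile_bias` at the 0.75 and 0.90 quantiles mirrors the upper-tail analysis in which the CMIP5 simulations were found to be most biased.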
Shin, Kwang Cheol; Park, Seung Bo; Jo, Geun Sik
2009-01-01
In the fields of production, manufacturing and supply chain management, Radio Frequency Identification (RFID) is regarded as one of the most important technologies. Nowadays, Mobile RFID, which is often installed in carts or forklift trucks, is increasingly being applied to the search for and checkout of items in warehouses, supermarkets, libraries and other industrial fields. Since Mobile RFID readers are continuously moving, they can interfere with each other when they attempt to read the tags. In this study, we suggest a Time Division Multiple Access (TDMA) based anti-collision algorithm for Mobile RFID readers. Our algorithm automatically adjusts the frame size of each reader without manual parameters, by adopting a dynamic frame size adjustment strategy when collisions occur at a reader. Through experiments in a simulated environment for Mobile RFID readers, we show that the proposed method increases the number of successful transmissions by about 228% on average, compared with Colorwave, a representative TDMA-based anti-collision algorithm. PMID:22399942
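A minimal sketch of the collide-grow, succeed-shrink principle behind dynamic frame size adjustment; the doubling/halving rule, the thresholds, and the class name are illustrative simplifications of the paper's algorithm:

```python
import random

class MobileReader:
    """Reader that adapts its TDMA frame size from collision feedback.

    Illustrative sketch: double the frame (up to a cap) after a collision,
    halve it (down to a floor) after a run of clean reads. The paper's
    algorithm is more elaborate but follows the same principle.
    """
    def __init__(self, min_frame=4, max_frame=64, shrink_after=8):
        self.frame = min_frame
        self.min_frame, self.max_frame = min_frame, max_frame
        self.shrink_after = shrink_after
        self.clean_streak = 0

    def on_collision(self):
        # More contention: spread transmissions over a larger frame.
        self.frame = min(self.frame * 2, self.max_frame)
        self.clean_streak = 0

    def on_success(self):
        # Sustained clean reads: reclaim wasted idle slots.
        self.clean_streak += 1
        if self.clean_streak >= self.shrink_after:
            self.frame = max(self.frame // 2, self.min_frame)
            self.clean_streak = 0

    def pick_slot(self, rng=random):
        return rng.randrange(self.frame)   # slot chosen for the next frame
```

Because each reader reacts only to its own collision feedback, no manual parameter tuning or central coordination is needed, which matches the design goal stated in the abstract.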