Li, Yan; Shuai, Zhikang; Xu, Qinming
A droop control framework with an adjustable virtual impedance loop, based on cloud model theory, is proposed in this paper. The proposed virtual impedance loop includes two terms: a negative virtual resistor and an adjustable virtual inductance. The negative virtual resistor term...
Murgoci, Agatha; Gaspar, Raquel M.
... As a result, we classify convexity adjustments into forward adjustments and swap adjustments. We then focus on affine term structure (ATS) models and, in this context, conjecture that convexity adjustments should be related to affine functionals. In the case of forward adjustments, we show how to obtain exact ... formulas. Concretely, for LIBOR-in-arrears (LIA) contracts, we derive the system of Riccati ODEs one needs to solve to obtain the exact adjustment. Based upon the ideas of Schrager and Pelsser (2006), we are also able to derive general swap adjustments useful, in particular, when dealing with constant...
He, Y.X.; Yang, L.Y.; Wang, Y.J.; Wang, J.; Zhang, S.L.
In recent years, the coal price has risen rapidly, which has also brought a sharp increase in the expenditures of thermal power plants in China. Meanwhile, the power production price and power retail price have not been adjusted accordingly, and a large number of thermal power plants have incurred losses. The power industry is a key industry in the national economy. As such, a thorough analysis and evaluation of the economic influence of the electricity price should be conducted before an electricity price adjustment is carried out. This paper analyses the influence of coal price adjustment on the electric power industry, and the influence of electricity price adjustment on the macroeconomy in China, based on computable general equilibrium models. The conclusions are as follows: (1) a coal price increase raises the costs of the electric power industry, but the influence gradually diminishes as the coal price increases; and (2) an electricity price increase has an adverse influence on total output, Gross Domestic Product (GDP) and the Consumer Price Index (CPI). Electricity price increases have a contractionary effect on economic development; consequently, electricity price policy making must consider all factors to minimize their adverse influence.
Mahachie John, Jestinah M; Cattaert, Tom; Lishout, François Van; Gusareva, Elena S; Steen, Kristel Van
Identifying gene-gene or gene-environment interactions in studies of human complex diseases remains a big challenge in genetic epidemiology. An additional, often forgotten, challenge is to account for important lower-order genetic effects, which may hamper the identification of genuine epistasis. If lower-order genetic effects contribute to the genetic variance of a trait, identified statistical interactions may simply be due to a signal boost of these effects. In this study, we restrict attention to quantitative traits and bi-allelic SNPs as genetic markers. Moreover, our interaction study focuses on 2-way SNP-SNP interactions. Via simulations, we assess the performance of different corrective measures for lower-order genetic effects in Model-Based Multifactor Dimensionality Reduction epistasis detection, using additive and co-dominant coding schemes. Performance is evaluated in terms of power and familywise error rate. Our simulations indicate that empirical power estimates are reduced with correction of lower-order effects, as are familywise error rates. Easy-to-use automatic SNP selection procedures, SNP selection based on "top" findings, or SNP selection based on a p-value criterion for interesting main effects result in reduced power but also almost zero false positive rates. Always accounting for main effects in the SNP-SNP pair under investigation during Model-Based Multifactor Dimensionality Reduction analysis adequately controls false positive epistasis findings. This is particularly true when adopting a co-dominant corrective coding scheme. In conclusion, automatic search procedures that identify lower-order effects to correct for during epistasis screening should be avoided. The same is true for procedures that adjust for lower-order effects prior to Model-Based Multifactor Dimensionality Reduction analysis by using residuals as the new trait. We advocate "on-the-fly" adjustment for lower-order effects when screening for SNP-SNP interactions.
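As an illustration of the "residuals as the new trait" adjustment discussed above (the pre-adjustment strategy the authors caution against), the following sketch regresses a quantitative trait on additive-coded SNP main effects and takes the residuals. The simulated genotypes, effect size, and sample size are hypothetical, not taken from the study.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500
snp1 = rng.integers(0, 3, n)   # additive coding: 0, 1, 2 copies of the minor allele
snp2 = rng.integers(0, 3, n)
# Quantitative trait with a main effect of snp1 plus noise.
trait = 0.5 * snp1 + rng.normal(0.0, 1.0, n)

# "Residuals as the new trait": regress out main effects beforehand.
X = np.column_stack([np.ones(n), snp1, snp2])
beta, *_ = np.linalg.lstsq(X, trait, rcond=None)
residual_trait = trait - X @ beta

# The residuals are orthogonal to the fitted main effects.
print(np.allclose(X.T @ residual_trait, 0.0, atol=1e-8))  # → True
```

The residuals would then replace the original trait in the epistasis screen, which is exactly the practice the abstract advises against in favor of on-the-fly correction.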
Bender, S.; Miller, W. P.; Bernard, B.; Stokes, M.; Oaida, C. M.; Painter, T. H.
Water management agencies rely on hydrologic forecasts issued by operational agencies such as NOAA's Colorado Basin River Forecast Center (CBRFC). The CBRFC has partnered with the Jet Propulsion Laboratory (JPL) under funding from NASA to incorporate research-oriented, remotely-sensed snow data into CBRFC operations and to improve the accuracy of CBRFC forecasts. The partnership has yielded valuable analysis of snow surface albedo as represented in JPL's MODIS Dust Radiative Forcing in Snow (MODDRFS) data, across the CBRFC's area of responsibility. When dust layers within a snowpack emerge, reducing the snow surface albedo, the snowmelt rate may accelerate. The CBRFC operational snow model (SNOW17) is a temperature-index model that lacks explicit representation of snowpack surface albedo. CBRFC forecasters monitor MODDRFS data for emerging dust layers and may manually adjust SNOW17 melt rates. A technique was needed for efficient and objective incorporation of the MODDRFS data into SNOW17. Initial development focused in Colorado, where dust-on-snow events frequently occur. CBRFC forecasters used retrospective JPL-CBRFC analysis and developed a quantitative relationship between MODDRFS data and mean areal temperature (MAT) data. The relationship was used to generate adjusted, MODDRFS-informed input for SNOW17. Impacts of the MODDRFS-SNOW17 MAT adjustment method on snowmelt-driven streamflow prediction varied spatially and with characteristics of the dust deposition events. The largest improvements occurred in southwestern Colorado, in years with intense dust deposition events. Application of the method in other regions of Colorado and in "low dust" years resulted in minimal impact. The MODDRFS-SNOW17 MAT technique will be implemented in CBRFC operations in late 2015, prior to spring 2016 runoff. Collaborative investigation of remote sensing-based adjustment methods for the CBRFC operational hydrologic forecasting environment will continue over the next several years.
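A heavily simplified sketch of the idea above: in a SNOW17-style temperature-index model, a MODDRFS-style dust radiative forcing can be folded into the mean areal temperature (MAT) input to accelerate melt. The melt factor, base temperature, and the linear coefficient `k_rf` are all hypothetical; the operational JPL-CBRFC relationship is not reproduced here.

```python
def melt_depth(mat_c, melt_factor=2.0, base_temp_c=0.0):
    """Daily melt (mm) from a simple temperature-index rule."""
    return max(0.0, melt_factor * (mat_c - base_temp_c))

def adjusted_mat(mat_c, dust_forcing_wm2, k_rf=0.05):
    """Raise MAT in proportion to dust radiative forcing (illustrative only)."""
    return mat_c + k_rf * dust_forcing_wm2

mat = 1.5        # deg C
forcing = 40.0   # W/m^2, an intense dust-on-snow event
print(melt_depth(mat))                         # unadjusted melt
print(melt_depth(adjusted_mat(mat, forcing)))  # dust-accelerated melt
```

The point of the sketch is only the mechanism: dust forcing enters through the temperature input because the operational model lacks an explicit albedo state.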
Chen, R.; Sun, Y. Y.; Lei, Y.
With the rapid development of Unmanned Aircraft Systems (UAS), more and more research fields have successfully adopted this mature technology, among them environmental monitoring. One difficult task is acquiring the accurate position of a ground object in order to reconstruct the scene more accurately. To handle this problem, we combine the bundle adjustment method from photogrammetry with parallax parametrization from computer vision to create a new method called APCP (aerial polar-coordinate photogrammetry). One impressive advantage of this method compared with traditional methods is that a 3-dimensional point in space is represented using three angles (elevation angle, azimuth angle and parallax angle) rather than its XYZ value. As the basis for APCP, bundle adjustment can be used to optimize the UAS sensors' poses accurately and reconstruct 3D models of the environment, thus serving as the criterion of accurate positioning for monitoring. To verify the effectiveness of the proposed method, we test on several UAV datasets obtained by non-metric digital cameras with large attitude angles, and we find that our method achieves 1 to 2 times better efficiency with no loss of accuracy compared with traditional ones. The classical nonlinear optimization of the bundle adjustment model based on rectangular coordinates suffers from serious dependence on the initial values, making it unable to converge fast or converge to a stable state. On the contrary, the APCP method can deal with quite complex UAS conditions when conducting monitoring, as it represents points in space with angles, including the condition that sequential images focusing on one object have a zero parallax angle. In brief, this paper presents the parameterization of 3D feature points based on APCP, and derives a full bundle adjustment model and the corresponding nonlinear optimization problems based on this method. In addition, we analyze the influence of convergence and
Zhu, Wenlong; Ma, Shoufeng; Tian, Junfang; Li, Geng
Travelers' route adjustment behaviors in a congested road traffic network are acknowledged as a dynamic game process between them. Proportional-Switch Adjustment Process (PSAP) models have been extensively investigated to characterize travelers' route choice behaviors; PSAP has a concise structure and an intuitive behavior rule. Unfortunately, most existing models have limitations, e.g., the flow over-adjustment problem for the discrete PSAP model and the absolute-cost-difference route adjustment problem. This paper proposes a relative-Proportion-based Route Adjustment Process (rePRAP) that maintains the advantages of PSAP and overcomes these limitations. The rePRAP describes the situation in which travelers on a higher-cost route switch to alternatives with lower cost at a rate that depends solely on the relative cost differences between the higher-cost route and its alternatives. It is verified to be consistent with the principle of the rational behavior adjustment process. The equivalence among user equilibrium, the stationary path flow pattern and the stationary link flow pattern is established, which can be applied to judge whether a given network traffic flow has reached user equilibrium (UE) by detecting the stationary or non-stationary state of the link flow pattern. The stability theorem is proved by the Lyapunov function approach. A simple example is tested to demonstrate the effectiveness of the rePRAP model.
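The relative-proportion switching idea above can be sketched on a toy two-route network: flow leaves the costlier route at a rate proportional to the *relative* (not absolute) cost gap. The linear cost functions and the rate constant `alpha` are hypothetical; the paper's exact rePRAP dynamics may differ in detail.

```python
import numpy as np

def costs(f):
    """Linear route cost functions (hypothetical)."""
    return np.array([1.0 + 2.0 * f[0], 5.0 + 1.0 * f[1]])

def reprap_step(f, alpha=0.2):
    c = costs(f)
    f_new = f.copy()
    hi, lo = (0, 1) if c[0] > c[1] else (1, 0)
    # Switching rate driven by the relative cost difference.
    switch = alpha * f[hi] * (c[hi] - c[lo]) / c[hi]
    f_new[hi] -= switch
    f_new[lo] += switch
    return f_new

f = np.array([10.0, 0.0])   # total demand of 10 split over two routes
for _ in range(200):
    f = reprap_step(f)
print(costs(f).round(3))    # route costs equalize at user equilibrium
```

Normalizing by `c[hi]` keeps the step bounded by `alpha * f[hi]`, which is one way the relative formulation avoids the over-adjustment problem of absolute-difference schemes.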
Ph.H.B.F. Franses (Philip Hans); R. Legerstee (Rianne)
Experts may have domain-specific knowledge that is not included in a statistical model and that can improve forecasts. While one-step-ahead forecasts address the conditional mean of the variable, model-based forecasts for longer horizons have a tendency to converge to the unconditional...
Wu, Jie; Wang, Jianzhou; Lu, Haiyan; Dong, Yao; Lu, Xiaoxiao
Highlights: ► The seasonal and trend items of the data series are forecasted separately. ► The seasonal item in the data series is verified by the Kendall τ correlation test. ► Different regression models are applied to the trend item forecasting. ► We examine the superiority of the combined models by quartile value comparison. ► A paired-sample T test is utilized to confirm the superiority of the combined models. - Abstract: For an energy-limited economic system, it is crucial to forecast load demand accurately. This paper is devoted to a 1-week-ahead daily load forecasting approach in which the load demand series is predicted by employing information from days similar to the forecast day. As in many nonlinear systems, a seasonal item and a trend item coexist in load demand datasets. In this paper, the existence of the seasonal item in the load demand data series is first verified using the Kendall τ correlation test. Then, in the belief that forecasting the seasonal item and the trend item separately would improve the forecasting accuracy, hybrid models combining the seasonal exponential adjustment method (SEAM) with regression methods are proposed, where SEAM and the regression models are employed for seasonal and trend item forecasting, respectively. Comparisons of the quartile values as well as the mean absolute percentage error values demonstrate that this forecasting technique can significantly improve accuracy across the eleven different models applied to the trend item forecasting. The superior performance of this separate forecasting technique is further confirmed by paired-sample T tests.
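The separate-forecasting idea above can be sketched in a few lines: estimate a weekly seasonal index, deseasonalize, fit a regression to the trend item, and recombine. The ratio-based seasonal index below is a simple stand-in for the paper's seasonal exponential adjustment method (SEAM), and the synthetic load series is illustrative.

```python
import numpy as np

t = np.arange(28)                                   # 4 weeks of daily load
season = np.tile([0.9, 1.0, 1.1, 1.2, 1.1, 0.9, 0.8], 4)
load = (100 + 0.5 * t) * season                     # trend item x seasonal item

# Weekly seasonal indices (mean of each weekday, normalized to average 1).
index = np.array([load[t % 7 == d].mean() for d in range(7)])
index /= index.mean()
deseason = load / index[t % 7]                      # seasonal adjustment

slope, intercept = np.polyfit(t, deseason, 1)       # trend item via regression
future = np.arange(28, 35)                          # 1-week-ahead forecast
forecast = (intercept + slope * future) * index[future % 7]
print(forecast.round(1))
```

Any of the regression models mentioned in the abstract could replace the linear fit on `deseason`; the seasonal indices are reapplied to whatever trend forecast results.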
Control-based pattern mixture models (PMM) and delta-adjusted PMMs are commonly used as sensitivity analyses in clinical trials with non-ignorable dropout. These PMMs assume that the statistical behavior of outcomes varies by pattern in the experimental arm in the imputation procedure, but the imputed data are typically analyzed by a standard method such as the primary analysis model. In the multiple imputation (MI) inference, Rubin's variance estimator is generally biased when the imputation and analysis models are uncongenial. One objective of the article is to quantify the bias of Rubin's variance estimator in the control-based and delta-adjusted PMMs for longitudinal continuous outcomes. These PMMs assume the same observed data distribution as the mixed effects model for repeated measures (MMRM). We derive analytic expressions for the MI treatment effect estimator and the associated Rubin's variance in these PMMs and MMRM as functions of the maximum likelihood estimator from the MMRM analysis and the observed proportion of subjects in each dropout pattern when the number of imputations is infinite. The asymptotic bias is generally small or negligible in the delta-adjusted PMM, but can be sizable in the control-based PMM. This indicates that the inference based on Rubin's rule is approximately valid in the delta-adjusted PMM. A simple variance estimator is proposed to ensure asymptotically valid MI inferences in these PMMs, and compared with the bootstrap variance. The proposed method is illustrated by the analysis of an antidepressant trial, and its performance is further evaluated via a simulation study. © 2017, The International Biometric Society.
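For reference, the standard Rubin's rules that the abstract analyzes combine per-imputation estimates into a pooled estimate and a total variance T = W + (1 + 1/m)B. The numbers below are illustrative, not from the antidepressant trial.

```python
import numpy as np

def rubin_combine(estimates, variances):
    """Pool m imputation-specific estimates via Rubin's rules."""
    m = len(estimates)
    q_bar = np.mean(estimates)        # pooled point estimate
    w = np.mean(variances)            # within-imputation variance
    b = np.var(estimates, ddof=1)     # between-imputation variance
    t = w + (1 + 1 / m) * b           # Rubin's total variance
    return q_bar, t

est = [1.8, 2.1, 2.0, 1.9, 2.2]       # treatment effect per imputation
var = [0.10, 0.12, 0.11, 0.10, 0.12]
q, t = rubin_combine(est, var)
print(round(q, 3), round(t, 3))       # → 2.0 0.14
```

It is this variance T whose bias under uncongenial imputation and analysis models the article quantifies; the proposed alternative variance estimator is not reproduced here.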
Martinez, Santiago; Garcia-Haro, Juan Miguel; Victores, Juan G; Jardon, Alberto; Balaguer, Carlos
The computational complexity of humanoid robot balance control is reduced through the application of simplified kinematics and dynamics models. However, these simplifications lead to the introduction of errors that add to other inherent electro-mechanic inaccuracies and affect the robotic system. Linear control systems deal with these inaccuracies if they operate around a specific working point but are less precise if they do not. This work presents a model improvement based on the Linear Inverted Pendulum Model (LIPM) to be applied in a non-linear control system. The aim is to minimize the control error and reduce robot oscillations for multiple working points. The new model, named the Dynamic LIPM (DLIPM), is used to plan the robot behavior with respect to changes in the balance status denoted by the zero moment point (ZMP). Thanks to the use of information from force-torque sensors, an experimental procedure has been applied to characterize the inaccuracies and introduce them into the new model. The experiments consist of balance perturbations similar to those of push-recovery trials, in which step-shaped ZMP variations are produced. The results show that the responses of the robot with respect to balance perturbations are more precise and the mechanical oscillations are reduced without compromising robot dynamics.
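For context, the baseline LIPM relation between the center of mass (CoM) and the ZMP is x_zmp = x_com - (z_c / g) * x_com_ddot; the DLIPM described above adds experimentally characterized error terms on top of this. The constant CoM height and the sample values below are hypothetical.

```python
G = 9.81    # gravity, m/s^2
Z_C = 0.8   # constant CoM height of the LIPM, m

def zmp_from_com(x_com, x_com_ddot):
    """Baseline LIPM mapping from CoM state to ZMP position (m)."""
    return x_com - (Z_C / G) * x_com_ddot

# CoM at 0.05 m, accelerating forward at 0.5 m/s^2:
print(round(zmp_from_com(0.05, 0.5), 4))  # the ZMP trails the CoM
```

Step-shaped ZMP variations like those in the push-recovery trials correspond to step changes in the right-hand side of this relation.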
Department of Mechanical Engineering, Imperial College of Science, ... It is first necessary to decide upon the level of accuracy, or correctness, which is sought from the adjustment of the initial model, and this will be heavily influenced by the eventual application of the ... reviewing the degree of success attained.
Ando, Hirotaka; Izawa, Shigeru; Hori, Wataru; Nakagawa, Ippei
Background: There are various methods for predicting human pharmacokinetics. Among these, a whole body physiologically-based pharmacokinetic (WBPBPK) model is useful because it gives a mechanistic description. However, WBPBPK models cannot predict human pharmacokinetics with enough precision. This study was conducted to elucidate the primary reason for poor predictions by WBPBPK models, and to enable better predictions to be made without reliance on complex concepts. Methods: The primary reasons for poor predictions of human pharmacokinetics were investigated using a generic WBPBPK model that incorporated a single adjusting compartment (SAC), a virtual organ compartment whose physiological parameters can be adjusted arbitrarily. The blood flow rate, organ volume, and steady-state tissue-plasma partition coefficient of the SAC were calculated to fit simulated to observed pharmacokinetics in the rat. The adjusted SAC parameters were fixed and scaled up to the human using a newly developed equation. Using the scaled-up SAC parameters, human pharmacokinetics were simulated and each pharmacokinetic parameter was calculated. These simulated parameters were compared to the observed data. Simulations were performed to confirm the relationship between the precision of prediction and the number of tissue compartments, including a SAC. Results: Increasing the number of tissue compartments led to an improvement in the average-fold error (AFE) of total body clearances (CLtot) and half-lives (T1/2) calculated from the simulated human blood concentrations of 14 drugs. The presence of a SAC also improved the AFE values of a ten-organ model from 6.74 to 1.56 in CLtot, and from 4.74 to 1.48 in T1/2. Moreover, the within-2-fold errors were improved in all models; incorporating a SAC gave results from 0 to 79% in CLtot, and from 14 to 93% in T1/2 of the ten-organ model. Conclusion: By using a SAC in this study, we were able to show that poor prediction resulted mainly from such
The major challenge in 3D electronic printing is print resolution and accuracy. In this paper, a typical model, the lumped element modeling (LEM) method, is adopted to simulate the droplet jetting characteristics. This modeling method can quickly obtain the droplet velocity and volume with high accuracy. Experimental results show that LEM has a simpler structure with sufficient simulation and prediction accuracy.
Hutto, Richard L
The most popular method used to gain an understanding of population trends or of differences in bird abundance among land condition categories is to use information derived from point counts. Unfortunately, various factors can affect one's ability to detect birds, and those factors need to be controlled or accounted for so that any difference in one's index among time periods or locations is an accurate reflection of differences in bird abundance and not differences in detectability. Avian ecologists could use appropriately sized fixed-area surveys to minimize the chance that they might be deceived by distance-based detectability bias, but the current method of choice is to use a modeling approach that allows one to account for distance-based bias by modeling the effects of distance on detectability or occupancy. I challenge the idea that modeling is the best approach to account for distance-based effects on the detectability of birds because the most important distance-based modeling assumptions can never be met. The use of a fixed-area survey method to generate an index of abundance is the simplest way to control for distance-based detectability bias and should not be universally condemned or be the basis for outright rejection in the publication process. © 2016 by the Ecological Society of America.
Wei, Jingwen; Dong, Guangzhong; Chen, Zonghai
With the rapid development of battery-powered electric vehicles, the lithium-ion battery plays a critical role in the reliability of the vehicle system. In order to provide timely management and protection for battery systems, it is necessary to develop a reliable battery model and accurate battery parameter estimation to describe battery dynamic behaviors. Therefore, this paper focuses on an on-board adaptive model for state-of-charge (SOC) estimation of lithium-ion batteries. Firstly, a first-order equivalent circuit battery model is employed to describe battery dynamic characteristics. Then, the recursive least squares algorithm and an off-line identification method are used to provide good initial values of the model parameters to ensure filter stability and reduce the convergence time. Thirdly, an extended Kalman filter (EKF) is applied to estimate battery SOC and model parameters online. Considering that the EKF is essentially a first-order Taylor approximation of the battery model, which contains inevitable model errors, a proportional-integral-based error adjustment technique is employed to improve the performance of the EKF method and correct the model parameters. Finally, the experimental results on lithium-ion batteries indicate that the proposed EKF with proportional-integral-based error adjustment can provide a robust and accurate battery model and on-line parameter estimation.
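A minimal sketch of the EKF/first-order-equivalent-circuit combination described above, under hypothetical parameters (R0, R1, C1, capacity, and a linear OCV curve). The proportional-integral error-adjustment term the paper adds on top of the EKF is omitted here for brevity.

```python
import numpy as np

DT, Q_AS = 1.0, 7200.0              # time step (s), capacity (A*s): 2 Ah
R0, R1, C1 = 0.01, 0.015, 2000.0    # hypothetical circuit parameters
A_RC = np.exp(-DT / (R1 * C1))

def ocv(soc):                        # hypothetical linear OCV curve
    return 3.0 + 1.2 * soc

def step(x, i):                      # state transition, x = [soc, v_rc]
    soc = x[0] - i * DT / Q_AS       # coulomb counting
    v_rc = A_RC * x[1] + R1 * (1 - A_RC) * i
    return np.array([soc, v_rc])

def measure(x, i):                   # terminal voltage
    return ocv(x[0]) - x[1] - R0 * i

rng = np.random.default_rng(2)
x_true = np.array([0.9, 0.0])        # "true" battery state
x = np.array([0.5, 0.0])             # deliberately wrong initial SOC
P = np.diag([0.1, 0.01])
F = np.array([[1.0, 0.0], [0.0, A_RC]])
Qn, Rn = np.diag([1e-10, 1e-8]), 1e-4

for _ in range(3000):                # constant 2 A discharge
    i_load = 2.0
    x_true = step(x_true, i_load)
    v_meas = measure(x_true, i_load) + rng.normal(0, 1e-3)
    # EKF predict
    x = step(x, i_load)
    P = F @ P @ F.T + Qn
    # EKF update; H is the Jacobian of the measurement, d(v)/d(x)
    H = np.array([[1.2, -1.0]])
    y = v_meas - measure(x, i_load)
    S = H @ P @ H.T + Rn
    K = (P @ H.T) / S
    x = x + (K * y).ravel()
    P = (np.eye(2) - K @ H) @ P

print(round(x_true[0], 3), round(x[0], 3))  # estimate converges to truth
```

Because the OCV curve here is linear, the Jacobian entry for SOC is a constant 1.2; with a realistic nonlinear OCV it would be re-evaluated at the current estimate each step, which is where the linearization error corrected by the paper's PI term arises.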
New high-temperature superconductor (HTSC) technology may allow development of an energy-efficient power electronics switch for adjustable speed drive (ASD) applications involving variable-speed motors, superconducting magnetic energy storage systems, and other power conversion equipment. This project developed a motor simulation module for determining optimal applications of HTSC-based power switches in ASD systems.
Duffy, P.; Bell, J.; Covey, C.; Sloan, L.
It has been suggested that "flux adjustments" in climate models suppress simulated temperature variability. If true, this might invalidate the conclusion that at least some of the observed temperature increases since 1860 are anthropogenic, since this conclusion is based in part on estimates of natural temperature variability derived from flux-adjusted models. We assess variability of surface air temperatures in 17 simulations of internal temperature variability submitted to the Coupled Model Intercomparison Project. By comparing variability in flux-adjusted vs. non-flux-adjusted simulations, we find no evidence that flux adjustments suppress temperature variability in climate models; other, largely unknown, factors are much more important in determining simulated temperature variability. Therefore the conclusion that at least some of the observed temperature increases are anthropogenic cannot be questioned on the grounds that it is based in part on results of flux-adjusted models. Also, reducing or eliminating flux adjustments would probably do little to improve simulations of temperature variability.
Premium adjustment: actuarial analysis on epidemiological models. DEA Omorogbe, SO Edobor. In this paper, we analyse insurance premium adjustment in the context of an epidemiological model where the insurer's future financial liability is greater than the premium from patients. In this situation, it becomes ...
While the model is defined in terms of these spatial parameters, ... (mode shapes defined at the n DOFs of a typical modal test in place of the complete N DOFs) ... In these expressions, N = the number of degrees of freedom in the model, while N1 and N2 are the numbers of mass and stiffness elements to be corrected ...
Kossieris, Panagiotis; Makropoulos, Christos; Onof, Christian; Koutsoyiannis, Demetris
Many hydrological applications, such as flood studies, require long rainfall data at fine time scales, varying from daily down to a 1-min time step. However, in the real world there is limited availability of data at sub-hourly scales. To cope with this issue, stochastic disaggregation techniques are typically employed to produce possible, statistically consistent rainfall events that aggregate up to the field data collected at coarser scales. A methodology for the stochastic disaggregation of rainfall at fine time scales was recently introduced, combining the Bartlett-Lewis process to generate rainfall events with adjusting procedures to modify the lower-level variables (i.e., hourly) so as to be consistent with the higher-level one (i.e., daily). In the present paper, we extend the aforementioned scheme, initially designed and tested for the disaggregation of daily rainfall into hourly depths, to any sub-hourly time scale. In addition, we take advantage of recent developments in Poisson-cluster processes, incorporating into the methodology a Bartlett-Lewis model variant that introduces dependence between cell intensity and duration in order to capture the variability of rainfall at sub-hourly time scales. The disaggregation scheme is implemented in an R package, named HyetosMinute, to support disaggregation from daily down to the 1-min time scale. The applicability of the methodology was assessed on 5-min rainfall records collected in Bochum, Germany, comparing the performance of the above-mentioned model variant against the original Bartlett-Lewis process (non-random with 5 parameters). The analysis shows that the disaggregation process adequately reproduces the most important statistical characteristics of rainfall at a wide range of time scales, while the introduction of the model with dependent intensity-duration results in better performance in terms of skewness, rainfall extremes and dry proportions.
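The simplest member of the family of adjusting procedures mentioned above is proportional adjustment: synthetic fine-scale depths are rescaled so they aggregate exactly to the observed coarse-scale total. The sketch below shows only that step, with illustrative numbers; the Bartlett-Lewis generator itself is not reproduced.

```python
import numpy as np

def proportional_adjust(fine, coarse_total):
    """Rescale fine-scale depths to sum exactly to the coarse-scale total."""
    fine = np.asarray(fine, dtype=float)
    if fine.sum() == 0:
        return fine                       # a dry period stays dry
    return fine * (coarse_total / fine.sum())

synthetic_hourly = [0.0, 1.2, 3.4, 0.6, 0.0, 0.8]  # mm, from the generator
observed_total = 7.2                               # mm, coarse-scale value
adjusted = proportional_adjust(synthetic_hourly, observed_total)
print(round(adjusted.sum(), 6))
```

Multiplicative rescaling preserves zeros (dry intervals) and the relative temporal profile of the generated event, which is why proportional-type procedures are a common choice in disaggregation schemes.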
This paper gives a linearised adjustment model for the affine, similarity and congruence transformations in 3D that is easily extendable with other parameters to describe deformations. The model considers all coordinates stochastic. Full positive semi-definite covariance matrices and correlation
After our work was published, we found that some of the terms in the equations were incorrect and that there were some typographical errors in the abbreviations. In the section 'Single adjusting compartment' in Materials and Methods, V_S should be V_SAC. In the last paragraph of Results, Q_S should be Q_SAC. The correct equations are included in this article. These corrections do not affect the results of this study.
Borup, Morten; Grum, Morten; Linde, Jens Jørgen
...overflows from urban drainage systems, and we furthermore investigate the importance of the aggregation period of the adjustment scheme. This is done by continuously adjusting X-band radar data based on the previous 5–30 min of rain data recorded by multiple rain gauges and propagating the rainfall ... well-defined, 64 ha urban catchment, for nine overflow-generating rain events. The dynamically adjusted radar data perform best when the aggregation period is as small as 10–20 min, in which case they perform much better than statically adjusted radar data and data from rain gauges situated 2–3 km away...
L.M. Lamers (Leida)
OBJECTIVE: To evaluate the predictive accuracy of the Diagnostic Cost Group (DCG) model using health survey information. DATA SOURCES/STUDY SETTING: Longitudinal data collected for a sample of members of a Dutch sickness fund. In the Netherlands the sickness...
Ding, Feng; Yang, Xianhai; Chen, Guosong; Liu, Jining; Shi, Lili; Chen, Jingwen
The partition coefficients between bovine serum albumin (BSA) and water (K_BSA/w) for ionogenic organic chemicals (IOCs) differ greatly from those of neutral organic chemicals (NOCs). For NOCs, several excellent models have been developed to predict log K_BSA/w. However, it was found that conventional descriptors are inappropriate for modeling the log K_BSA/w of IOCs. Thus, alternative approaches are urgently needed to develop predictive models for the K_BSA/w of IOCs. In this study, molecular descriptors that can characterize ionization effects (e.g., chemical form adjusted descriptors) were calculated and used to develop predictive models for the log K_BSA/w of IOCs. The models developed had high goodness-of-fit, robustness, and predictive ability. The predictor variables selected to construct the models included the chemical form adjusted average of the negative potentials on the molecular surface (V_s-adj^-), the chemical form adjusted molecular dipole moment (dipolemoment_adj), and the logarithm of the n-octanol/water distribution coefficient (log D). As these molecular descriptors can be calculated directly from molecular structures, the developed model can easily be used to fill the log K_BSA/w data gap for other IOCs within the applicability domain. Furthermore, the chemical form adjusted descriptors calculated in this study could also be used to construct predictive models for other endpoints of IOCs. Copyright © 2017 Elsevier Inc. All rights reserved.
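A plausible reading of a "chemical form adjusted" descriptor is a speciation-weighted average: the neutral- and ionic-form descriptor values weighted by the Henderson-Hasselbalch fractions at the pH of interest. The sketch below illustrates that construction for a monoprotic acid; the descriptor values and pKa are hypothetical, and the paper's exact formulas may differ.

```python
def fraction_neutral_acid(pka, ph):
    """Neutral fraction of a monoprotic acid at a given pH (Henderson-Hasselbalch)."""
    return 1.0 / (1.0 + 10.0 ** (ph - pka))

def adjusted_descriptor(d_neutral, d_ionic, pka, ph=7.4):
    """Speciation-weighted descriptor value at the chosen pH."""
    f_n = fraction_neutral_acid(pka, ph)
    return f_n * d_neutral + (1.0 - f_n) * d_ionic

# Hypothetical negative-surface-potential averages for the two forms:
print(round(adjusted_descriptor(d_neutral=-12.0, d_ionic=-45.0, pka=4.4), 2))
```

At physiological pH an acid with pKa 4.4 is almost fully ionized, so the adjusted value sits close to the ionic-form descriptor, which is the behavior such descriptors are meant to capture.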
Grosz, Balázs; Well, Reinhard; Dannenmann, Michael; Dechow, René; Kitzler, Barbara; Michel, Kerstin; Reent Köster, Jan
...data-sets are needed in view of the extreme spatio-temporal heterogeneity of denitrification. DASIM will provide such data based on laboratory incubations, including measurement of N2O and N2 fluxes and determination of the relevant drivers. Here, we present how we will use these data to evaluate common biogeochemical process models (DailyDayCent, Coup) with respect to modeled NO, N2O and N2 fluxes from denitrification. The models are used with different settings. The first approximation is the basic "factory" setting of the models. The next step is to assess the precision of the modeled results after adjusting the appropriate parameters based on the measured values and the "factory" results. The better adjustment and the well-controlled measured input and output parameters could provide a better understanding of the likely shortcomings of the tested models, which will be a basis for future model improvement.
Ion Gh. Rosca
Besides the models of M. Keynes, R.F. Harrod, E. Domar, D. Romer, Ramsey-Cass-Koopmans etc., the R.M. Solow model belongs to the category that characterizes economic growth. The paper proposes the study of the R.M. Solow adjusted model of economic growth, with the adjustment consisting of adapting the model to the characteristics of the Romanian economy. The article is the first in a three-paper series dedicated to the macroeconomic modelling theme using the R.M. Solow model, the others being: "Measurement of the economic growth and extensions of the R.M. Solow adjusted model" and "Evolution scenarios at the Romanian economy level using the R.M. Solow adjusted model". The analysis part of the model is based on the study of the equilibrium in the continuous case, with some interpretations of the discrete one, using the state diagram. The optimization problem at the economy level is also used; it is built up of a specified number of representative consumers and firms in order to reveal the interaction between these elements.
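The equilibrium analysis mentioned above rests on the Solow capital-accumulation dynamics, which can be sketched in discrete form as k_{t+1} = k_t + s·f(k_t) − (n + δ)·k_t with f(k) = k^α; the iteration converges to the steady state k* = (s/(n + δ))^(1/(1−α)). The parameter values below are illustrative, not calibrated to the Romanian economy.

```python
# Illustrative parameters: savings rate, population growth, depreciation, capital share.
S, N, DELTA, ALPHA = 0.25, 0.01, 0.05, 0.33

def solow_step(k):
    """One period of capital-per-worker accumulation."""
    return k + S * k ** ALPHA - (N + DELTA) * k

k = 1.0
for _ in range(2000):
    k = solow_step(k)

# Closed-form steady state for comparison.
k_star = (S / (N + DELTA)) ** (1.0 / (1.0 - ALPHA))
print(round(k, 3), round(k_star, 3))  # the iteration converges to k*
```

The state-diagram analysis in the paper corresponds to plotting s·f(k) against (n + δ)·k; their intersection is the k* recovered numerically here.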
Andersen, M E; Sarangapani, R; Frederick, C B; Kimbell, J S
Cells within the epithelial lining of the nasal cavity metabolize a variety of low-molecular-weight, volatile xenobiotics. In common with terminology developed for other metabolizing organs, the nose extracts these chemicals from the airstream, thereby clearing some portion of the total nasal airflow. In this article, a physiologically based clearance-extraction (PBCE) model of nasal metabolism is used to predict extraction for steady-state conditions. This model, developed by simplification of existing physiologically based pharmacokinetic (PBPK) nasal models, has three tissue regions in two flow paths. A dorsal flow stream sequentially passes over a small area of respiratory epithelium and then over the entire olfactory epithelial surface within the nose. A ventral airstream, consisting of most of the total flow, passes over the larger portion (>80%) of the respiratory epithelium. Each underlying tissue stack has a mucus layer, an epithelial tissue compartment, and a blood exchange region. Metabolism may occur in any of the subcompartments within the tissue stacks. The model, solved directly for a steady-state condition, specifies the volumetric airflow over each stack. Computational fluid dynamic (CFD) solutions for the rat and human for the case with no liquid-phase resistance provided a maximum value for regional extraction, E(max)'. Equivalent air-to-liquid phase permeation coefficients (also referred to as the air-phase mass transfer coefficient) were calculated based on these E(max)' values. The PBCE model was applied to assess expected species differences in nasal extraction and in localized tissue metabolism of methyl methacrylate (MMA) in rats and in humans. Model estimates of tissue dose of MMA metabolites (in micromol metabolized/h/ml tissue) in both species were used to evaluate the dosimetric adjustment factor (DAF) that should be applied in reference concentration (RfC) calculations for MMA. For human ventilation rates equivalent to light exercise
König, Volker; Kolzter, Olaf; Albuszies, Gerd; Thölen, Frank
Inpatient administrative data from hospitals are already used nationally and internationally in many areas of internal and public quality assurance in healthcare. For sepsis as the principal diagnosis, only a few published approaches are available for Germany. The aim of this investigation is to identify factors influencing hospital mortality by employing appropriate analytical methods, in order to improve the internal quality management of sepsis care. The analysis was based on data from 754,727 DRG cases of the CLINOTEL hospital network billed in 2015. At that time the association comprised 45 hospitals of all levels of care, with the exception of university hospitals (range: 100 to 1,172 beds per hospital). Cases of sepsis were identified via the ICD codes of their principal diagnosis. Multiple logistic regression analysis was used to determine the factors influencing in-hospital lethality in this population. The model was developed using sociodemographic and other potential predictor variables that could be derived from the DRG data set, taking current literature into account. The model obtained was validated with inpatient administrative data from 2016 (51 hospitals, 850,776 DRG cases). Following application of the inclusion criteria, 5,608 cases of sepsis (2016: 6,384 cases) were identified in 2015. A total of 12 significant and, over both years, stable factors were identified, including age, severity of sepsis, mode of hospital admission and various comorbidities. The AUC value of the model, as a measure of predictive performance, is above 0.8 (H-L test p>0.05, R² value = 0.27), which is an excellent result. The CLINOTEL model of risk adjustment for in-hospital lethality can be used to determine the mortality probability of patients with sepsis as the principal diagnosis with a very high degree of accuracy, taking the case mix into account. Further studies are needed to confirm whether the model presented here proves its value in the internal quality assurance of hospitals
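The modeling step described above can be illustrated with a minimal sketch: fit a multiple logistic regression for in-hospital death and report the AUC as the measure of discrimination. The data here are synthetic stand-ins; the real predictors were derived from DRG records (age, sepsis severity, mode of admission, comorbidities), and the coefficients below are invented for illustration.

```python
# Illustrative risk-adjustment sketch (synthetic data, not CLINOTEL data).
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n = 5000
age = rng.uniform(18, 95, n)
severe = rng.integers(0, 2, n)        # e.g. severe sepsis / septic shock flag
comorbid = rng.integers(0, 2, n)      # e.g. a relevant comorbidity flag

# Simulate deaths from an assumed true logistic relationship.
logit = -6.0 + 0.04 * age + 1.2 * severe + 0.8 * comorbid
death = rng.random(n) < 1.0 / (1.0 + np.exp(-logit))

X = np.column_stack([age, severe, comorbid])
model = LogisticRegression(max_iter=1000).fit(X, death)
auc = roc_auc_score(death, model.predict_proba(X)[:, 1])
print(round(auc, 2))
```

In the study, a model of this form reached an AUC above 0.8 on an external validation year, which is the relevant benchmark rather than in-sample fit.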
Siccardi, Marco; Olagunju, Adeniyi; Seden, Kay; Ebrahimjee, Farid; Rannard, Steve; Back, David; Owen, Andrew
To treat malaria, HIV-infected patients normally receive artemether (80 mg twice daily) concurrently with antiretroviral therapy, and drug-drug interactions can potentially occur. Artemether is a substrate of CYP3A4 and CYP2B6; antiretrovirals such as efavirenz induce these enzymes and have the potential to reduce artemether pharmacokinetic exposure. The aim of this study was to develop an in vitro-in vivo extrapolation (IVIVE) approach to model the interaction between efavirenz and artemether. Artemether dose adjustments were then simulated in order to predict optimal dosing in co-infected patients and inform future interaction study design. In vitro data describing the chemical properties, absorption, distribution, metabolism and elimination of efavirenz and artemether were obtained from the published literature and included in a physiologically based pharmacokinetic (PBPK) model to predict drug disposition, simulating virtual clinical trials. Administration of efavirenz and artemether, alone or in combination, was simulated to mirror previous clinical studies and to facilitate validation of the model and realistic interpretation of the simulation. Efavirenz (600 mg once daily) was administered to 50 virtual subjects for 14 days. This was followed by concomitant administration of artemether: 80 mg eight-hourly for the first two doses, then 80 mg twice daily for another two days. Simulated pharmacokinetics and the drug-drug interaction were in concordance with available clinical data. Efavirenz induced first-pass metabolism and hepatic clearance, reducing artemether Cmax by 60% and AUC by 80%. Dose increases of artemether, to correct for the interaction, were simulated, and a dose of 240 mg was predicted to be sufficient to overcome the interaction and allow therapeutic plasma concentrations of artemether. The model presented here provides a rational platform to inform the design of a clinical drug interaction study that may save time and resource while the optimal
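A back-of-envelope check of the reported AUC reduction can be done with the one-compartment relation AUC = F·Dose/CL. This is a gross simplification of the full PBPK model, and the clearance and bioavailability values below are illustrative placeholders, not the paper's parameters; only the 5-fold induction multiplier is chosen to reproduce the reported 80% drop.

```python
# One-compartment AUC sketch (illustrative parameters, not the PBPK model).
def auc(dose_mg, F, cl_l_per_h):
    """AUC = F * Dose / CL for a one-compartment model with linear clearance."""
    return F * dose_mg / cl_l_per_h

baseline = auc(80, F=0.5, cl_l_per_h=10.0)
induced = auc(80, F=0.5, cl_l_per_h=50.0)   # assumed 5-fold induction by efavirenz
reduction = 1 - induced / baseline
print(round(reduction, 2))  # 0.8 -> an 80% AUC reduction, as reported
```

Note that restoring the full baseline AUC under a 5-fold clearance increase would require a 5-fold dose; the paper's prediction that 240 mg suffices reflects a therapeutic-concentration target rather than full AUC restoration.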
Background: Whereas therapy for HIV is dependent on the level of creatinine clearance, most laboratories locally only report an absolute creatinine value. There is a likelihood that patients already on antiretroviral therapy (ART) may have required dosage adjustment at the time of initiation of therapy or sometime during ...
Missura, Olana; Gärtner, Thomas
In this paper we aim at automatically adjusting the difficulty of computer games by clustering players into different types and supervised prediction of the type from short traces of gameplay. An important ingredient of video games is to challenge players by providing them with tasks of appropriate and increasing difficulty. How this difficulty should be chosen and increase over time strongly depends on the ability, experience, perception and learning curve of each individual player. It is a subjective parameter that is very difficult to set. Wrong choices can easily lead players to stop playing the game as they become bored (if underburdened) or frustrated (if overburdened). An ideal game should be able to adjust its difficulty dynamically, governed by the player's performance. Modern video games utilise a game-testing process to investigate, among other factors, the perceived difficulty for a multitude of players. In this paper, we investigate how machine learning techniques can be used for automatic difficulty adjustment. Our experiments confirm the potential of machine learning in this application.
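The two-stage idea, clustering players into types and then predicting the type from short gameplay traces, can be sketched as follows. The feature vectors are synthetic stand-ins; real traces would be game telemetry, and the cluster count and classifier choice here are assumptions, not the paper's setup.

```python
# Sketch: unsupervised player typing + supervised type prediction from prefixes.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(1)
# Two synthetic player populations with distinct trace statistics.
full_traces = np.vstack([rng.normal(0, 1, (100, 10)),
                         rng.normal(3, 1, (100, 10))])
# Stage 1: cluster full traces into player types.
types = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(full_traces)

# Stage 2: predict the type from only a short trace prefix.
short_prefix = full_traces[:, :3]
clf = RandomForestClassifier(random_state=0).fit(short_prefix, types)
accuracy = clf.score(short_prefix, types)
print(accuracy)
```

Once the type is predicted early in a session, the game can pick the difficulty curve associated with that player type.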
Bailit, Jennifer L; Grobman, William A; Rice, Madeline Murguia; Spong, Catherine Y; Wapner, Ronald J; Varner, Michael W; Thorp, John M; Leveno, Kenneth J; Caritis, Steve N; Shubert, Phillip J; Tita, Alan T; Saade, George; Sorokin, Yoram; Rouse, Dwight J; Blackwell, Sean C; Tolosa, Jorge E; Van Dorsten, J Peter
Regulatory bodies and insurers evaluate hospital quality using obstetrical outcomes; however, meaningful comparisons should take preexisting patient characteristics into account. Furthermore, if risk-adjusted outcomes are consistent within a hospital, fewer measures and resources would be needed to assess obstetrical quality. Our objective was to establish risk-adjusted models for 5 obstetric outcomes and assess hospital performance across these outcomes. We studied a cohort of 115,502 women and their neonates born in 25 hospitals in the United States from March 2008 through February 2011. Hospitals were ranked according to their unadjusted and risk-adjusted frequency of venous thromboembolism, postpartum hemorrhage, peripartum infection, severe perineal laceration, and a composite neonatal adverse outcome. Correlations between hospital risk-adjusted outcome frequencies were assessed. Venous thromboembolism occurred too infrequently (0.03%; 95% confidence interval [CI], 0.02-0.04%) for meaningful assessment. The other outcomes occurred frequently enough for assessment: postpartum hemorrhage, 2.29% (95% CI, 2.20-2.38%); peripartum infection, 5.06% (95% CI, 4.93-5.19%); severe perineal laceration at spontaneous vaginal delivery, 2.16% (95% CI, 2.06-2.27%); neonatal composite, 2.73% (95% CI, 2.63-2.84%). Although there was high concordance between unadjusted and adjusted hospital rankings, several individual hospitals had an adjusted rank that was substantially different (by as much as 12 rank tiers) from their unadjusted rank. None of the correlations between hospital-adjusted outcome frequencies was significant. For example, the hospital with the lowest adjusted frequency of peripartum infection had the highest adjusted frequency of severe perineal laceration. Evaluations based on a single risk-adjusted outcome cannot be generalized to overall hospital obstetric performance. Copyright © 2013 Mosby, Inc. All rights reserved.
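The cross-outcome comparison above amounts to correlating hospital rankings across adjusted outcome frequencies. A minimal sketch with Spearman rank correlation follows; the rates are synthetic and independent by construction, standing in for the study's finding that adjusted rankings did not correlate across outcomes.

```python
# Sketch: rank 25 hospitals on two risk-adjusted outcome frequencies and
# measure ranking agreement (synthetic, independent rates).
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(2)
n_hosp = 25
infection_rate = rng.uniform(0.03, 0.07, n_hosp)   # hypothetical adjusted rates
laceration_rate = rng.uniform(0.01, 0.03, n_hosp)  # generated independently

rho, p = spearmanr(infection_rate, laceration_rate)
print(round(float(rho), 2))  # near zero when the two rankings are unrelated
```

A non-significant rho across outcome pairs is exactly the pattern that argues against judging overall obstetric quality from any single risk-adjusted measure.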
Microinjection is a promising tool for microdroplet generation, but it remains challenging owing to the Laplace pressure at the micropipette opening. Here, we apply a simple and robust substrate-contacting microinjection method to microdroplet generation, presenting a size-adjustable microdroplet generation method based on a critical injection (CI) model. First, the micropipette is adjusted to a preset injection pressure. Second, the micropipette is moved down to contact the substrate; the Laplace pressure in the droplet is then no longer relevant and the liquid flows out immediately. The liquid flows out continuously until the micropipette is lifted, ending the substrate contact, which restores the Laplace pressure at the micropipette opening and terminates the injection. We carried out five groups of experiments, capturing 1,600 images within each group and detecting the microdroplet radius in each image. From these data we determined the relationship among microdroplet radius, radius at the micropipette opening, time, and pressure, and conducted two further experiments to verify this relationship. To verify the effectiveness of the substrate-contacting method and the relationship, we conducted two experiments with six desired microdroplet radii in each, adjusting the injection time at a given pressure and adjusting the injection pressure at a given time. Six arrays of microdroplets were obtained in each experiment. The results show that the standard errors of the microdroplet radii are less than 2% and the experimental errors fall within ±5%. The average operating speed is 20 microdroplets/min and the minimum radius of the microdroplets is 25 μm. The method has a simple experimental setup that enables easy manipulation at low cost.
This paper expands the discussion of the importance and function of adjusting entries for loan receivables. Discussion of the cyclical development of adjusting entries, their negative impact on the business cycle and potential solutions intensified during the financial crisis. These discussions are still ongoing and remain relevant to the professional public, banking regulators and representatives of international accounting institutions. The objective of this paper is to evaluate a method of journaling dynamic adjusting entries under current accounting law. It also presents the authors' opinions on the potential for consistently implementing basic accounting principles when journaling adjusting entries for loan receivables under a dynamic model.
Taracena–Sanz L. F.
This research offers a contribution to the implementation of forecasting models. The proposed model is developed with the aim of fitting the projection of demand to the firm's environment, based on three considerations that often cause demand forecasts to diverge from reality: (1) one of the most difficult problems to model in forecasting is the uncertainty related to the information available; (2) the methods traditionally used by firms for demand projection are based mainly on past market behavior (historical demand); and (3) these methods do not consider the factors that drive the observed behavior. The proposed model is therefore based on Fuzzy Logic, integrating the main variables that affect the behavior of market demand and that are not considered in classical statistical methods. The model was applied to a carbonated-beverage bottling company, and the adjusted demand projection yielded a more reliable forecast.
In the frame of the project "LuFo iPort VIS", which focuses on the implementation of a site-specific visibility forecast, a field campaign was organised to provide detailed information to a numerical fog model. As part of the additional observing activities, a 22-channel microwave radiometer profiler (MWRP) was operated at the Munich Airport site in Germany from October 2011 to February 2012 in order to provide vertical temperature and humidity profiles as well as cloud liquid water information. Independently of the model-related aims of the campaign, the MWRP observations were used to study their capability to work in operational meteorological networks. Over the past decade a growing number of MWRPs have been introduced, and a user community (MWRnet) was established to encourage activities directed at the setup of an operational network. On that account, the comparability of observations from different network sites plays a fundamental role for any applications in climatology and numerical weather forecasting. In practice, however, systematic temperature and humidity differences (biases) between MWRP retrievals and co-located radiosonde profiles were observed and reported by several authors. This bias can be caused by instrumental offsets and by the absorption model used in the retrieval algorithms, as well as by applying a non-representative training data set. At the Lindenberg observatory, besides a neural network provided by the manufacturer, a measurement-based regression method was developed to reduce the bias. These regression operators are calculated on the basis of coincident radiosonde observations and MWRP brightness temperature (TB) measurements. However, MWRP applications in a network require comparable results at any site, even if no radiosondes are available. The motivation of this work is directed to a verification of the suitability of the operational local forecast model COSMO-EU of the Deutscher Wetterdienst (DWD) for the calculation
The Relativistic Mean Field (RMF) model, with a small number of adjusted parameters, is a powerful tool for correct predictions of various ground-state properties of nuclei. Its success in describing nuclear properties is directly related to the adjustment of its parameters using experimental data. In the present study, the Artificial Neural Network (ANN) method, which mimics brain functionality, has been employed to improve the RMF model parameters. In particular, the ANN method's ability to capture the relations between the RMF model parameters and their predictions for the binding energies (BEs) of 58Ni and 208Pb has been found to be in agreement with literature values.
The US EPA’s newest tool, the Stormwater Management Model (SWMM) – Climate Adjustment Tool (CAT) is meant to help municipal stormwater utilities better address potential climate change impacts affecting their operations. SWMM, first released in 1971, models hydrology and hydrauli...
Mortensen, Martin B; Afzal, Shoaib; Nordestgaard, Børge G
AIMS: Recent European guidelines recommend including high-density lipoprotein (HDL) cholesterol in risk assessment for primary prevention of cardiovascular disease (CVD), using a SCORE-based risk model (SCORE-HDL). We compared the predictive performance of SCORE-HDL with SCORE in an independent.......8 years of follow-up, 339 individuals died of CVD. In the SCORE target population (age 40-65; n = 30,824), fewer individuals were at baseline categorized as high risk (≥5% 10-year risk of fatal CVD) using SCORE-HDL compared with SCORE (10 vs. 17% in men, 1 vs. 3% in women). SCORE-HDL did not improve...... discrimination of future fatal CVD, compared with SCORE, but decreased the detection rate (sensitivity) of the 5% high-risk threshold from 42 to 26%, yielding a negative net reclassification index (NRI) of -12%. Importantly, using SCORE-HDL, the sensitivity was zero among women. Both SCORE and SCORE...
Objective: In the first year of the post-partum period, parenting stress, mental health, and dyadic adjustment are important for the wellbeing of both parents and the child. However, few studies have analyzed the relationships among these three dimensions. The aim of this study is to investigate the relationships between parenting stress, mental health (depressive and anxiety symptoms), and dyadic adjustment among first-time parents. Method: We studied 268 parents (134 couples) of healthy babies. At 12 months post-partum, both parents filled out, in a counterbalanced order, the Parenting Stress Index-Short Form, the Edinburgh Post-natal Depression Scale, the State-Trait Anxiety Inventory, and the Dyadic Adjustment Scale. Structural equation modeling was used to analyze the potential mediating effects of mental health on the relationship between parenting stress and dyadic adjustment. Results: Results showed a full mediation effect of mental health between parenting stress and dyadic adjustment. A multi-group analysis further found that the paths did not differ between mothers and fathers. Discussion: The results suggest that mental health is an important dimension that mediates the relationship between parenting stress and dyadic adjustment in the transition to parenthood.
We consider a price adjustment process in a model of monopolistic competition. Firms have incomplete information about the demand structure. When they set a price they observe the amount they can sell at that price and they observe the slope of the true demand curve at that price. With this
This presentation will discuss the updated Inclination Adjust Maneuver (IAM) performance prediction model that was developed for Aqua and Aura following the 2017 IAM series. This updated model uses statistical regression methods to identify potential long-term trends in maneuver parameters, yielding improved predictions when re-planning past maneuvers. The presentation has been reviewed and approved by Eric Moyer, ESMO Deputy Project Manager.
Simon, K.M.; James, T. S.; Henton, J. A.; Dyke, A. S.
The thickness and equivalent global sea level contribution of an improved model of the central and northern Laurentide Ice Sheet is constrained by 24 relative sea level histories and 18 present-day GPS-measured vertical land motion rates. The final model, termed Laur16, is derived from the ICE-5G
Salwen, Jessica K; O'Leary, K Daniel
Four hundred and fifty-three married or cohabiting couples participated in the current study. A mediational model of men's perpetration of sexual coercion within an intimate relationship was examined based on past theories and known correlates of rape and sexual coercion. The latent constructs of adjustment problems and maladaptive relational style were examined. Adjustment problem variables included perceived stress, perceived low social support, and marital discord. Maladaptive relational style variables included psychological aggression, dominance, and jealousy. Sexual coercion was a combined measure of men's reported perpetration and women's reported victimization. As hypothesized, adjustment problems significantly predicted sexual coercion. Within the mediational model, adjustment problems were significantly correlated with maladaptive relational style, and maladaptive relational style significantly predicted sexual coercion. Once maladaptive relational style was introduced as a mediator, adjustment problems no longer significantly predicted sexual coercion. Implications for treatment, limitations, and future research are discussed.
Chiu, Chung-Cheng; Ting, Chih-Chung
Image enhancement methods have been widely used to improve the visual effects of images. Owing to its simplicity and effectiveness, histogram equalization (HE) is one of the methods used for enhancing image contrast. However, HE may cause over-enhancement and feature loss, leading to an unnatural look and loss of detail in the processed images. Researchers have proposed various HE-based methods to solve the over-enhancement problem; however, they have largely ignored the feature loss problem. Therefore, a contrast enhancement algorithm based on gap adjustment for histogram equalization (CegaHE) is proposed. It builds on a visual contrast enhancement algorithm based on histogram equalization (VCEA), which generates visually pleasing enhanced images, and improves on VCEA's enhancement effects. CegaHE adjusts the gaps between two gray values based on an adjustment equation that takes the properties of human visual perception into consideration, solving the over-enhancement problem. In addition, it alleviates the feature loss problem and further enhances textures in the dark regions of images, improving the quality of the processed images for human visual perception. Experimental results demonstrate that CegaHE is a reliable method for contrast enhancement and that it significantly outperforms VCEA and other methods.
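For context, the baseline that gap-adjustment methods refine is plain global histogram equalization: map each gray level through the normalized cumulative histogram. The sketch below implements standard HE only, not the CegaHE gap-adjustment equation itself.

```python
# Baseline global histogram equalization in NumPy (standard HE, not CegaHE).
import numpy as np

def equalize(img):
    """Map 8-bit gray levels through the normalized cumulative histogram."""
    hist = np.bincount(img.ravel(), minlength=256)
    cdf = hist.cumsum().astype(float)
    m = cdf[cdf > 0].min()                       # count at lowest occupied level
    lut = np.clip((cdf - m) / (cdf[-1] - m), 0, 1)
    return np.round(255 * lut).astype(np.uint8)[img]

# A low-contrast test image concentrated around mid-gray.
img = np.clip(np.random.default_rng(3).normal(60, 15, (64, 64)), 0, 255).astype(np.uint8)
out = equalize(img)
print(int(out.max()))  # 255: the occupied range is stretched to full scale
```

HE's tendency to stretch gaps between adjacent occupied gray levels indiscriminately is precisely what produces the over-enhancement that the gap-adjustment equation is designed to limit.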
Patricia L. Andrews
Rothermel's surface fire spread model was developed to use a value for the wind speed that affects surface fire, called midflame wind speed. Models have been developed to adjust 20-ft wind speed to midflame wind speed for sheltered and unsheltered surface fuel. In this report, Wind Adjustment Factor (WAF) model equations are given, and the BehavePlus fire modeling...
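The adjustment itself is a multiplication: midflame wind speed equals the 20-ft wind speed times a wind adjustment factor that depends on how sheltered the surface fuel is. The WAF values below are illustrative placeholders, not the report's published WAF equations.

```python
# Sketch of applying a wind adjustment factor (illustrative WAF values only).
def midflame_wind(wind_20ft, waf):
    """midflame wind speed = WAF * 20-ft wind speed, with WAF in (0, 1]."""
    if not 0.0 < waf <= 1.0:
        raise ValueError("WAF must be in (0, 1]")
    return waf * wind_20ft

# Unsheltered fuel vs. fuel sheltered under a canopy (hypothetical factors).
print(midflame_wind(10.0, 0.4), midflame_wind(10.0, 0.1))  # 4.0 1.0
```

The report's contribution is the set of equations that produce the WAF itself from fuel depth and canopy sheltering, which the fixed factors above merely stand in for.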
With the aim of developing multiple-input multiple-output (MIMO) coupling systems with a redundant parallel adjustment mechanism on the deployable antenna panel, a structural-control integrated design methodology is proposed in this paper. First, the modal information is extracted from the finite element model of the antenna panel structure, and the mathematical model is established using Hamilton's principle. Second, a discrete Linear Quadratic Regulator (LQR) controller is added to the model in order to drive the actuators and adjust the shape of the panel. Finally, the engineering practicality of the modeling and control method is verified through simulation based on finite element analysis.
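The discrete LQR step described above can be sketched in a few lines: solve the discrete algebraic Riccati equation and form the state-feedback gain. The 2-state system matrices below are toy placeholders, not the antenna panel's finite-element model.

```python
# Minimal discrete LQR sketch (illustrative 2-state system, one actuator).
import numpy as np
from scipy.linalg import solve_discrete_are

A = np.array([[1.0, 0.1], [0.0, 1.0]])   # toy discrete-time dynamics
B = np.array([[0.0], [0.1]])             # single actuator input
Q = np.eye(2)                            # state deviation weighting
R = np.array([[1.0]])                    # control effort weighting

P = solve_discrete_are(A, B, Q, R)                   # Riccati solution
K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)    # feedback gain, u = -K x

# LQR guarantees closed-loop poles inside the unit circle for this system.
poles = np.linalg.eigvals(A - B @ K)
print(bool(np.all(np.abs(poles) < 1.0)))  # True
```

In the paper's setting the state vector comes from the retained structural modes, so the same Riccati machinery applies with larger A and B matrices.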
Willi, Martin L [Dunlap, IL; Fiveland, Scott B [Metamora, IL; Montgomery, David T [Edelstein, IL; Gong, Weidong [Dunlap, IL
A control system for an engine having a cylinder is disclosed having an engine valve configured to affect a fluid flow of the cylinder, an actuator configured to move the engine valve, and an in-cylinder sensor configured to generate a signal indicative of a characteristic of fuel entering the cylinder. The control system also has a controller in communication with the actuator and the sensor. The controller is configured to determine the characteristic of the fuel based on the signal and selectively regulate the actuator to adjust a timing of the engine valve based on the characteristic of the fuel.
Mao, Xinhua; He, Qing; Li, Hong; Chu, Dongliang
Micro piezoelectric vibration generators have wide application in the field of microelectronics. Their natural frequency is fixed once manufactured. However, resonance cannot occur when the natural frequency of the piezoelectric generator and the frequency of the vibration source do not match; the output voltage of the generator then declines sharply, and it cannot reliably supply power to electronic devices. In order to bring the natural frequency of the generator close to the frequency of the vibration source, the capacitance FM technology is adopted in this paper. Different capacitance FM schemes are designed by varying the location of the adjustment layer, and the corresponding capacitance FM models are established. The characteristics and effect of the capacitance FM have been simulated with the FM model. Experimental results show that the natural frequency of the generator varies from 46.5 Hz to 42.4 Hz as the bypass capacitance increases from 0 nF to 30 nF. The natural frequency of a piezoelectric vibration generator can thus be continuously adjusted by this method.
Chavoshi, Saeid; Wintre, Maxine Gallander; Dentakos, Stella; Wright, Lorna
The current study proposes a Developmental Sequence Model of University Adjustment and uses a multifaceted measure, including academic, social and psychological adjustment, to examine factors predictive of undergraduate international student adjustment. A hierarchical regression model is applied to the Student Adaptation to College Questionnaire to examine theoretically pertinent predictors, arranged in a developmental sequence, in determining adjustment outcomes. This model...
Deng, Yan; Zhang, Kun; Shen, Xiaoqin; Zhang, Huiyun
Precise parallax detection through definition evaluation, together with adjustment of the assembly position of the objective lens or the reticle, is an important means of eliminating the parallax of a telescope system, so that the imaging screen and the reticle are clearly focused at the same time. An adaptive definition evaluation function based on Susan-Zernike moments is proposed. First, the image is preprocessed by the Susan operator to find the potential boundary edges. Then, the Zernike moments operator is used to determine the exact region of the reticle line with sub-pixel accuracy. The image definition is evaluated only in this region. The evaluation function consists of the gradient difference calculated by the Zernike moments operator. By adjusting the assembly position of the objective lens, the imaging screen and the reticle are brought simultaneously into the state of maximum definition, so the parallax can be eliminated. The experimental results show that the definition evaluation function proposed in this paper offers good focusing performance and stronger anti-interference ability than other commonly used definition evaluation functions.
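For intuition about definition (focus) evaluation, a much simpler gradient-based measure is sketched below: sharper images yield larger mean gradient energy, so focus is found by maximizing the score while adjusting the lens position. This is a generic Tenengrad-style measure, not the paper's Susan-Zernike operator.

```python
# Generic gradient-energy focus measure (not the Susan-Zernike method).
import numpy as np

def focus_measure(img):
    """Mean squared gradient magnitude: larger means sharper."""
    gx = np.diff(img.astype(float), axis=1)[:-1, :]
    gy = np.diff(img.astype(float), axis=0)[:, :-1]
    return float(np.mean(gx**2 + gy**2))

rng = np.random.default_rng(5)
sharp = rng.integers(0, 256, (64, 64)).astype(float)
# Simple 2x2 averaging as a stand-in for defocus blur.
blurred = (sharp + np.roll(sharp, 1, 0) + np.roll(sharp, 1, 1)
           + np.roll(sharp, (1, 1), (0, 1))) / 4
print(focus_measure(sharp) > focus_measure(blurred))  # True: blur lowers the score
```

The paper's refinement is to restrict such an evaluation to the sub-pixel reticle-line region found by the Susan and Zernike operators, which makes the score robust to interference elsewhere in the field.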
Medicare and Medicaid Programs; CY 2018 Home Health Prospective Payment System Rate Update and CY 2019 Case-Mix Adjustment Methodology Refinements; Home Health Value-Based Purchasing Model; and Home Health Quality Reporting Requirements. Final rule.
This final rule updates the home health prospective payment system (HH PPS) payment rates, including the national, standardized 60-day episode payment rates, the national per-visit rates, and the non-routine medical supply (NRS) conversion factor, effective for home health episodes of care ending on or after January 1, 2018. This rule also: Updates the HH PPS case-mix weights using the most current, complete data available at the time of rulemaking; implements the third year of a 3-year phase-in of a reduction to the national, standardized 60-day episode payment to account for estimated case-mix growth unrelated to increases in patient acuity (that is, nominal case-mix growth) between calendar year (CY) 2012 and CY 2014; and discusses our efforts to monitor the potential impacts of the rebasing adjustments that were implemented in CY 2014 through CY 2017. In addition, this rule finalizes changes to the Home Health Value-Based Purchasing (HHVBP) Model and to the Home Health Quality Reporting Program (HH QRP). We are not finalizing the implementation of the Home Health Groupings Model (HHGM) in this final rule.
Background: Volunteering participants in disease studies tend to be healthier than the general population, partially due to specific enrollment criteria. Using modeling to accurately predict outcomes of cohort studies enrolling volunteers requires adjusting for the bias introduced in this way. Here we propose a new method to account for the effect on disease-specific mortality of a specific form of healthy volunteer bias, resulting from imposing disease status-related eligibility criteria, by explicitly modeling the length of the time interval between the moment when the subject becomes ineligible for the study and the outcome. Methods: Using survival time data from 1,190 newly diagnosed lung cancer patients at MD Anderson Cancer Center, we model the time from clinical lung cancer diagnosis to death using an exponential distribution to approximate the length of this interval for a study where lung cancer death serves as the outcome. Incorporating this interval into our previously developed lung cancer risk model, we adjust for the effect of disease status-related eligibility criteria in predicting the number of lung cancer deaths in the control arm of CARET. The effect of the adjustment using the MD Anderson-derived approximation is compared to that based on SEER data. Results: Using the adjustment developed in conjunction with our existing lung cancer model, we are able to accurately predict the number of lung cancer deaths observed in the control arm of CARET. Conclusions: The resulting adjustment was accurate in predicting the lower rates of disease observed in the early years while still maintaining reasonable prediction ability in the later years of the trial. This method could be used to adjust for, or predict, the duration and relative effect of any possible biases related to disease-specific eligibility criteria in modeling studies of volunteer-based cohorts.
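The core of the adjustment, approximating the diagnosis-to-death interval with an exponential distribution and shifting predicted deaths by it, can be sketched as follows. The mean interval below is an illustrative placeholder, not the MD Anderson estimate.

```python
# Sketch: exponential diagnosis-to-death interval shifts early-year deaths.
import numpy as np

rng = np.random.default_rng(4)
mean_interval_years = 1.5   # assumed mean diagnosis-to-death time (placeholder)
intervals = rng.exponential(mean_interval_years, size=100_000)

# Fraction of deaths pushed beyond the first k years of follow-up: because
# enrollees were disease-free at baseline, few deaths occur early on.
for k in (1, 3, 5):
    print(k, round(float(np.mean(intervals > k)), 3))
```

The empirical fractions track the exponential survival function exp(-k/mean), which is what lets the adjusted model reproduce the low early-year death rates seen in CARET's control arm.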
E. V. Zelentsova
The paper deals with matters related to the adjustment training of specialists with specified competences, in order to solve the specific staffing problems of enterprises, provide scheduled staff re-training, suggest additional specializations to graduates of educational institutions, etc. It defines the structure and content of the elements of a management system for the training and adjustment training of specialists, including a shift from activity-related competences to the desired areas of knowledge, and from there to curricula and programmes. A concept of the knowledge subject is defined, which allows a specific subject area to be perceived. A concept of the aspects of a certain knowledge subject is proposed; here it is assumed that the knowledge subject can be represented as a finite set of aspects of the given subject of knowledge. A concept of the knowledge section is defined as a body of knowledge reflecting some aspect of a particular subject of knowledge. Thus, the entire body of knowledge offered by a particular department may be represented by a set of specific sections of knowledge. For each section of knowledge, a concept of the training level is introduced, corresponding to the level of detail and quality of the educational material delivered. To assess the level of training, the method of expert estimates can be used. The paper suggests a scheme that allows the realization of an order for the targeted training of specialists. Requirements for training specialists are represented as a competency model of the specialist; an implemented study programme is characterized by the competences obtained as learning outcomes. The content of knowledge corresponding to a given competence is represented as the totality of all sections of knowledge related to the aspects generated by that competence. The competence can then be revealed through this body of knowledge. As a result, both an order for adjustment training
Warm and emotionally close relationships with parents and grandparents have been found in previous studies to be linked with better adolescent adjustment. The present study, informed by Family Systems Theory and Intergenerational Solidarity Theory, uses a moderated mediation model analyzing the contribution of the dynamics of these intergenerational relationships to adolescent adjustment. Specifically, it examines the mediating role of emotional closeness to the closest grandparent in the relationship between emotional closeness to a parent (the offspring of the closest grandparent) and adolescent adjustment difficulties. The model also examines the moderating role of emotional closeness to parents in the relationship between emotional closeness to grandparents and adjustment difficulties. The study was based on a sample of 1,405 Jewish Israeli secondary school students (ages 12-18) who completed a structured questionnaire. It was found that emotional closeness to the closest grandparent was more strongly associated with reduced adjustment difficulties among adolescents with higher levels of emotional closeness to their parents. In addition, the association between emotional closeness to parents and adolescent adjustment was partially mediated by emotional closeness to grandparents. Examining the family conditions under which adolescents' relationships with grandparents are stronger and more beneficial for them can help elucidate variations in grandparent-grandchild ties and expand our understanding of the mechanisms that shape child outcomes.
Kim, Hye Young; So, Hyang Sook
This study was done to propose a structural model to explain and predict psychosocial adjustment in patients with early breast cancer and to test the model. The model was based on the Stress-Coping Model of Lazarus and Folkman (1984). Data were collected from February 18 to March 18, 2009. For data analysis, 198 data sets were analyzed using SPSS/WIN 12 and AMOS 7.0. Social support, uncertainty, symptom experience, and coping had statistically significant direct, indirect, and total effects on psychosocial adjustment, and optimism had significant indirect and total effects on psychosocial adjustment. These variables explained 57% of the total variance of psychosocial adjustment in patients with early breast cancer. The results of the study indicate a need to enhance the psychosocial adjustment of patients with early breast cancer by providing detailed structured information and various symptom alleviation programs to reduce perceived stresses such as uncertainty and symptom experience. They also suggest the need to establish support systems through the participation of medical personnel and families in such programs, and to apply interventions strengthening coping methods to give the patients positive and optimistic beliefs.
The process of super-resolution image reconstruction takes multiple observations of the same target to obtain low-resolution images, which are then used to reconstruct the real image of the target, namely the high-resolution image. This process is similar to that in the field of surveying and mapping, in which the same target is observed repeatedly and the optimal values are calculated with surveying adjustment methods. In this paper, the method of surveying adjustment is applied to super-resolution image reconstruction. An integral nonlinear adjustment model for super-resolution image reconstruction is proposed first. The model is then parameterized with a quadratic function. Finally, the model is solved with the least squares adjustment method. Based on the proposed adjustment method, a specific strategy for image reconstruction is presented. This method allows quantitative analysis of the results and successfully avoids ill-conditioning problems, etc. The results show that, compared to the traditional method of super-resolution image reconstruction, this method greatly improves the visual effects, and the PSNR and SSIM are also greatly improved, so the method is reliable and feasible.
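The surveying-adjustment idea at the heart of this approach can be shown in miniature: repeated noisy observations of the same quantity are combined by a linear least-squares adjustment. This is a sketch only; the paper's actual model is nonlinear and parameterized with a quadratic function, and the data below are made up.

```python
import numpy as np

# Repeated observations of one unknown: the design matrix A is a
# column of ones, so the least-squares adjustment reduces to the mean.
rng = np.random.default_rng(1)
true_value = 5.0
b = true_value + rng.normal(0.0, 0.1, size=50)  # 50 noisy observations
A = np.ones((50, 1))

# Least-squares solution of A x = b: x = (A^T A)^{-1} A^T b
x_hat, *_ = np.linalg.lstsq(A, b, rcond=None)
print(x_hat[0])
```

In the super-resolution setting the observation equations instead relate low-resolution pixels to the unknown high-resolution image, but the least-squares machinery is the same.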
Zimmerman, Tammy M.; Breen, Kevin J.
Pesticide concentration data for waters from selected carbonate-rock aquifers in agricultural areas of Pennsylvania were collected in 1993–2009 for occurrence and distribution assessments. A set of 30 wells was visited once in 1993–1995 and again in 2008–2009 to assess concentration changes. The data include censored matched pairs (nondetections of a compound in one or both samples of a pair). A potentially improved approach for assessing concentration changes is presented where (i) concentrations are adjusted with models of matrix-spike recovery and (ii) area-wide temporal change is tested by use of the paired Prentice-Wilcoxon (PPW) statistical test. The PPW results for atrazine, simazine, metolachlor, prometon, and an atrazine degradate, deethylatrazine (DEA), are compared using recovery-adjusted and unadjusted concentrations. Results for adjusted compared with unadjusted concentrations in 2008–2009 compared with 1993–1995 were similar for atrazine and simazine (significant decrease; 95% confidence level) and metolachlor (no change) but differed for DEA (adjusted, decrease; unadjusted, increase) and prometon (adjusted, decrease; unadjusted, no change). The PPW results were different on recovery-adjusted compared with unadjusted concentrations. Not accounting for variability in recovery can mask a true change, misidentify a change when no true change exists, or assign a direction opposite of the true change in concentration that resulted from matrix influences on extraction and laboratory method performance. However, matrix-based models of recovery derived from a laboratory performance dataset from multiple studies for national assessment, as used herein, rather than time- and study-specific recoveries may introduce uncertainty in recovery adjustments for individual samples that should be considered in assessing change.
Ion Gh. Rosca
The analysis part of the model is based on the study of the equilibrium in the continuous case, with some interpretations of the discrete one, using the state diagram. An optimization problem at the economic level is also used; it is built from a specified number of representative consumers and firms in order to reveal the interaction between these elements.
Cummings, E Mark; Merrilees, Christine E; Schermerhorn, Alice C; Goeke-Morey, Marcie C; Shirlow, Peter; Cairns, Ed
Relations between political violence and child adjustment are matters of international concern. Past research demonstrates the significance of community, family, and child psychological processes in child adjustment, supporting study of interrelations between multiple social ecological factors and child adjustment in contexts of political violence. Testing a social ecological model, 300 mothers and their children (M = 12.28 years, SD = 1.77) from Catholic and Protestant working class neighborhoods in Belfast, Northern Ireland, completed measures of community discord, family relations, and children's regulatory processes (i.e., emotional security) and outcomes. Historical political violence in neighborhoods, based on objective records (i.e., politically motivated deaths), was related to family members' reports of current sectarian antisocial behavior and nonsectarian antisocial behavior. Interparental conflict and parental monitoring and children's emotional security about both the community and family contributed to explanatory pathways for relations between sectarian antisocial behavior in communities and children's adjustment problems. The discussion evaluates support for social ecological models for relations between political violence and child adjustment and its implications for understanding relations in other parts of the world.
Chavoshi, Saeid; Wintre, Maxine Gallander; Dentakos, Stella; Wright, Lorna
The current study proposes a Developmental Sequence Model to University Adjustment and uses a multifaceted measure, including academic, social, and psychological adjustment, to examine factors predictive of undergraduate international student adjustment. A hierarchical regression model is carried out on the Student Adaptation to College Questionnaire…
Peek, Lori; Morrissey, Bridget; Marlatt, Holly
The authors explored individual and family adjustment processes among parents (n = 30) and children (n = 55) who were displaced to Colorado after Hurricane Katrina. Drawing on in-depth interviews with 23 families, this article offers an inductive model of displaced family adjustment. Four stages of family adjustment are presented in the model: (a)…
Yong-jun Chen; Ji-an Yu; Lei-shan Zhou; Qing Tao
Automatically compiling a train operation adjustment (TOA) plan by computer to ensure the safe, fast, and punctual running of trains is a crucial and difficult problem in railway transportation dispatching. Based on the proposed model of TOA under the conditions of a railway network (RN), we take the minimum travel time of trains as the objective function of the optimization, and after a fast preliminary evaluation calculation, we introduce the theory and method of ordinal optimization (OO) to so...
Qu, Yongming; Luo, Junxiang
Generalized linear models are commonly used to analyze categorical data such as binary, count, and ordinal outcomes. Adjusting for important prognostic factors or baseline covariates in generalized linear models may improve the estimation efficiency. The model-based mean for a treatment group produced by most software packages estimates the response at the mean covariate, not the mean response for this treatment group for the studied population. Although this is not an issue for linear models, the model-based group mean estimates in generalized linear models could be seriously biased for the true group means. We propose a new method to estimate the group mean consistently with the corresponding variance estimation. Simulation showed the proposed method produces an unbiased estimator for the group means and provided the correct coverage probability. The proposed method was applied to analyze hypoglycemia data from clinical trials in diabetes.
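The bias described here follows from Jensen's inequality: with a nonlinear link, the response at the mean covariate is not the mean response. A toy illustration with a logistic model (the coefficients and covariate distribution are invented for the example, not taken from the paper):

```python
import numpy as np

def expit(z):
    """Inverse-logit link."""
    return 1.0 / (1.0 + np.exp(-z))

# Hypothetical logistic model P(Y=1 | x) = expit(b0 + b1*x) for one group.
b0, b1 = -1.0, 2.0
rng = np.random.default_rng(0)
x = rng.normal(0.0, 1.5, size=10_000)  # baseline covariate values

# "Model-based" group mean reported by many packages:
# the response evaluated at the mean covariate.
mean_at_mean_x = expit(b0 + b1 * x.mean())

# Consistent group mean: average the predicted responses
# over the observed covariate distribution.
marginal_mean = expit(b0 + b1 * x).mean()

print(mean_at_mean_x, marginal_mean)  # the two estimates clearly differ
```

The second quantity is the population-averaged mean the paper targets; the gap between the two grows with the covariate variance.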
Wang, Zhixun; Xie, Zhicheng; Liu, Chang
resistances is designed. The mathematical model for global optimal switching of CBDs is established by a field-circuit coupling method with the equivalent resistance network of the ac system, along with the locations of substations and ground electrodes. The optimal switching scheme to minimize the global maximum dc current is obtained by a gravitational search algorithm. Based on the aforementioned work, we propose a suppression strategy considering the electro-corrosion of metal pipelines. The effectiveness and superiority of the suppression methods are verified by comparative case studies of the Yichang power grid.
ICs, to design a user-friendly charger circuit that manually adjusts the voltage depending on the voltage of the battery (6 V, 9 V, and 12 V). Keywords: burst charge, depth of discharge (DOD), pulse charge, state of charge (SOC), trickle charge, duty cycle.
Vol. 26, No. 5 (2002), pp. 837-850, ISSN 0165-1889. R&D Projects: GA AV ČR KSK9058117. Institutional research plan: CEZ:AV0Z7085904. Keywords: adjustment costs; capital mobility; convergence; human capital. Subject RIV: AH - Economics. Impact factor: 0.738, year: 2002
This paper evaluates the use of precipitation forecasts from a numerical weather prediction (NWP) model for near-real-time satellite precipitation adjustment, based on 81 flood-inducing heavy precipitation events in seven mountainous regions over the conterminous United States. The study is facilitated by the National Center for Atmospheric Research (NCAR) real-time ensemble forecasts (called "model"), the Integrated Multi-satellitE Retrievals for GPM (IMERG) near-real-time precipitation product (called "raw IMERG"), and the Stage IV multi-radar/multi-sensor precipitation product (called "Stage IV"), used as a reference. We evaluated four precipitation datasets (the model forecasts, raw IMERG, gauge-adjusted IMERG, and model-adjusted IMERG) through comparisons against Stage IV at six-hourly and event-length scales. The raw IMERG product consistently underestimated heavy precipitation in all study regions, while the domain-average rainfall magnitudes exhibited by the model were fairly accurate. The model exhibited errors in the locations of intense precipitation over inland regions, however, while the IMERG product generally showed correct spatial precipitation patterns. Overall, the model-adjusted IMERG product performed best over inland regions by taking advantage of the more accurate rainfall magnitudes from NWP and the spatial distribution from IMERG. In coastal regions, although model-based adjustment effectively improved the performance of the raw IMERG product, the model forecast performed even better. The IMERG product could benefit from gauge-based adjustment as well, but the improvement from model-based adjustment was consistently more significant.
Yue, Xijuan; Zhao, Yinghui; Han, Chunming; Dou, Changyong
High-precision surface elevation information over large areas can be obtained efficiently by airborne Interferometric Synthetic Aperture Radar (InSAR) systems, which are becoming an important tool for acquiring remote sensing data and performing mapping applications in areas where surveying and mapping are difficult to accomplish by spaceborne satellites or field work. Based on a study of the three-dimensional (3D) positioning model using interferogram phase and Position and Orientation System (POS) data, and of the block adjustment error model, a block adjustment method to produce a seamless wide-area mosaic product from airborne InSAR data is proposed in this paper. The effect of six parameters, including the trajectory and attitude of the aircraft, baseline length and inclination angle, slant range, and interferometric phase, on the 3D positioning accuracy is quantitatively analyzed. Using data acquired in a field campaign conducted in Mianyang county, Sichuan province, China in June 2011, a seamless mosaic Digital Elevation Model (DEM) product was generated from 76 images in 4 flight strips by the proposed block adjustment model. The residuals of ground control points (GCPs), the absolute positioning accuracy of check points (CPs), and the relative positioning accuracy of tie points (TPs) in both the same and adjacent strips were assessed. The experimental results suggest that the DEM and Digital Orthophoto Map (DOM) products generated from the airborne InSAR data with sparse GCPs can meet mapping accuracy requirements at a scale of 1:10,000.
Albadr, Musatafa Abbas Abbood; Tiun, Sabrina; Al-Dhief, Fahad Taha; Sammour, Mahmoud A M
Spoken Language Identification (LID) is the process of determining and classifying natural language from given content and a dataset. Typically, data must be processed to extract useful features to perform LID. According to the literature, feature extraction for LID is a mature process: standard features have already been developed using Mel-Frequency Cepstral Coefficients (MFCC), Shifted Delta Cepstral (SDC), the Gaussian Mixture Model (GMM), and, most recently, the i-vector based framework. However, the process of learning from the extracted features remains to be improved (i.e. optimised) to capture all the knowledge embedded in them. The Extreme Learning Machine (ELM) is an effective learning model used to perform classification and regression analysis and is extremely useful for training a single-hidden-layer neural network. Nevertheless, the learning process of this model is not entirely effective (i.e. optimised) due to the random selection of weights within the input-to-hidden layer. In this study, the ELM is selected as the learning model for LID based on standard feature extraction. One optimisation approach for the ELM, the Self-Adjusting Extreme Learning Machine (SA-ELM), is selected as the benchmark and improved by altering the selection phase of the optimisation process. The selection process is performed by incorporating both the Split-Ratio and K-Tournament methods; the improved SA-ELM is named the Enhanced Self-Adjusting Extreme Learning Machine (ESA-ELM). Results are generated for LID with datasets created from eight different languages and show the clear superiority of the Enhanced Self-Adjusting Extreme Learning Machine LID (ESA-ELM LID) over the SA-ELM LID, with ESA-ELM LID achieving an accuracy of 96.25%, compared to 95.00% for SA-ELM LID.
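For readers unfamiliar with the base learner, a minimal ELM can be sketched in a few lines: input weights are drawn at random and never trained, and the output weights are obtained in closed form with a pseudo-inverse. This is the plain ELM only; the SA-ELM/ESA-ELM optimisation of the hidden weights is not reproduced here, and the toy data are invented.

```python
import numpy as np

rng = np.random.default_rng(42)

def elm_train(X, Y, n_hidden=64):
    """Train a single-hidden-layer ELM; returns (W, b, beta)."""
    W = rng.normal(size=(X.shape[1], n_hidden))  # random input weights, never updated
    b = rng.normal(size=n_hidden)                # random hidden biases
    H = np.tanh(X @ W + b)                       # hidden-layer activations
    beta = np.linalg.pinv(H) @ Y                 # least-squares output weights
    return W, b, beta

def elm_predict(X, W, b, beta):
    return np.tanh(X @ W + b) @ beta

# Toy 2-class problem: points labelled by which side of a line they fall on.
X = rng.normal(size=(400, 2))
Y = (X[:, 0] + X[:, 1] > 0).astype(float).reshape(-1, 1)
W, b, beta = elm_train(X[:300], Y[:300])
pred = (elm_predict(X[300:], W, b, beta) > 0.5).astype(float)
accuracy = (pred == Y[300:]).mean()
print(accuracy)
```

The closed-form solve is what makes ELM training fast; SA-ELM-style methods then search over the random hidden-layer weights that plain ELM leaves untouched.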
This study describes a new approach based on a fuzzy algorithm to suppress the current harmonic content in the output of an inverter. Inverter systems using fuzzy controllers provide ride-through capability during voltage sags, reduce harmonics, improve power factor, and offer high reliability, less electromagnetic interference noise, low common-mode noise, and an extended output voltage range. A feasibility test is implemented by building a model of a three-phase impedance source inverter, which is designed and controlled on the basis of the proposed considerations. It is verified from the practical point of view that these new approaches are more effective and acceptable for minimizing harmonic distortion and improving power quality. Due to the complexity of the algorithm, its realization often calls for a compromise between cost and performance. The proposed optimizing strategies may be applied in variable-frequency dc-ac inverters, UPSs, and ac drives.
Fang Zheng; Qiu Guanzhou
A metallic solution model with an adjustable parameter k has been developed to predict the thermodynamic properties of ternary systems from those of their three constituent binaries. In the present model, the excess Gibbs free energy for a ternary mixture is expressed as a weighted probability sum of those of the binaries, and the k value is determined based on the assumption that ternary interaction generally strengthens the mixing effects for metallic solutions with weak interaction, making the Gibbs free energy of mixing of the ternary system more negative than before the interaction is considered. This point is not considered in currently reported models, which differ only in the geometrical definition of the molar values of components, a choice that does not involve thermodynamic principles but is completely empirical. The current model describes experimental results very well and, by adjusting the k value, also agrees with models widely used in the literature. Three ternary systems, Mg-Cu-Ni, Zn-In-Cd, and Cd-Bi-Pb, are recalculated to demonstrate the method of determining k and the precision of the model. The results of the calculations, especially those for the Mg-Cu-Ni system, are better than those predicted by the current models in the literature.
Gwinn, Allen Fort, Jr.
Even though there have been many advances in research related to methods of updating finite element models based on measured normal mode vibration characteristics, there is yet to be a widely accepted method that works reliably with a wide range of problems. This dissertation focuses on the specific class of problems having to do with changes in stiffness near the clamped boundary of plate structures. This class of problems is especially important as it relates to the performance of turbine engine blades, where a change in stiffness at the base of the blade can be indicative of structural damage. The method that is presented herein is a new technique for resolving the differences between the physical structure and the finite element model. It is a semi-iterative technique that incorporates a "physical expansion" of the measured eigenvectors along with appropriate scaling of these expanded eigenvectors into an iterative loop that uses the Engel's model modification method to then calculate adjusted stiffness parameters for the finite element model. Three example problems are presented that use eigenvalues and mass normalized eigenvectors that have been calculated from experimentally obtained accelerometer readings. The test articles that were used were all thin plates with one edge fully clamped. They each had a cantilevered length of 8.5 inches and a width of 4 inches. The three plates differed from one another in thickness from 0.100 inches to 0.188 inches. These dimensions were selected in order to approximate a gas turbine engine blade. The semi-iterative modification technique is shown to do an excellent job of calculating the necessary adjustments to the finite element model so that the analytically determined eigenvalues and eigenvectors for the adjusted model match the corresponding values from the experimental data with good agreement. Furthermore, the semi-iterative method is quite robust. For the examples presented here, the method consistently converged
Shankar, Shrikanth; Karypis, George
... Similarity-based categorization algorithms such as k-nearest neighbor, generalized instance set, and centroid-based classification have been shown to be very effective in document categorization...
Brix, H.; Menemenlis, D.; Hill, C.; Dutkiewicz, S.; Jahn, O.; Wang, D.; Bowman, K.; Zhang, H.
The NASA Carbon Monitoring System (CMS) Flux Project aims to attribute changes in the atmospheric accumulation of carbon dioxide to spatially resolved fluxes by utilizing the full suite of NASA data, models, and assimilation capabilities. For the oceanic part of this project, we introduce ECCO2-Darwin, a new ocean biogeochemistry general circulation model based on combining the following pre-existing components: (i) a full-depth, eddying, global-ocean configuration of the Massachusetts Institute of Technology general circulation model (MITgcm), (ii) an adjoint-method-based estimate of ocean circulation from the Estimating the Circulation and Climate of the Ocean, Phase II (ECCO2) project, (iii) the MIT ecosystem model "Darwin", and (iv) a marine carbon chemistry model. Air-sea gas exchange coefficients and initial conditions of dissolved inorganic carbon, alkalinity, and oxygen are adjusted using a Green's Functions approach in order to optimize modeled air-sea CO2 fluxes. Data constraints include observations of carbon dioxide partial pressure (pCO2) for 2009-2010, global air-sea CO2 flux estimates, and the seasonal cycle of the Takahashi et al. (2009) Atlas. The model sensitivity experiments (or Green's Functions) include simulations that start from different initial conditions as well as experiments that perturb air-sea gas exchange parameters and the ratio of particulate inorganic to organic carbon. The Green's Functions approach yields a linear combination of these sensitivity experiments that minimizes model-data differences. The resulting initial conditions and gas exchange coefficients are then used to integrate the ECCO2-Darwin model forward. Despite the small number (six) of control parameters, the adjusted simulation is significantly closer to the data constraints (37% cost function reduction, i.e., reduction in the model-data difference, relative to the baseline simulation) and to independent observations (e.g., alkalinity). The adjusted air-sea gas
National Aeronautics and Space Administration — This proposal is for the development of a computational model of a prototype variable beam light source using optical modeling software, Zemax OpticStudio®. The...
Elizur, Y; Ziv, M
While heterosexist family undermining has been demonstrated to be a developmental risk factor in the life of persons with same-gender orientation, the issue of protective family factors is both controversial and relatively neglected. In this study of Israeli gay males (N = 114), we focused on the interrelations of family support, family acceptance and family knowledge of gay orientation, and gay male identity formation, and their effects on mental health and self-esteem. A path model was proposed based on the hypotheses that family support, family acceptance, family knowledge, and gay identity formation have an impact on psychological adjustment, and that family support has an effect on gay identity formation that is mediated by family acceptance. The assessment of gay identity formation was based on an established stage model that was streamlined for cross-cultural practice by defining three basic processes of same-gender identity formation: self-definition, self-acceptance, and disclosure (Elizur & Mintzer, 2001). The testing of our conceptual path model demonstrated an excellent fit with the data. An alternative model that hypothesized effects of gay male identity on family acceptance and family knowledge did not fit the data. Interpreting these results, we propose that the main effect of family support/acceptance on gay identity is related to the process of disclosure, and that both general family support and family acceptance of same-gender orientation play a significant role in the psychological adjustment of gay men.
Meer, van der P.J.; Jorritsma, I.T.M.; Kramer, K.
The sensitivity of forest development to climate change is assessed using a gap model. Process descriptions in the gap model of growth, phenology, and seed production were adjusted for climate change effects using a detailed process-based growth model and a regression analysis. Simulation runs over
Eiden, Rina Das; And Others
Examined the connection between maternal working models, marital adjustment, and parent-child relationship among 45 mothers and their 16- to 62-month-old children. Found that maternal working models were related to the quality of mother-child interactions and child security, as well as a significant relationship between marital adjustment and…
Ren, Jinzhi; Xiang, Wei; Zhao, Lin; Wu, Jianbo; Huang, Lianzhen; Tu, Qinggang; Zhao, Heming
In order to gradually replace the traditional manual mode of ranch management with an intelligent one, this paper proposes a pasture environment control system based on a cloud server and puts forward a PID control algorithm based on a BP neural network to better control temperature and humidity in the pasture environment. First, by modelling the temperature and humidity (the controlled object) of the pasture, we obtain the transfer function. Then the traditional PID control algorithm and the BP-neural-network-based PID algorithm are applied to the transfer function. From the obtained step tracking curves it can be seen that the PID controller based on a BP neural network has obvious superiority in settling time, error, etc. This algorithm, which calculates reasonable control parameters for temperature and humidity, can be used to advantage in the cloud service platform.
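For context, a conventional discrete PID loop on a first-order plant looks as follows. This is a sketch with invented plant and gain values; the paper's contribution, adapting Kp/Ki/Kd online with a BP neural network, is omitted and the gains are fixed.

```python
# Discrete PID control of a first-order "temperature" plant
# dT/dt = (gain*u - T) / tau, integrated with forward Euler.
dt = 1.0                    # control period (s), assumed
tau, gain = 10.0, 2.0       # hypothetical plant time constant and gain
Kp, Ki, Kd = 2.0, 0.1, 1.0  # fixed gains (a BP network would adapt these)

setpoint, temp = 25.0, 10.0
integral, prev_err = 0.0, setpoint - temp
for _ in range(500):
    err = setpoint - temp
    integral += err * dt
    deriv = (err - prev_err) / dt
    u = Kp * err + Ki * integral + Kd * deriv  # PID control law
    prev_err = err
    temp += dt * (gain * u - temp) / tau       # plant update

print(round(temp, 2))  # settles at the 25-degree setpoint
```

The integral term removes the steady-state offset a pure proportional controller would leave; the BP-tuned variant in the paper adjusts the three gains at each step instead of keeping them fixed.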
In the midst of rapid developments in electronic instruments and remote sensing technologies, airborne three-line array sensors and their applications are being widely promoted, and plentiful research related to data processing and high-precision geo-referencing technologies is under way. The exterior orientation parameters (EOPs), which are measured by the integrated positioning and orientation system (POS) of airborne three-line sensors, have inevitable systematic errors, so direct geo-referencing is not sufficiently accurate for surveying and mapping applications. Consequently, a few ground control points are necessary to refine the exterior orientation parameters, and this paper discusses bundle block adjustment models based on systematic error compensation and the orientation image, considering the principle of the image sensor and the characteristics of the integrated POS. Unlike the models available in the literature, which mainly use a quaternion to represent the rotation matrix of the exterior orientation, three rotation angles are used directly in order to effectively model and eliminate the systematic errors of the POS observations. Very good experimental results have been achieved with several real datasets, verifying the correctness and effectiveness of the proposed adjustment models.
Kollo, Karin; Spada, Giorgio; Vermeer, Martin
Glacial Isostatic Adjustment (GIA) affects the Earth's mantle in areas that were once ice covered, and the process is still ongoing. In this contribution we focus on GIA processes in the Fennoscandian and North American uplift regions. We use horizontal and vertical uplift rates from Global Navigation Satellite System (GNSS) permanent stations: for Fennoscandia the BIFROST dataset (Lidberg, 2010) and for North America the dataset from Sella (2007). We perform GIA modelling with the SELEN program (Spada and Stocchi, 2007) and vary ice model parameters in space in order to find the ice model that best fits the uplift values obtained from GNSS time series analysis. In the GIA modelling, the ice models ICE-5G (Peltier, 2004) and ANU05 ((Fleming and Lambeck, 2004) and references therein) were used. As reference, the velocity field from GNSS permanent station time series was used for both target areas. First, the sensitivity to the harmonic degree was tested in order to reduce the computation time. In this test, nominal viscosity values and pre-defined lithosphere thickness models were used while varying the maximum harmonic degree. The main criterion for choosing a suitable harmonic degree was the chi-square fit: if the error measure does not differ by more than 10%, one might as well use the lower harmonic degree. From this test, a maximum harmonic degree of 72 was chosen for the calculations, as larger values did not significantly modify the results while the computational time was kept reasonable. Second, the GIA computations were performed to find the model that fits the GNSS-based velocity field in the target areas with the highest probability. In order to find the best-fitting Earth viscosity parameters, different viscosity profiles for the Earth models were tested and their impact on the horizontal and vertical velocity rates from the GIA modelling was studied. For every
Tests of the neutral evolution hypothesis are usually built on the standard model, which assumes that mutations are neutral and the population size remains constant over time. However, it is unclear how such tests are affected if the last assumption is dropped. Here, we extend the unifying framework for tests based on the site frequency spectrum, introduced by Achaz and Ferretti, to populations of varying size. Key ingredients are the first two moments of the site frequency spectrum. We show how these moments can be computed analytically if a population has experienced two instantaneous size changes in the past. We apply our method to data from ten human populations gathered in the 1000 Genomes Project, estimate their demographies, and define demography-adjusted versions of Tajima's D, Fay and Wu's H, and Zeng's E. Our results show that demography-adjusted test statistics facilitate the direct comparison between populations and that most of the differences among populations seen in the original unadjusted tests can be explained by their underlying demographies. Upon carrying out whole-genome screens for deviations from neutrality, we identify candidate regions of recent positive selection. We provide track files with values of the adjusted and unadjusted tests for upload to the UCSC genome browser.
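The site-frequency-spectrum ingredients behind statistics such as Tajima's D can be computed directly. The sketch below evaluates Watterson's theta_W and the pairwise theta_pi from a toy unfolded SFS; the variance normalisation of D and the demography adjustment proposed in the paper are omitted, and the SFS counts are invented.

```python
def theta_w(sfs, n):
    """Watterson's estimator from the SFS of a sample of size n."""
    S = sum(sfs)  # number of segregating sites
    a1 = sum(1.0 / i for i in range(1, n))  # harmonic number H_{n-1}
    return S / a1

def theta_pi(sfs, n):
    """Mean pairwise difference estimator from the SFS."""
    weighted = sum(i * (n - i) * xi for i, xi in enumerate(sfs, start=1))
    return weighted / (n * (n - 1) / 2)

# Toy SFS for n = 10 chromosomes: sfs[i-1] counts sites where the
# derived allele appears i times; values are illustrative only.
n = 10
sfs = [12, 6, 4, 3, 2, 2, 1, 1, 1]
tw = theta_w(sfs, n)
tp = theta_pi(sfs, n)
print(round(tw, 3), round(tp, 3))
```

Tajima's D is, up to its variance normalisation, the difference theta_pi - theta_W; under the constant-size neutral model its expectation is near zero, which is exactly the assumption the demography-adjusted versions relax.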
Gao, Feng; Feng, Wei; Dai, Wei-Bing; Yi, Wang-Min; Liu, Guang-Tong; Zheng, Sheng-Yu
In this paper, the design principles of adjusting parallel mechanisms are introduced, covering the mechanical, control, and software subsystems. According to these design principles, the key technologies for a system of adjusting parallel mechanisms are analyzed. Finally, design specifications for a system of adjusting parallel mechanisms are proposed based on the requirements of spacecraft integration; they can be applied to cabin docking, solar array panel docking, and camera docking.
Bastos, Leonardo Soares; Oliveira, Raquel de Vasconcellos Carvalhaes de; Velasque, Luciane de Souza
In the last decades, the use of the epidemiological prevalence ratio (PR) instead of the odds ratio has been debated as a measure of association in cross-sectional studies. This article addresses the main difficulties in the use of statistical models for the calculation of PR: convergence problems, availability of tools and inappropriate assumptions. We implement the direct approach to estimate the PR from binary regression models based on two methods proposed by Wilcosky & Chambless and compare it with different methods. We used three examples and compared the crude and adjusted estimates of PR with the estimates obtained by use of log-binomial regression, Poisson regression and the prevalence odds ratio (POR). PRs obtained from the direct approach resulted in values close to those obtained by log-binomial and Poisson models, while the POR overestimated the PR. The model implemented here showed the following advantages: no numerical instability, an adequate assumed probability distribution, and availability through the R statistical package.
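As a toy illustration of why the POR overestimates the PR when the outcome is common, both crude measures can be computed from a 2x2 cross-sectional table. The function and the numbers are ours, not from the article, whose focus is regression-based adjusted estimates:

```python
def crude_pr_por(exposed_cases, exposed_total, unexposed_cases, unexposed_total):
    """Crude prevalence ratio (PR) and prevalence odds ratio (POR) from a
    2x2 cross-sectional table (illustrative sketch only)."""
    p1 = exposed_cases / exposed_total       # prevalence among exposed
    p0 = unexposed_cases / unexposed_total   # prevalence among unexposed
    pr = p1 / p0
    por = (p1 / (1 - p1)) / (p0 / (1 - p0))
    return pr, por

# With a common outcome (prevalences 0.6 and 0.3), PR = 2.0 but POR = 3.5:
pr, por = crude_pr_por(60, 100, 30, 100)
```

The gap between the two measures shrinks as the outcome becomes rarer, which is why the POR is a reasonable PR approximation only for rare outcomes.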
Goes, David A.B.V. de; Martinez, Aquilino S.; Goncalves, Alessandro da C., E-mail: email@example.com, E-mail: firstname.lastname@example.org, E-mail: email@example.com [Coordenacao de Pos-Graduacao e Pesquisa de Engenharia (COPPE/UFRJ), Rio de Janeiro, RJ (Brazil). Departamento de Engenharia Nuclear
Point reactor kinetics equations are the simplest way to observe the time behavior of neutron production in a nuclear reactor. These equations are derived from the neutron transport equation using an approximation known as Fick's law, leading to a set of first-order differential equations. The main objective of this study is to revise the classic point kinetics equations in order to bring their results closer to the case in which the time variation of the neutron currents is considered. The computational modeling used for the calculations is based on the finite difference method. The results obtained with this model are compared with the reference model, and an empirical adjustment factor is determined that modifies the point reactor kinetics equation to match the real scenario. (author)
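For orientation, a minimal finite-difference (explicit Euler) integration of the classic point kinetics equations with a single delayed-neutron group is sketched below. The one-group simplification and the parameter values are illustrative assumptions, not the study's configuration, which compares against a reference model with time-varying neutron currents:

```python
# One-delayed-group point kinetics:
#   dn/dt = ((rho - beta) / Lambda) * n + lam * c
#   dc/dt = (beta / Lambda) * n - lam * c
# integrated with an explicit finite-difference scheme.
def point_kinetics(rho, beta=0.0065, lam=0.08, Lambda=1e-4, t_end=1.0, dt=1e-5):
    n = 1.0                         # initial normalized neutron density
    c = beta / (lam * Lambda)       # equilibrium precursor concentration
    for _ in range(round(t_end / dt)):
        dn = ((rho - beta) / Lambda) * n + lam * c
        dc = (beta / Lambda) * n - lam * c
        n += dt * dn
        c += dt * dc
    return n
```

At zero reactivity the equilibrium is preserved, while a small positive reactivity produces the expected prompt jump followed by slow delayed-neutron growth.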
Schaefer, Earl S.; Edgerton, Marianna
The goal of this study is to broaden the scope of a conceptual model for child behavior by analyzing constructs relevant to cognition, conation, and affect. Two samples were drawn from school populations. For the first sample, 28 teachers from 8 rural, suburban, and urban schools rated 193 kindergarten children. Each teacher rated up to eight…
Ali P. Yunus
Sea-level rise (SLR) from global warming may have severe consequences for coastal cities, particularly when combined with predicted increases in the strength of tidal surges. Predicting the regional impact of SLR flooding is strongly dependent on the modelling approach and the accuracy of topographic data. Here, the areas at risk of sea water flooding for London boroughs were quantified based on the projected SLR scenarios reported in the Intergovernmental Panel on Climate Change (IPCC) fifth assessment report (AR5) and the UK climate projections 2009 (UKCP09), using a tidally-adjusted bathtub modelling approach. Medium- to very high-resolution digital elevation models (DEMs) are used to evaluate inundation extents as well as uncertainties. Depending on the SLR scenario and the DEMs used, it is estimated that 3%–8% of the area of Greater London could be inundated by 2100. The boroughs with the largest areas at risk of flooding are Newham, Southwark, and Greenwich. The differences in inundation areas estimated from a digital terrain model and a digital surface model are much greater than the root mean square error differences observed between the two data types, which may be attributed to processing levels. Flood models from SRTM data underestimate the inundation extent, so their results may not be reliable for constructing flood risk maps. This analysis provides a broad-scale estimate of the potential consequences of SLR and of the uncertainties in DEM-based bathtub-type flood inundation modelling for London boroughs.
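The bathtub approach itself reduces to thresholding a DEM against a projected water level. A deliberately simplified sketch follows, with toy DEM values of our own; a real study like this one uses tidally-adjusted water levels and high-resolution DEMs, and typically enforces hydraulic connectivity to the sea, which this sketch omits:

```python
import numpy as np

def bathtub_flood_fraction(dem, water_level):
    """Fraction of DEM cells at or below the projected water level
    (mean sea level + SLR scenario + surge). No connectivity check."""
    flooded = dem <= water_level
    return flooded.mean()

# Toy 3x3 DEM (elevations in metres) and a 1.0 m water-level scenario:
dem = np.array([[0.5, 1.2, 3.0],
                [0.8, 2.5, 4.1],
                [1.9, 0.3, 5.0]])
frac = bathtub_flood_fraction(dem, water_level=1.0)
```

Comparing `frac` across SLR scenarios and DEM sources reproduces, in miniature, the sensitivity analysis the abstract describes.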
IRB exemption and survey development and deployment. Based on feedback from Fort Drum, we created an internet-based version of the survey, to...Caucasian, 3% as African American, 5% as Latina, 3% as Asian American, 3% as American Indian, and 6% as other. As we targeted spouses of children aged...
Considerable advances in robotic actuation technology have been made in recent years. In particular, the use of compliance has increased, both as series elastic elements and in parallel to the main actuation drives. This work focuses on the model formulation and control of compliant actuation structures including multiple branches and multiarticulation, and contributes an elegant modular formulation that describes the energy exchange between the compliant elements and articulated multibody robot dynamics using the concept of power flows, together with a single matrix that describes the entire actuation topology. Using this formulation, a novel gradient-descent-based control law is derived for torque control of compliant actuation structures with adjustable pretension, with proven convexity for arbitrary actuation topologies. Extensions toward handling unidirectionality of elastic elements and joint motion compensation are also presented. A simulation study is performed on a 3-DoF leg model, where series-elastic main drives are augmented by parallel elastic tendons with adjustable pretension. Two actuation topologies are considered, one of which includes a biarticulated tendon. The data demonstrate the effectiveness of the proposed modeling and control methods. Furthermore, it is shown that the biarticulated topology provides significant benefits over the monoarticulated arrangement.
Due to the lack of ground control points (GCPs) and parameters of satellite orbits, as well as the interior and exterior orientation parameters of cameras, in historical declassified intelligence satellite photography (DISP) imagery, a second-order polynomial equation-based block adjustment model is proposed for orthorectification of DISP imagery. With the proposed model, 355 DISP images from four missions and five orbits are orthorectified, with an approximate accuracy of 2.0–3.0 m. The 355 orthorectified images are assembled into a seamless, full-coverage mosaic image map of the karst area of Guangxi, China. The accuracy of the mosaicked image map is within 2.0–4.0 m when compared to 78 checkpoints measured by Real-Time Kinematic (RTK) GPS surveys. The assembled image map will be delivered to the Guangxi Geological Library and released to the public domain and the research community.
Sagebrush (Artemisia spp.) steppe ecosystems have experienced recent changes resulting not only in the loss of habitat but also fragmentation and degradation of remaining habitats. As a result, sagebrush-obligate and sagebrush associated songbird populations have experienced population declines over the past several decades. We examined landscape-scale responses in occupancy and abundance for six focal songbird species at 318 survey sites across the Wyoming Basins Ecoregional Assessment (WBEA) area. Occupancy and abundance models were fit for each species using datasets developed at multiple moving window extents to assess landscape-scale relationships between abiotic, habitat, and anthropogenic factors. Anthropogenic factors had less influence on species occupancy or abundance than abiotic and habitat factors. Sagebrush measures were strong predictors of occurrence for sagebrush-obligate species, such as Brewer’s sparrows (Spizella breweri), sage sparrows (Amphispiza belli) and sage thrashers (Oreoscoptes montanus), as well as green-tailed towhees (Pipilo chlorurus), a species associated with mountain shrub communities. Occurrence for lark sparrows (Chondestes grammacus) and vesper sparrows (Pooecetes gramineus), considered shrub steppe-associated species, was also related to big sagebrush communities, but at large spatial extents. Although relationships between anthropogenic variables and occurrence were weak for most species, the consistent relationship with sagebrush habitat variables suggests direct habitat loss and not edge or additional fragmentation effects are causing declines in the avifauna examined in the WBEA area. Thus, natural and anthropogenic disturbances that result in loss of critical habitats are the biggest threats to these species. We applied our models spatially across the WBEA area to identify and prioritize key areas for conservation.
Kalkan, Melek; Ersanli, Ercumend
The aim of this study is to investigate the effects of a marriage enrichment program based on the cognitive-behavioral approach on the marital adjustment levels of individuals. The experimental and control groups of this research comprised 30 individuals in total. A pre-test post-test research model with a control group was used in this…
Conclusion: It is necessary to adjust the cut-off values of IHC-based prognostic models to fit the purpose. If the estimated risk is clearly high or low, it may be reasonable to omit multigene assays when cost is a consideration.
This paper presents an approach for adjusting the Felder-Silverman learning styles model for application in the development of adaptive e-learning systems. The main goal of the paper is to improve existing e-learning courses by developing a method for adaptation based on learning styles. The proposed method includes analysis of data related to students' characteristics and applies the concept of personalization in creating e-learning courses. The research was conducted at the Faculty of Organizational Sciences, University of Belgrade, during the winter semester of 2009/10, on a sample of 318 students. The students from the experimental group were divided into three clusters, based on data about their styles identified using an adjusted Felder-Silverman questionnaire. Data about learning styles collected during the research were used to determine typical groups of students and then to classify students into these groups. The classification was performed using data mining techniques. Adaptation of the e-learning courses was implemented according to the results of the data analysis. Evaluation showed that there was a statistically significant difference in the results of students who attended the course adapted using the described method, in comparison with the results of students who attended the course that was not adapted.
Zhang, X.; Anagnostou, E. N.; Astitha, M.; Vergara, H. J.; Gourley, J. J.; Hong, Y.
This study aims to investigate the use of high-resolution Numerical Weather Prediction (NWP) for evaluating biases of satellite rainfall estimates of flood-inducing storms in mountainous areas and the associated improvements in flood modeling. Satellite-retrieved precipitation has been considered a feasible data source for global-scale flood modeling, given that satellites have a spatial coverage advantage over in situ (rain gauge and radar) observations, particularly over mountainous areas. However, orographically induced heavy precipitation events tend to be underestimated and spatially smoothed by satellite products, and this error propagates non-linearly in flood simulations. We apply a recently developed retrieval error and resolution effect correction method (Zhang et al. 2013*) to the NOAA Climate Prediction Center morphing technique (CMORPH) product based on NWP analysis (or forecasting in the case of real-time satellite products). The NWP rainfall is derived from the Weather Research and Forecasting Model (WRF) set up with high spatial resolution (1-2 km) and explicit treatment of precipitation microphysics. In this study we will show results on NWP-adjusted CMORPH rain rates based on tropical cyclones and a convective precipitation event measured during NASA's IPHEX experiment in the South Appalachian region. We will use hydrologic simulations over different basins in the region to evaluate the propagation of bias correction in flood simulations. We show that the adjustment reduced the underestimation of high rain rates, thus moderating the strong rainfall-magnitude dependence of CMORPH rainfall bias, which results in significant improvement in flood peak simulations. A further study over the Blue Nile Basin (western Ethiopia) will be investigated and included in the presentation. *Zhang, X. et al. 2013: Using NWP Simulations in Satellite Rainfall Estimation of Heavy Precipitation Events over Mountainous Areas. J. Hydrometeor, 14, 1844-1858.
Chung, Eui-Seok; Soden, Brian J.
Intermodel compensation between cloud feedback and rapid cloud adjustment has important implications for the range of model-inferred climate sensitivity. Although this negative intermodel correlation exists in both realistic (e.g., coupled ocean-atmosphere models) and idealized (e.g., aqua-planet) model configurations, the compensation appears to be stronger in the latter. The cause of the compensation between feedback and adjustment, and its dependence on model configuration remain poorly understood. In this study, we examine the characteristics of the cloud feedback and adjustment in model simulations with differing complexity, and analyze the causes responsible for their compensation. We show that in all model configurations, the intermodel compensation between cloud feedback and cloud adjustment largely results from offsetting changes in marine boundary-layer clouds. The greater prevalence of these cloud types in aqua-planet models is a likely contributor to the larger correlation between feedback and adjustment in those configurations. It is also shown that differing circulation changes in the aqua-planet configuration of some models act to amplify the intermodel range and sensitivity of the cloud radiative response by about a factor of 2.
Ratajkiewicz, H.; Kierzek, R.; Raczkowski, M.; Hołodyńska-Kulas, A.; Łacka, A.; Wójtowicz, A.; Wachowiak, M.
This study compared the effects of a proportionate spray volume (PSV) adjustment model and a fixed model (300 L/ha) on the infestation of processing tomato with potato late blight (Phytophthora infestans (Mont.) de Bary) (PLB) and on azoxystrobin and chlorothalonil residues in fruits in three consecutive seasons. The fungicides were applied in an alternating system with or without two spreader adjuvants. The proportionate spray volume adjustment model was based on the number of leaves on plants and a spray volume index. The modified Quick, Easy, Cheap, Effective, Rugged, and Safe (QuEChERS) method was optimized and validated for extraction of azoxystrobin and chlorothalonil residues. Gas chromatography with a nitrogen-phosphorus detector and an electron capture detector was used for the analysis of the fungicides. The results showed that higher fungicide residues were associated with lower infestation of tomato with PLB. The PSV adjustment model resulted in lower infestation of tomato than the fixed model (300 L/ha) when fungicides were applied at half the dose without adjuvants. Higher expected spray interception into the tomato canopy with the PSV system was recognized as the reason for better control of PLB. The spreader adjuvants did not have a positive effect on the biological efficacy of the spray volume application systems. The results suggest that the PSV adjustment model can be used to determine the spray volume for fungicide application for the processing tomato crop. (Author)
Weisbin, C.R.; Marable, J.H.; Collins, P.J.; Cowan, C.L.; Peelle, R.W.; Salvatores, M.
The present work proposes a specific plan of cross section library adjustment for fast reactor core physics analysis using information from fast reactor and dosimetry integral experiments and from differential data evaluations. This detailed exposition of the proposed approach is intended mainly to elicit review and criticism from scientists and engineers in the research, development, and design fields. This major attempt to develop useful adjusted libraries is based on the established benchmark integral data, accurate and well documented analysis techniques, sensitivities, and quantified uncertainties for nuclear data, integral experiment measurements, and calculational methodology. The adjustments to be obtained using these specifications are intended to produce an overall improvement in the least-squares sense in the quality of the data libraries, so that calculations of other similar systems using the adjusted data base with any credible method will produce results without much data-related bias. The adjustments obtained should provide specific recommendations to the data evaluation program to be weighed in the light of newer measurements, and also a vehicle for observing how the evaluation process is converging. This report specifies the calculational methodology to be used, the integral experiments to be employed initially, and the methods and integral experiment biases and uncertainties to be used. The sources of sensitivity coefficients, as well as the cross sections to be adjusted, are detailed. The formulae for sensitivity coefficients for fission spectral parameters are developed. A mathematical formulation of the least-square adjustment problem is given including biases and uncertainties in methods
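The least-squares adjustment described above can be written compactly: with sensitivity matrix S of integral responses to data changes, prior data covariance M, integral-experiment covariance V, and discrepancy vector d = (measured − calculated), the adjustment is δ = M Sᵀ (S M Sᵀ + V)⁻¹ d. A numerical sketch with illustrative matrices of our own, not values from the report:

```python
import numpy as np

def adjust(S, M, V, d):
    """Generalized least-squares data adjustment:
    delta = M S^T (S M S^T + V)^-1 d."""
    G = S @ M @ S.T + V
    return M @ S.T @ np.linalg.solve(G, d)

# Two data parameters, two integral experiments (toy numbers):
S = np.array([[1.0, 0.5],
              [0.2, 1.0]])        # sensitivities d(response)/d(data)
M = np.diag([0.04, 0.09])         # prior data variances
V = np.diag([0.01, 0.01])         # integral-experiment variances
d = np.array([0.05, -0.02])       # measured minus calculated
delta = adjust(S, M, V, d)
```

The adjusted library reduces the discrepancy in the least-squares sense: the residual d − S·δ is smaller in norm than d itself.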
The aim of the present study was to empirically test an explanatory model of psychosocial adjustment during adolescence, with psychosocial adjustment during this stage being understood as a combination of school adjustment (or school engagement) and subjective well-being. According to the hypothesized model, psychosocial adjustment depends on self-concept and resilience, which in turn act as mediators of the influence of perceived social support (from family, peers and teachers) on this adjustment. Participants were 1250 secondary school students (638 girls and 612 boys) aged between 12 and 15 years (Mean = 13.72; SD = 1.09). The results provided evidence of: (a) the influence of all three types of perceived support on subject resilience and self-concept, with perceived family support being particularly important in this respect; (b) the influence of the support received from teachers on school adjustment and of support received from the family on psychological well-being; and (c) the absence of any direct influence of peer support on psychosocial adjustment, although indirect influence was observed through the psychological variables studied. These results are discussed from an educational perspective and in terms of future research.
In this paper, we develop a new unit root testing procedure which jointly considers structural breaks and nonlinear adjustment. The structural breaks are modelled by means of a logistic smooth transition function, and nonlinear adjustment is modelled by means of an ESTAR model. The empirical size of the test is quite close to the nominal one and, in terms of power, the new unit root test is generally superior to the alternative tests. The new unit root test presents good size properties and does...
In order to improve the surface ozone forecast over Beijing and surrounding regions, a data assimilation method integrated into a high-resolution regional air quality model and a regional air quality monitoring network are employed. Several advanced data assimilation strategies based on the ensemble Kalman filter are designed to adjust O3 initial conditions, NOx initial conditions and emissions, and VOC initial conditions and emissions, separately or jointly, through assimilating ozone observations. Adjusting precursor initial conditions demonstrates potential improvement of the 1-h ozone forecast almost as great as that shown by adjusting precursor emissions. Nevertheless, adjusting either precursor initial conditions or emissions shows deficiencies in improving the short-term ozone forecast in suburban areas. Adjusting ozone initial values brings significant improvement to the 1-h ozone forecast, though it has difficulty improving the 1-h forecast at some urban sites. A simultaneous adjustment of the above five variables is found to reduce these limitations and displays an overall better performance in improving both the 1-h and 24-h ozone forecasts over these areas. The root mean square errors of the 1-h ozone forecast at urban sites and suburban sites decrease by 51% and 58%, respectively, compared with those in the free run. Through these experiments, we found that assimilating local ozone observations is determinant for the ozone forecast over the observational area, while assimilating remote ozone observations can reduce the uncertainty in regionally transported ozone.
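For readers unfamiliar with the machinery, a minimal perturbed-observation ensemble Kalman filter analysis step, of the kind such assimilation strategies build on, is sketched below. The dimensions, values, and variable names are illustrative assumptions, not the study's configuration:

```python
import numpy as np

rng = np.random.default_rng(0)

def enkf_analysis(X, y, H, obs_var, rng):
    """Perturbed-observation EnKF analysis step.
    X: (n_state, n_ens) state ensemble; y: (n_obs,) observations;
    H: (n_obs, n_state) linear observation operator; obs_var: obs error variance."""
    n_obs, n_ens = len(y), X.shape[1]
    A = X - X.mean(axis=1, keepdims=True)       # state anomalies
    HX = H @ X
    HA = HX - HX.mean(axis=1, keepdims=True)    # observed anomalies
    P_hh = HA @ HA.T / (n_ens - 1) + obs_var * np.eye(n_obs)
    P_xh = A @ HA.T / (n_ens - 1)
    K = P_xh @ np.linalg.inv(P_hh)              # Kalman gain
    Y = y[:, None] + rng.normal(0.0, np.sqrt(obs_var), (n_obs, n_ens))
    return X + K @ (Y - HX)                     # analysis ensemble
```

Augmenting the state vector with precursor emissions is the standard way such a step adjusts emissions jointly with initial conditions.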
Water hammer analysis is a fundamental part of the pipeline system design process for water distribution networks. The main characteristics of a mine drainage system are its limited space and the high cost of equipment and pipeline changes. In order to solve the problem of protecting a mine drainage system against valve-closing water hammer, a water hammer protection method based on velocity adjustment of the HCV (Hydraulic Control Valve) is proposed in this paper. The mathematical model of water hammer fluctuations is established based on the method of characteristics. Then, boundary conditions of water hammer control for the mine drainage system are determined and a simplified model is established. The optimal adjustment strategy is solved from the mathematical model of multistage valve closing. Taking a mine drainage system as an example, comparisons between simulations and experiments show that the proposed method and the optimized valve-closing strategy are effective.
Eimontas, Jonas; Rimsaite, Zivile; Gegieckaite, Goda; Zelviene, Paulina; Kazlauskas, Evaldas
Adjustment disorder is one of the most frequently diagnosed mental disorders. However, there is a lack of studies of specialized internet-based psychosocial interventions for adjustment disorder. We aimed to analyze the outcomes of an internet-based unguided self-help psychosocial intervention, BADI, for adjustment disorder in a two-armed randomized controlled trial with a waiting-list control group. In total, 284 adult participants were randomized in this study. We measured adjustment disorder as a primary outcome, and psychological well-being as a secondary outcome, at pre-intervention (T1) and one month after the intervention (T2). We found a medium effect size of the intervention for the completer sample on adjustment disorder symptoms. The intervention was effective for those participants who used it at least once in the 30-day period. Our results revealed the potential of an unguided internet-based self-help intervention for adjustment disorder. However, the high dropout rates in the study limit the generalization of the outcomes of the intervention to completers only.
Nicassio, Perry M.
Summarizes clinical and research literature on Southeast Asian refugees' adjustment in the United States and proposes the adoption of theoretical models that may help explain individual differences. Reports that acculturation, learned helplessness, and stress management models appear to aid the conceptualizing of refugee problems and provide a…
Kaplan, Robert M.; Anderson, John P.; Kaplan, Cameron M.
Purpose: To describe the development of a model for estimating the effects of tobacco use upon Quality Adjusted Life Years (QALYs) and to estimate the impact of tobacco use on health outcomes for the United States (US) population using the model. Method: We obtained estimates of tobacco consumption from 6 years of the National Health Interview…
Grbac, Zorana; Scherer, Matthias; Zagst, Rudi
This book presents 20 peer-reviewed chapters on current aspects of derivatives markets and derivative pricing. The contributions, written by leading researchers in the field as well as experienced authors from the financial industry, present the state of the art in: • Modeling counterparty credit risk: credit valuation adjustment, debit valuation adjustment, funding valuation adjustment, and wrong way risk. • Pricing and hedging in fixed-income markets and multi-curve interest-rate modeling. • Recent developments concerning contingent convertible bonds, the measuring of basis spreads, and the modeling of implied correlations. The recent financial crisis has cast tremendous doubts on the classical view on derivative pricing. Now, counterparty credit risk and liquidity issues are integral aspects of a prudent valuation procedure and the reference interest rates are represented by a multitude of curves according to their different periods and maturities. A panel discussion included in the book (featuring D...
Barlow, Jane; Bergman, Hanna; Kornør, Hege; Wei, Yinghui; Bennett, Cathy
Emotional and behavioural problems in children are common. Research suggests that parenting has an important role to play in helping children to become well-adjusted, and that the first few months and years are especially important. Parenting programmes may have a role to play in improving the emotional and behavioural adjustment of infants and toddlers, and this review examined their effectiveness with parents and carers of young children. 1. To establish whether group-based parenting programmes are effective in improving the emotional and behavioural adjustment of young children (maximum mean age of three years and 11 months); and 2. To assess whether parenting programmes are effective in the primary prevention of emotional and behavioural problems. In July 2015 we searched CENTRAL (the Cochrane Library), Ovid MEDLINE, Embase (Ovid), and 10 other databases. We also searched two trial registers and handsearched reference lists of included studies and relevant systematic reviews. Two reviewers independently assessed the records retrieved by the search. We included randomised controlled trials (RCTs) and quasi-RCTs of group-based parenting programmes that had used at least one standardised instrument to measure emotional and behavioural adjustment in children. One reviewer extracted data and a second reviewer checked the extracted data. We presented the results for each outcome in each study as standardised mean differences (SMDs) with 95% confidence intervals (CIs). Where appropriate, we combined the results in a meta-analysis using a random-effects model. We used the GRADE (Grades of Recommendations, Assessment, Development, and Evaluation) approach to assess the overall quality of the body of evidence for each outcome. We identified 22 RCTs and two quasi-RCTs evaluating the effectiveness of group-based parenting programmes in improving the emotional and behavioural adjustment of children aged up to three years and 11 months (maximum mean age three years 11 months
Deutsch, Anne; Pardasaney, Poonam; Iriondo-Perez, Jeniffer; Ingber, Melvin J; Porter, Kristie A; McMullen, Tara
Functional status measures are important patient-centered indicators of inpatient rehabilitation facility (IRF) quality of care. We developed a risk-adjusted self-care functional status measure for the IRF Quality Reporting Program. This paper describes the development and performance of the measure's risk-adjustment model. Our sample included IRF Medicare fee-for-service patients from the Centers for Medicare & Medicaid Services' 2008-2010 Post-Acute Care Payment Reform Demonstration. Data sources included the Continuity Assessment Record and Evaluation Item Set, IRF-Patient Assessment Instrument, and Medicare claims. Self-care scores were based on 7 Continuity Assessment Record and Evaluation items. The model was developed using discharge self-care score as the dependent variable, and generalized linear modeling with generalized estimating equations to account for patient characteristics and clustering within IRFs. Patient demographics, clinical characteristics at IRF admission, and clinical characteristics related to the recent hospitalization were tested as risk adjusters. A total of 4769 patient stays from 38 IRFs were included. Approximately 57% of the sample was female; 38.4%, 75-84 years; and 31.0%, 65-74 years. The final model, containing 77 risk adjusters, explained 53.7% of the variance in discharge self-care scores. Admission self-care function was the strongest predictor, followed by admission cognitive function and IRF primary diagnosis group. The range of expected and observed scores overlapped very well, with little bias across the range of predicted self-care functioning. Our risk-adjustment model demonstrated strong validity for predicting discharge self-care scores. Although the model needs validation with national data, it represents an important first step in the evaluation of IRF functional outcomes.
Yang, Junhua; Li, Yong; Cheng, Wei; Liu, Yang; Liu, Chenxi
Received Signal Strength Indicator (RSSI) localization using fingerprints has become a prevailing approach for indoor localization. However, fingerprint collection is repetitive and time-consuming, and once the original fingerprint radio map is built, it is laborious to keep it up to date. In this paper, we describe a Fingerprint Renovation System (FRS) based on crowdsourcing, which avoids the use of manual labour to obtain the up-to-date fingerprint status. An Extended Kalman Filter (EKF) and Gaussian Process Regression (GPR) are combined in FRS to calculate the current state based on the original fingerprint radio map. A subset-acquisition method is also introduced to reduce the heavy computation caused by large numbers of reference points (RPs). Meanwhile, adjusted cosine similarity (ACS) is employed in the online phase to address the outliers produced by plain cosine similarity. Both experiments and analytical simulations in a real Wi-Fi environment demonstrate significant performance improvements: FRS improves accuracy by 19.6% in the surveyed area compared with the un-renovated radio map, and the proposed subset algorithm reduces the computational load.
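The adjusted-cosine-similarity matching step can be sketched as follows. This is a generic illustration, not the authors' code: ACS subtracts each access point's mean RSSI across the radio map before computing cosine similarity, which damps access points with systematically offset readings. The sample RSSI values are invented:

```python
import numpy as np

def adjusted_cosine_similarity(rss_query, fingerprints):
    """Adjusted cosine similarity between a query RSSI vector and each
    reference-point fingerprint. `fingerprints` is (n_reference_points,
    n_aps); `rss_query` is (n_aps,). Each AP's mean over the radio map
    is removed before the cosine is taken."""
    fp = np.asarray(fingerprints, float)
    q = np.asarray(rss_query, float)
    mu = fp.mean(axis=0)                    # per-AP mean over the radio map
    fp_c, q_c = fp - mu, q - mu
    num = fp_c @ q_c
    den = np.linalg.norm(fp_c, axis=1) * np.linalg.norm(q_c)
    return num / np.where(den == 0, 1e-12, den)

# Three reference points, three APs (dBm); the query is near RP 0
fp = [[-40, -60, -70], [-55, -50, -65], [-70, -62, -45]]
sims = adjusted_cosine_similarity([-42, -59, -71], fp)
best_rp = int(np.argmax(sims))   # index of the best-matching reference point
```

In the online phase the located position would then be taken from (or interpolated between) the highest-similarity reference points.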
Haem, Elham; Harling, Kajsa; Ayatollahi, Seyyed Mohammad Taghi; Zare, Najaf; Karlsson, Mats O
One important aim in population pharmacokinetics (PK) and pharmacodynamics is the identification and quantification of the relationships between model parameters and covariates. The Lasso has been suggested as a technique for simultaneous estimation and covariate selection. In linear regression, however, it has been shown that the Lasso does not possess the oracle property; that is, it does not asymptotically perform as though the true underlying model were given in advance. The adaptive Lasso (ALasso) with appropriate initial weights is claimed to possess the oracle property; however, it can lead to poor predictive performance when there is multicollinearity between covariates. This simulation study implemented a new version of ALasso, called adjusted ALasso (AALasso), which uses the ratio of the standard error of the maximum likelihood (ML) estimator to the ML coefficient as the initial weight, to deal with multicollinearity in non-linear mixed-effect models. The performance of AALasso was compared with that of ALasso and Lasso. PK data were simulated in four set-ups from a one-compartment bolus input model. Covariates were created by sampling from a multivariate standard normal distribution with no, low (0.2), moderate (0.5) or high (0.7) correlation. The true covariates influenced only clearance, at different magnitudes. AALasso, ALasso and Lasso were compared in terms of mean absolute prediction error and the error of the estimated covariate coefficients. The results show that AALasso performed better in small data sets, even those in which a high correlation existed between covariates. This makes AALasso a promising method for covariate selection in non-linear mixed-effect models.
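The adaptive-Lasso machinery underlying ALasso and AALasso can be sketched in plain numpy for the linear case. This is an assumption-laden illustration, not the study's non-linear mixed-effects implementation: it uses classic |beta_ML|^-1 initial weights from an OLS fit (AALasso would instead use SE(beta_ML)/|beta_ML|) and solves the weighted L1 problem by coordinate descent on rescaled features:

```python
import numpy as np

def adaptive_lasso(X, y, weights, lam, n_iter=200):
    """Adaptive Lasso via coordinate descent. Penalty is
    lam * n * sum_j weights[j] * |beta_j|; rescaling columns by 1/weights
    reduces the problem to a plain Lasso on Xw."""
    Xw = X / weights                         # column j scaled by 1/weights[j]
    n, p = Xw.shape
    gamma = np.zeros(p)                      # coefficients in rescaled space
    col_ss = (Xw**2).sum(axis=0)
    for _ in range(n_iter):
        for j in range(p):
            r = y - Xw @ gamma + Xw[:, j] * gamma[j]   # partial residual
            rho = Xw[:, j] @ r
            gamma[j] = np.sign(rho) * max(abs(rho) - lam * n, 0.0) / col_ss[j]
    return gamma / weights                   # back to the original scale

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 4))
y = X @ np.array([1.5, 0.0, 0.0, -2.0]) + 0.1 * rng.normal(size=200)
beta_ols = np.linalg.lstsq(X, y, rcond=None)[0]
w = 1.0 / np.abs(beta_ols)                   # classic ALasso initial weights
beta = adaptive_lasso(X, y, w, lam=0.01)
```

Large weights on near-zero ML coefficients inflate their penalty, so the irrelevant covariates are driven exactly to zero while the true effects are barely shrunk.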
Vansteelandt, S.; Martinussen, Torben; Tchetgen, E. J Tchetgen
We consider additive hazard models (Aalen, 1989) for the effect of a randomized treatment on a survival outcome, adjusting for auxiliary baseline covariates. We demonstrate that the Aalen least-squares estimator of the treatment effect parameter is asymptotically unbiased, even when the hazard's dependence on time or on the auxiliary covariates is misspecified, and even away from the null hypothesis of no treatment effect. We furthermore show that adjustment for auxiliary baseline covariates does not change the asymptotic variance of the estimator of the effect of a randomized treatment. We conclude that, in view of its robustness against model misspecification, Aalen least-squares estimation is attractive for evaluating treatment effects on a survival outcome in randomized experiments, and the primary reasons to consider baseline covariate adjustment in such settings could be interest in subgroup…
A novel fiber-optic earth pressure sensor (FPS) with an adjustable measurement range and high sensitivity is developed to measure earth pressures for civil infrastructure. The new FPS combines a cantilever beam carrying fiber Bragg grating (FBG) sensors with a flexible membrane. Compared with a traditional pressure transducer with a dual-diaphragm design, the proposed FPS has a larger measurement range and shows high accuracy. The working principles, parameter design, fabrication methods, and laboratory calibration tests are explained in this paper. A theoretical solution is derived for the relationship between the applied pressure and the strain of the FBG sensors. In addition, a finite element model is established to analyze the mechanical behavior of the membrane and the cantilever beam and thereby obtain optimal parameters. The cantilever beam is 40 mm long, 15 mm wide, and 1 mm thick; the whole FPS has a diameter of 100 mm and a thickness of 30 mm. The sensitivity of the FPS is 0.104 kPa/με. In addition, automatic temperature compensation can be achieved. The FPS's sensitivity, physical properties, and response to applied pressure are extensively examined through modeling and experiments. The results show that the proposed FPS has numerous potential applications in soil pressure measurement.
Jesús Crespo Cuaresma; Anna Orthofer
Reliable medium-term forecasts are essential for forward-looking monetary policy decision-making. Traditionally, predictions of the exchange rate tend to be linked to the equilibrium concept implied by purchasing power parity (PPP) theory. In particular, the traditional benchmark for exchange rate models is based on a linear adjustment of the exchange rate to the level implied by PPP. In the presence of aggregation effects, transaction costs or uncertainty, however, economic theory predicts…
Schartner, Alina; Young, Tony Johnstone
Despite a burgeoning body of empirical research on "the international student experience", the area remains under-theorized. The literature to date lacks a guiding conceptual model that captures the adjustment and adaptation trajectories of this unique, growing, and important sojourner group. In this paper, we therefore put forward a…
Abad, Cesar C C; Barros, Ronaldo V; Bertuzzi, Romulo; Gagliardi, João F L; Lima-Silva, Adriano E; Lambert, Mike I; Pires, Flavio O
The aim of this study was to verify the power of VO2max, peak treadmill running velocity (PTV), and running economy (RE), unadjusted or allometrically adjusted, in predicting 10 km running performance. Eighteen male endurance runners performed: 1) an incremental test to exhaustion to determine VO2max and PTV; 2) a constant submaximal run at 12 km·h^-1 on an outdoor track for RE determination; and 3) a 10 km running race. Unadjusted (VO2max, PTV and RE) and allometrically adjusted variables (VO2max^0.72, PTV^0.72 and RE^0.60) were investigated through independent multiple regression models to predict 10 km running race time. There were no significant correlations between 10 km running time and either the adjusted or unadjusted VO2max. Significant correlations (p < 0.05) were found for the remaining variables, with R^2 > 0.84 and power > 0.88. The allometrically adjusted predictive model was composed of PTV^0.72 and RE^0.60 and explained 83% of the variance in 10 km running time with a standard error of the estimate (SEE) of 1.5 min. The unadjusted model, composed of PTV alone, accounted for 72% of the variance in 10 km running time (SEE of 1.9 min). Both regression models provided powerful estimates of 10 km running time; however, the unadjusted PTV may provide an uncomplicated estimation.
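The allometric adjustment and regression workflow described above can be sketched as follows. The data are invented (not from the study), and the ratio-scaling by body mass raised to the reported exponents is one common way to implement such an adjustment:

```python
import numpy as np

# Hypothetical data for 6 runners: body mass (kg), peak treadmill
# velocity PTV (km/h), running economy RE (ml/kg/km), 10 km time (min)
mass = np.array([62.0, 70.0, 58.0, 75.0, 66.0, 68.0])
ptv  = np.array([19.5, 18.0, 20.0, 17.0, 18.5, 19.0])
re   = np.array([195., 210., 190., 220., 205., 200.])
t10k = np.array([33.0, 36.5, 32.0, 38.5, 35.0, 34.0])

# Allometric scaling by mass^b with the exponents quoted in the abstract
ptv_adj = ptv / mass**0.72
re_adj  = re / mass**0.60

# Multiple regression of race time on the adjusted predictors
X = np.column_stack([np.ones_like(t10k), ptv_adj, re_adj])
coef, *_ = np.linalg.lstsq(X, t10k, rcond=None)
pred = X @ coef
r2 = 1 - ((t10k - pred)**2).sum() / ((t10k - t10k.mean())**2).sum()
```

The coefficient of determination r2 plays the role of the explained-variance figures quoted in the abstract; with real data one would also report the standard error of the estimate.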
Mallinckrodt, Brent; Armer, Jane M.; Heppner, P. Paul
This study examined a threshold model that proposes that social support exhibits a curvilinear association with adjustment and distress, such that support in excess of a critical threshold level has decreasing incremental benefits. Women diagnosed with a first occurrence of breast cancer (N = 154) completed survey measures of perceived support…
5 CFR § 9901.312 (2010): Maximum rates of base salary and adjusted salary. Administrative Personnel, Department of Defense Human Resources, National Security Personnel System (NSPS), Pay and Pay Administration, Overview of Pay System.
Willeberg, Preben; Nielsen, Liza Rosenbaum; Salman, Mo
We estimated the effects of confounder adjustment, as part of the underlying quantitative risk assessments, on the performance of a hypothetical example of a risk-based surveillance system in which a single risk factor would be used to identify high-risk sampling units for testing. The differences … considered for their appropriateness, if the risk estimates are to be used for informing risk-based surveillance systems.
Crépey, Stéphane; Macrina, Andrea; Nguyen, Tuyet Mai
We develop a multi-curve term structure set-up in which the modelling ingredients are expressed by rational functionals of Markov processes. We calibrate to London Interbank Offered Rate (LIBOR) swaptions data and show that a rational two-factor log-normal multi-curve model is sufficient to match market data with accuracy. We elucidate the relationship between the models developed and calibrated under a risk-neutral measure Q and their consistent equivalence class under the real-world probability measure P. The consistent P-pricing models are applied to compute the risk exposures which may be required to comply with regulatory obligations. In order to compute counterparty-risk valuation adjustments, such as credit valuation adjustment, we show how default intensity processes with rational form can be derived. We flesh out our study by applying the results to a basis swap contract.
Mimi Hafizah Abdullah; Hanani Farhah Harun; Nik Ruzni Nik Idris
Given that implied volatility is an important factor in financial decision-making, in particular in option pricing, and that the pricing biases of Leland option pricing models are related to the implied volatility structure of the options, this study examines the implied adjusted volatility smile patterns and term structures in S&P/ASX 200 index options using different Leland option pricing models. The examination of the im…
In this note we give pricing formulas for different instruments linked to rate futures (Eurodollar futures). We provide the futures price, including the convexity adjustment, and the exact dates. Based on that result, we price options on futures, including mid-curve options.
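The convexity adjustment mentioned above can be illustrated with the standard Ho-Lee-type approximation found in the textbook literature (an illustration of the concept, not necessarily the exact formula derived in this note):

```python
# Ho-Lee style convexity adjustment: forward = futures - sigma^2 * t1 * t2 / 2,
# where sigma is the short-rate volatility, t1 the futures maturity and
# t2 the end of the underlying rate accrual period (continuous compounding).
def forward_from_futures(futures_rate, sigma, t1, t2):
    return futures_rate - 0.5 * sigma**2 * t1 * t2

# A 3.00% futures rate, 1.2% rate volatility, 4y contract on a 3m rate
fwd = forward_from_futures(0.0300, sigma=0.012, t1=4.0, t2=4.25)
```

The adjustment grows with the square of volatility and with maturity, which is why it matters mainly for long-dated contracts.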
Lindhagen, Lars; Darkahi, Bahman; Sandblom, Gabriel; Berglund, Lars
Funnel plots are widely used to visualize grouped data, for example, in institutional comparison. This paper extends the concept to a multi-level setting, displaying one level at a time, adjusted for the other levels, as well as for covariates at all levels. These level-adjusted funnel plots are based on a Markov chain Monte Carlo fit of a random effects model, translating the estimated model parameters to predicted marginal expectations. Working within the estimation framework, we accommodate outlying institutions using heavy-tailed random effects distributions. We also develop computer-efficient methods to compute predicted probabilities in the case of dichotomous outcome data and various random effect distributions. We apply the method to a data set on prophylactic antibiotics in gallstone surgery.
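The basic funnel-plot construction that this paper extends can be sketched as follows. This is the elementary normal-approximation version for a proportion, not the paper's MCMC-based level-adjusted limits:

```python
import math

def funnel_limits(p_target, n, z=1.96):
    """Approximate 95% funnel-plot control limits for a proportion at an
    institution of size n (normal approximation to the binomial). The
    limits narrow as n grows, producing the funnel shape."""
    se = math.sqrt(p_target * (1 - p_target) / n)
    return max(0.0, p_target - z * se), min(1.0, p_target + z * se)

# 10% target event rate, institution with 100 cases
lo, hi = funnel_limits(0.10, n=100)
```

Institutions plotted outside their size-specific limits are flagged as potential outliers; the paper's contribution is to compute such limits after adjusting for the other levels of a hierarchical model.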
The current paper focuses on finding a more appropriate way to enhance fan performance at off-design conditions. A centrifugal fan (CF) based on flap adjustment (FA) has been investigated through theoretical, experimental, and finite element methods. To determine which adjustment yields the better performance, we carried out a comparative analysis of FA and leading adjustment (LA) in terms of aerodynamic performance, including the adjusted blade angle, total pressure, efficiency, system efficiency, adjustment efficiency, and energy-saving rate. A contribution of this paper is the integrated performance curve of the CF. The results showed that the effects of FA and LA on the economic performance and energy savings of the fan varied with blade angle; FA was feasible and more sensitive than LA; and the CF with FA offered a wider flow range of high economic performance than LA. When the operating flow range is extended, the energy-saving rate of the fan with FA improves.
Two relatively new approaches to model-based biosignal interpretation, qualitative simulation and modelling by causal probabilistic networks, are compared to modelling by differential equations. A major problem in applying a model to an individual patient is the estimation of the parameters. The available observations are unlikely to allow a proper estimation of the parameters, and even if they do, the task appears to have exponential computational complexity if the model is non-linear. Causal probabilistic networks have both differential equation models and qualitative simulation as special cases, and they can provide both Bayesian and maximum-likelihood parameter estimates, in most cases in much less than exponential time. In addition, they can calculate the probabilities required for a decision-theoretical approach to medical decision support. The practical applicability of causal probabilistic networks to real medical problems is illustrated by a model of glucose metabolism which is used to adjust insulin therapy in type I diabetic patients.
Alongside the models of M. Keynes, R.F. Harrod, E. Domar, D. Romer, the Ramsey-Cass-Koopmans model, and others, the R.M. Solow model belongs to the family of models characterizing economic growth. The paper presents the R.M. Solow adjusted model with specific simulation characteristics and an economic growth scenario. On this basis, we present the values obtained at the economy level from the simulations: the ratio of capital to output volume, output volume per employee (equal to current labour efficiency), and the labour efficiency value.
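The core Solow dynamics that such simulations build on can be sketched in a few lines. The parameter values below are illustrative, not those of the adjusted model in the paper:

```python
def simulate_solow(s=0.25, n=0.01, g=0.02, delta=0.05, alpha=0.33,
                   k0=1.0, years=300):
    """Discrete-time Solow model with Cobb-Douglas technology y = k^alpha,
    where k is capital per effective worker: k' = k + s*k^alpha - (n+g+delta)*k.
    s: saving rate, n: population growth, g: technology growth,
    delta: depreciation. Returns capital per effective worker after `years`."""
    k = k0
    for _ in range(years):
        k = k + s * k**alpha - (n + g + delta) * k
    return k

# Analytic steady state k* = (s / (n+g+delta))^(1/(1-alpha))
s, ngd, alpha = 0.25, 0.08, 0.33
k_star = (s / ngd)**(1 / (1 - alpha))
k_sim = simulate_solow()
```

The simulated path converges to the analytic steady state, which is the kind of long-run value (capital-output ratio, output per effective worker) the paper reports for its scenarios.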
Sensor fusion combines data from different sensors to build a more accurate model. In this research, different sensors (optical speed sensor, Bosch sensor, odometer, XSENS, Silicon, and a GPS receiver) were used to obtain datasets for implementing a multi-sensor system and comparing the accuracy of each sensor against the others. The scope of this research is to estimate the current position and orientation of a van; the van's position can also be estimated by integrating its velocity and direction over time. To make these components work together, an interface is needed that can bridge them in a data acquisition module. The interface in this research was developed in the LabVIEW software environment, and data were transferred to a PC via an A/D converter (LabJack). To synchronize all the sensors, the calibration parameters of each sensor were determined in a preparatory step, since each sensor delivers results in a sensor-specific coordinate system with a different location on the object, different definitions of the coordinate axes, and different dimensions and units. Different test scenarios (a straight-line approach and a circle approach) with different algorithms (Kalman filter, least-squares adjustment) were examined, and the results of the different approaches are compared.
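The Kalman filtering step used in such fusion pipelines can be sketched in its simplest scalar form. This is a generic random-walk filter on noisy position readings, not the multi-sensor implementation of the study; the motion and noise values are invented:

```python
import numpy as np

def kalman_1d(z, q=0.01, r=1.0):
    """Scalar Kalman filter with a random-walk state model, smoothing a
    stream of noisy position measurements z. q is the process-noise
    variance, r the measurement-noise variance."""
    x, p = z[0], 1.0
    out = []
    for meas in z:
        p = p + q                  # predict: state uncertainty grows
        k = p / (p + r)            # Kalman gain
        x = x + k * (meas - x)     # update with the new measurement
        p = (1 - k) * p
        out.append(x)
    return np.array(out)

rng = np.random.default_rng(1)
truth = np.linspace(0, 10, 200)            # van moving at constant speed
z = truth + rng.normal(0, 1.0, 200)        # noisy odometer-like readings
est = kalman_1d(z)
```

A full implementation would use a state vector (position, velocity, heading) and stack the measurements of all sensors into one update, but the predict-gain-update cycle is the same.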
It is a crucial and difficult problem in railway transportation dispatching to automatically compile a train operation adjustment (TOA) plan by computer so as to ensure safe, fast, and punctual running of trains. Based on the proposed model of TOA under railway network (RN) conditions, we take the minimum travel time of trains as the objective function of the optimization and, after a fast preliminary evaluation, introduce the theory and method of ordinal optimization (OO) to solve it. This paper discusses in detail the implementation steps of the OO algorithm. A practical example based on the Datong-Qinhuangdao (Da-Qin) heavy-haul railway is solved with the proposed algorithm to show that OO can ensure a good enough solution with high probability. In particular, for complex optimization problems involving heavy computation, OO can greatly increase computational efficiency, saving at least an order of magnitude of computation compared with general heuristic algorithms. Thus, the proposed algorithm can well satisfy engineering requirements.
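The ordinal-optimization strategy (screen everything cheaply, evaluate only a shortlist exactly) can be sketched generically. This toy objective stands in for the train travel-time model; it is not the paper's TOA formulation:

```python
import random

def ordinal_optimization(candidates, rough_eval, exact_eval, s=10):
    """Ordinal optimization: rank all candidates with a cheap, noisy
    surrogate, keep the top-s, then spend the expensive exact evaluation
    only on that shortlist."""
    ranked = sorted(candidates, key=rough_eval)     # one cheap call each
    shortlist = ranked[:s]
    return min(shortlist, key=exact_eval)           # s expensive calls

random.seed(42)
candidates = list(range(1000))
exact = lambda x: (x - 700)**2                          # true objective
rough = lambda x: (x - 700)**2 + random.gauss(0, 20000)  # noisy cheap proxy

best = ordinal_optimization(candidates, rough, exact, s=50)
```

Even with a very noisy surrogate, the ordering of candidates is preserved well enough that the shortlist almost surely contains a good enough solution, which is the "goal softening" idea OO relies on.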
Bartz, Daniel; Hatrick, Kerr; Hesse, Christian W.; Müller, Klaus-Robert; Lemm, Steven
Robust and reliable covariance estimates play a decisive role in financial and many other applications. An important class of estimators is based on factor models. Here, we show by extensive Monte Carlo simulations that covariance matrices derived from the statistical factor analysis model exhibit a systematic error, similar to the well-known systematic error of the spectrum of the sample covariance matrix. Moreover, we introduce the Directional Variance Adjustment (DVA) algorithm, which diminishes this systematic error. In a thorough empirical study for the US, European, and Hong Kong stock markets we show that our proposed method leads to improved portfolio allocation.
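The factor-model covariance estimate that DVA refines can be sketched as follows. As an assumption of this sketch, principal components stand in for the statistical factor analysis fit, and the returns are simulated rather than market data:

```python
import numpy as np

rng = np.random.default_rng(0)
T, N, K = 500, 20, 3                  # observations, assets, factors

# Simulate returns from a K-factor model: r = B f + eps
B = rng.normal(size=(N, K))
f = rng.normal(size=(T, K))
eps = 0.5 * rng.normal(size=(T, N))
R = f @ B.T + eps

# Factor-model covariance: Sigma = B_hat B_hat' + D_hat, with loadings
# taken from the top-K principal components of the sample covariance
Rc = R - R.mean(axis=0)
S = Rc.T @ Rc / T                     # sample covariance
vals, vecs = np.linalg.eigh(S)        # eigenvalues in ascending order
Bh = vecs[:, -K:] * np.sqrt(vals[-K:])
D = np.diag(np.clip(np.diag(S - Bh @ Bh.T), 1e-6, None))  # specific risk
Sigma = Bh @ Bh.T + D
```

The low-rank-plus-diagonal structure is what makes such estimators well conditioned for portfolio optimization; the paper's point is that the spectrum of this estimate is still systematically biased, which DVA corrects direction by direction.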
Wilson, Richard; Goodacre, Steve W; Klingbajl, Marcin; Kelly, Anne-Maree; Rainer, Tim; Coats, Tim; Holloway, Vikki; Townend, Will; Crane, Steve
Risk-adjusted mortality rates can be used as a quality indicator if it is assumed that the discrepancy between predicted and actual mortality can be attributed to the quality of healthcare (ie, the model has attributional validity). The Development And Validation of Risk-adjusted Outcomes for Systems of emergency care (DAVROS) model predicts 7-day mortality in emergency medical admissions. We aimed to test this assumption by evaluating the attributional validity of the DAVROS risk-adjustment model. We selected the cases with the greatest discrepancy between observed mortality and predicted probability of mortality from seven hospitals involved in validation of the DAVROS risk-adjustment model. Reviewers at each hospital assessed hospital records to determine whether the discrepancy between predicted and actual mortality could be explained by the healthcare provided. We received 232/280 (83%) completed review forms relating to 179 unexpected deaths and 53 unexpected survivors. The healthcare system was judged to have potentially contributed to 10/179 (8%) of the unexpected deaths and 26/53 (49%) of the unexpected survivors. Failure of the model to appropriately predict risk was judged to be responsible for 135/179 (75%) of the unexpected deaths and 2/53 (4%) of the unexpected survivors. Some 10/53 (19%) of the unexpected survivors died within a few months of the 7-day period of model prediction. We found little evidence that deaths occurring in patients with a low predicted mortality from risk-adjustment could be attributed to the quality of the healthcare provided.
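The observed-versus-expected comparison that underlies such risk-adjusted indicators can be sketched as follows. This is a generic standardized mortality ratio with an approximate log-scale confidence interval, not the DAVROS model itself, and the predicted probabilities are invented:

```python
import math

def standardized_mortality_ratio(observed_deaths, predicted_probs):
    """Observed-to-expected mortality ratio for one hospital, with an
    approximate 95% CI computed on the log scale (SE ~ 1/sqrt(observed))."""
    expected = sum(predicted_probs)          # sum of per-patient risks
    smr = observed_deaths / expected
    se_log = 1.0 / math.sqrt(observed_deaths)
    return smr, smr * math.exp(-1.96 * se_log), smr * math.exp(1.96 * se_log)

# 450 admissions with model-predicted 7-day mortality risks, 25 deaths
probs = [0.02] * 400 + [0.30] * 50
smr, lo, hi = standardized_mortality_ratio(25, probs)
```

An SMR whose interval excludes 1 flags a hospital as an outlier; the paper's case review asks whether such discrepancies really reflect care quality or just model failure.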
Lee, J.-I.; Pyo, Soonjae; Kim, Min-Ook; Kim, Jongbaeg
We demonstrate a highly sensitive force sensor based on self-adjusting carbon nanotube (CNT) arrays. Aligned CNT arrays are directly synthesized on silicon microstructures by a space-confined growth technique, which enables a facile self-adjusting contact. To afford flexibility and softness, the patterned microstructures with the integrated CNTs are embedded in polydimethylsiloxane structures. The sensing mechanism is based on variations in the contact resistance between the facing CNT arrays under the applied force. Proper dimensions and positions for each component were determined by finite element analysis, and sensitivities of the proposed sensors of up to 15.05%/mN were confirmed experimentally. Multidirectional sensing capability could also be achieved by designing multiple sets of sensing elements in a single sensor. The sensors show long-term operational stability, owing to the unique properties of the constituent CNTs, such as outstanding mechanical durability and elasticity.
Mendoza-Dominguez, A.; Russell, A.G.
Four-dimensional data assimilation applied to photochemical air quality modeling is used to suggest adjustments to the emissions inventory of the Atlanta, Georgia metropolitan area. In this approach, a three-dimensional air quality model, coupled with direct sensitivity analysis, develops spatially and temporally varying concentration and sensitivity fields that account for chemical and physical processing, and receptor analysis is used to adjust source strengths. Proposed changes to domain-wide NOx, volatile organic compound (VOC) and CO emissions from anthropogenic sources and to VOC emissions from biogenic sources were estimated, as well as modifications to sources based on their spatial location (urban vs. rural areas). In general, to best match observations, domain-wide anthropogenic VOC emissions were increased to approximately twice their base-case level, domain-wide anthropogenic NOx and biogenic VOC emissions (BEIS2 estimates) remained close to their base-case values, and domain-wide CO emissions were decreased. Adjustments for anthropogenic NOx emissions became more uncertain when computed separately for mobile and area sources (or urban and rural sources), due in part to the poor spatial resolution of the observation field of nitrogen-containing species. Estimated changes to CO emissions also suffer from the poor spatial resolution of the measurements. The results suggest that rural anthropogenic VOC emissions are severely underpredicted. The FDDA approach was also used to investigate the speciation profiles of VOC emissions, and the results warrant revision of these profiles. In general, the results obtained here are consistent with what are viewed as the current deficiencies in emissions inventories as derived by other top-down techniques, such as tunnel studies and analysis of ambient measurements.
Zhang, Ke; Jiang, Bin; Shi, Peng
In this paper, a novel adjustable parameter (AP)-based distributed fault estimation observer (DFEO) is proposed for multiagent systems (MASs) with a directed communication topology. First, a relative output estimation error is defined based on the communication topology of the MASs. Then a DFEO with an AP is constructed with the purpose of improving the accuracy of fault estimation. Based on H∞ and H2 performance with pole placement, a multiconstrained design is given to calculate the gain of the DFEO. Finally, simulation results are presented to illustrate the feasibility and effectiveness of the proposed DFEO design with an AP.
Zhang, Y J; Xue, F X; Bai, Z P
The impact of maternal air pollution exposure on offspring health has received much attention. Precise and feasible exposure estimation is particularly important for clarifying exposure-response relationships and reducing heterogeneity among studies. Temporally-adjusted land use regression (LUR) models are exposure assessment methods developed in recent years that offer high spatial-temporal resolution, and studies of the health effects of outdoor air pollution exposure during pregnancy have increasingly been carried out using them. In China, research applying LUR models has mostly remained at the model construction stage, and findings from related epidemiological studies are rarely reported. In this paper, the sources of heterogeneity and the progress of meta-analyses of the associations between air pollution and adverse pregnancy outcomes are analyzed, the characteristics of temporally-adjusted LUR models are introduced, and current epidemiological studies on adverse pregnancy outcomes that applied this model are systematically summarized. Recommendations for the development and application of LUR models in China are presented, with the aim of encouraging more valid exposure predictions during pregnancy in large-scale epidemiological studies of the health effects of air pollution in China.
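The temporal-adjustment step of such LUR models can be sketched as follows. One common scheme (an assumption of this sketch; details vary across studies) scales a site's annual LUR estimate by the ratio of a reference monitor's daily reading to that monitor's annual mean:

```python
def temporally_adjusted_lur(annual_lur_estimate, daily_monitor,
                            monitor_annual_mean):
    """Temporally-adjusted LUR: scale a site's annual LUR concentration
    estimate by the day-to-annual ratio observed at a reference monitor,
    yielding daily exposure estimates for that site."""
    return [annual_lur_estimate * d / monitor_annual_mean
            for d in daily_monitor]

# Hypothetical values: site annual estimate 35 ug/m3; three monitor days
daily = temporally_adjusted_lur(35.0, daily_monitor=[40.0, 20.0, 30.0],
                                monitor_annual_mean=30.0)
```

The daily series would then be averaged over each trimester or the whole pregnancy window to build the exposure variable for the epidemiological analysis.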
We present a conceptual model for simulating the temporal adjustments in the banks of the Lower Yellow River (LYR). Basic conservation equations for mass, friction, and sediment transport capacity, together with the Exner equation, were adopted to simulate the hydrodynamics underlying fluvial processes. The relationship between the rates of change in bankfull width and depth, derived from quasi-universal hydraulic geometries, was used as a closure for the hydrodynamic equations. Given daily flow discharge and sediment load as input, the conceptual model successfully simulated the 30-year adjustments in the bankfull geometries of typical reaches of the LYR. The square of the correlation coefficient reached 0.74 for Huayuankou Station in the multiple-thread reach and exceeded 0.90 for Lijin Station in the meandering reach. The proposed model allows multiple dependent variables and accepts daily hydrological data for long-term simulations. It thus links the hydrodynamic and geomorphic processes in a fluvial river and is potentially applicable to fluvial rivers undergoing significant adjustment.
Somovilla Gómez, Fátima
The kinematic behavior of models based on the finite element method (FEM) for modeling the human body depends greatly on an accurate estimate of the parameters that define such models. This task is complex, and any small difference between the actual biomaterial and the FEM-based simulation model can be amplified enormously in the presence of nonlinearities. The current paper attempts to demonstrate how a combination of the FEM and MRS methods with desirability functions can be used to obtain the material parameters that are most appropriate for defining the behavior of finite element (FE) models of the healthy human lumbar intervertebral disc (IVD). The FE model parameters were adjusted on the basis of experimental data from selected standard tests (compression, flexion, extension, shear, lateral bending, and torsion) and were developed as follows. First, three-dimensional parameterized FE models were generated on the basis of the mentioned standard tests. Then, 11 parameters were selected to define the proposed parameterized FE models. For each of the standard tests, regression models were generated using MRS to model the six stiffnesses and nine bulges of the healthy IVD models created by changing the parameters of the FE models. The optimal combination of the 11 parameters was based on three adjustment criteria, which in turn were based on the combination of stiffnesses and bulges obtained from the standard-test FE simulations. The first criterion considered stiffness and bulge to be equally important in the adjustment of the FE model parameters; the second considered stiffness most important, whereas the third considered the bulges most important. The proposed adjustment methods were applied to a medium-sized human IVD corresponding to the L3-L4 lumbar level, with standard dimensions of width = 50 mm, depth = 35 mm, and height = 10 mm. Agreement between the…
White, K. D.; Baker, B.; Mueller, C.; Villarini, G.; Foley, P.; Friedman, D.
As part of its mission to research and measure the effects of the changing climate, the U.S. Army Corps of Engineers (USACE) regularly uses the World Climate Research Programme's Coupled Model Intercomparison Project Phase 5 (CMIP5) multi-model dataset. However, these data are generated at a global level and are not fine-tuned for specific watersheds, so CMIP5 output often diverges from locally observed patterns in the climate. Several downscaling methods have been developed to increase the resolution of the CMIP5 data and decrease systemic differences, supporting decision-makers as they evaluate results at the watershed scale. Preliminary comparisons of observed and projected flow frequency curves over the US revealed a simple framework by which water resources decision-makers can plan and design water resources management measures under changing conditions using standard tools. Using this framework as a basis, USACE has begun to explore the use of statistical adjustment to alter global climate model data to better match locally observed patterns while preserving the general structure and behavior of the model data. When paired with careful measurement and hypothesis testing, statistical adjustment can be particularly effective at navigating the compromise between locally observed patterns and global climate model structures.
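One widely used form of such statistical adjustment is empirical quantile mapping, sketched below. This is a generic bias-correction illustration with invented flow values, not USACE's specific procedure:

```python
import numpy as np

def quantile_map(model_series, obs_ref, model_ref):
    """Empirical quantile mapping: replace each model value by the
    observed value at the same empirical quantile of the reference
    period, removing systematic model bias while preserving ranks."""
    model_series = np.asarray(model_series, float)
    qs = [np.mean(model_ref <= v) for v in model_series]  # model quantiles
    return np.quantile(obs_ref, np.clip(qs, 0, 1))

obs_ref = np.array([2.0, 3.0, 4.0, 5.0, 6.0])    # observed reference flows
model_ref = np.array([4.0, 5.0, 6.0, 7.0, 8.0])  # model reference, +2 bias
adjusted = quantile_map([5.0, 7.0], obs_ref, model_ref)
```

Because the mapping is monotone, the temporal structure of the model projection is preserved; only its distribution is pulled toward the observed one, which is the compromise described in the abstract.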
Powroźnik, P.; Szulim, R.
In this paper, the concept of an elastic model of energy management for smart grids and micro smart grids is presented. For the proposed model, a method for reducing peak demand in a micro smart grid is defined. The idea of peak demand reduction in the elastic model of energy management is to introduce a balance between the demand and supply of power for a given micro smart grid at a given moment. Results of simulation studies, carried out on real household data available in the UCI Machine Learning Repository, are presented. The results may have practical application in smart grid networks where there is a need to adjust the energy consumption of smart appliances. The article also proposes implementing the elastic model of energy management as a cloud computing solution. This approach to peak demand reduction might find application particularly in large smart grids.
Meldgaard, A.; Nielsen, L.; Iaffaldano, G.
The isostatic adjustment signal generated by transient ice sheet loading is an important indicator of past ice sheet extent and the rheological constitution of the interior of the Earth. Finite element modelling has proved to be a very useful tool in these studies. We present a simple numerical model for 3D viscoelastic Earth deformation and a new approach to the design of such models utilizing visual effects software designed for the film and game industry. The software package Houdini offers an assortment of optimized tools and libraries which greatly facilitate the creation of efficient numerical algorithms. In particular, we make use of Houdini's procedural workflow, the SIMD programming language VEX, Houdini's sparse matrix creation and inversion libraries, an inbuilt tetrahedralizer for grid creation, and the user interface, which facilitates effortless manipulation of 3D geometry. We mitigate many of the time-consuming steps associated with the authoring of efficient algorithms from scratch while still keeping the flexibility that may be lost with the use of commercial dedicated finite element programs. We test the efficiency of the algorithm by comparing simulation times with off-the-shelf solutions from the Abaqus software package. The algorithm is tailored for the study of local isostatic adjustment patterns in close vicinity to present ice sheet margins. In particular, we wish to examine possible causes for the considerable spatial differences in uplift magnitude which are apparent from field observations in these areas. Such features, with spatial scales of tens of kilometres, are not resolvable with current global isostatic adjustment models and may require the inclusion of local topographic features. We use the presented algorithm to study a near-field area where field observations are abundant, namely Disko Bay in West Greenland, with the intention of constraining Earth parameters and ice thickness. In addition, we assess how local
Investment is characterized by a trade-off between return and risk. Optimal portfolio construction is used to maximize return and minimize risk. The Liquidity Adjusted Capital Asset Pricing Model (LCAPM) is a recent development of the CAPM that accounts for liquidity. When combined with the CAPM, a liquidity indicator can help maximize return and minimize risk. The aim of this study is to compare the expected return and risk of stocks and to determine the proportions in the optimal portfolio. The sample consists of JII (Jakarta Islamic Index) stocks for the period January 2013 - November 2014. The results show that the expected return of the LCAPM portfolio is 0.0956 with a risk of 0.0043, with portfolio proportions of AALI (55.19%) and PGAS (44.81%).
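The portfolio figures above (expected return and risk for fixed stock proportions) follow from standard two-asset portfolio algebra. The sketch below illustrates that computation only; the per-asset expected returns and the covariance matrix are made-up numbers, not the paper's LCAPM estimates.

```python
import numpy as np

# Hypothetical two-asset portfolio sketch: given expected returns and a
# covariance matrix (illustrative values, not from the paper), compute the
# portfolio expected return and risk for the reported AALI/PGAS weights.
weights = np.array([0.5519, 0.4481])        # AALI / PGAS proportions
exp_returns = np.array([0.10, 0.09])        # hypothetical expected returns
cov = np.array([[0.004, 0.001],             # hypothetical covariance matrix
                [0.001, 0.005]])

port_return = float(weights @ exp_returns)  # weighted expected return
port_variance = float(weights @ cov @ weights)
port_risk = port_variance ** 0.5            # standard deviation as "risk"
```

The same algebra extends to any number of assets by enlarging the weight vector and covariance matrix.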
García-Fernández, Juan Antonio; Jurado-Navas, Antonio; Fernández-Navarro, Mariano
The present paper details a technique for adjusting in a smart manner the position estimates of any user equipment given by different geolocation/positioning methods in a wireless radiofrequency communication network based on different strategies (observed time difference of arrival, angle…)… of the geographical positions associated to all reported mobile terminals will be remarkably improved, independent of the geolocation technique employed. The proposed method will move each position estimate towards a previously calculated area of confidence in a smart manner. This reduced area of confidence… is generated to guarantee that the real position of any mobile terminal is inside it with a (Formula presented.) probability of certainty.
Hao, Zi-long; Liu, Yong; Chen, Ruo-wang
In view of the histogram equalization algorithm for image enhancement in digital image processing, an infrared image gray-level adaptive adjusting enhancement algorithm based on a gray-redundancy histogram-dealing technique is proposed. The algorithm first determines the overall gray value of the image, raises or lowers the image's overall gray value by adding appropriate gray points, and then uses the gray-level-redundancy HE method to compress the gray scale of the image. The algorithm can enhance image detail information. Through MATLAB simulation, this paper compares the algorithm with the histogram equalization method and with the algorithm based on the gray-redundancy histogram-dealing technique, and verifies the effectiveness of the algorithm.
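The baseline the abstract builds on, plain histogram equalization, maps each gray level through the normalized cumulative histogram. A minimal sketch on a toy low-contrast image (this is the standard HE transfer function, not the paper's gray-redundancy variant):

```python
import numpy as np

# Minimal histogram-equalization sketch: build the cumulative histogram of an
# 8-bit image and use it as a lookup table that stretches gray levels.
def equalize(img):
    hist = np.bincount(img.ravel(), minlength=256)
    cdf = hist.cumsum()
    cdf_min = cdf[cdf > 0][0]
    # classic HE transfer function, rescaled back to the 0..255 range
    lut = np.round((cdf - cdf_min) / (img.size - cdf_min) * 255).astype(np.uint8)
    return lut[img]

img = np.tile(np.arange(4, dtype=np.uint8) * 10, (4, 1))  # toy low-contrast image
out = equalize(img)
```

After equalization the toy image's gray levels span the full 0-255 range, which is the contrast stretch HE provides.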
Verfaillie, Deborah; Déqué, Michel; Morin, Samuel; Lafaysse, Matthieu
We introduce the method ADAMONT v1.0 to adjust and disaggregate daily climate projections from a regional climate model (RCM) using an observational dataset at hourly time resolution. The method uses a refined quantile mapping approach for statistical adjustment and an analogous method for sub-daily disaggregation. The method ultimately produces adjusted hourly time series of temperature, precipitation, wind speed, humidity, and short- and longwave radiation, which can in turn be used to force any energy balance land surface model. While the method is generic and can be employed for any appropriate observation time series, here we focus on the description and evaluation of the method in the French mountainous regions. The observational dataset used here is the SAFRAN meteorological reanalysis, which covers the entire French Alps split into 23 massifs, within which meteorological conditions are provided for several 300 m elevation bands. In order to evaluate the skills of the method itself, it is applied to the ALADIN-Climate v5 RCM using the ERA-Interim reanalysis as boundary conditions, for the time period from 1980 to 2010. Results of the ADAMONT method are compared to the SAFRAN reanalysis itself. Various evaluation criteria are used for temperature and precipitation but also snow depth, which is computed by the SURFEX/ISBA-Crocus model using the meteorological driving data from either the adjusted RCM data or the SAFRAN reanalysis itself. The evaluation addresses in particular the time transferability of the method (using various learning/application time periods), the impact of the RCM grid point selection procedure for each massif/altitude band configuration, and the intervariable consistency of the adjusted meteorological data generated by the method. Results show that the performance of the method is satisfactory, with similar or even better evaluation metrics than alternative methods. However, results for air temperature are generally better than for
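The core statistical-adjustment step, quantile mapping, can be sketched independently of ADAMONT itself. The following is a minimal empirical version on synthetic data (the actual method adds refinements such as weather-pattern selection and sub-daily disaggregation):

```python
import numpy as np

# Minimal empirical quantile-mapping sketch: map each model value to the
# observed value at the same quantile of the learning-period distributions.
def quantile_map(model_learn, obs_learn, model_apply, n_q=101):
    q = np.linspace(0.0, 1.0, n_q)
    mq = np.quantile(model_learn, q)       # model quantiles (learning period)
    oq = np.quantile(obs_learn, q)         # observed quantiles (learning period)
    # look up each new model value on the model quantiles, return the
    # corresponding observed value (linear interpolation between quantiles)
    return np.interp(model_apply, mq, oq)

rng = np.random.default_rng(0)
obs = rng.normal(0.0, 1.0, 5000)           # synthetic "observations"
mod = rng.normal(2.0, 1.5, 5000)           # biased, overdispersed "model" series
adjusted = quantile_map(mod, obs, mod)
```

After mapping, the adjusted series reproduces the observed distribution (mean and spread), which is exactly the bias-adjustment property quantile mapping is used for.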
Karbalaee, Negar; Hsu, Kuolin; Sorooshian, Soroosh; Braithwaite, Dan
This study explores using passive microwave (PMW) rainfall estimation for spatial and temporal adjustment of Precipitation Estimation from Remotely Sensed Information using Artificial Neural Networks-Cloud Classification System (PERSIANN-CCS). The PERSIANN-CCS algorithm collects information from infrared images to estimate rainfall. PERSIANN-CCS is one of the algorithms used in the Integrated Multisatellite Retrievals for GPM (Global Precipitation Mission) estimation for time periods in which PMW rainfall estimations are limited or not available. Continued improvement of PERSIANN-CCS will support Integrated Multisatellite Retrievals for GPM for current as well as retrospective estimations of global precipitation. This study takes advantage of the high spatial and temporal resolution of the GEO-based PERSIANN-CCS estimation and the more effective, but less frequently sampled, PMW estimation. The Probability Matching Method (PMM) was used to adjust the rainfall distribution of GEO-based PERSIANN-CCS toward that of the PMW rainfall estimation. The results show that a significant improvement of global PERSIANN-CCS rainfall estimation is obtained.
Objective: This study examines the genetic factors influencing four economic traits (phenotypes) of Hanwoo: oleic acid (C18:1), monounsaturated fatty acids, carcass weight, and marbling score. Methods: To enhance the accuracy of the genetic analysis, the study proposes a new statistical model that excludes environmental factors. A statistically adjusted analysis of covariance model of environmental and genetic factors was developed, and estimated environmental effects (covariate effects of age and effects of calving farms) were excluded from the model. Results: The accuracy was compared before and after adjustment. The accuracy of the best single nucleotide polymorphism (SNP) in C18:1 increased from 60.16% to 74.26%, and that of the two-factor interaction increased from 58.69% to 87.19%. Superior SNPs and SNP interactions were also identified using the multifactor dimensionality reduction method in Tables 1 to 4. Finally, high- and low-risk genotypes were compared based on their mean scores for each trait. Conclusion: The proposed method significantly improved the analysis accuracy and identified superior gene-gene interactions and genotypes for each of the four economic traits of Hanwoo.
Zhou Shumin; Sun Yamin; Tang Bin
In order to enhance the time synchronization quality of a distributed system, a time synchronization algorithm based on server time revision and workstation self-adjustment is proposed. The time-revision cycle and the self-adjustment process are introduced in the paper. The algorithm reduces network traffic effectively and enhances the quality of clock synchronization.
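A server time-revise / workstation self-adjust cycle of the kind described above can be sketched as follows. This is a hypothetical simplification, not the paper's algorithm: the offset estimate is the classic Cristian-style symmetric-delay formula, and the self-adjustment slews the clock by a fraction per cycle rather than jumping, which keeps local time monotonic.

```python
# Sketch of one revise/adjust cycle (hypothetical simplification).
def estimate_offset(t_request, t_server, t_reply):
    # Cristian-style estimate: assume the network delay is symmetric, so the
    # server's clock read applies to the midpoint of the round trip.
    rtt = t_reply - t_request
    return t_server + rtt / 2.0 - t_reply

def self_adjust(local_offset, measured_offset, gain=0.5):
    # move only part of the way toward the measured offset each revise cycle
    return local_offset + gain * (measured_offset - local_offset)

# Example: request sent at local time 100.0 s, server reports 105.0 s,
# reply received at local time 100.2 s.
offset = estimate_offset(t_request=100.0, t_server=105.0, t_reply=100.2)
corrected = self_adjust(0.0, offset)
```

Because only an offset estimate crosses the network once per cycle, the traffic cost per workstation stays small, consistent with the abstract's claim of reduced network flow.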
Borja, Susan E.; Callahan, Jennifer L.
This investigation sought to operationalize a comprehensive theoretical model, the Trauma Outcome Process Assessment, and test it empirically with structural equation modeling. The Trauma Outcome Process Assessment reflects a robust body of research and incorporates known ecological factors (e.g., family dynamics, social support) to explain…
Doo Yong Choi
Rapid detection of bursts and leaks in water distribution systems (WDSs) can reduce the social and economic costs incurred through direct loss of water into the ground, additional energy demand for water supply, and service interruptions. Many real-time burst detection models have been developed in accordance with the use of supervisory control and data acquisition (SCADA) systems and the establishment of district meter areas (DMAs). Nonetheless, no consideration has been given to how frequently a flow meter measures and transmits data for predicting breaks and leaks in pipes. This paper analyzes the effect of the sampling interval when an adaptive Kalman filter is used for detecting bursts in a WDS. A new sampling algorithm is presented that adjusts the sampling interval depending on the normalized residuals of flow after filtering. The proposed algorithm is applied to a virtual sinusoidal flow curve and real DMA flow data obtained from Jeongeup city in South Korea. The simulation results show that the self-adjusting algorithm for determining the sampling interval is efficient and maintains reasonable accuracy in burst detection. The proposed sampling method has significant potential for water utilities to build and operate real-time DMA monitoring systems combined with smart customer metering systems.
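The idea of driving the sampling interval from filter residuals can be sketched with a scalar random-walk Kalman filter. This is a simplified stand-in for the paper's scheme (the model, noise values, intervals, and threshold below are all hypothetical): when the normalized residual is large, a burst is suspected and the meter samples faster; when it is small, the meter samples slowly to save bandwidth.

```python
import numpy as np

# Scalar random-walk Kalman filter tracking DMA flow, plus a residual-driven
# sampling-interval rule (all parameters hypothetical).
def kalman_step(x, p, z, q=0.01, r=0.1):
    p = p + q                              # predict: inflate variance
    k = p / (p + r)                        # Kalman gain
    resid = z - x                          # innovation (pre-update residual)
    x = x + k * resid                      # update state estimate
    p = (1.0 - k) * p                      # update variance
    norm_resid = abs(resid) / np.sqrt(p + r)
    return x, p, norm_resid

def next_interval(norm_resid, base=900, fast=60, threshold=2.0):
    # seconds; shrink the interval when the residual looks anomalous
    return fast if norm_resid > threshold else base

x, p = 10.0, 1.0
x, p, nr = kalman_step(x, p, z=10.2)       # normal flow reading
interval_normal = next_interval(nr)
x, p, nr = kalman_step(x, p, z=25.0)       # sudden jump: possible burst
interval_burst = next_interval(nr)
```

The rule degrades gracefully: false alarms merely cost a few extra samples until the residual settles back below the threshold.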
Ion Gh. Rosca
Besides the models of M. Keynes, R.F. Harrod, E. Domar, D. Romer, the Ramsey-Cass-Koopmans model, etc., the R.M. Solow model belongs to the category of models that characterize economic growth. The aim of the paper is to measure economic growth and to present an extension of the adjusted R.M. Solow model.
Bonett, Douglas G.; Price, Robert M.
Adjusted Wald intervals for binomial proportions in one-sample and two-sample designs have been shown to perform about as well as the best available methods. The adjusted Wald intervals are easy to compute and have been incorporated into introductory statistics courses. An adjusted Wald interval for paired binomial proportions is proposed here and…
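For the one-sample case the abstract refers to, the adjusted Wald interval is the Agresti-Coull "add two successes and two failures" construction; the paired-proportion version proposed in the paper follows the same adjustment idea. A minimal sketch of the one-sample interval:

```python
import math

# Agresti-Coull adjusted Wald interval for a single binomial proportion:
# add 2 successes and 2 failures (appropriate for a 95% interval), then
# apply the ordinary Wald formula to the adjusted counts.
def adjusted_wald(x, n, z=1.96):
    x_adj, n_adj = x + 2.0, n + 4.0
    p = x_adj / n_adj
    half = z * math.sqrt(p * (1.0 - p) / n_adj)
    # clip to [0, 1] since the Wald formula can overshoot the parameter space
    return max(0.0, p - half), min(1.0, p + half)

lo, hi = adjusted_wald(x=10, n=40)         # 10 successes in 40 trials
```

The adjustment pulls the interval slightly toward 1/2, which is what repairs the poor small-sample coverage of the unadjusted Wald interval.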
Yao, Zili; Li, Jun; Dong, Gaojie
When dealing with high-resolution digital images, detection of feature points is usually the first important step. Valid feature points depend on the threshold: if the threshold is too low, plenty of feature points will be detected, and they may aggregate in rich-texture regions, which not only slows feature description but also burdens subsequent processing; if the threshold is set too high, feature points will be lacking in poor-texture areas. To solve these problems, this paper proposes a grid-based threshold auto-adjustment method for feature extraction. By dividing the image into a number of grid cells, a threshold is set in every local cell for extracting feature points. When the number of feature points does not meet the requirement, the threshold is adjusted automatically to change the final number of feature points. The experimental results show that the feature points produced by our method are more uniform and representative, which avoids aggregation of feature points and greatly reduces the complexity of subsequent processing.
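The per-cell adjustment loop can be sketched on a generic corner-response map. This is a hypothetical simplification of the scheme described above (the grid size, target count, and threshold schedule are illustrative, and a real detector would supply the response map):

```python
import numpy as np

# Sketch of per-grid threshold auto-adjustment: split a corner-response map
# into grid cells and, in each cell, relax the threshold until at least
# `want` candidate feature points pass it (hypothetical parameters).
def grid_thresholds(response, grid=2, want=3, t0=0.8, step=0.1, t_min=0.1):
    h, w = response.shape
    gh, gw = h // grid, w // grid
    counts = []
    for i in range(grid):
        for j in range(grid):
            cell = response[i * gh:(i + 1) * gh, j * gw:(j + 1) * gw]
            t = t0
            # texture-poor cells get a progressively lower threshold
            while (cell > t).sum() < want and t > t_min:
                t -= step
            counts.append(int((cell > t).sum()))
    return counts

rng = np.random.default_rng(1)
resp = rng.random((8, 8))
resp[:4, :4] *= 0.3                        # simulate a texture-poor quadrant
per_cell = grid_thresholds(resp)
```

Every cell, including the weak quadrant, ends up contributing feature candidates, which is the uniformity property the abstract claims.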
Benfeng, Zhang; Huafeng, Li; Sunan, Li
To meet the need for a multi-channel DC power supply to activate multiple macro fiber composite (MFC) actuators simultaneously, a novel multi-channel adjustable DC supply using a single-input single-output transformer based on spectral separation is proposed. A hybrid signal containing multiple frequency bands is boosted to obtain a high-voltage signal without changing the bands. Several frequency-selection circuits are then used to separate the individual signals in different frequency bands from the high-voltage signal. Finally, these signals are rectified and filtered respectively to obtain multiple DC voltages. The feasibility of the proposed scheme is analyzed theoretically and verified by simulation. The hybrid signal containing multiple frequency bands is constructed by an MCU (Micro Control Unit) and boosted using a push-pull boost circuit. Low-pass, band-pass and high-pass frequency-selection circuits are used to obtain the individual high-voltage signals in different frequency bands, and the amplitude-frequency response characteristics of these filters are simulated using PSpice. Experimental results prove that each part of the scheme runs reliably and that the output is stable and adjustable.
Olejnik, Michał; Szewc, Kamil; Pozorski, Jacek
Due to the Lagrangian nature of Smoothed Particle Hydrodynamics (SPH), the adaptive resolution remains a challenging task. In this work, we first analyse the influence of the simulation parameters and the smoothing length on solution accuracy, in particular in high strain regions. Based on this analysis we develop a novel approach to dynamically adjust the kernel range for each SPH particle separately, accounting for the local flow kinematics. We use the Okubo-Weiss parameter that distinguishes the strain and vorticity dominated regions in the flow domain. The proposed development is relatively simple and implies only a moderate computational overhead. We validate the modified SPH algorithm for a selection of two-dimensional test cases: the Taylor-Green flow, the vortex spin-down, the lid-driven cavity and the dam-break flow against a sharp-edged obstacle. The simulation results show good agreement with the reference data and improvement of the long-term accuracy for unsteady flows. For the lid-driven cavity case, the proposed dynamical adjustment remedies the problem of tensile instability (particle clustering).
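The Okubo-Weiss parameter used above to distinguish strain- and vorticity-dominated regions is W = s_n² + s_s² − ω², built from the velocity gradients. The sketch below computes it on a uniform grid with finite differences (a stand-in for the SPH gradient estimate used in the paper) and checks the sign convention on solid-body rotation, a purely vortical flow:

```python
import numpy as np

# Okubo-Weiss parameter W = s_n**2 + s_s**2 - omega**2 from velocity
# gradients on a uniform grid; W > 0 marks strain-dominated regions,
# W < 0 vorticity-dominated regions.
def okubo_weiss(u, v, dx=1.0, dy=1.0):
    du_dy, du_dx = np.gradient(u, dy, dx)  # gradients along (row=y, col=x)
    dv_dy, dv_dx = np.gradient(v, dy, dx)
    s_n = du_dx - dv_dy                    # normal strain
    s_s = dv_dx + du_dy                    # shear strain
    omega = dv_dx - du_dy                  # vorticity
    return s_n**2 + s_s**2 - omega**2

# Solid-body rotation u = -y, v = x: pure vorticity (omega = 2),
# so W should equal -4 everywhere.
y, x = np.mgrid[-2:3, -2:3].astype(float)
W = okubo_weiss(-y, x)
```

A per-particle SPH version would replace `np.gradient` with kernel-gradient sums over neighbours, after which the sign of W can steer the smoothing-length adjustment.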
Pompili, Cecilia; Shargall, Yaron; Decaluwe, Herbert; Moons, Johnny; Chari, Madhu; Brunelli, Alessandro
The objective of this study was to evaluate the performance of 3 thoracic surgery centres using the Eurolung risk models for morbidity and mortality. This was a retrospective analysis performed on data collected from 3 academic centres (2014-2016). Seven hundred and twenty-one patients in Centre 1, 857 patients in Centre 2 and 433 patients in Centre 3 who underwent anatomical lung resections were analysed. The Eurolung1 and Eurolung2 models were used to predict risk-adjusted cardiopulmonary morbidity and 30-day mortality rates. Observed and risk-adjusted outcomes were compared within each centre. The observed morbidity of Centre 1 was in line with the predicted morbidity (observed 21.1% vs predicted 22.7%, P = 0.31). Centre 2 performed better than expected (observed morbidity 20.2% vs predicted 26.7%, P < 0.001), whereas the observed morbidity of Centre 3 was higher than the predicted morbidity (observed 41.1% vs predicted 24.3%, P < 0.001). Centre 1 had higher observed mortality when compared with the predicted mortality (3.6% vs 2.1%, P = 0.005), whereas Centre 2 had an observed mortality rate significantly lower than the predicted mortality rate (1.2% vs 2.5%, P = 0.013). Centre 3 had an observed mortality rate in line with the predicted mortality rate (observed 1.4% vs predicted 2.4%, P = 0.17). The observed mortality rates in the patients with major complications were 30.8% in Centre 1 (versus predicted mortality rate 3.8%, P < 0.001), 8.2% in Centre 2 (versus predicted mortality rate 4.1%, P = 0.030) and 9.0% in Centre 3 (versus predicted mortality rate 3.5%, P = 0.014). The Eurolung models were successfully used as risk-adjusting instruments to internally audit the outcomes of 3 different centres, showing their applicability for future quality improvement initiatives. © The Author(s) 2018. Published by Oxford University Press on behalf of the European Association for Cardio-Thoracic Surgery. All rights reserved.
Karnon, J; Ali Afzali, H Haji; Gray, J; Holton, C; Banham, D; Beilby, J
Controlled evaluations are subject to uncertainty regarding their replication in the real world, particularly around systems of service provision. Using routinely collected data, we undertook a risk adjusted cost-effectiveness (RAC-E) analysis of alternative applied models of primary health care for the management of obese adult patients. Models were based on the reported level of involvement of practice nurses (registered or enrolled nurses working in general practice) in the provision of clinical-based activities. Linked, routinely collected clinical data describing clinical outcomes (weight, BMI, and obesity-related complications) and resource use (primary care, pharmaceutical, and hospital resource use) were collected. Potential confounders were controlled for using propensity weighted regression analyses. Relative to low level involvement of practice nurses in the provision of clinical-based activities to obese patients, high level involvement was associated with lower costs and better outcomes (more patients losing weight, and larger mean reductions in BMI). Excluding hospital costs, high level practice nurse involvement was associated with slightly higher costs. Incrementally, the high level model gets one additional obese patient to lose weight at an additional cost of $6,741, and reduces mean BMI by an additional one point at an additional cost of $563 (upper 95% confidence interval $1,547). Converted to quality adjusted life year (QALY) gains, the results provide a strong indication that increased involvement of practice nurses in clinical activities is associated with additional health benefits that are achieved at reasonable additional cost. Dissemination activities and incentives are required to encourage general practices to better integrate practice nurses in the active provision of clinical services. Copyright © 2013 The Obesity Society.
Verfaillie, Deborah; Déqué, Michel; Morin, Samuel; Lafaysse, Matthieu
Projections of future climate change have been increasingly called for lately, as the reality of climate change has been gradually accepted and societies and governments have started to plan upcoming mitigation and adaptation policies. In mountain regions such as the Alps or the Pyrenees, where winter tourism and hydropower production are large contributors to the regional revenue, particular attention is brought to current and future snow availability. The question of the vulnerability of mountain ecosystems as well as the occurrence of climate-related hazards such as avalanches and debris flows is also under consideration. In order to generate projections of snow conditions, however, downscaling global climate models (GCMs) by using regional climate models (RCMs) is not sufficient to capture the fine-scale processes and thresholds at play. In particular, the altitudinal resolution matters, since the phase of precipitation is mainly controlled by the temperature, which is altitude-dependent. Simulations from GCMs and RCMs moreover suffer from biases compared to local observations, due to their rather coarse spatial and altitudinal resolution, and often provide outputs at too coarse a time resolution to drive impact models. RCM simulations must therefore be adjusted using empirical-statistical downscaling and error correction methods before they can be used to drive specific models such as energy balance land surface models. In this study, time series of hourly temperature, precipitation, wind speed, humidity, and short- and longwave radiation were generated over the Pyrenees and the French Alps for the period 1950-2100, by using a new approach (named ADAMONT for ADjustment of RCM outputs to MOuNTain regions) based on quantile mapping applied to daily data, followed by time disaggregation accounting for weather pattern selection. We first introduce a thorough evaluation of the method using model runs from the ALADIN RCM driven by a global reanalysis over the
…aggressive economic development effort that began in the 1970s and acceler-… the diversity of some recent additions: Fantasia Confections, Inc., which… Confections, a San Francisco-based supplier of elegant desserts, a videoteam… in the Chamber of Commerce, then had city offices, and is now a private, non-…
Schmidt, Amand F; Klungel, Olaf H; Groenwold, Rolf H H
Postlaunch data on medical treatments can be analyzed to explore adverse events or relative effectiveness in real-life settings. These analyses are often complicated by the number of potential confounders and the possibility of model misspecification. We conducted a simulation study to compare the performance of logistic regression, propensity score, disease risk score, and stabilized inverse probability weighting methods to adjust for confounding. Model misspecification was induced in the independent derivation dataset. We evaluated performance using relative bias and confidence interval coverage of the true effect, among other metrics. At low events per coefficient (1.0 and 0.5), the logistic regression estimates had a large relative bias (greater than -100%). Bias of the disease risk score estimates was at most 13.48% and 18.83%, respectively. For the propensity score model, this was 8.74% and >100%, respectively. At events per coefficient of 1.0 and 0.5, inverse probability weighting frequently failed or reduced to a crude regression, resulting in biases of -8.49% and 24.55%. Coverage of the logistic regression estimates fell below the nominal level at events per coefficient ≤5. For the disease risk score, inverse probability weighting, and propensity score, coverage fell below nominal at events per coefficient ≤2.5, ≤1.0, and ≤1.0, respectively. Bias of misspecified disease risk score models was 16.55%. In settings with low events/exposed subjects per coefficient, disease risk score methods can be useful alternatives to logistic regression models, especially when propensity score models cannot be used. Despite the better performance of disease risk score methods compared with logistic regression and propensity score models in small events-per-coefficient settings, bias and coverage still deviated from nominal.
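The weighting idea being compared can be sketched in a toy setting with one binary confounder, where the propensity score can be estimated by stratum means rather than a fitted model. This is an illustration of stabilized inverse probability weighting in general, not the simulation design of the paper:

```python
import numpy as np

# Toy stabilized inverse-probability-weighting sketch: one binary confounder,
# confounded binary treatment, continuous outcome with true effect 2.0.
rng = np.random.default_rng(42)
n = 20000
c = rng.binomial(1, 0.5, n)                         # confounder
t = rng.binomial(1, np.where(c == 1, 0.7, 0.3))     # confounded treatment
y = 2.0 * t + 1.5 * c + rng.normal(0, 1, n)         # outcome, effect = 2.0

# propensity score estimated within confounder strata
ps = np.where(c == 1, t[c == 1].mean(), t[c == 0].mean())
# stabilized weights: marginal treatment probability in the numerator
w = np.where(t == 1, t.mean() / ps, (1 - t.mean()) / (1 - ps))

effect = (np.sum(w * t * y) / np.sum(w * t)
          - np.sum(w * (1 - t) * y) / np.sum(w * (1 - t)))
naive = y[t == 1].mean() - y[t == 0].mean()          # confounded comparison
```

The weighted contrast recovers the true effect while the naive difference absorbs the confounder's contribution, which is the behaviour the simulation study probes under harder conditions (many confounders, misspecification, few events).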
Cupelli, Daniela; Pasquale Nicoletta, Fiore; Vivacqua, Marco; Formoso, Patrizia [Dipartimento di Scienze Farmaceutiche, Universita della Calabria, 87036 Rende (CS) (Italy); Manfredi, Sabrina; De Filpo, Giovanni; Chidichimo, Giuseppe [Dipartimento di Chimica, Universita della Calabria, 87036 Rende (CS) (Italy)
The control of sunlight can be achieved either by electrochromic or by polymer-dispersed liquid crystal (PDLC) smart windows. We have recently shown that it is possible to homeotropically align fluid mixtures of a low-molecular-mass liquid crystal with a negative dielectric anisotropy and a liquid crystalline monomer, in order to obtain electrically switchable chromogenic devices. They are new materials useful for external glazing; in fact, they are not affected by the classical drawbacks of PDLCs. In this paper we present a new self-switchable glazing technology based on light-controlled transmittance in a PDLC device. The self-adjusting chromogenic material we obtain is able to self-increase its scattering as a function of the impinging light intensity. The relationship between the electro-optical response and the physical-chemical properties of the components has also been investigated.
He, Peng; Eriksson, Frank; Scheike, Thomas H.
With competing risks data, one often needs to assess the treatment and covariate effects on the cumulative incidence function. Fine and Gray proposed a proportional hazards regression model for the subdistribution of a competing risk under the assumption that the censoring distribution… and the covariates are independent. Covariate-dependent censoring sometimes occurs in medical studies. In this paper, we study the proportional hazards regression model for the subdistribution of a competing risk with proper adjustments for covariate-dependent censoring. We consider a covariate-adjusted weight… function by fitting the Cox model for the censoring distribution and using the predictive probability for each individual. Our simulation study shows that the covariate-adjusted weight estimator is basically unbiased when the censoring time depends on the covariates, and the covariate-adjusted weight…
Kiss, S.; Sarfraz, M.
Presents a method to connect VRML (Virtual Reality Modeling Language) and Java components in a Web page using EAI (External Authoring Interface), which makes it possible to interactively generate and edit VRML meshes. The meshes used are based on regular grids, to provide an interaction and modeling
Metallic electric split-ring resonators (SRRs) with feature sizes on the micrometer scale, connected by thin metal wires, are patterned to form a periodically distributed planar array. The arrayed metallic SRRs are fabricated on an n-doped gallium arsenide (n-GaAs) layer grown directly over a semi-insulating gallium arsenide (SI-GaAs) wafer. The patterned metal microstructures and the n-GaAs layer construct a Schottky diode, which can support an external voltage applied to modify the device properties. The developed architectures present typical functional metamaterial characteristics and are thus proposed to reveal voltage-adjusting characteristics in the transmission of terahertz waves at normal incidence. We also demonstrate the terahertz transmission characteristics of the voltage-controlled Fabry-Pérot-based metamaterial device, which is composed of arrayed metallic SRRs. To date, many metamaterials developed in earlier works have been used to regulate the transmission amplitude or phase at specific frequencies in the terahertz wavelength range, mainly dominated by the inductance-capacitance (LC) resonance mechanism. In our work, however, an external voltage-controlled metamaterial device is developed, and the extraordinary transmission regulation characteristics based on both the Fabry-Pérot (FP) resonance and the relatively weak surface plasmon polariton (SPP) resonance in the 0.025-1.5 THz range are presented. Our research therefore shows a potential application of the dual-mode-resonance-based metamaterial for improving terahertz transmission regulation.
Haji Ali Afzali, H; Gray, J; Beilby, J; Holton, C; Banham, D; Karnon, J
To determine the cost-effectiveness of alternative models of practice nurse involvement in the management of type 2 diabetes within the primary care setting. Linked routinely collected clinical data and resource use (general practitioner visits, hospital services and pharmaceuticals) were used to undertake a risk-adjusted cost-effectiveness analysis of alternative models of care for the management of diabetes patients. These models were based on the reported level of involvement of practice nurses in the provision of clinical-based activities. Potential confounders were controlled for by using propensity score-weighted regression analyses. The impact of alternative models of care on outcomes and costs was measured and incremental cost-effectiveness estimated. The uncertainty around the estimates of cost-effectiveness was illustrated through bootstrapping. Although the difference in total cost between two models of care was not statistically significant, the high-level model was associated with better outcomes (larger mean reductions in HbA(1c)). The upper 95% confidence intervals showed that the incremental cost per 1% decrease in HbA(1c) is only $454, and per one additional patient to achieve an HbA(1c) value of less than 53 mmol/mol (7.0%) is $323. Further analyses showed little uncertainty surrounding the decision to adopt the high-level model. The results provide a strong indication that the high-level model is a cost-effective way of managing diabetes patients. Our findings highlight the need for effective incentives to encourage general practices to better integrate practice nurses in the provision of clinical services. © 2013 The Authors. Diabetic Medicine © 2013 Diabetes UK.
Root, Bart; Tarasov, Lev; van der Wal, Wouter
The global ice budget is still under discussion because the observed 120-130 m of eustatic sea level equivalent since the Last Glacial Maximum (LGM) cannot be explained by the current knowledge of land-ice melt after the LGM. One possible location for the missing ice is the Barents Sea region, which was completely covered with ice during the LGM. This is deduced from relative sea level observations on Svalbard, Novaya Zemlya and the north coast of Scandinavia. However, there are no observations in the middle of the Barents Sea that capture the post-glacial uplift. With increased precision and longer time series of monthly gravity observations from the GRACE satellite mission, it is possible to constrain glacial isostatic adjustment (GIA) in the center of the Barents Sea. This study investigates the extra constraint provided by GRACE data for modeling the past ice geometry in the Barents Sea. We use CSR release 5 data from February 2003 to July 2013. The GRACE data are corrected for the past 10 years of secular decline of glacier ice on Svalbard, Novaya Zemlya and Franz Josef Land. With numerical GIA models for a radially symmetric Earth, we model the expected gravity changes and compare these with the GRACE observations after smoothing with a 250 km Gaussian filter. The comparisons show that, for the viscosity profile VM5a, ICE-5G has too strong a gravity signal compared to GRACE. The regionally calibrated ice sheet model (GLAC) of Tarasov appears to fit the amplitude of the GRACE signal. However, the GRACE data are very sensitive to the ice-melt correction, especially for Novaya Zemlya. Furthermore, the ice mass should be concentrated more toward the middle of the Barents Sea. Alternative viscosity models confirm these conclusions.
Harrowing is often used to reduce weed competition, generally using a constant intensity across a whole field. The efficacy of weed harrowing in wheat and barley can be optimized if site-specific conditions of soil, weed infestation and crop growth stage are taken into account. This study aimed to develop and test an algorithm to automatically adjust the harrowing intensity by varying the tine angle and number of passes. The field variability of crop leaf cover, weed density and soil density was acquired with geo-referenced sensors to investigate the harrowing selectivity and crop recovery. Crop leaf cover and weed density were assessed using bispectral cameras through differential image analysis. The draught force of the soil opposite to the direction of travel was measured with an electronic load cell sensor connected to a rigid tine mounted in front of the harrow. Optimal harrowing intensity levels were derived in previous experiments, based on the weed control efficacy and yield gain. The assessments of crop leaf cover, weed density and soil density were combined via rules with the aforementioned optimal intensities in a linguistic fuzzy inference system (LFIS). The system was evaluated in two field experiments that compared constant intensities with variable intensities inferred by the system. A higher weed density reduction could be achieved when the harrowing intensity was not kept constant along the cultivated plot. Varying the intensity tended to reduce the crop leaf cover, though slightly improving crop yield. A real-time intensity adjustment with this system is achievable if the cameras are attached at the front and at the rear or sides of the harrow.
Counterparty credit risk has become one of the highest-profile risks facing participants in the financial markets. Despite this, relatively little is known about how counterparty credit risk is actually priced mathematically. We examine this issue using interest rate swaps. This widely traded financial product allows us to identify clearly the risk profiles of both institutions and their counterparties. Concretely, the Hull-White model for the interest rate and a mean-reverting model for the default intensity have proven to correspond with reality and to be well suited for financial institutions. Besides, we find that the least squares Monte Carlo method is quite efficient in the calculation of the credit valuation adjustment (CVA), as it avoids the redundant step of generating inner scenarios; as a result, it accelerates the convergence speed of the CVA estimators. In the second part, we propose a new method to calculate bilateral CVA that avoids the double counting found in the existing literature, where several copula functions are adopted to describe the dependence of the two first-to-default times.
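As a minimal illustration of how a CVA estimate is assembled (not the paper's least squares Monte Carlo procedure itself), the sketch below combines a precomputed expected-exposure profile with a constant hazard rate; the exposure profile, hazard rate, recovery and discount rate are all hypothetical inputs:

```python
import math

def cva_unilateral(expected_exposure, hazard_rate, recovery, rate, dt):
    """Unilateral CVA ~ (1 - R) * sum_i DF(t_i) * EE(t_i) * PD(t_{i-1}, t_i).

    expected_exposure: expected exposures at times dt, 2*dt, ...
    hazard_rate: constant default intensity of the counterparty
    recovery: recovery rate R in [0, 1]
    rate: continuously compounded risk-free rate for discounting
    """
    cva = 0.0
    for i, ee in enumerate(expected_exposure, start=1):
        t0, t1 = (i - 1) * dt, i * dt
        df = math.exp(-rate * t1)                                       # discount factor
        pd = math.exp(-hazard_rate * t0) - math.exp(-hazard_rate * t1)  # marginal default prob.
        cva += (1.0 - recovery) * df * ee * pd
    return cva

# Flat exposure of 1.0 over 5 years in semi-annual steps, 2% hazard, 40% recovery
print(cva_unilateral([1.0] * 10, 0.02, 0.40, 0.02, 0.5))
```

With a zero hazard rate the adjustment vanishes, and it grows with the counterparty's default intensity, which matches the intuition behind pricing counterparty risk into a swap.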
Harry, Herbert H.
Apparatus and method for the adjustment and alignment of shafts in high power devices. A plurality of adjacent rotatable angled cylinders are positioned between a base and the shaft to be aligned which when rotated introduce an axial offset. The apparatus is electrically conductive and constructed of a structurally rigid material. The angled cylinders allow the shaft such as the center conductor in a pulse line machine to be offset in any desired alignment position within the range of the apparatus.
Domijan, Alexander, Jr.; Buchh, Tariq Aslam
A study of various aspects of Adjustable Speed Drives (ASD) is presented, including a summary of the relative merits of different ASD systems presently in vogue. The advantages of using microcomputer-based ASDs are now widely understood and accepted. Of the three most popular drive systems, namely the induction motor drive, the switched reluctance motor drive and the brushless DC motor drive, any one may be chosen; the choice depends on the nature of the application and its requirements. The suitability of the above-mentioned drive systems for a photovoltaic-array-driven ASD for an aerospace application is discussed, based on the experience of the authors, various researchers and industry. In chapter 2 a PV array power supply scheme is proposed; this scheme has enhanced reliability in addition to the other known advantages of the case where a stand-alone PV array feeds the heat pump. Chapter 3 presents the results of computer simulation of a PV-array-driven induction motor drive system, along with a discussion of these preliminary simulation results. Chapter 4 includes a brief discussion of various control techniques for three-phase induction motors. A discussion of different power devices and their various performance characteristics is given in chapter 5.
Kendall, W.L.; Hines, J.E.; Nichols, J.D.
Matrix population models are important tools for research and management of populations. Estimating the parameters of these models is an important step in applying them to real populations. Multistate capture-recapture methods have provided a useful means for estimating survival and parameters of transition between locations or life history states but have mostly relied on the assumption that the state occupied by each detected animal is known with certainty. Nevertheless, in some cases animals can be misclassified. Using multiple capture sessions within each period of interest, we developed a method that adjusts estimates of transition probabilities for bias due to misclassification. We applied this method to 10 years of sighting data for a population of Florida manatees (Trichechus manatus latirostris) in order to estimate the annual probability of transition from nonbreeding to breeding status. Some sighted females were unequivocally classified as breeders because they were clearly accompanied by a first-year calf. The remainder were classified, sometimes erroneously, as nonbreeders because an attendant first-year calf was not observed or was classified as more than one year old. We estimated a conditional breeding probability of 0.31 ± 0.04 (estimate ± 1 SE) when we ignored misclassification bias, and 0.61 ± 0.09 when we accounted for misclassification.
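The intuition behind the correction can be sketched with a deliberately simplified calculation (the study itself fits a multistate capture-recapture likelihood; the detection probability below is a hypothetical stand-in): if breeders are only recorded as such when the attendant calf is detected, the apparent breeding probability underestimates the true one by roughly that detection factor.

```python
def adjust_breeding_probability(p_observed, detection_prob):
    """Naive misclassification correction: observed ~ true * Pr(calf detected)."""
    if not 0.0 < detection_prob <= 1.0:
        raise ValueError("detection probability must be in (0, 1]")
    return min(p_observed / detection_prob, 1.0)

# With ~51% of calves detected, an apparent rate of 0.31 implies roughly 0.61
print(adjust_breeding_probability(0.31, 0.51))
```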
Soares, Ana Paula; Guisande, M Adelina; Diniz, António M; Almeida, Leandro S
This article presents a model of the interaction of personal and contextual variables in the prediction of academic performance and psychosocial development of Portuguese college students. The sample consists of 560 first-year students at the University of Minho. The path analysis results suggest that students' initial expectations of their involvement in academic life were an effective predictor of their involvement during the first year; likewise, the social climate of the classroom influenced their involvement, well-being and levels of satisfaction. However, these relationships were not strong enough to influence the criterion variables in the model (academic performance and psychosocial development). Academic performance was predicted by high school grades and college entrance examination scores, and the level of psychosocial development was determined by the level of development shown at the time students entered college. Though more research is needed, these results point to the importance of students' pre-college characteristics when considering the quality of their college adjustment process.
King, Daniel W; King, Lynda A; Park, Crystal L; Lee, Lewina O; Kaiser, Anica Pless; Spiro, Avron; Moore, Jeffrey L; Kaloupek, Danny G; Keane, Terence M
A longitudinal lifespan model of factors contributing to later-life positive adjustment was tested on 567 American repatriated prisoners from the Vietnam War. This model encompassed demographics at time of capture and attributes assessed after return to the U.S. (reports of torture and mental distress) and approximately 3 decades later (later-life stressors, perceived social support, positive appraisal of military experiences, and positive adjustment). Age and education at time of capture and physical torture were associated with repatriation mental distress, which directly predicted poorer adjustment 30 years later. Physical torture also had a salutary effect, enhancing later-life positive appraisals of military experiences. Later-life events were directly and indirectly (through concerns about retirement) associated with positive adjustment. Results suggest that the personal resources of older age and more education and early-life adverse experiences can have cascading effects over the lifespan to impact well-being in both positive and negative ways.
Tron Anders Moger
Objectives: Health-care performance comparisons across countries are gaining popularity. In such comparisons, the risk adjustment methodology plays a key role in producing meaningful comparisons. However, comparisons may be complicated by the fact that not all participating countries are allowed to share their data across borders, meaning that only simple methods are easily used for the risk adjustment. In this study, we develop a pragmatic approach using patient-level register data from Finland, Hungary, Italy, Norway, and Sweden. Methods: Data on acute myocardial infarction patients were gathered from health-care registers in several countries. In addition to unadjusted estimates, we studied the effects of adjusting for age, gender, and a number of comorbidities. The stability of estimates for 90-day mortality and length of stay of the first hospital episode following diagnosis of acute myocardial infarction is studied graphically, using different choices of reference data. Logistic regression models are used for mortality, and negative binomial models are used for length of stay. Results: Results from the sensitivity analysis show that the various models of risk adjustment give similar results for the countries, with some exceptions for Hungary and Italy. Based on the results, in Finland and Hungary, the 90-day mortality after acute myocardial infarction is higher than in Italy, Norway, and Sweden. Conclusion: Health-care registers offer encouraging possibilities for performance measurement and enable the comparison of entire patient populations between countries. Risk adjustment methodology is affected by the availability of data, and thus the building of risk adjustment methodology must be transparent, especially when doing multinational comparative research. In that case, even basic methods of risk adjustment may still be valuable.
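As a sketch of the kind of risk adjustment described here, the snippet below applies a logistic model for 90-day mortality and summarizes performance as an observed-to-expected ratio (indirect standardization). The coefficients and patient records are entirely hypothetical; the study's actual model specification is not reproduced.

```python
import math

def predict_mortality(age, female, n_comorbidities, coef):
    """Logistic model: log-odds = b0 + b_age*age + b_sex*female + b_cm*comorbidities."""
    z = (coef["intercept"] + coef["age"] * age
         + coef["female"] * female + coef["comorb"] * n_comorbidities)
    return 1.0 / (1.0 + math.exp(-z))

def observed_to_expected(patients, observed_deaths, coef):
    """Risk-adjusted summary: observed deaths / model-expected deaths."""
    expected = sum(predict_mortality(a, f, c, coef) for a, f, c in patients)
    return observed_deaths / expected

# Hypothetical coefficients and a tiny cohort of (age, female, comorbidities)
coef = {"intercept": -6.0, "age": 0.06, "female": -0.2, "comorb": 0.4}
cohort = [(75, 1, 2), (62, 0, 0), (80, 0, 3)]
print(observed_to_expected(cohort, 1, coef))
```

A ratio above 1 would indicate more deaths than the case mix predicts; values near 1 suggest performance in line with the risk model.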
Dalby, Dawn M; Hirdes, John P; Fries, Brant E
Background There has been increasing interest in enhancing accountability in health care. As such, several methods have been developed to compare the quality of home care services. These comparisons can be problematic if client populations vary across providers and no adjustment is made to account for these differences. The current paper explores the effects of risk adjustment for a set of home care quality indicators (HCQIs) based on the Minimum Data Set for Home Care (MDS-HC). Methods A total of 22 home care providers in Ontario and the Winnipeg Regional Health Authority (WRHA) in Manitoba, Canada, gathered data on their clients using the MDS-HC. These assessment data were used to generate HCQIs for each agency and for the two regions. Three types of risk adjustment methods were contrasted: a) client covariates only; b) client covariates plus an "Agency Intake Profile" (AIP) to adjust for ascertainment and selection bias by the agency; and c) client covariates plus the intake Case Mix Index (CMI). Results The mean age and gender distribution in the two populations was very similar. Across the 19 risk-adjusted HCQIs, Ontario CCACs had a significantly higher AIP adjustment value for eight HCQIs, indicating a greater propensity to trigger on these quality issues on admission. On average, Ontario had unadjusted rates that were 0.3% higher than the WRHA. Following risk adjustment with the AIP covariate, Ontario rates were, on average, 1.5% lower than the WRHA. In the WRHA, individual agencies were likely to experience a decline in their standing, whereby they were more likely to be ranked among the worst performers following risk adjustment. The opposite was true for sites in Ontario. Conclusions Risk adjustment is essential when comparing quality of care across providers when home care agencies provide services to populations with different characteristics. While such adjustment had a relatively small effect for the two regions, it did substantially affect the
Li, Jun; Li, Huicong; Li, Weiwei
The rapid development of combined heat and power generation in large power plants has imposed tremendous constraints on the load adjustment of power grids and power plants. By introducing the thermodynamic system of a thermal power unit, the relationship between heating extraction steam and the unit's load is analyzed and calculated. The practical application results show that the power capability of the unit is affected by extraction, which is not conducive to adjusting the grid frequency. By monitoring the load adjustment capacity of thermal power units, especially combined heat and power generating units, the upper and lower limits of the unit load can be dynamically adjusted by the operator on the grid side. The grid regulation and control departments can effectively control the load-adjustable intervals of the operating units and provide reliable support for the cooperative action of the power grid and power plants, ensuring the safety and stability of the power grid.
Punamäki, R L; Qouta, S; el Sarraj, E
The relations between traumatic events, perceived parenting styles, children's resources, political activity, and psychological adjustment were examined among 108 Palestinian boys and girls of 11-12 years of age. The results showed that exposure to traumatic events increased psychological adjustment problems directly and via 2 mediating paths. First, the more traumatic events children had experienced, the more negative parenting they experienced. And, the poorer they perceived parenting, the more they suffered from high neuroticism and low self-esteem. Second, the more traumatic events children had experienced, the more political activity they showed, and the more active they were, the more they suffered from psychological adjustment problems. Good perceived parenting protected children's psychological adjustment by making them less vulnerable in two ways. First, traumatic events decreased their intellectual, creative, and cognitive resources, and a lack of resources predicted many psychological adjustment problems in a model excluding perceived parenting. Second, political activity increased psychological adjustment problems in the same model, but not in the model including good parenting.
Rabin, Marla J.; Spinrad, Paul R.; Fall, Thomas C.
Systems that assess the real world must cope with evidence that is uncertain, ambiguous, and spread over time. Typically, the most important function of an assessment system is to identify when activities are occurring that are unusual or unanticipated. Model-based temporal reasoning addresses both of these requirements. The differences among temporal reasoning schemes lie in the methods used to avoid computational intractability. If we had n pieces of data and we wanted to examine how they were related, the worst case would be where we had to examine every subset of these points to see if that subset satisfied the relations. This would be 2^n subsets, which is intractable. Models compress this: if several data points are all compatible with a model, then that model represents all those data points. Data points are then considered related if they lie within the same model or if they lie in models that are related. Models thus address the intractability problem. They also address the problem of determining unusual activities: if the data do not agree with models that are indicated by earlier data, then something out of the norm is taking place. The models can summarize what we know up to that time, so when they are not predicting correctly, either something unusual is happening or we need to revise our models. The model-based reasoner developed at Advanced Decision Systems is thus both intuitive and powerful. It is currently being used on one operational system and several prototype systems. It has enough power to be used in domains spanning the spectrum from manufacturing engineering and project management to low-intensity conflict and strategic assessment.
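The combinatorial blow-up described above is easy to make concrete; the snippet below (illustrative only) enumerates every subset of a small data set and shows why the 2^n worst case is hopeless even for modest n:

```python
from itertools import combinations

def all_subsets(points):
    """Yield every subset of the data -- 2**n candidates in total."""
    for r in range(len(points) + 1):
        yield from combinations(points, r)

print(sum(1 for _ in all_subsets(range(10))))  # 1024 subsets for just 10 points
print(2 ** 30)  # at 30 points we would already face over a billion subsets
```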
Kunusch, C.; Puleston, P.F.; More, J.J. [LEICI, Departamento de Electrotecnia, Universidad Nacional de La Plata, calle 1 esq. 47 s/n, 1900 La Plata (Argentina); Consejo de Investigaciones Cientificas y Tecnicas (CONICET) (Argentina); Husar, A. [Institut de Robotica i Informatica Industrial (CSIC-UPC), c/ Llorens i Artigas 4-6, 08028 Barcelona (Spain); Mayosky, M.A. [LEICI, Departamento de Electrotecnia, Universidad Nacional de La Plata, calle 1 esq. 47 s/n, 1900 La Plata (Argentina); Comision de Investigaciones Cientificas (CIC), Provincia de Buenos Aires (Argentina)
In the context of fuel cell stack control, a major challenge is modeling the interdependence of various complex subsystem dynamics. In many cases, the interaction of states is modeled through several look-up tables, decision blocks and piecewise continuous functions. Many internal variables are inaccessible for measurement and cannot be used in control algorithms. To make significant contributions in this area, it is necessary to develop reliable models for control and design purposes. In this paper, a linear model based on experimental identification of a 7-cell stack was developed. The procedure followed to obtain a linear model of the system consisted of performing spectroscopy tests on four different single-input single-output subsystems. The considered inputs for the tests were the stack current and the cathode oxygen flow rate, while the measured outputs were the stack voltage and the cathode total pressure. The resulting model can be used either for model-based control design or for on-line analysis and error detection. (author)
Sodikin; Sitorus, S. R. P.; Prasetyo, L. B.; Kusmana, C.
Indramayu Regency has the largest mangrove area in West Java. According to the Ministry of Environment and Forestry, Indramayu district is targeted to become the central mangrove area of Indonesia. Since the 1990s, mangroves in the regency have experienced a significant decline caused by the conversion of mangrove land into ponds and settlements. To stop this ongoing decline, it is necessary to rehabilitate mangroves in the area. Rehabilitation should take place in areas suitable for mangrove growth, with vegetation appropriate to each area, so the purpose of this research is to analyze land suitability for mangroves in Indramayu Regency. This research uses a geographic information system with an overlay technique; the data used include a tidal map, salinity map, soil pH map, soil texture map, sea level rise map, land use map, community participation level map, and organic soil map. These maps were overlaid and matched against a matrix of environmental parameters for mangrove growth. Based on the results of the analysis, five suitable mangrove types were identified in Indramayu District: Bruguiera, Sonneratia, Nypa, Rhizophora, and Avicennia, with suitable areas of 6260 ha, 2958 ha, 1756 ha, 936 ha, and 433 ha, respectively.
Zhang, Yumin; Han, Xueshan; Wang, Yong; Zhang, Li; Yang, Guangsen; Sun, Donglei; Wang, Bolun
A previous study proposed a novel analysis and forecast method for electricity business expansion based on seasonal adjustment; we extend this work to include effects from the micro and macro aspects, respectively. From the micro aspect, we introduce the concept of load factor to forecast the stable value of electricity consumption of a single new consumer after the installation of new high-voltage transformer capacity. From the macro aspect, considering that the growth of business expansion is also stimulated by the growth of electricity sales, it is necessary to analyse the antecedent relationship between business expansion and electricity sales. First, we forecast electricity consumption of the customer group and release rules of expanding capacity, respectively. Second, we contrast the degree of fitting and prediction accuracy to find out the antecedence relationship and analyse the reason; this can also be used as a contrast to observe the influence of customer groups in different ranges on prediction precision. Finally, simulation results indicate that the proposed method is accurate and helps determine the value of expanding capacity and electricity consumption.
Liu, Yingnan; Wang, Ke
The process of energy conservation and emission reduction in China requires the specific and accurate evaluation of the energy efficiency of the industry sector because this sector accounts for 70 percent of China's total energy consumption. Previous studies have used a “black box” DEA (data envelopment analysis) model to obtain the energy efficiency without considering the inner structure of the industry sector. However, differences in the properties of energy utilization (final consumption or intermediate conversion) in different industry departments may lead to bias in energy efficiency measures under such “black box” evaluation structures. Using the network DEA model and efficiency decomposition technique, this study proposes an adjusted energy efficiency evaluation model that can characterize the inner structure and associated energy utilization properties of the industry sector so as to avoid evaluation bias. By separating the energy-producing department and energy-consuming department, this adjusted evaluation model was then applied to evaluate the energy efficiency of China's provincial industry sector. - Highlights: • An adjusted network DEA (data envelopment analysis) model for energy efficiency evaluation is proposed. • The inner structure of industry sector is taken into account for energy efficiency evaluation. • Energy final consumption and energy intermediate conversion processes are separately modeled. • China's provincial industry energy efficiency is measured through the adjusted model.
Larreteguy, A.E; Mazufri, C.M
The adjust and control system mechanism, MSAC, is an advanced, and in some senses unique, hydromechanical device. The efforts in modeling this mechanism are aimed to: get a deep understanding of the physical phenomena involved; identify the set of parameters relevant to the dynamics of the system; allow the numerical simulation of the system; predict the behavior of the mechanism in conditions other than those obtainable within the range of operation of the experimental setup (CEM); and help in defining the design of the CAPEM (loop for testing the mechanism under high-pressure/high-temperature conditions). Thanks to the close interaction between the mechanics, the experimenters, and the modelists that compose the MSAC task force, it has been possible to suggest improvements not only in the design of the mechanism, but also in the design and operation of the pulse generator (GDP) and the rest of the CEM. This effort has led to a design mature enough to be tested in a high-pressure loop.
Dujardin, B; Dujardin, M; Hermans, I
Over the last two decades, multiple studies have been conducted and many articles published about Structural Adjustment Programmes (SAPs). These studies mainly describe the characteristics of SAPs and analyse their economic consequences as well as their effects upon a variety of sectors: health, education, agriculture and environment. However, very few focus on the sociological and cultural effects of SAPs. Following a summary of SAPs' content and characteristics, the paper briefly discusses the historical course of SAPs and the different critiques that have been made. The cultural consequences of SAPs are introduced and described on four different levels: political, community, familial, and individual. These levels are analysed through examples from the literature and individual testimonies from people in the Southern Hemisphere. The paper concludes that SAPs, alongside economic globalisation processes, are responsible for an acute breakdown of social and cultural structures in societies in the South. It should be a priority not only to better understand the situation and its determining factors, but also to intervene and act with strategies that support and reinvest in the social and cultural sectors, which is vital in order to allow individuals and communities in the South to strengthen their autonomy and identity.
The paper presents a method for measuring the amount of hoar frost formation in the recuperation channels of ventilation systems using an adjustable mathematical model of the hoar frost process. The principle is based on the fact that the adjustment loop of the hoar frost model is included in the measurement in accordance with the measured pressure drop, which is proportional to the amount of hoar frost. Unlike known measurement methods, it is proposed to use the state variables of the mathematical model as the measured value; these state variables are not subject to non-deterministic interference and random influences. The paper presents simulation results confirming the adequacy of the dynamic model. In conclusion, an example of the use of a recuperation channel in a defrost management system is given.
Validation results for the latest version of the TaD model (TaDv2) show realistic reconstruction of electron density profiles (EDPs) with an average error of 3 TECU, similar to the error obtained from GNSS-TEC calculated parameters. The work presented here aims to further improve the accuracy of the TaD topside reconstruction by adjusting the TEC parameter calculated from the TaD model with the TEC parameter calculated from RINEX files provided by GNSS receivers co-located with the Digisondes. The performance of the new version is tested during a storm period, demonstrating further improvements with respect to the previous version. Statistical comparison of modeled and observed TEC confirms the validity of the proposed adjustment. A significant benefit of the proposed upgrade is that it facilitates the real-time implementation of TaD. The model needs a reliable measure of the scale height at the peak height, which is supposed to be provided by Digisondes. Often, the automatic scaling software fails to correctly calculate the scale height at the peak, Hm, due to interference in the received signal. Consequently, the model-estimated topside scale height is wrongly calculated, leading to unrealistic results for the modeled EDP. The proposed TEC adjustment forces the model to correctly reproduce the topside scale height, despite inaccurate values of Hm. This adjustment is very important for the application of TaD in an operational environment.
Agbayani, Kristina A; Hiscock, Merrill
A previous study found that the Flynn effect accounts for 85% of the normative difference between 20- and 70-year-olds on subtests of the Wechsler intelligence tests. Adjusting scores for the Flynn effect substantially reduces normative age-group differences, but the appropriate amount of adjustment is uncertain. The present study replicates previous findings and employs two other methods of adjusting for the Flynn effect. Averaged across models, results indicate that the Flynn effect accounts for 76% of normative age-group differences on Wechsler IQ subtests. Flynn-effect adjustment reduces the normative age-related decline in IQ from 4.3 to 1.1 IQ points per decade.
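A back-of-envelope check of these figures (purely arithmetic, not the study's adjustment procedure): removing the share of the normative decline attributable to the Flynn effect from the observed per-decade decline should land near the reported adjusted value.

```python
def flynn_adjusted_decline(observed_decline_per_decade, flynn_share):
    """Portion of the normative IQ decline remaining after Flynn-effect adjustment."""
    return observed_decline_per_decade * (1.0 - flynn_share)

# 4.3 points/decade observed, with ~76% attributed to the Flynn effect
print(flynn_adjusted_decline(4.3, 0.76))  # ~1.0, close to the reported 1.1
```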
Chen, Li-Sheng; Yen, Amy Ming-Fang; Duffy, Stephen W; Tabar, Laszlo; Lin, Wen-Chou; Chen, Hsiu-Hsi
Population-based routine service screening has gained popularity following an era of randomized controlled trials. The evaluation of these service screening programs is subject to study design, data availability, and precise data analysis for adjusting bias. We developed a computer-aided system for the evaluation of population-based service screening that unifies these aspects and facilitates and guides the program assessor in efficiently performing an evaluation. This system underpins two experimental designs, the posttest-only non-equivalent design and the one-group pretest-posttest design, and demonstrates the type of data required at both the population and individual levels. Three major analyses were developed: a cumulative mortality analysis, survival analysis with lead-time adjustment, and self-selection bias adjustment. We used SAS AF software to develop a graphic interface system with a pull-down menu style. We demonstrate the application of this system with data obtained from a Swedish population-based service screening program and a population-based randomized controlled trial for the screening of breast, colorectal, and prostate cancer, and one service screening program for cervical cancer with Pap smears. The system provided automated descriptive results based on the various sources of available data and cumulative mortality curves corresponding to the study designs. The comparison of cumulative survival between clinically and screen-detected cases without lead-time adjustment is also demonstrated. The intention-to-treat and noncompliance analyses with self-selection bias adjustments are also shown to assess the effectiveness of the population-based service screening program. Model validation consisted of a comparison between our adjusted self-selection bias estimates and the empirical results on effectiveness reported in the literature. We demonstrate a computer-aided system allowing the evaluation of population-based service screening
Rashida, A. R.; Fadzli, Mohammad; Ibrahim, Safwati; Goh, Siti Rohana
This paper studies a discrete event simulation (DES) as a computer-based modelling approach that imitates a real system, the pharmacy unit. M/M/c queuing theory is used to model and analyse the characteristics of the queuing system at the pharmacy unit of Hospital Tuanku Fauziah, Kangar in Perlis, Malaysia. The input of this model is based on statistical data collected over 20 working days in June 2014. Currently, the patient waiting time at the pharmacy unit is more than 15 minutes. The actual operation of the pharmacy unit is a mixed queuing server with an M/M/2 queuing model, where the pharmacists are the servers. The DES approach and ProModel simulation software are used to simulate the queuing model and to propose improvements to the queuing system of this pharmacy. The waiting time for each server is analysed; Counters 3 and 4 have the highest waiting times, 16.98 and 16.73 minutes respectively. Three scenarios (M/M/3, M/M/4 and M/M/5) are simulated, and the waiting times of the actual and experimental queuing models are compared. The simulation results show that adding a server (pharmacist) reduces patient waiting time considerably: average patient waiting time falls by almost 50% when one pharmacist is added to the counters. However, it is not necessary to fully staff all counters: even though M/M/4 and M/M/5 produce further reductions in patient waiting time, they are ineffective since Counter 5 is rarely used.
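The M/M/c waiting-time comparison described above can be sketched analytically with the standard Erlang-C formula; the arrival rate, service rate and server counts below are illustrative placeholders, not the hospital's measured parameters.

```python
from math import factorial

def erlang_c(c, lam, mu):
    """Probability that an arriving patient must wait (Erlang-C) in an M/M/c queue."""
    a = lam / mu                          # offered load in Erlangs
    rho = a / c                           # server utilisation; must be < 1 for stability
    top = a ** c / factorial(c)
    bottom = (1 - rho) * sum(a ** k / factorial(k) for k in range(c)) + top
    return top / bottom

def mean_wait(c, lam, mu):
    """Mean time in queue Wq for an M/M/c system."""
    return erlang_c(c, lam, mu) / (c * mu - lam)

# Adding a server sharply reduces the mean wait, as in the simulated scenarios.
w2 = mean_wait(2, 1.5, 1.0)   # two pharmacists
w3 = mean_wait(3, 1.5, 1.0)   # three pharmacists
```

With these illustrative rates, going from two to three servers cuts the mean queueing time by well over half, mirroring the roughly 50% reduction the simulation reports.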
Rubin, Maria Laura; Chan, Wenyaw; Yamal, Jose-Miguel; Robertson, Claudia Sue
The use of longitudinal measurements to predict a categorical outcome is an increasingly common goal in research studies. Joint models are commonly used to describe two or more models simultaneously by considering the correlated nature of their outcomes and the random error present in the longitudinal measurements. However, there is limited research on joint models with longitudinal predictors and categorical cross-sectional outcomes. Perhaps the most challenging task is how to model the longitudinal predictor process such that it represents the true biological mechanism that dictates the association with the categorical response. We propose a joint logistic regression and Markov chain model to describe a binary cross-sectional response, where the unobserved transition rates of a two-state continuous-time Markov chain are included as covariates. We use the method of maximum likelihood to estimate the parameters of our model. In a simulation study, coverage probabilities of about 95%, standard deviations close to standard errors, and low biases for the parameter values show that our estimation method is adequate. We apply the proposed joint model to a dataset of patients with traumatic brain injury to describe and predict a 6-month outcome based on physiological data collected post-injury and admission characteristics. Our analysis indicates that the information provided by physiological changes over time may help improve prediction of long-term functional status of these severely ill subjects. Copyright © 2017 John Wiley & Sons, Ltd.
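The two-state continuous-time Markov chain underlying the longitudinal predictor has closed-form transition probabilities, which a sketch like the following can compute (the rates are illustrative, not estimates from the traumatic brain injury data):

```python
from math import exp

def two_state_transition(alpha, beta, t):
    """Transition matrix P(t) of a two-state CTMC with transition rates
    alpha (state 0 -> 1) and beta (state 1 -> 0)."""
    s = alpha + beta
    decay = exp(-s * t)
    p01 = (alpha / s) * (1 - decay)   # P(X(t)=1 | X(0)=0)
    p10 = (beta / s) * (1 - decay)    # P(X(t)=0 | X(0)=1)
    return [[1 - p01, p01],
            [p10, 1 - p10]]
```

In a joint model of the kind described, the subject-specific rates (here alpha, beta) are the unobserved quantities that enter the logistic regression for the binary outcome as covariates.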
We consider a phased reform of the economic model of Russia. In less than one century, Russia experienced both extremes of the economic model: developed socialism (1917) and perfect capitalism (1991). Under each of them socio-economic development was unstable: economic recovery alternated with recession, and the huge reserves of natural resources and land were not always developed and used effectively. At each extreme the choice was based largely on the political aims of the moment and on attitudes formed by various social groups. The economic situation Russia has reached and its prevailing socio-economic model have been subjected to much fair criticism. To improve the situation, a phased approach to reform is proposed, in which the main focus is on "how" to move to a new state. The approach is based on a scenario treatment of reforming the basic components of the economic model, which involves forming a better scenario through analysis and an evaluation by the expert community of how closely the planned versions of the model match the country's national development objectives.
...; requirements which span the conflict spectrum. The Army's current staff training simulation development process could better support all possible scenarios by making some fundamental adjustments and borrowing commercial business practices...
Damman, Olga C.; Stubbe, Janine H.; Hendriks, Michelle; Arah, Onyebuchi A.; Spreeuwenberg, Peter; Delnoij, Diana M. J.; Groenewegen, Peter P.
Background: Ratings on the quality of healthcare from the consumer's perspective need to be adjusted for consumer characteristics to ensure fair and accurate comparisons between healthcare providers or health plans. Although multilevel analysis is already considered an appropriate method for
Noma, Hisashi; Nagashima, Kengo; Maruo, Kazushi; Gosho, Masahiko; Furukawa, Toshi A
In network meta-analyses that synthesize direct and indirect comparison evidence concerning multiple treatments, multivariate random effects models have been routinely used for addressing between-studies heterogeneities. Although their standard inference methods depend on large sample approximations (eg, restricted maximum likelihood estimation) for the number of trials synthesized, the numbers of trials are often moderate or small. In these situations, standard estimators cannot be expected to behave in accordance with asymptotic theory; in particular, confidence intervals cannot be assumed to exhibit their nominal coverage probabilities (also, the type I error probabilities of the corresponding tests cannot be retained). The invalidity issue may seriously influence the overall conclusions of network meta-analyses. In this article, we develop several improved inference methods for network meta-analyses to resolve these problems. We first introduce 2 efficient likelihood-based inference methods, the likelihood ratio test-based and efficient score test-based methods, in a general framework of network meta-analysis. Then, to improve the small-sample inferences, we develop improved higher-order asymptotic methods using Bartlett-type corrections and bootstrap adjustment methods. The proposed methods adopt Monte Carlo approaches using parametric bootstraps to effectively circumvent complicated analytical calculations of case-by-case analyses and to permit flexible application to various statistical models of network meta-analysis. These methods can also be straightforwardly applied to multivariate meta-regression analyses and to tests for the evaluation of inconsistency. In numerical evaluations via simulations, the proposed methods generally performed well compared with the ordinary restricted maximum likelihood-based inference method. Applications to 2 network meta-analysis datasets are provided. Copyright © 2017 John Wiley & Sons, Ltd.
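The parametric bootstrap idea can be illustrated in the simplest pairwise (univariate) random-effects setting. The sketch below uses a DerSimonian-Laird estimate and percentile intervals; it is only a toy analogue of the multivariate network-meta-analysis methods developed in the paper.

```python
import random

def dl_pooled(y, v):
    """DerSimonian-Laird pooled effect and between-study variance tau^2
    for study effects y with within-study variances v."""
    w = [1.0 / vi for vi in v]
    sw = sum(w)
    y_fixed = sum(wi * yi for wi, yi in zip(w, y)) / sw
    q = sum(wi * (yi - y_fixed) ** 2 for wi, yi in zip(w, y))
    c = sw - sum(wi ** 2 for wi in w) / sw
    tau2 = max(0.0, (q - (len(y) - 1)) / c)
    w_re = [1.0 / (vi + tau2) for vi in v]
    return sum(wi * yi for wi, yi in zip(w_re, y)) / sum(w_re), tau2

def bootstrap_ci(y, v, n_boot=2000, alpha=0.05, seed=1):
    """Parametric bootstrap percentile CI: resample studies from the fitted
    normal-normal model and re-estimate the pooled effect each time."""
    rng = random.Random(seed)
    mu, tau2 = dl_pooled(y, v)
    boot = sorted(
        dl_pooled([rng.gauss(mu, (vi + tau2) ** 0.5) for vi in v], v)[0]
        for _ in range(n_boot)
    )
    lo = boot[int(alpha / 2 * n_boot)]
    hi = boot[int((1 - alpha / 2) * n_boot) - 1]
    return mu, (lo, hi)
```

The same Monte Carlo device, applied to the multivariate model and to test statistics rather than the point estimate, is what lets the paper's methods sidestep case-by-case analytical corrections.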
Gragnaniello, Cristian; Nader, Remi; van Doormaal, Tristan; Kamel, Mahmoud; Voormolen, Eduard H J; Lasio, Giovanni; Aboud, Emad; Regli, Luca; Tulleken, Cornelius A F; Al-Mefty, Ossama
Resident duty-hours restrictions have now been instituted in many countries worldwide. Shortened training times and increased public scrutiny of surgical competency have led to a move away from the traditional apprenticeship model of training. The development of educational models for brain anatomy is a fascinating innovation allowing neurosurgeons to train without the need to practice on real patients and it may be a solution to achieve competency within a shortened training period. The authors describe the use of Stratathane resin ST-504 polymer (SRSP), which is inserted at different intracranial locations to closely mimic meningiomas and other pathological entities of the skull base, in a cadaveric model, for use in neurosurgical training. Silicone-injected and pressurized cadaveric heads were used for studying the SRSP model. The SRSP presents unique intrinsic metamorphic characteristics: liquid at first, it expands and foams when injected into the desired area of the brain, forming a solid tumorlike structure. The authors injected SRSP via different passages that did not influence routes used for the surgical approach for resection of the simulated lesion. For example, SRSP injection routes included endonasal transsphenoidal or transoral approaches if lesions were to be removed through standard skull base approach, or, alternatively, SRSP was injected via a cranial approach if the removal was planned to be via the transsphenoidal or transoral route. The model was set in place in 3 countries (US, Italy, and The Netherlands), and a pool of 13 physicians from 4 different institutions (all surgeons and surgeons in training) participated in evaluating it and provided feedback. All 13 evaluating physicians had overall positive impressions of the model. The overall score on 9 components evaluated--including comparison between the tumor model and real tumor cases, perioperative requirements, general impression, and applicability--was 88% (100% being the best possible
Guo Ying; Shi Wensha; Wang Yijun; Hu, Jiankun
We demonstrate that the rotation phase can be suitably chosen to increase the efficiency of the phase-based quantum search algorithm, leading to a dynamic balance between iterations and success probabilities of the fixed-phase quantum Grover search algorithm with Q-learning for a given number of solutions. In this search algorithm, the proposed Q-learning algorithm, which is in essence a model-free reinforcement learning strategy, is used to perform a matching algorithm based on the fraction of marked items λ and the rotation phase α. After establishing the policy function α = π(λ), we complete the fixed-phase Grover algorithm, where the phase parameter is selected via the learned policy. Simulation results show that the Q-learning-based Grover search algorithm (QLGA) enables fewer iterations and yields higher success probabilities. Compared with the conventional Grover algorithms, it avoids locally optimal situations, thereby enabling success probabilities to approach one. (author)
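For the standard (phase = π) Grover iteration, the iteration/success trade-off that the learned policy navigates has a closed form; the sketch below covers only this special case, not the general rotation-phase operator of the paper.

```python
from math import asin, sin, sqrt, pi

def grover_success(lam, k):
    """Success probability after k standard Grover iterations when a
    fraction lam of the database items is marked."""
    theta = asin(sqrt(lam))          # rotation half-angle per iteration
    return sin((2 * k + 1) * theta) ** 2

def optimal_iterations(lam):
    """Iteration count that maximizes the success probability."""
    theta = asin(sqrt(lam))
    return max(0, round(pi / (4 * theta) - 0.5))
```

With lam = 1/4 a single iteration already succeeds with certainty; for smaller lam the optimum grows like O(1/sqrt(lam)), and overshooting it lowers the success probability again, which is exactly the balance a policy α = π(λ) must learn to exploit.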
Cogburn, Courtney D; Chavous, Tabbye M; Griffin, Tiffany M
The present study examined school-based racial and gender discrimination experiences among African American adolescents in Grade 8 (n = 204 girls; n = 209 boys). A primary goal was exploring gender variation in frequency of both types of discrimination and associations of discrimination with academic and psychological functioning among girls and boys. Girls and boys did not vary in reported racial discrimination frequency, but boys reported more gender discrimination experiences. Multiple regression analyses within gender groups indicated that among girls and boys, racial discrimination and gender discrimination predicted higher depressive symptoms and school importance and racial discrimination predicted self-esteem. Racial and gender discrimination were also negatively associated with grade point average among boys but were not significantly associated in girls' analyses. Significant gender discrimination X racial discrimination interactions resulted in the girls' models predicting psychological outcomes and in boys' models predicting academic achievement. Taken together, findings suggest the importance of considering gender- and race-related experiences in understanding academic and psychological adjustment among African American adolescents.
Ogawa, Masakatsu; Hiraguri, Takefumi; Nishimori, Kentaro; Takaya, Kazuhiro; Murakawa, Kazuo
This paper proposes and investigates a distributed adaptive contention window adjustment algorithm based on the transmission history for wireless LANs called the transmission-history-based distributed adaptive contention window adjustment (THAW) algorithm. The objective of this paper is to reduce the transmission delay and improve the channel throughput compared to conventional algorithms. The feature of THAW is that it adaptively adjusts the initial contention window (CWinit) size in the binary exponential backoff (BEB) algorithm used in the IEEE 802.11 standard according to the transmission history and the automatic rate fallback (ARF) algorithm, which is the most basic algorithm in automatic rate controls. This effect is to keep CWinit at a high value in a congested state. Simulation results show that the THAW algorithm outperforms the conventional algorithms in terms of the channel throughput and delay, even if the timer in the ARF is changed.
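The history-based adjustment of CWinit can be caricatured as follows; the window sizes match the usual IEEE 802.11 BEB range, but the failure-rate thresholds and doubling rule are invented for illustration and are not the THAW algorithm's actual policy.

```python
def adjust_cw_init(history, cw_min=16, cw_max=1024):
    """Pick the initial contention window CWinit from recent transmission
    history (True = success, False = collision/failure). The more congested
    the channel looks, the larger CWinit stays. Thresholds are illustrative."""
    if not history:
        return cw_min
    fail_rate = history.count(False) / len(history)
    cw = cw_min
    # one doubling per 12.5% of observed failures (hypothetical rule)
    for _ in range(int(fail_rate / 0.125)):
        cw = min(cw * 2, cw_max)
    return cw
```

Keeping CWinit high while recent history shows many failures is what lets such a scheme avoid the repeated early collisions that plain BEB suffers in a congested state.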
Belousov, V. I.; Ezhela, V. V.; Kuyanov, Yu. V.; Tkachenko, N. P.
The experience of using the dynamic atlas of the experimental data and mathematical models of their description in the problems of adjusting parametric models of observable values depending on kinematic variables is presented. The functional possibilities of an image of a large number of experimental data and the models describing them are shown by examples of data and models of observable values determined by the amplitudes of elastic scattering of hadrons. The Internet implementation of an interactive tool DaMoScope and its interface with the experimental data and codes of adjusted parametric models with the parameters of the best description of data are schematically shown. The DaMoScope codes are freely available
van der Wal, W.; Wu, P.; Sideris, M.; Wang, H.
GRACE satellite data offer homogeneous coverage of the area covered by the former Laurentide ice sheet. The secular gravity rate estimated from the GRACE data can therefore be used to constrain the ice loading history in Laurentide and, to a lesser extent, the mantle rheology in a GIA model. The objective of this presentation is to find a best fitting global ice model and use it to study how the ice model can be modified to fit a composite rheology, in which creep rates from a linear and non-linear rheology are added. This is useful because all the ice models constructed from GIA assume that mantle rheology is linear, but creep experiments on rocks show that nonlinear rheology may be the dominant mechanism in some parts of the mantle. We use CSR release 4 solutions from August 2002 to October 2008 with continental water storage effects removed by the GLDAS model and filtering with a destriping and Gaussian filter. The GIA model is a radially symmetric incompressible Maxwell Earth, with varying upper and lower mantle viscosity. Gravity rate misfit values are computed for a range of viscosity values with the ICE-3G, ICE-4G and ICE-5G models. The best fit is shown for models with ICE-3G and ICE-4G, and the ICE-4G model is selected for computations with a so-called composite rheology. For the composite rheology, the Coupled Laplace Finite-Element Method is used to compute the GIA response of a spherical self-gravitating incompressible Maxwell Earth. The pre-stress exponent (A) derived from a uniaxial stress experiment is varied between 3.3 × 10^-34, 3.3 × 10^-35 and 3.3 × 10^-36 Pa^-3 s^-1, the Newtonian viscosity η is varied between 1 × 10^21 and 3 × 10^21 Pa s, and the stress exponent is taken to be 3. Composite rheology in general results in geoid rates that are too small compared to GRACE observations. Therefore, simple modifications of the ICE-4G history are investigated by scaling ice heights or delaying glaciation. It is found that a delay in glaciation is a better way to adjust ice
Modern observation technology has verified that measurement errors can be proportional to the true values of measurements, as with GPS, VLBI baselines and LiDAR. Observational models of this type are called multiplicative error models. This paper extends the work of Xu and Shimada, published in 2000, on multiplicative error models to the analytical error analysis of quantities of practical interest and to estimates of the variance of unit weight. We analytically derive the variance-covariance matrices of the three least squares (LS) adjustments, the adjusted measurements and the corrections of measurements in multiplicative error models. For quality evaluation, we construct five estimators for the variance of unit weight in association with the three LS adjustment methods. Although LiDAR measurements are contaminated with multiplicative random errors, LiDAR-based digital elevation models (DEMs) have been constructed as if the errors were additive. We simulate a model landslide, assumed to be surveyed with LiDAR, and investigate the effect of LiDAR-type multiplicative error measurements on DEM construction and its effect on the estimate of landslide mass volume from the constructed DEM.
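The difference between treating LiDAR-type errors as additive versus multiplicative can be seen in a stripped-down constant-signal case; this is a toy analogue under stated assumptions, not the paper's full three-adjustment formulation.

```python
import random

def simulate_multiplicative(true_vals, sigma, seed=0):
    """Measurements whose random error is proportional to the true value:
    y_i = x_i * (1 + e_i), with e_i ~ N(0, sigma^2)."""
    rng = random.Random(seed)
    return [x * (1 + rng.gauss(0.0, sigma)) for x in true_vals]

def ols_mean(y):
    """Ordinary LS estimate: equal weights, i.e. the additive-error assumption."""
    return sum(y) / len(y)

def wls_mean(y, sigma):
    """Weighted LS estimate with weights 1/(sigma*y_i)^2, approximating the
    multiplicative error structure (spread grows with the signal)."""
    w = [1.0 / (sigma * yi) ** 2 for yi in y]
    return sum(wi * yi for wi, yi in zip(w, y)) / sum(w)
```

For a true elevation of 100 m and sigma = 0.05, both estimators land near 100, but the weighted one carries a small downward bias of order sigma^2, illustrating why the multiplicative structure needs dedicated analytical treatment.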
Cui, J; Trescher, K; Kratz, K; Jung, F; Hiebl, B; Lendlein, A
Acrylonitrile-based polymer systems (PAN) are comprehensively explored as versatile biomaterials with various potential biomedical applications, such as membranes for extracorporeal devices or matrices for guided skin reconstruction. The surface properties (e.g. hydrophilicity or charges) of such materials can be tailored over a wide range by variation of molecular parameters such as different co-monomers or their sequence structure. Some of these materials show interesting biofunctionalities such as the capability for selective cell cultivation. So far, the majority of AN-based copolymers investigated in physiological environments were processed from solution (e.g. membranes), as these materials are thermo-sensitive and might degrade when heated. In this work we aimed at the synthesis of hydrophobic, melt-processable AN-based copolymers with adjustable elastic properties for the preparation of model scaffolds with controlled pore geometry and size. For this purpose a series of copolymers from acrylonitrile and n-butyl acrylate (nBA) was synthesized via a free radical copolymerisation technique. The content of nBA in the copolymer varied from 45 wt% to 70 wt%, which was confirmed by 1H-NMR spectroscopy. The glass transition temperatures (Tg) of the P(AN-co-nBA) copolymers determined by differential scanning calorimetry (DSC) decreased from 58 degrees C to 20 degrees C with increasing nBA-content, which was in excellent agreement with the prediction of the Gordon-Taylor equation based on the Tgs of the homopolymers. The Young's modulus obtained in tensile tests was found to decrease significantly with rising nBA-content, from 1062 MPa to 1.2 MPa. All copolymers could be successfully processed from the melt at processing temperatures ranging from 50 degrees C to 170 degrees C, whereby thermally induced decomposition was only observed at temperatures higher than 320 degrees C in thermal gravimetric analysis (TGA). Finally, the melt processed P
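The Gordon-Taylor prediction mentioned above is a one-liner; the homopolymer Tg values and fitting constant below are literature-typical placeholders (roughly PAN and poly(n-butyl acrylate)), not the values fitted in this work.

```python
def gordon_taylor(w2, tg1, tg2, k):
    """Gordon-Taylor copolymer glass transition (temperatures in kelvin).
    w2       -- weight fraction of component 2 (here: nBA)
    tg1, tg2 -- homopolymer glass transition temperatures
    k        -- empirical fitting constant"""
    w1 = 1.0 - w2
    return (w1 * tg1 + k * w2 * tg2) / (w1 + k * w2)

# Illustrative inputs: PAN Tg ~ 370 K, PnBA Tg ~ 219 K, k = 1 (assumed).
tg_45 = gordon_taylor(0.45, 370.0, 219.0, 1.0)
tg_70 = gordon_taylor(0.70, 370.0, 219.0, 1.0)
```

With these placeholder inputs the predicted Tg falls from about 302 K to 264 K as the nBA fraction rises from 45 to 70 wt%, reproducing the qualitative trend the abstract reports (the paper's fitted k would shift the exact values).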
Koss, Kalsea J.; George, Melissa R. W.; Davies, Patrick T.; Cicchetti, Dante; Cummings, E. Mark; Sturge-Apple, Melissa L.
Examining children's physiological functioning is an important direction for understanding the links between interparental conflict and child adjustment. Utilizing growth mixture modeling, the present study examined children's cortisol reactivity patterns in response to a marital dispute. Analyses revealed three different patterns of cortisol…
Straatsma, G.; Gerrits, J.P.G.; Thissen, J.T.N.M.; Amsing, J.G.M.; Loeffen, H.; Griensven, van L.J.L.D.
The feasibility of adjusting individual composting processes to be able to produce the desired mass of compost of the required composition was evaluated. Data sets from experiments in tunnels were constructed and analyzed. Total mass and dry matter contents at the start and at the end of composting
Rowe, Sidney E.
In September 2007, the Engineering Directorate at the Marshall Space Flight Center (MSFC) created the Design System Focus Team (DSFT). MSFC was responsible for the in-house design and development of the Ares 1 Upper Stage and the Engineering Directorate was preparing to deploy a new electronic Configuration Management and Data Management System with the Design Data Management System (DDMS) based upon a Commercial Off The Shelf (COTS) Product Data Management (PDM) System. The DSFT was to establish standardized CAD practices and a new data life cycle for design data. Of special interest here, the design teams were to implement Model Based Definition (MBD) in support of the Upper Stage manufacturing contract. It is noted that this MBD does use partially dimensioned drawings for auxiliary information to the model. The design data lifecycle implemented several new release states to be used prior to formal release that allowed the models to move through a flow of progressive maturity. The DSFT identified some 17 Lessons Learned as outcomes of the standards development, pathfinder deployments and initial application to the Upper Stage design completion. Some of the high value examples are reviewed.
Ali, S.; Haider, S.K.F.
Amputation is the removal of a limb or part of a limb by a surgical procedure in order to save the life of a person. The underlying reasons behind this tragic incidence may be varied; however, irrespective of its cause, limb loss is associated with a wide range of life challenges. The study was done to investigate the psychological sequelae experienced by an individual after losing a limb and to assess the level of strain and pressure experienced after this traumatic event. It also attempts to examine the moderating role of some demographic traits such as age, sex and cause of limb loss in psychosocial adjustment to amputation. Methods: The study included 100 adult amputees of both genders, and the data were collected from major government and private hospitals of Peshawar district. A demographic data sheet was constructed in order to record the demographic traits of amputees, and the standardized Psychological Adjustment Scale developed by Sabir (1999) was used to find out the level of psychological adjustment after limb loss. Results: Nearly all the amputees exhibited signs of psychological maladjustment to varying degrees. Males showed much greater signs of maladjustment than women, and young adults were psychologically shattered and disturbed as a result of limb loss. Amputation caused by planned medical reasons led to fewer adjustment issues compared with unplanned accidental amputation, in which patients were not mentally prepared to accept the loss. Conclusion: The psychological aspect of amputation is an important aspect of limb loss which needs to be addressed properly in order to rehabilitate these patients and help them adjust successfully to their limb loss. (author)
Huh, In; Cheon, Woo Young; Choi, Woo Young
A subthreshold-swing-adjustable tunneling-field-effect-transistor-based random-access memory (SAT RAM) has been proposed and fabricated for low-power nonvolatile memory applications. The proposed SAT RAM cell demonstrates adjustable subthreshold swing (SS) depending on stored information: small SS in the erase state ("1" state) and large SS in the program state ("0" state). Thus, SAT RAM cells can achieve low read voltage (Vread) with a large memory window in addition to the effective suppression of ambipolar behavior. These unique features of the SAT RAM are originated from the locally stored charge, which modulates the tunneling barrier width (Wtun) of the source-to-channel tunneling junction.
Custom bus routes need to be optimized to meet the needs of a customized bus service for the personalized trips of different passengers. This paper introduces a customized bus routing problem in which trips for each depot are given, and each bus stop has a fixed time window within which trips should be completed. Treating a trip as a virtual stop was the first consideration in solving the school bus routing problem (SBRP). Then, a mixed-load custom bus routing model was established with a time window satisfying its requirement, and the model was solved with CPLEX software. Finally, a simple network diagram with three depots, four pickup stops, and five delivery stops was structured to verify the correctness of the model. Based on an actual example, all the buses ran 124.42 kilometers in total, 10.35 kilometers less than before. The paths and departure times of the different buses provided by the model were evaluated to meet the needs of the given conditions, thus providing valuable information for actual work. PMID:29320505
M. Gaspar, Raquel; Murgoci, Agatha
A convexity adjustment (or convexity correction) in fixed income markets arises when one uses prices of standard (plain vanilla) products plus an adjustment to price nonstandard products. We explain the basic and appealing idea behind the use of convexity adjustments and focus on the situations...
Lenzi, Jacopo; Avaldi, Vera Maria; Hernandez-Boussard, Tina; Descovich, Carlo; Castaldini, Ilaria; Urbinati, Stefano; Di Pasquale, Giuseppe; Rucci, Paola; Fantini, Maria Pia
Hospital discharge records (HDRs) are routinely used to assess outcomes of care and to compare hospital performance for heart failure. The advantages of using clinical data from medical charts to improve risk-adjustment models remain controversial. The aim of the present study was to evaluate the additional contribution of clinical variables to HDR-based 30-day mortality and readmission models in patients with heart failure. This retrospective observational study included all patients residing in the Local Healthcare Authority of Bologna (about 1 million inhabitants) who were discharged in 2012 from one of three hospitals in the area with a diagnosis of heart failure. For each study outcome, we compared the discrimination of the two risk-adjustment models (i.e., HDR-only model and HDR-clinical model) through the area under the ROC curve (AUC). A total of 1145 and 1025 patients were included in the mortality and readmission analyses, respectively. Adding clinical data significantly improved the discrimination of the mortality model (AUC = 0.84 vs. 0.73, p < 0.001), but not the discrimination of the readmission model (AUC = 0.65 vs. 0.63, p = 0.08). We identified clinical variables that significantly improved the discrimination of the HDR-only model for 30-day mortality following heart failure. By contrast, clinical variables made little contribution to the discrimination of the HDR-only model for 30-day readmission.
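The AUC comparison reported above relies on the equivalence between the area under the ROC curve and the Mann-Whitney statistic, which can be computed directly:

```python
def auc(pos_scores, neg_scores):
    """AUC as the probability that a randomly chosen positive case (e.g. a
    death within 30 days) receives a higher predicted risk than a randomly
    chosen negative case; ties count one half."""
    wins = 0.0
    for p in pos_scores:
        for n in neg_scores:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5
    return wins / (len(pos_scores) * len(neg_scores))
```

Comparing the HDR-only and HDR-clinical models then amounts to comparing two such AUCs on the same patients; the abstract does not name the test used for the difference, but a paired comparison (e.g. DeLong's method) is the usual choice.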
Kraff, Stefanie; Lindauer, Andreas; Joerger, Markus; Salamone, Salvatore J; Jaehde, Ulrich
Neutropenia is a frequent and severe adverse event in patients receiving paclitaxel chemotherapy. The time above a paclitaxel threshold concentration of 0.05 μmol/L (Tc > 0.05 μmol/L) is a strong predictor for paclitaxel-associated neutropenia and has been proposed as a target pharmacokinetic (PK) parameter for paclitaxel therapeutic drug monitoring and dose adaptation. Up to now, individual Tc > 0.05 μmol/L values have been estimated based on a published PK model of paclitaxel by using the software NONMEM. Because many clinicians are not familiar with the use of NONMEM, an Excel-based dosing tool was developed to allow calculation of paclitaxel Tc > 0.05 μmol/L and give clinicians an easy-to-use tool. Population PK parameters of paclitaxel were taken from a published PK model. An Alglib VBA code was implemented in Excel 2007 to compute the differential equations for the paclitaxel PK model. Maximum a posteriori Bayesian estimates of the PK parameters were determined with the Excel Solver using individual drug concentrations. Concentrations from 250 patients receiving 1 cycle of paclitaxel chemotherapy were simulated. Predictions of paclitaxel Tc > 0.05 μmol/L as calculated by the Excel tool were compared with NONMEM, whereby maximum a posteriori Bayesian estimates were obtained using the POSTHOC function. There was a good concordance and comparable predictive performance between Excel and NONMEM regarding predicted paclitaxel plasma concentrations and Tc > 0.05 μmol/L values. Tc > 0.05 μmol/L had a maximum bias of 3%, and the error on precision of Tc > 0.05 μmol/L values between both programs was 1%. The Excel-based tool can estimate the time above a paclitaxel threshold concentration of 0.05 μmol/L with acceptable accuracy and precision. The presented Excel tool allows reliable calculation of paclitaxel Tc > 0.05 μmol/L and thus allows target concentration intervention to improve the benefit-risk ratio of the drug. The easy use facilitates therapeutic drug monitoring in
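The maximum a posteriori (MAP) Bayesian step that both NONMEM's POSTHOC function and the Excel Solver perform can be sketched for a deliberately simplified one-compartment model; the published paclitaxel model is more complex (nonlinear disposition), and all parameter values below are invented for illustration.

```python
from math import exp, log

def one_cpt_conc(dose, cl, v, t):
    """Concentration after an IV bolus in a one-compartment model."""
    return (dose / v) * exp(-(cl / v) * t)

def map_clearance(times, concs, dose, v, cl_pop, omega, sigma):
    """MAP estimate of clearance: additive residual error (sd sigma) plus a
    lognormal population prior (median cl_pop, variance omega^2 on the log
    scale). A coarse grid search stands in for the Solver/POSTHOC step."""
    def neg_log_posterior(cl):
        pred = [one_cpt_conc(dose, cl, v, t) for t in times]
        fit = sum((c - p) ** 2 for c, p in zip(concs, pred)) / (2 * sigma ** 2)
        prior = log(cl / cl_pop) ** 2 / (2 * omega ** 2)
        return fit + prior
    grid = [cl_pop * exp(x / 100.0) for x in range(-100, 101)]
    return min(grid, key=neg_log_posterior)
```

With rich sampling the estimate tracks the individual's concentrations; with sparse data it shrinks toward cl_pop, which is the behavior that makes MAP estimation attractive for dose individualization.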
The classical calibration or space resection is a fundamental task in photogrammetry. A lack of sufficient knowledge of the interior and exterior orientation parameters leads to unreliable results in the photogrammetric process. One of the earliest approaches used in photogrammetry was the plumb-line calibration method. This method is suitable for recovering the radial and decentering lens distortion coefficients, while the remaining interior (focal length and principal point coordinates) and exterior orientation parameters have to be determined by a complementary method. As the lens distortion remains very small, it is not considered among the interior orientation parameters in the present rigorous sensor model. There are several other available methods based on the photogrammetric collinearity equations, which consider the determination of exterior orientation parameters, with no mention of the simultaneous determination of interior orientation parameters. Normal space resection methods solve the problem using control points, whose coordinates are known both in the image and object reference systems. The non-linearity of the model, the problems of point location in digital images, and the difficulty of identifying the maximum GPS-measured control points are the main drawbacks of the classical approaches. This paper addresses a mathematical model based on the fundamental assumption of collinearity of three points of two along-track stereo imagery sensors and an independent object point. Assuming this condition, it is possible to extract the exterior orientation (EO) parameters for a long strip and a single image together, both without and with using control points. Moreover, after extracting the EO parameters, the accuracy of the satellite data products is compared when using a single control point and when using no control points.
Telzer, Eva H; Yuen, Cynthia; Gonzales, Nancy; Fuligni, Andrew J
The acculturation gap-distress model purports that immigrant children acculturate faster than do their parents, resulting in an acculturation gap that leads to family and youth maladjustment. However, empirical support for the acculturation gap-distress model has been inconclusive. In the current study, 428 Mexican-American adolescents (50.2 % female) and their primary caregivers independently completed questionnaires assessing their levels of American and Mexican cultural orientation, family functioning, and youth adjustment. Contrary to the acculturation gap-distress model, acculturation gaps were not associated with poorer family or youth functioning. Rather, adolescents with higher levels of Mexican cultural orientations showed positive outcomes, regardless of their parents' orientations to either American or Mexican cultures. Findings suggest that youths' heritage cultural maintenance may be most important for their adjustment.
This paper develops a simple structuralist model to deal with the relationships between inflation and external adjustment policies in foreign-indebted economies facing strong reversals in capital inflows. Taking Latin American experiences in the eighties as a reference, the model attempts to systematize the so-called hypothesis of 'financially-based price formation', namely, the regime of price formation through which both real interest rates and inflation rates moved together upwards as a response to increasing pressures for the transfer abroad of real resources as a means of debt servicing.
Since the launch in 2002 of the Gravity Recovery and Climate Experiment (GRACE) satellites, several estimates of the mass balance of the Greenland ice sheet (GrIS) have been produced. To obtain ice mass changes, the GRACE data need to be corrected for the effect of deformation changes of the Earth's crust. Recently, a new method has been proposed in which ice mass changes and bedrock changes are solved simultaneously. Results show bedrock subsidence over almost the entirety of Greenland in combination with an ice mass loss which is only half of the currently standing estimates. This subsidence can be an elastic response, but it may also be a delayed response to past changes. In this study we test whether these subsidence patterns are consistent with ice-dynamical modeling results. We use a 3-D ice sheet-bedrock model with a surface mass balance forcing based on a mass balance gradient approach to study the pattern and magnitude of bedrock changes in Greenland. Different mass balance forcings are used. Simulations since the Last Glacial Maximum yield a bedrock delay with respect to the mass balance forcing of nearly 3000 yr and an average uplift at present of 0.3 mm yr−1. The spatial pattern of bedrock changes shows a small central subsidence as well as more intense uplift in the south. These results are not compatible with the gravity-based reconstructions showing a subsidence with a maximum in central Greenland, thereby questioning whether the claim of halving of the ice mass change is justified.
Teschl, Reinhard; Randeu, Walter; Teschl, Franz
Weather radar networks provide data with good spatial coverage and temporal resolution. Hence they are able to describe the variability of precipitation. Typical radar stations determine the rain rate for every square kilometre and make a full volume scan within about 5 minutes. A weakness, however, is their often poor metering precision, which limits the applicability of the radar for hydrological purposes. In contrast to rain gauges, which measure precipitation directly on the ground, the radar determines the reflectivity aloft and remotely. Due to this principle, several sources of possible errors occur. Therefore improving the radar estimates of rainfall is still a vital topic in radar meteorology and hydrology. This paper presents data-driven approaches to improve radar estimates of rainfall by mapping radar reflectivity measurements Z to rain gauge data R. The analysis encompasses several input configurations and data-driven models. Reflectivity measurements at a constant altitude and the vertical profiles of reflectivity above a rain gauge are used as input parameters. The applied models are an Artificial Neural Network (ANN), a Model Tree (MT), and IBk, a k-nearest-neighbour classifier. The relationship found between the data of a rain gauge and the reflectivity measurements is subsequently applied to another site with comparable terrain. Based on this independent dataset the performance of the data-driven models in the various input configurations is evaluated. For this study, rain gauge and radar data from the province of Styria, Austria, were available. The data sets extend over a two-year period (2001 and 2002). The available rain gauges use the tipping bucket principle with a resolution of 0.1 mm. Reflectivity measurements are obtained from the Doppler weather radar station on Mt. Zirbitzkogel (by courtesy of AustroControl GmbH). The designated radar is a high-resolution C-band weather radar situated at an altitude of 2372 m above mean sea level. The data
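The IBk-style k-nearest-neighbour mapping from reflectivity to rain rate can be sketched as follows; the training "profiles" and gauge values here are synthetic stand-ins, whereas the study uses measured vertical reflectivity profiles and tipping-bucket records.

```python
import numpy as np

def knn_rain_rate(z_train, r_train, z_query, k=3):
    """Predict the rain rate for a reflectivity profile by averaging the
    rain-gauge values of the k nearest training profiles (IBk-style)."""
    d = np.linalg.norm(z_train - z_query, axis=1)   # Euclidean distance
    idx = np.argsort(d)[:k]                         # k closest profiles
    return r_train[idx].mean()

# Synthetic example: 1-D "profiles" of reflectivity (dBZ) vs rain rate (mm/h)
z_train = np.array([[20.0], [25.0], [30.0], [35.0], [40.0]])
r_train = np.array([0.6, 1.3, 2.7, 5.6, 11.5])
print(knn_rain_rate(z_train, r_train, np.array([31.0]), k=3))  # mean of 3 nearest
```

With real data each row of `z_train` would be a full vertical profile above the gauge, and the trained mapping would then be applied to the independent second site as described above.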
Bernard, Thomas E; Ashley, Candi D; Garzon, Ximena P; Kim, Jung-Hyun; Coca, Aitor
The wet bulb globe temperature (WBGT) index is used by many professionals, in combination with metabolic rate and clothing adjustments, to assess whether a heat stress exposure is sustainable. The progressive heat stress protocol is a systematic method to prescribe a clothing adjustment value (CAV) from human wear trials, and it also provides an estimate of apparent total evaporative resistance (Re,T,a). There is a clear direct relationship between the two descriptors of clothing thermal effects, with diminishing increases in CAV at high Re,T,a. There were data to suggest an interaction of CAV and Re,T,a with relative humidity at high evaporative resistance. Because human trials are expensive, manikin data can reduce the cost by considering the static total evaporative resistance (Re,T,s). In fact, as the static evaporative resistance increases, the CAV increases in a similar fashion to Re,T,a. While the results look promising that Re,T,s can predict CAV, some validation remains, especially for high evaporative resistance. The data only support air velocities near 0.5 m/s.
Pereira, A; Ichihara, S; Collon, S; Bodin, F; Gay, A; Facca, S; Liverneaux, P
The aim of this study was to establish the feasibility of microsurgical end-to-side vascular anastomosis with a multiclamp adjustable vascular clamp prototype in an inert experimental model. Our method consisted of performing an end-to-side microsurgical anastomosis with 10/0 suture on a 2-mm diameter segment. In group 1, the end-to-side segment was held in place by a double clamp and a single end clamp. In group 2, the segment was held in place with a single multiclamp adjustable clamp. The average time for performing the anastomosis was shorter in group 2. The average number of sutures was the same in both groups. No leak was found and permeability was always positive in both groups. Our results show that performing end-to-side anastomosis with a multiclamp adjustable vascular clamp is feasible in an inert experimental model. Feasibility in a live animal model has to be demonstrated before clinical use. Copyright © 2014 Elsevier Masson SAS. All rights reserved.
Chen, Sylvia Xiaohua; Benet-Martínez, Verónica; Harris Bond, Michael
The present investigation examined the impact of bicultural identity, bilingualism, and social context on the psychological adjustment of multicultural individuals. Our studies targeted three distinct types of biculturals: Mainland Chinese immigrants in Hong Kong, Filipino domestic workers (i.e., sojourners) in Hong Kong, and Hong Kong and Mainland Chinese college students. Individual differences in Bicultural Identity Integration (BII; Benet-Martínez, Leu, Lee, & Morris, 2002) positively predicted psychological adjustment for all the samples except sojourners even after controlling for the personality traits of neuroticism and self-efficacy. Cultural identification and language abilities also predicted adjustment, although these associations varied across the samples in meaningful ways. We concluded that, in the process of managing multiple cultural environments and group loyalties, bilingual competence, and perceiving one's two cultural identities as integrated are important antecedents of beneficial psychological outcomes.
Xiaobin Dong; Yufang Zhang; Weijia Cui; Bin Xun; Baohua Yu; Sergio Ulgiati; Xinshi Zhang
The emergy concept, integrated with a multi-objective linear programming method, was used to model the agricultural structure of Xinjiang Uygur Autonomous Region under the consideration of the need to develop a low-carbon economy. The emergy indices before and after the structural optimization were evaluated. In the reconstructed model, the proportions of agriculture, forestry and artificial grassland should be adjusted from 19:2:1 to 5.2:1:2.5; the Emergy Yield Ratio (1.48) was higher than t...
Virtual Private Network (VPN) is a cost effective method to provide integrated multimedia services. Usually heterogeneous multimedia data can be categorized into different types according to the required Quality of Service (QoS). Therefore, VPN should support the prioritization among different services. In order to support multiple types of services with different QoS requirements, efficient bandwidth management algorithms are important issues. In this paper, I employ the Kalai-Smorodinsky Bargaining Solution (KSBS) for the development of an adaptive bandwidth adjustment algorithm. In addition, to effectively manage the bandwidth in VPNs, the proposed control paradigm is realized in a dynamic online approach, which is practical for real network operations. The simulations show that the proposed scheme can significantly improve the system performances.
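Under the simplifying assumptions of linear utilities and a zero disagreement point, the Kalai-Smorodinsky solution for splitting VPN bandwidth among service classes reduces to giving every class the same fraction of its ideal (demanded) bandwidth; this is a minimal sketch of that special case, not the paper's full dynamic online scheme.

```python
def ksbs_allocation(demands, capacity):
    """Kalai-Smorodinsky bandwidth split under linear utilities:
    each class receives the same fraction of its ideal (demanded)
    bandwidth, so normalized satisfactions are equalized."""
    total_demand = sum(demands)
    if total_demand <= capacity:
        return list(demands)            # everyone fully satisfied
    ratio = capacity / total_demand     # equal normalized satisfaction
    return [d * ratio for d in demands]

# Two service classes demanding 30 and 70 Mbit/s on a 50 Mbit/s link
print(ksbs_allocation([30.0, 70.0], 50.0))  # -> [15.0, 35.0]
```

In an online setting the demands would be re-estimated per control interval and the allocation recomputed, which is compatible with the dynamic approach the abstract describes.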
Hussein, A.Z.; Amin, E.S.; Ibrahim, M.S.
Since the discovery of X-rays, their use in examination has become an integral part of medical diagnostic radiology. The use of X-rays is harmful to human beings, but recent technological advances and regulatory constraints have made medical X-rays much safer than they were at the beginning of the 20th century. However, the potential benefits of the engineered safety features cannot be fully realized unless the operators are aware of these safety features. The aim of this work is to adjust and predict X-ray machine factors (current and voltage) using an artificial neural network in order to obtain an effective dose within the range of the dose limitation system and assure radiological safety.
Saputra, A.; Sukono; Rusyaman, E.
In managing the risk of credit life insurance, an insurance company should acknowledge the character of the risks in order to predict future losses. Risk characteristics can be learned from a claim distribution model. There are two standard approaches to designing the distribution model of claims over the insurance period: the collective risk model and the individual risk model. In the collective risk model, the claim that arises when a risk occurs is called an individual claim, and the accumulation of individual claims during a period of insurance is called an aggregate claim. The aggregate claim model may be formed from the size and the number of individual claims. This study examines how insurance risk can be measured with the premium model approach and whether this approach is appropriate for estimating the potential losses that occur in the future. In order to solve the problem, a Genetic Algorithm with Roulette Wheel Selection is used.
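Roulette-wheel selection, the genetic-algorithm operator named above, picks individuals with probability proportional to fitness; a minimal sketch (population, fitness values and seed are illustrative, not from the paper):

```python
import random

def roulette_wheel_select(population, fitnesses, rng=random):
    """Select one individual with probability proportional to its fitness."""
    total = sum(fitnesses)
    pick = rng.uniform(0.0, total)      # a point on the "wheel"
    running = 0.0
    for individual, fit in zip(population, fitnesses):
        running += fit                  # walk the wheel sector by sector
        if running >= pick:
            return individual
    return population[-1]               # guard against floating-point round-off

random.seed(1)
pop = ["A", "B", "C"]
fit = [1.0, 3.0, 6.0]
counts = {p: 0 for p in pop}
for _ in range(10000):
    counts[roulette_wheel_select(pop, fit)] += 1
print(counts)   # counts roughly proportional to 1:3:6
```

In the GA, this operator would choose parent "chromosomes" (candidate premium parameters) for crossover and mutation each generation, so fitter candidates reproduce more often without excluding weaker ones entirely.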
Zhang, Yufeng; Liu, Kai; Li, Wei; Xue, Qian; Hong, Jiang; Xu, Jibin; Wu, Lihui; Ji, Guangyu; Sheng, Jihong; Wang, Zhinong
To investigate the safety and efficacy of an adjusted regimen of heparin infusion in cardiopulmonary bypass (CPB) surgery in a Chinese population. Prospective, single-center, observational study. University teaching hospital. Patients having cardiac surgery with CPB were selected for this study using the following criteria: 18 to 75 years of age, undergoing first-time cardiac surgery with conventional median sternotomy, aortic clamping time between 40 and 120 minutes, and preoperative routine blood tests showing normal liver, renal, and coagulation functions. The exclusion criteria included salvage cases, a family history of coagulopathy, and long-term use of anticoagulation or antiplatelet drugs. Sixty patients were divided randomly into a control group (n = 30) receiving a traditional heparin regimen and an experimental group (n = 30) receiving an adjusted regimen. Activated coagulation time (ACT) was monitored at different time points; ACT > 480 seconds was set as the safety threshold for CPB. Heparin doses (initial dose, added dose, and total dose), protamine doses (initial dose, added dose, and total dose), CPB time, aortic clamping time, assisted circulation time, sternal closure time, blood transfusion volume, and drainage volume 24 hours after surgery were recorded. There was no significant difference in achieving target ACT after the initial dose of heparin between the 2 groups; CPB time, aortic clamping time, assisted circulation time, postoperative complication rate, and drainage volume between the 2 groups were not significantly different (p > 0.05). However, the initial and total dosage of heparin, the initial and total dosage of protamine, sternal closure time, and intraoperative blood transfusion volume in the experimental group were significantly lower (p < 0.05). The adjusted regimen appears safe and effective in Chinese CPB patients and might reduce the initial and total dosage of heparin and protamine as well as sternal closure time and intraoperative blood transfusion volume. Copyright © 2016 Elsevier Inc
The paper demonstrates that a wide variety of event-based modeling approaches are based on special cases of the same general event concept, and that the general event concept can be used to unify the otherwise unrelated fields of information modeling and process modeling. A set of event-based modeling approaches are analyzed and the results are used to formulate a general event concept that can be used for unifying the seemingly unrelated event concepts. Events are characterized as short-duration processes that have participants, consequences, and properties, and that may be modeled in terms…
Koss, Kalsea J.; George, Melissa R. W.; Davies, Patrick T.; Cicchetti, Dante; Cummings, E. Mark; Sturge-Apple, Melissa L.
Examining children’s physiological functioning is an important direction for understanding the links between interparental conflict and child adjustment. Utilizing growth mixture modeling, the present study examined children’s cortisol reactivity patterns in response to a marital dispute. Analyses revealed three different patterns of cortisol responses, consistent with both a sensitization and an attenuation hypothesis. Child-rearing disagreements and perceived threat were associated with children exhibiting a rising cortisol pattern whereas destructive conflict was related to children displaying a flat pattern. Physiologically rising patterns were also linked with emotional insecurity and internalizing and externalizing behaviors. Results supported a sensitization pattern of responses as maladaptive for children in response to marital conflict with evidence also linking an attenuation pattern with risk. The present study supports children’s adrenocortical functioning as one mechanism through which interparental conflict is related to children’s coping responses and psychological adjustment.
... internal model may use any generally accepted measurement techniques, such as variance-covariance models... volatility and less than perfect correlation of rates along the yield curve. (d) Quantitative requirements... rates or prices. A bank with a large or complex options portfolio must measure the volatility of options...
Tynes, Brendesha M; Rose, Chad A; Hiss, Sophia; Umaña-Taylor, Adriana J; Mitchell, Kimberly; Williams, David
Given the recent rise in online hate activity and the increased amount of time adolescents spend with media, more research is needed on their experiences with racial discrimination in virtual environments. This cross-sectional study examines the association between amount of time spent online, traditional and online racial discrimination and adolescent adjustment, including depressive symptoms, anxiety and externalizing behaviors. The study also explores the role that social identities, including race and gender, play in these associations. Online surveys were administered to 627 sixth through twelfth graders in K-8, middle and high schools. Multiple regression results revealed that discrimination online was associated with all three outcome variables. Additionally, a significant interaction between online discrimination by time online was found for externalizing behaviors indicating that increased time online and higher levels of online discrimination are associated with more problem behavior. This study highlights the need for clinicians, educational professionals and researchers to attend to race-related experiences online as well as in traditional environments.
This paper briefly explores project management principles, leadership theory, and commercial business practices, suggesting improvements to the Army's modeling and simulation development process...
Muljadi, E.; Singh, M.; Gevorgian, V.; Mohanpurkar, M.; Havsapian, R.; Koritarov, V.
Hydropower is the largest producer of renewable energy in the U.S. More than 60% of the total renewable generation comes from hydropower. There is also approximately 22 GW of pumped storage hydropower (PSH). Conventional PSH uses a synchronous generator, and thus the rotational speed is constant at synchronous speed. This work details a hydrodynamic model and generator/power converter dynamic model. The optimization of the hydrodynamic model is executed by the hydro-turbine controller, and the electrical output real/reactive power is controlled by the power converter. All essential controllers to perform grid-interface functions and provide ancillary services are included in the model.
Vadstrup, Casper; Wang, Xiongfei; Blaabjerg, Frede
the LC filter with a higher cut off frequency and without damping resistors. The selection of inductance and capacitance is chosen based on capacitor voltage ripple and current ripple. The filter adds a base load to the inverter, which increases the inverter losses. It is shown how the modulation index...
Battauz, Michela; Bellio, Ruggero; Gori, Enrico
This article proposes a multilevel model for the assessment of school effectiveness where the intake achievement is a predictor and the response variable is the achievement in the subsequent periods. The achievement is a latent variable that can be estimated on the basis of an item response theory model and hence subject to measurement error…
van der Wal, W.; Barnhoorn, A.; Stocchi, P.; Gradmann, S.; Wu, P.; Drury, M.; Vermeersen, B.
We find that sea level data can be explained with our ice model and with information on mantle rheology from laboratory experiments, heatflow and seismology and a pure olivine rheology above 400 km. Moreover, laterally heterogeneous models provide a significantly better fit to relative sea level
The rough properties of nuclei were investigated using a statistical model, for systems with equal and with different numbers of protons and neutrons separately, considering the Coulomb energy in the latter case. Some average nuclear properties were calculated based on the energy density of nuclear matter, from the Weizsäcker-Bethe semi-empirical mass formula, generalized for compressible nuclei. In the study of the a_s surface energy coefficient, the great influence exerted by the Coulomb energy and nuclear compressibility was verified. For a good adjustment of the beta stability lines and mass excess, the surface symmetry energy was established. (M.C.K.)
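The Weizsäcker-Bethe semi-empirical mass formula referred to above can be evaluated numerically; the sketch below uses typical textbook coefficients for an incompressible nucleus, not the compressible-nucleus generalization of the paper.

```python
def binding_energy(Z, A):
    """Semi-empirical (Weizsäcker-Bethe) binding energy in MeV.
    Coefficients (MeV) are common textbook values, assumed here."""
    a_v, a_s, a_c, a_sym = 15.8, 18.3, 0.714, 23.2
    N = A - Z
    B = (a_v * A                              # volume term
         - a_s * A ** (2 / 3)                 # surface term
         - a_c * Z * (Z - 1) / A ** (1 / 3)   # Coulomb term
         - a_sym * (N - Z) ** 2 / A)          # symmetry term
    # pairing term: even-even nuclei gain, odd-odd nuclei lose
    if Z % 2 == 0 and N % 2 == 0:
        B += 12.0 / A ** 0.5
    elif Z % 2 == 1 and N % 2 == 1:
        B -= 12.0 / A ** 0.5
    return B

print(round(binding_energy(26, 56) / 56, 2))  # B/A for iron-56, ≈ 8.8 MeV/nucleon
```

The surface and Coulomb terms dominate the competition the abstract discusses: enlarging the surface coefficient or the Coulomb energy shifts the beta-stability line and the computed mass excess.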
Background: The main objective of this study is to measure the relationship between morbidity, direct health care costs and the degree of clinical effectiveness (resolution) of health centres and health professionals by the retrospective application of Adjusted Clinical Groups in a Spanish population setting. The secondary objectives are to determine the factors behind inadequate correlations and the opinion of health professionals on these instruments. Methods/Design: We will carry out a multi-centre, retrospective study using patient records from 15 primary health care centres and population databases. The main measurements will be: general variables (age and sex, centre, service [family medicine, paediatrics] and medical unit), dependent variables (mean number of visits, episodes and direct costs), co-morbidity (Johns Hopkins University Adjusted Clinical Groups Case-Mix System) and effectiveness. The totality of centres/patients will be considered as the standard for comparison. The efficiency index for visits, tests (laboratory, radiology, others), referrals, pharmaceutical prescriptions and in total will be calculated as the ratio: observed variables/variables expected by indirect standardization. The model of cost per patient per year will differentiate the fixed/semi-fixed (visit) costs of the variables for each patient attended per year (N = 350,000 inhabitants). The mean relative weights of the cost of care will be obtained. Effectiveness will be measured using a set of 50 indicators of process, efficiency and/or health results, and an adjusted synthetic index will be constructed (method: 50th percentile). The correlation between the efficiency (relative-weight) and synthetic (by centre and physician) indices will be established using the coefficient of determination. The opinion/degree of acceptance of physicians (N = 1,000) will be measured using a structured questionnaire including various dimensions. Statistical analysis: multiple regression
Vidal‐Petiot, Emmanuelle; Moranne, Olivier; Mariat, Christophe; Boffa, Jean‐Jacques; Vrtovsnik, François; Scheen, André‐Jean; Krzesinski, Jean‐Marie; Flamant, Martin; Delanaye, Pierre
Aim: For drug dosing adaptation, the Kidney Disease: Improving Global Outcomes (KDIGO) guidelines recommend using the estimated glomerular filtration rate (eGFR) from the Chronic Kidney Disease Epidemiology Collaboration (CKD-EPI) equation, after 'de-indexation' by body surface area (BSA). In pharmacology, the Cockcroft-Gault (CG) equation is still recommended to adapt drug dosage. In the context of obesity, adjusted ideal body weight (AIBW) is sometimes preferred to actual body weight (ABW) for the CG equation. The aim of the present study was to compare the performance of the different GFR-estimating equations, non-indexed or de-indexed by BSA, for the purpose of drug-dosage adaptation in obese patients. Methods: We analysed data from patients with a body mass index (BMI) higher than 30 kg m−2 who underwent a GFR measurement. eGFR was calculated using the CKD-EPI and Modification of Diet in Renal Disease (MDRD) equations, de-indexed by BSA, and the CG equation, using either ABW, AIBW or lean body weight (LBW) for the weight variable, and compared with measured GFR, expressed in ml min−1. Results: In our population of obese patients, use of the AIBW instead of the ABW in the CG equation markedly improved the overall accuracy of this equation (57% for CG-ABW and 79% for CG-AIBW). The performance of the CG equation was no different when using LBW than when using AIBW. The MDRD and CKD-EPI equations de-indexed by BSA also performed well, with an overall higher accuracy for the de-indexed MDRD equation (80% and 76%, respectively). The de-indexed MDRD equation appeared to be the most suitable for estimating the non-indexed GFR for the purpose of drug dosage adaptation in obese patients.
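A minimal sketch of the Cockcroft-Gault estimate with adjusted ideal body weight may make the weight-variable choice concrete; the Devine ideal-weight formula and the 0.4 adjustment factor are common conventions assumed here, and the patient values are invented for illustration.

```python
def cockcroft_gault(age, weight_kg, scr_mg_dl, female):
    """Cockcroft-Gault creatinine clearance estimate (ml/min)."""
    crcl = (140 - age) * weight_kg / (72 * scr_mg_dl)
    return crcl * 0.85 if female else crcl

def adjusted_ideal_body_weight(height_cm, actual_kg, female, factor=0.4):
    """AIBW = IBW + factor * (actual - IBW), with Devine ideal body weight."""
    inches_over_5ft = max(height_cm / 2.54 - 60, 0)
    ibw = (45.5 if female else 50.0) + 2.3 * inches_over_5ft
    return ibw + factor * (actual_kg - ibw)

# Hypothetical obese male patient: 60 y, 170 cm, 110 kg, creatinine 1.0 mg/dL
aibw = adjusted_ideal_body_weight(170, 110, female=False)
print(round(cockcroft_gault(60, aibw, 1.0, female=False), 1))  # ≈ 92.8 ml/min
```

Using `actual_kg` directly instead of `aibw` would inflate the estimate for this patient, which is exactly the overestimation in obesity that motivates the AIBW convention discussed above.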
Results: Chiropractic adjustment can be effective in treating low back pain, although much of the research done shows only a modest benefit, similar to the results of more conventional treatments. Some studies suggest that spinal manipulation also may ...
How you prepare: No special preparation is required before a chiropractic adjustment. Chiropractic treatment may require a series of visits to your chiropractor. Ask your care provider about the frequency of visits and be ...
Borup, Morten; Grum, M.; Mikkelsen, Peter Steen
In many urban runoff systems infiltrating water contributes a substantial part of the total inflow and therefore most urban runoff modelling packages include hydrological models for simulating the infiltrating inflow. This paper presents a method for deterministic updating of the hydrological… This information is then used to update the states of the hydrological model. The method is demonstrated on the 20 km2 Danish urban catchment of Ballerup, which has a substantial amount of infiltration inflow after succeeding rain events, for a very rainy period of 17 days in August 2010. The results show big…
Brehler, Michael; Görres, Joseph; Wolf, Ivo; Franke, Jochen; von Recum, Jan; Grützner, Paul A.; Meinzer, Hans-Peter; Nabers, Diana
Intraarticular fractures of the calcaneus are routinely treated by open reduction and internal fixation followed by intraoperative imaging to validate the repositioning of bone fragments. C-Arm CT offers surgeons the possibility to directly verify the alignment of the fracture parts in 3D. Although the device provides more mobility, there is no sufficient information about the device-to-patient orientation for standard plane reconstruction. Hence, physicians have to manually align the image planes in a position that intersects with the articular surfaces. This can be a time-consuming step and imprecise adjustments lead to diagnostic errors. We address this issue by introducing novel semi-/automatic methods for adjustment of the standard planes on mobile C-Arm CT images. With the semi-automatic method, physicians can quickly adjust the planes by setting six points based on anatomical landmarks. The automatic method reconstructs the standard planes in two steps, first SURF keypoints (2D and newly introduced pseudo-3D) are generated for each image slice; secondly, these features are registered to an atlas point set and the parameters of the image planes are transformed accordingly. The accuracy of our method was evaluated on 51 mobile C-Arm CT images from clinical routine with manually adjusted standard planes by three physicians of different expertise. The average time of the experts (46s) deviated from the intermediate user (55s) by 9 seconds. By applying 2D SURF key points 88% of the articular surfaces were intersected correctly by the transformed standard planes with a calculation time of 10 seconds. The pseudo-3D features performed even better with 91% and 8 seconds.
Hortigóna, B.; Gallardo, J.M.; Nieto-García, E.J.; López, J.A.
The elastoplastic behaviour of steel used for structural member fabrication has received attention in order to facilitate mechanically resistant design. New Zealand and South African standards have adopted various theoretical approaches to describe such behaviour in stainless steels. With respect to the building industry, describing the tensile behaviour of the steel rebar used to produce reinforced concrete structures is of interest. Differences compared with the homogeneous material described in the above-mentioned standards and the related literature are discussed in this paper. Specifically, the presence of ribs and the TEMPCORE® technology used to produce carbon steel rebar may alter the elastoplastic model. Carbon steel rebar is shown to fit a Hollomon model giving hardening exponent values on the order of 0.17. Austenitic stainless steel rebar behaviour is better described using a modified Rasmussen model with a freely fitted exponent of 6. Duplex stainless steel shows a poor fit to any previous model.
Yan, Ying; Yi, Grace Y
Covariate measurement error occurs commonly in survival analysis. Under the proportional hazards model, measurement error effects have been well studied, and various inference methods have been developed to correct for error effects under such a model. In contrast, error-contaminated survival data under the additive hazards model have received relatively less attention. In this paper, we investigate this problem by exploring measurement error effects on parameter estimation and the change of the hazard function. New insights of measurement error effects are revealed, as opposed to well-documented results for the Cox proportional hazards model. We propose a class of bias correction estimators that embraces certain existing estimators as special cases. In addition, we exploit the regression calibration method to reduce measurement error effects. Theoretical results for the developed methods are established, and numerical assessments are conducted to illustrate the finite sample performance of our methods.
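Regression calibration, mentioned above as a way to reduce measurement error effects, replaces the error-prone covariate by its best linear predictor given the observed value; a minimal sketch under a classical additive error model with known error variance (all distributions and numbers are illustrative, not from the paper):

```python
import numpy as np

def regression_calibration(w, sigma_u2):
    """Replace error-prone measurements W = X + U by the best linear
    predictor E[X | W] under a classical additive error model with
    known error variance sigma_u2 (method-of-moments plug-in)."""
    mu = w.mean()
    sigma_w2 = w.var()
    lam = max(sigma_w2 - sigma_u2, 0.0) / sigma_w2   # reliability ratio
    return mu + lam * (w - mu)                       # shrink toward the mean

rng = np.random.default_rng(0)
x = rng.normal(2.0, 1.0, 5000)            # true covariate (unobserved)
w = x + rng.normal(0.0, 0.5, 5000)        # observed with error variance 0.25
x_hat = regression_calibration(w, 0.25)
print(round(np.corrcoef(x_hat, x)[0, 1], 2))
```

In the survival setting, `x_hat` would then enter the additive hazards estimating equations in place of the mismeasured covariate; the shrinkage by the reliability ratio is what counteracts the attenuation that naive use of `w` induces.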
The adjustment problem of the so-called combined (hybrid, integrated) network created with GNSS vectors and terrestrial observations has been the subject of many theoretical and applied works. The network adjustment in various mathematical spaces has been considered: in the Cartesian geocentric system, on a reference ellipsoid and on a mapping plane. For practical reasons, a geodetic coordinate system associated with the reference ellipsoid is often adopted. In this case, the Cartesian GNSS vectors are converted, for example, into geodesic parameters (azimuth and length) on the ellipsoid, but the simplest form of converted pseudo-observations is the direct differences of the geodetic coordinates. Unfortunately, such an approach may be essentially distorted by a systematic error resulting from the position error of the GNSS vector before its projection on the ellipsoid surface. In this paper, an analysis of the impact of this error on the determined measures of geometric ellipsoid elements, including the differences of geodetic coordinates or geodesic parameters, is presented. Our analysis of the adjustment of a combined network on the ellipsoid shows that the optimal functional approach with respect to the satellite observations is to create the observational equations directly for the original GNSS Cartesian vector components, writing them directly as functions of the geodetic coordinates (in numerical applications, we use the linearized forms of the observational equations with explicitly specified coefficients). While retaining the original character of the Cartesian vector, one avoids any systematic errors that may occur in the conversion of the original GNSS vectors to ellipsoid elements, for example the vector of geodesic parameters. The problem is developed theoretically and tested numerically. An example of the adjustment of a subnet loaded from the database of reference stations of the ASG-EUPOS system was considered for the preferred functional
The vision of model-based software engineering is to make models the main focus of software development and to automatically generate software from these models. Part of that idea already works today, but there are still difficulties when it comes to behaviour. Actually, there is no lack of models...
A. S. Aleynik
The paper is focused on the investigation of a fiber-optic interferometric sensor based on an array of fiber Bragg gratings. The mechanism of reflection spectra displacement of the fiber Bragg gratings under external temperature effects and static pressure is described. Experiment has shown that reflection spectra displacement of the Bragg gratings reduces the visibility of the interference pattern. A method of center wavelength adjustment is proposed for the optical radiation source in accordance with the current Bragg grating reflection spectra, based on pulse-relative modulation of the control signal for the Peltier element controller. A semiconductor vertical-cavity surface-emitting laser controlled by a pump driver is used as the light source. The method is implemented by the Peltier element controller regulating and stabilizing the light source temperature, and a programmable logic integrated circuit monitoring the Peltier element controller. The experiment has proved that the proposed method makes it possible to regulate the light source temperature in steps of 0.05 K and to adjust the optical radiation source center wavelength in steps of 0.05 nm. Experimental results have revealed that adjustment of the central wavelength of the radiation in steps of 0.005 nm enables interrogation of an array of four optical fiber sensors based on fiber Bragg gratings, formed in one optical fiber, under Bragg grating temperature changes from 0 °C to 300 °C and mechanical stretching of the optical fiber with a force of up to 2 N.
Dolin, R.M.; Hefele, J.
This report describes a Models Based Engineering (MBE) philosophy and implementation strategy that has been developed at Los Alamos National Laboratory's Center for Advanced Engineering Technology. A major theme in this discussion is that models based engineering is an information management technology enabling the development of information driven engineering. Unlike other information management technologies, models based engineering encompasses the breadth of engineering information, from design intent through product definition to consumer application.
Sobes, Vladimir [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Leal, Luiz C. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Arbanas, Goran [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States)
The purpose of this project is to couple differential and integral data evaluation in a continuous-energy framework. More specifically, the goal is to use the Generalized Linear Least Squares methodology employed in TSURFER to update the parameters of a resolved resonance region evaluation directly. Recognizing that the GLLS methodology in TSURFER is identical to the mathematical description of the simple Bayesian updating carried out in SAMMY, the computer code SAMINT was created to help use the mathematical machinery of SAMMY to update resolved resonance parameters based on integral data. Minimal modifications of SAMMY are required when used with SAMINT to make resonance parameter updates based on integral experimental data.
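The GLLS step that TSURFER and SAMINT/SAMMY share can be sketched as the standard linearized Bayesian update: posterior parameters p' = p + CGᵀ(GCGᵀ + V)⁻¹(m − Gp), with C the prior parameter covariance, G the sensitivity matrix, m the measured integral responses, and V the measurement covariance. The matrices below are toy numbers, not actual resonance parameters or integral data.

```python
import numpy as np

def glls_update(p, C, G, m, V):
    """One generalized linear least squares (Bayesian) update.
    p: prior parameter vector; C: prior parameter covariance;
    G: sensitivity matrix d(response)/d(parameter);
    m: measured integral responses; V: measurement covariance."""
    t = G @ p                                         # linearized computed responses
    K = C @ G.T @ np.linalg.inv(G @ C @ G.T + V)      # gain matrix
    p_new = p + K @ (m - t)                           # parameter update
    C_new = C - K @ G @ C                             # covariance shrinks
    return p_new, C_new

# Illustrative numbers only (not actual evaluation data):
p = np.array([1.0, 2.0])
C = np.diag([0.04, 0.09])
G = np.array([[1.0, 0.5]])
m = np.array([2.3])
V = np.array([[0.01]])
p_post, C_post = glls_update(p, C, G, m, V)
```

Because the measured response (2.3) exceeds the computed one (2.0), both parameters are pulled upward, and the posterior variances are smaller than the priors.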
We present and characterize an original experimental model to create chronic ischemic heart failure in the pig. Two ameroid constrictors were placed around the LAD and the circumflex artery. Two months after surgery, the pigs presented poor LV function associated with severe mitral valve insufficiency. Echocardiographic analysis showed substantial anomalies in radial and circumferential deformations, on both the anterior and lateral surfaces of the heart. These anomalies in function were coupled with anomalies of perfusion observed in echocardiography after injection of contrast medium. No evidence of myocardial infarction was observed on histological analysis. Our findings suggest that we were able to create and stabilize a chronic ischemic heart failure model in the pig. This model represents a useful tool for the development of new medical or surgical treatments in this field.
This study examines the international risks faced by multinational enterprises to understand their impact on the evaluation of investment projects. Moreover, it establishes a 'three-dimensional' theoretical framework of risk identification to analyse the composition of international risk indicators of multinational enterprises based on the theory…
Soltani, Hamid; Davari, Pooya; Blaabjerg, Frede
attractive due to its improved harmonic performance compared to a conventional ASD. In this digest, the input currents of the EI-based ASD are investigated and compared with the conventional ASDs with respect to interharmonics, which is an emerging power quality topic. First, the main causes...
Doo Yong Choi; Seong-Won Kim; Min-Ah Choi; Zong Woo Geem
Rapid detection of bursts and leaks in water distribution systems (WDSs) can reduce the social and economic costs incurred through direct loss of water into the ground, additional energy demand for water supply, and service interruptions. Many real-time burst detection models have been developed in accordance with the use of supervisory control and data acquisition (SCADA) systems and the establishment of district meter areas (DMAs). Nonetheless, no consideration has been given to how frequen...
Terpstra, Teun; Lindell, Michael K.
Although research indicates that adoption of flood preparations among Europeans is low, only a few studies have attempted to explain citizens' preparedness behavior. This article applies the Protective Action Decision Model (PADM) to explain flood preparedness intentions in the Netherlands. Survey data (N = 1,115) showed that…
Gryczka, Oliver; Heinrich, Stefan; Deen, N.G.; van Sint Annaland, M.; Kuipers, J.A.M.; Mörl, Lothar
Since the invention of the spouted bed technology by Mathur and Gishler (1955), different kinds of apparatus design were developed and a huge number of applications in nearly all branches of industry have emerged. Modeling of spouted beds by means of modern simulation tools, like discrete particle
Engsted, Tom; Haldrup, Niels
A new method is developed for estimation and testing of the linear quadratic adjustment cost model when the underlying time series are non-stationary, and the method is applied to modelling labour demand in Danish industrial sectors....
W. J. Massman; J. M. Forthofer; M. A. Finney
The ability to rapidly estimate wind speed beneath a forest canopy or near the ground surface in any vegetation is critical to practical wildland fire behavior models. The common metric of this wind speed is the "mid-flame" wind speed, UMF. However, the existing approach for estimating UMF has some significant shortcomings. These include the assumptions that...
Bounadja, E.; Djahbar, A.; Taleb, R.; Boudjema, Z.
The control of the doubly-fed induction generator (DFIG), used in wind energy conversion, has received a great deal of interest. Frequently, this control has been designed while ignoring the magnetic saturation effect in the DFIG model. The aim of the present work is twofold: firstly, the magnetic saturation effect is accounted for in the control design model; secondly, a new second-order sliding mode control scheme using adjustable gains (AG-SOSMC) is proposed to control the DFIG via its rotor-side converter. This scheme allows independent control of the generated active and reactive power. Conventionally, second-order sliding mode control (SOSMC) applied to the DFIG utilizes the super-twisting algorithm with fixed gains. In the proposed AG-SOSMC, a simple means by which the controller can adjust its behavior is used: a linear function represents the variation in gain as a function of the absolute value of the discrepancy between the reference rotor current and its measured value. The transient DFIG speed response using the aforementioned characteristic is compared with that obtained using the conventional SOSMC controller with fixed gains. Simulation results show that accurate dynamic performance, quicker transient response, and more accurate control are achieved for different operating conditions.
In photosynthetic organisms, control of light-harvesting is a key component of acclimation mechanisms that optimize photon conversion efficiencies. In this thesis, the interrelation of short- and long-term regulation of light-harvesting at photosystem II (PSII) was analyzed in the green alga Chlamydomonas reinhardtii. This model organism is able to gain carbon and energy through photosynthetic carbon dioxide fixation as well as heterotrophic feeding. A lowered inorganic or increased organic c...
Ding, Lili; Kurowski, Brad G; He, Hua; Alexander, Eileen S; Mersha, Tesfaye B; Fardo, David W; Zhang, Xue; Pilipenko, Valentina V; Kottyan, Leah; Martin, Lisa J
Genetic studies often collect data on multiple traits. Most genetic association analyses, however, consider traits separately and ignore potential correlation among traits, partially because of difficulties in statistical modeling of multivariate outcomes. When multiple traits are measured in a pedigree longitudinally, additional challenges arise because in addition to correlation between traits, a trait is often correlated with its own measures over time and with measurements of other family...
Brunelli, Alessandro; Salati, Michele; Refai, Majed; Xiumé, Francesco; Rocco, Gaetano; Sabbatini, Armando
The objectives of this study were to develop a risk-adjusted model to estimate individual postoperative costs after major lung resection and to use it for internal economic audit. Variable and fixed hospital costs were collected for 679 consecutive patients who underwent major lung resection from January 2000 through October 2006 at our unit. Several preoperative variables were used to develop a risk-adjusted econometric model from all patients operated on during the period 2000 through 2003 by a stepwise multiple regression analysis (validated by bootstrap). The model was then used to estimate the postoperative costs in the patients operated on during the 3 subsequent periods (years 2004, 2005, and 2006). Observed and predicted costs were then compared within each period by the Wilcoxon signed rank test. Multiple regression and bootstrap analysis yielded the following model for predicted postoperative cost: 11,078 + 1,340.3 × (age > 70 years) + 1,927.8 × (cardiac comorbidity) - 95 × ppoFEV1%. No differences between predicted and observed costs were noted in the first 2 periods analyzed (year 2004, $6188.40 vs $6241.40, P = .3; year 2005, $6308.60 vs $6483.60, P = .4), whereas in the most recent period (2006) observed costs were significantly lower than predicted ($3457.30 vs $6162.70, P < .0001). Greater precision in predicting outcome and costs after therapy may assist clinicians in the optimization of clinical pathways and allocation of resources. Our economic model may be used as a methodologic template for economic audit in our specialty and complement more traditional outcome measures in the assessment of performance.
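The reported regression can be evaluated directly; a minimal sketch, assuming the age and comorbidity terms are 0/1 indicators, as the equation suggests:

```python
def predicted_postop_cost(age, cardiac_comorbidity, ppo_fev1_pct):
    """Evaluate the abstract's regression equation for postoperative cost.
    Coefficients are taken verbatim from the abstract; the 0/1 indicator
    coding of the age and comorbidity terms is our reading of it."""
    cost = 11078.0
    cost += 1340.3 if age > 70 else 0.0            # indicator: age > 70 years
    cost += 1927.8 if cardiac_comorbidity else 0.0 # indicator: cardiac comorbidity
    cost -= 95.0 * ppo_fev1_pct                    # predicted postoperative FEV1 (%)
    return cost
```

For example, a 65-year-old without cardiac comorbidity and a ppoFEV1 of 60% gives 11,078 - 5,700 = 5,378, while a 75-year-old with cardiac comorbidity at the same ppoFEV1 gives 8,646.1.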
D. U. Campos-Delgado
A self-tuning algorithm is presented for on-line insulin dosage adjustment in type 1 diabetic patients (chronic stage). The suggested algorithm does not need information about the patient's insulin–glucose dynamics (model-free). Three doses are programmed daily, where a combination of two types of insulin, rapid/short and intermediate/long acting, is injected into the patient through a subcutaneous route. The dose adaptation is performed by reducing the error of the blood glucose level from euglycemia. In this way, a total of five doses are tuned per day, three rapid/short and two intermediate/long, with a large penalty to avoid hypoglycemic scenarios. Closed-loop simulation results are illustrated using a detailed nonlinear model of the subcutaneous insulin–glucose dynamics in a type 1 diabetic patient with meal intake.
Shin, Kwang Cheol; Park, Seung Bo; Jo, Geun Sik
In the fields of production, manufacturing and supply chain management, Radio Frequency Identification (RFID) is regarded as one of the most important technologies. Nowadays, Mobile RFID, which is often installed in carts or forklift trucks, is increasingly being applied to the search for and checkout of items in warehouses, supermarkets, libraries and other industrial fields. In using Mobile RFID, since the readers are continuously moving, they can interfere with each other when they attempt to read the tags. In this study, we suggest a Time Division Multiple Access (TDMA) based anti-collision algorithm for Mobile RFID readers. Our algorithm automatically adjusts the frame size of each reader without using manual parameters by adopting the dynamic frame size adjustment strategy when collisions occur at a reader. Through experiments on a simulated environment for Mobile RFID readers, we show that the proposed method improves the number of successful transmissions by about 228% on average, compared with Colorwave, a representative TDMA based anti-collision algorithm. PMID:22399942
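The abstract does not specify the exact frame-size rule, so the sketch below uses a generic dynamic frame-size adjustment in the spirit of TDMA/framed-ALOHA anti-collision schemes: the frame doubles when a collision is observed and shrinks after a collision-free frame. The class name, bounds, and the doubling/halving policy are our assumptions, not the paper's algorithm.

```python
class ReaderFrame:
    """Hypothetical dynamic frame-size controller for a single TDMA
    RFID reader. Grows the frame under collisions to spread reader
    transmissions over more slots; shrinks it when the channel is clear."""

    def __init__(self, size=4, min_size=1, max_size=64):
        self.size = size
        self.min_size = min_size
        self.max_size = max_size

    def end_of_frame(self, collisions):
        """Called once per frame with the number of observed collisions;
        returns the frame size to use for the next frame."""
        if collisions > 0:
            self.size = min(self.size * 2, self.max_size)   # back off
        else:
            self.size = max(self.size // 2, self.min_size)  # reclaim slots
        return self.size
```

Starting from a frame of 4 slots, a collision doubles it to 8; a subsequent clean frame halves it back to 4, and repeated collisions saturate at the 64-slot cap.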
Internet protocol TV (IPTV) is a promising Internet killer application, which integrates video, voice, and data onto a single IP network, and offers viewers an innovative set of choices and control over their TV content. To provide high-quality IPTV services, an effective strategy is based on caching. This work proposes a segment-based two-tier caching approach, which divides each video into multiple segments to be cached. The approach also partitions the cache space into two layers, where the first layer mainly caches to-be-played segments and the second layer saves possibly played segments. As segment access becomes frequent, the proposed approach enlarges the first layer and reduces the second layer, and vice versa. Because requested segments may not be accessed frequently, this work further designs an admission control mechanism to determine whether an incoming segment should be cached or not. The cache architecture takes forward/stop playback into account and may replace unused segments during interrupted playback. Finally, we conduct comprehensive simulation experiments to evaluate the performance of the proposed approach. The results show that our approach yields a higher hit ratio than previous work under various environmental parameters.
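A minimal sketch of the two-tier segment idea with admission control; the tier capacities, the admission threshold, and the promotion/demotion policy are illustrative assumptions, not taken from the paper.

```python
class TwoTierSegmentCache:
    """Toy two-tier video-segment cache: tier 1 holds to-be-played
    segments, tier 2 holds possibly-played (demoted) segments.
    A segment is admitted only after it has been requested often
    enough (simple admission control)."""

    def __init__(self, tier1_cap=4, tier2_cap=4, admit_after=2):
        self.tier1 = []          # most-recent-first segment ids
        self.tier2 = []
        self.counts = {}         # request counts for admission control
        self.tier1_cap = tier1_cap
        self.tier2_cap = tier2_cap
        self.admit_after = admit_after

    def request(self, seg):
        """Return True on a cache hit, False on a miss."""
        if seg in self.tier1:
            self.tier1.remove(seg)
            self.tier1.insert(0, seg)          # refresh recency
            return True
        if seg in self.tier2:
            self.tier2.remove(seg)
            self._insert_tier1(seg)            # promote a re-used segment
            return True
        # Miss: admit only segments requested at least admit_after times.
        self.counts[seg] = self.counts.get(seg, 0) + 1
        if self.counts[seg] >= self.admit_after:
            self._insert_tier1(seg)
        return False

    def _insert_tier1(self, seg):
        self.tier1.insert(0, seg)
        if len(self.tier1) > self.tier1_cap:
            self.tier2.insert(0, self.tier1.pop())  # demote LRU segment
            if len(self.tier2) > self.tier2_cap:
                self.tier2.pop()                    # evict from tier 2
```

With `admit_after=2`, the first request for a segment is a miss and is not cached, the second is a miss that admits it, and the third is a hit.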
Hansen, Kristian Schultz; Østerdal, Lars Peter Raahave
time tradeoff (TTO) and standard gamble (SG) scores. We investigate deterministic and probabilistic models and consider five different families of discounting functions in all. The second part of the paper discusses four issues recurrently debated in the literature. This discussion includes questioning the SG method as the gold standard for estimation of the health state index, reexamining the role of the constant-proportional tradeoff condition, revisiting the problem of double discounting of QALYs, and suggesting that it is not a matter of choosing between TTO and SG procedures as the combination...
Hansen, Kristian Schultz; Østerdal, Lars Peter
time trade-off (TTO) and standard gamble (SG) scores. We investigate deterministic and probabilistic models and consider five different families of discounting functions in all. This discussion includes questioning the SG method as the gold standard of the health state index, re-examining the role of the constant-proportional trade-off condition, revisiting the problem of double discounting of QALYs, and suggesting that it is not a matter of choosing between TTO and SG procedures as the combination of these two can be used to disentangle risk aversion from discounting. We find that caution must be taken...
Nizamani, Sarwat; Memon, Nasrullah; Wiil, Uffe Kock
We propose a cluster-based classification model for suspicious email detection and other text classification tasks. The text classification tasks comprise many training examples that require a complex classification model. Using clusters for classification makes the model simpler and increases...; the classifier is trained on each cluster, having reduced dimensionality and fewer examples. The experimental results show that the proposed model outperforms the existing classification models for the task of suspicious email detection and topic categorization on the Reuters-21578 and 20 Newsgroups datasets. Our model also outperforms A Decision Cluster Classification (ADCC) and the Decision Cluster Forest Classification (DCFC) models on the Reuters-21578 dataset.
Jensen, Christian Søndergaard; Lu, Hua; Yang, Bin
The tracking of the locations of moving objects in large indoor spaces is important, as it enables a range of applications related to, e.g., security and indoor navigation and guidance. This paper presents a graph model based approach to indoor tracking that offers a uniform data management infrastructure for different symbolic positioning technologies, e.g., Bluetooth and RFID. More specifically, the paper proposes a model of indoor space that comprises a base graph and mappings that represent the topology of indoor space at different levels. The resulting model can be used for one or several indoor positioning technologies. Focusing on RFID-based positioning, an RFID-specific reader deployment graph model is built from the base graph model. This model is then used in several algorithms for constructing and refining trajectories from raw RFID readings. Empirical studies with implementations...
Alshareef, Abdurrahman; Sarjoughian, Hessam S.; Zarrin, Bahram
architecture and the UML concepts. In this paper, we further this work by grounding Activity-based DEVS modeling and developing a fully-fledged modeling engine to demonstrate applicability. We also detail the relevant aspects of the created metamodel in terms of modeling and simulation. A significant number...
Ivins, E. R.; Seroussi, H. L.; Wiens, D.; Larour, E. Y.
Alkaline basalts of Marie Byrd Land (MBL) have been interpreted as evidence of a mantle plume impinging on the lithosphere from below at about 85-80 Ma and again at 30-20 Ma. Because of the lack of structural and stratigraphic mapping due to ice sheet cover, and even a general lack of sufficient bottom topography, it is impossible to identify and classify the main characteristics of such a putative plume with respect to well-studied ones, such as the Yellowstone or Raton hotspots. Recent POLENET seismic mapping has identified possible plume structures that could extend across the upper mantle beneath the Ruppert Coast (RC) in southeast MBL, and a possible plume beneath the Bentley Subglacial Trench (BST), some 1000 km to the southwest of RC, on the opposite side of MBL. Mapping of subglacial lakes via altimetry allows reconstruction of basal conditions that are consistent with melt generation rates and patterns of basal water routing. We extensively model the hotspot heat flux caused by a plume buried beneath the crust of the West Antarctic Ice Sheet (WAIS), employing a set of 3-D thermomechanical Stokes flow simulations with the Ice Sheet System Model (ISSM). We find that a mantle upwelling structure beneath the BST, upstream of Subglacial Lake Whillans (SLW) and Whillans Ice Stream, is compatible with observations when the peak plume-related geothermal heat flux, qGHF, approaches 200 mW/m², quite consistent with heat flux measurements at the WISSARD core site, where heat flux probes penetrated the sediments of SLW. For a plume at RC, the ISSM predictions do allow a plume consistent with seismic mapping, but require the peak plume flux to be bounded above by qGHF ≤ 150 mW/m². New maps of the relatively slower upper mantle shear wave velocity beneath WAIS reveal that the slowest velocity corresponds to the mantle below MBL. Using our new constraints on a 3-D plume interpretation of this slowness, we determine the perturbations to GIA modeling that are required to
Balakin, P. D.; Belkov, V. N.; Shtripling, L. O.
Full application of the available power and stationary mode preservation for the power station (engine) operation of the transport machine under the conditions of variable external loading, are topical issues. The issues solution is possible by means of mechanical drives with the autovaried rate transfer function and nonholonomic constraint of the main driving mediums. Additional to the main motion, controlled motion of the driving mediums is formed by a variable part of the transformed power flow and is implemented by the integrated control loop, functioning only on the basis of the laws of motion. The mathematical model of the mechanical autovariator operation is developed using Gibbs function, acceleration energy; the study results are presented; on their basis, the design calculations of the autovariator driving mediums and constraints, including its automatic control loop, are possible.
Luciana Spica Almilia
This study examines the effect of overconfidence and experience on increasing or reducing the information order effect in investment decision making. The subject criteria in this research are: professional investors (having knowledge and experience in the field of investment and the stock market) and nonprofessional investors (having knowledge in the field of investment and the stock market). Based on these criteria, the subjects in this research include accounting students, capital market practitioners, and investors. This research uses a 2 × 2 between-subjects experimental method, conducted via a web-based instrument. The characteristic of the individual (high confidence or low confidence) is measured by a calibration test. The independent variables used in this research consist of two active (manipulated) independent variables: (1) pattern of information presentation (step-by-step or end-of-sequence); and (2) presentation order (good news - bad news or bad news - good news). The dependent variable in this research is the revision of the investment decision made by the research subjects. Participants in this study were 78 nonprofessional investors and 48 professional investors. The research result is consistent with the prediction that individuals who have a high level of confidence will tend to ignore the available information; as a consequence, individuals with a high level of confidence will be spared from the effects of the information order.
Paulo C. Coradi
ABSTRACT: The aim of this study was to evaluate the influence of the initial moisture content of soybeans and the drying air temperature on drying kinetics and grain quality, and to find the mathematical model that best fits the experimental drying data, the effective diffusivity, and the isosteric heat of desorption. The experimental design was completely randomized (CRD) in a factorial scheme (4 × 2): four drying temperatures (75, 90, 105 and 120 °C) and two initial moisture contents (25 and 19% d.b.), with three replicates. The initial moisture content of the product interferes with the drying time. The model of Wang and Singh proved the most suitable to describe the drying of soybeans for drying air temperatures of 75, 90, 105 and 120 °C and initial moisture contents of 19 and 25% (d.b.). The effective diffusivity obtained from the drying of soybeans was highest (2.5 × 10⁻¹¹ m² s⁻¹) at a temperature of 120 °C and a water content of 25% (d.b.). Drying of soybeans at higher temperatures (above 105 °C) and higher initial water content (25% d.b.) also increases the amount of energy (3,894.57 kJ kg⁻¹), i.e., the isosteric heat of desorption, necessary to perform the process. Drying air temperature and different initial moisture contents affected the quality of soybeans along the drying time (electrical conductivity of 540.35 µS cm⁻¹ g⁻¹); however, they did not affect the final yield of the oil extracted from the soybean grains (15.69%).
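The Wang and Singh thin-layer drying model referenced above is MR = 1 + a·t + b·t², which is linear in its coefficients and can therefore be fitted by ordinary least squares; the drying data below are synthetic, for illustration only.

```python
import numpy as np

def wang_singh(t, a, b):
    """Wang and Singh model: moisture ratio MR = 1 + a*t + b*t**2."""
    return 1.0 + a * t + b * t * t

# Synthetic drying curve (time in hours, moisture ratio), for illustration.
t = np.array([0.0, 0.5, 1.0, 1.5, 2.0])
mr = np.array([1.0, 0.62, 0.35, 0.18, 0.08])

# MR - 1 = a*t + b*t^2 is linear in (a, b): solve by least squares.
A = np.column_stack([t, t * t])
(a_hat, b_hat), *_ = np.linalg.lstsq(A, mr - 1.0, rcond=None)
```

By construction the fitted curve passes through MR = 1 at t = 0, and for a drying (decreasing) curve the fitted coefficient a is negative.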
Barkhouse, K L; Van Vleck, L D; Cundiff, L V; Buchanan, D S; Marshall, D M
Records on growth traits were obtained from five Midwestern agricultural experiment stations as part of a beef cattle crossbreeding project (NC-196). Records on birth weight (BWT, n = 3,490), weaning weight (WWT, n = 3,237), and yearling weight (YWT, n = 1,372) were analyzed within locations and pooled across locations to obtain estimates of breed of sire differences. Solutions for breed of sire differences were adjusted to the common base year of 1993. Then, factors to use with within-breed expected progeny differences (EPD) to obtain across-breed EPD were calculated. These factors were compared with factors obtained from similar analyses of records from the U.S. Meat Animal Research Center (MARC). Progeny of Brahman sires mated to Bos taurus cows were heaviest at birth and among the lightest at weaning. Simmental and Gelbvieh sires produced the heaviest progeny at weaning. Estimates of heritability pooled across locations were .34, .19, and .07 for BWT, WWT, and YWT, respectively. Regression coefficients of progeny performance on EPD of sire were 1.25 ± .09, .98 ± .13, and .62 ± .18 for BWT, WWT, and YWT, respectively. Rankings of breeds of sire generally did not change when adjusted for sire sampling. Rankings were generally similar to those previously reported for MARC data, except for Limousin and Charolais sires, which ranked lower for BWT and WWT at NC-196 locations than at MARC. Adjustment factors used to obtain across-breed EPD were largest for Brahman for BWT and for Gelbvieh for WWT. The data for YWT allow only comparison of Angus with Simmental and of Gelbvieh with Limousin.
Two-year simulation experiments with a superparameterized climate model, SP-CAM, are performed to understand the fast tropical (30°S-30°N) cloud response to an instantaneous quadrupling of the CO2 concentration with SST held fixed at present-day values. The greenhouse effect of the CO2 perturbation quickly warms the tropical land surfaces by an average of 0.5 K. This shifts rising motion, surface precipitation, and cloud cover at all levels from the ocean to the land, with only small net tropical-mean cloud changes. There is a widespread average reduction of about 80 m in the depth of the trade inversion capping the marine boundary layer (MBL) over the cooler subtropical oceans. One apparent contributing factor is CO2-enhanced downwelling longwave radiation, which reduces boundary-layer radiative cooling, a primary driver of turbulent entrainment through the trade inversion. A second contributor is a slight CO2-induced heating of the free troposphere above the MBL, which strengthens the trade inversion and also inhibits entrainment. There is a corresponding downward displacement of MBL clouds with a very slight decrease in mean cloud cover and albedo. Two-dimensional cloud-resolving model (CRM) simulations of this MBL response are run to steady state using composite SP-CAM simulated thermodynamic and wind profiles from a representative cool subtropical ocean regime, for the control and 4xCO2 cases. Simulations with a CRM grid resolution equal to that of SP-CAM are compared with much finer resolution simulations. The coarse-resolution simulations maintain a cloud fraction and albedo comparable to SP-CAM, but the fine-resolution simulations have a much smaller cloud fraction. Nevertheless, both CRM configurations simulate a reduction in inversion height comparable to SP-CAM. The changes in low cloud cover and albedo in the CRM simulations are small, but both simulations predict a slight reduction in low cloud albedo, as in SP-CAM.
Harrison, Sean; Tilling, Kate; Turner, Emma L; Lane, J Athene; Simpkin, Andrew; Davis, Michael; Donovan, Jenny; Hamdy, Freddie C; Neal, David E; Martin, Richard M
Previous studies indicate a possible inverse relationship between prostate-specific antigen (PSA) and body mass index (BMI), and a positive relationship between PSA and age. We investigated the associations between age, BMI, PSA, and screen-detected prostate cancer to determine whether an age-BMI-adjusted PSA model would be clinically useful for detecting prostate cancer. Cross-sectional analysis nested within the UK ProtecT trial of treatments for localized cancer. Of 18,238 men aged 50-69 years, 9,457 men without screen-detected prostate cancer (controls) and 1,836 men with prostate cancer (cases) met the inclusion criteria: no history of prostate cancer or diabetes; PSA ... PSA, age, and BMI were modelled in all men, controlling for prostate cancer status. In the 11,293 included men, the median PSA was 1.2 ng/ml (IQR: 0.7-2.6); mean age 61.7 years (SD 4.9); and mean BMI 26.8 kg/m² (SD 3.7). There was a 5.1% decrease in PSA per 5 kg/m² increase in BMI (95% CI 3.4-6.8) and a 13.6% increase in PSA per 5-year increase in age (95% CI 12.0-15.1). Interaction tests showed no evidence of different associations between age, BMI, and PSA in men above and below 3.0 ng/ml (all p for interaction >0.2). The age-BMI-adjusted PSA model performed as well as an age-adjusted model based on National Institute for Health and Care Excellence (NICE) guidelines at detecting prostate cancer. Age and BMI were associated with small changes in PSA. An age-BMI-adjusted PSA model is no more clinically useful for detecting prostate cancer than current NICE guidelines. Future studies looking at the effect of different variables on PSA, independent of their effect on prostate cancer, may improve the discrimination of PSA for prostate cancer.
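A back-of-the-envelope use of the abstract's estimates (a 5.1% decrease in PSA per 5 kg/m² of BMI and a 13.6% increase per 5 years of age) to standardize a measured PSA to a reference age and BMI. The multiplicative form and the reference values are our illustrative choices, not the study's published model.

```python
def adjusted_psa(psa, age, bmi, ref_age=60.0, ref_bmi=25.0):
    """Standardize a measured PSA (ng/ml) to a reference age and BMI.
    Uses the abstract's point estimates: +13.6% PSA per 5 years of age,
    -5.1% PSA per 5 kg/m^2 of BMI. Reference values are illustrative."""
    age_factor = 1.136 ** ((age - ref_age) / 5.0)   # expected effect of age
    bmi_factor = 0.949 ** ((bmi - ref_bmi) / 5.0)   # expected effect of BMI
    return psa / (age_factor * bmi_factor)
```

At the reference age and BMI the PSA is returned unchanged; an older man's PSA is adjusted downward (part of it is "expected" from age), while a higher-BMI man's PSA is adjusted upward (hemodilution suppresses measured PSA).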
Fuller, Theodore K.; Venditti, Jeremy G.; Nelson, Peter A.; Palen, Wendy J.
Disruptions to sediment supply continuity caused by run-of-river (RoR) hydropower development have the potential to cause downstream changes in surface sediment grain size which can influence the productivity of salmon habitat. The most common approach to understanding the impacts of RoR hydropower is to study channel changes in the years following project development, but by then, any impacts are manifest and difficult to reverse. Here we use a more proactive approach, focused on predicting impacts in the project planning stage. We use a one-dimensional morphodynamic model to test the hypothesis that the greatest risk of geomorphic change and impact to salmon habitat from a temporary sediment supply disruption exists where predevelopment sediment supply is high and project design creates substantial sediment storage volume. We focus on the potential impacts in the reach downstream of a powerhouse for a range of development scenarios that are typical of projects developed in the Pacific Northwest and British Columbia. Results indicate that increases in the median bed surface size (D50) are minor if development occurs on low sediment supply streams (<1 mm for supply rates of 1 × 10⁻⁵ m² s⁻¹ or lower), and substantial for development on high sediment supply streams (8-30 mm for supply rates between 5.5 × 10⁻⁴ and 1 × 10⁻³ m² s⁻¹). However, high sediment supply streams recover rapidly to the predevelopment surface D50 (~1 year) if sediment supply can be reestablished.
Z. H. Yang
To address the problem of co-registering airborne laser point cloud data with synchronously acquired digital images, this paper proposes a registration method based on combined adjustment. By integrating tie points and elevation-constraint pseudo-observations derived from the point cloud, and using least-squares adjustment to solve for corrections to the exterior orientation elements of each image, high-precision registration results can be obtained. To ensure the reliability of the tie points and the effectiveness of the pseudo-observations, the paper also proposes a point-cloud-constrained SIFT matching and optimization method that ensures the tie points are located on flat terrain. In experiments with airborne laser point cloud data and its synchronous digital images, the original POS data yielded an error of about 43 pixels in image space; correcting only the bore-sight of the POS system still left an error of 1.3 pixels in image space. The proposed method, which treats the corrections to the exterior orientation elements of each image as unknowns, reduces the error to 0.15 pixels.
In accordance with decisions taken by the Finance Committee and Council in December 2007, salaries are adjusted with effect from 1 January 2008. Scale of basic salaries and scale of stipends paid to fellows (Annex R A 5 and R A 6 respectively): increased by 0.71% with effect from 1 January 2008. As a result of the stability of the Geneva consumer price index, the following elements do not increase: a) Family Allowance, Child Allowance and Infant Allowance (Annex R A 3); b) Reimbursement of education fees: maximum amounts of reimbursement (Annex R A 4.01) for the academic year 2007/2008. Related adjustments will be applied, wherever applicable, to Paid Associates and Students. As in the past, the actual percentage increase of each salary position may vary, due to the application of a constant step value and rounding effects. Human Resources Department Tel. 73566
Carlson, R.W.; Covic, J.; Leininger, G.
In a rotating fan beam tomographic scanner there is included an adjustable collimator and shutter assembly. The assembly includes a fan angle collimation cylinder having a plurality of different length slots through which the beam may pass for adjusting the fan angle of the beam. It also includes a beam thickness cylinder having a plurality of slots of different widths for adjusting the thickness of the beam. Further, some of the slots have filter materials mounted therein so that the operator may select from a plurality of filters. Also disclosed is a servo motor system which allows the operator to select the desired fan angle, beam thickness and filter from a remote location. An additional feature is a failsafe shutter assembly which includes a spring biased shutter cylinder mounted in the collimation cylinders. The servo motor control circuit checks several system conditions before the shutter is rendered openable. Further, the circuit cuts off the radiation if the shutter fails to open or close properly. A still further feature is a reference radiation intensity monitor which includes a tuning-fork shaped light conducting element having a scintillation crystal mounted on each tine. The monitor is placed adjacent to the collimator, between it and the source, with the pair of crystals on either side of the fan beam.
de La Cal, E. A.; Fernández, E. M.; Quiroga, R.; Villar, J. R.; Sedano, J.
In previous work a methodology was defined, based on the design of a hybrid genetic algorithm-genetic programming (GAP) method and an incremental training technique adapted to learning series of stock market values. The GAP technique consists of a fusion of GP and GA. The GAP algorithm implements an automatic search for crisp trading rules, taking as training objectives both the optimization of the return obtained and the minimization of the assumed risk. Applying the proposed methodology, rules were obtained for an eight-year period of the S&P500 index. The achieved adjustment of the return-risk relation generated rules whose returns in the testing period were far superior to those obtained with standard methodologies, and even clearly superior to Buy&Hold. This work shows that the proposed methodology is valid for different assets in a different market than in previous work.
Huanqing Cui; Xuemin Du; Juan Wang; Tianhong Tang; Tianzhun Wu
Hydrogel-based shape-adjustable films were successfully fabricated via grafting poly(N-isopropylacrylamide) (PNIPAM) onto one side of polyimide (PI) films. The prepared PI-g-PNIPAM films exhibited rapid, reversible, and repeatable bending/unbending behavior on heating to near-human-body temperature (37 °C) or cooling to 25 °C. This excellent property of PI-g-PNIPAM films resulted from the lower critical solution temperature (LCST) of PNIPAM at about 32 °C. Varying the thickness of the PNIPAM hydrogel layer regulated the thermo-responsive bending degree and response speed of the PI-g-PNIPAM films. The thermo-induced shrinkage of the hydrogel layers can tune the curvature of the PI films, which have potential applications in wearable and implantable devices.
Kascholke, Christian; Hendrikx, Stephan; Flath, Tobias; Kuzmenka, Dzmitry; Dörfler, Hans-Martin; Schumann, Dirk; Gressenbuch, Mathias; Schulze, F Peter; Schulz-Siegmund, Michaela; Hacker, Michael C
Biodegradability is a crucial characteristic to improve the clinical potential of sol-gel-derived glass materials. To this end, a set of degradable organic/inorganic class II hybrids from a tetraethoxysilane (TEOS)-derived silica sol and oligovalent cross-linker oligomers containing oligo(d,l-lactide) domains was developed and characterized. A series of 18 oligomers (Mn: 1100-3200 Da) with different degrees of ethoxylation and varying length of oligoester units was established and the chemical composition was determined. Applicability of an established indirect rapid prototyping method enabled fabrication of a total of 85 different hybrid scaffold formulations from 3-isocyanatopropyltriethoxysilane-functionalized macromers. In vitro degradation was analyzed over 12 months and a continuous linear weight loss (0.2-0.5 wt%/d) combined with only moderate material swelling was detected, which was controlled by oligo(lactide) content and matrix hydrophilicity. Compressive strength (2-30 MPa) and compressive modulus (44-716 MPa) were determined, and total content, oligo(ethylene oxide) content, oligo(lactide) content and molecular weight of the oligomeric cross-linkers as well as material porosity were identified as the main factors determining hybrid mechanics. Cytocompatibility was assessed by cell culture experiments with human adipose tissue-derived stem cells (hASC). Cell migration into the entire scaffold pore network was indicated and continuous proliferation over 14 days was found. ALP activity increased linearly over 2 weeks, indicating osteogenic differentiation. The presented glass-based hybrid concept with precisely adjustable material properties holds promise for regenerative purposes. Adaption of degradation kinetics toward physiological relevance is still an unmet challenge of (bio-)glass engineering. We therefore present a glass-derived hybrid material with adjustable degradation. A flexible design concept based on degradable multi-armed oligomers was combined with an
Suzuki, Yôiti; Watanabe, Kanji; Iwaya, Yukio; Gyoba, Jiro; Takane, Shouichi
Because the transfer functions governing subjective sound localization (HRTFs) show strong individuality, sound localization systems based on synthesis of HRTFs require suitable HRTFs for individual listeners. However, it is impractical to obtain HRTFs for all listeners by measurement. Improving sound localization by adjusting non-individualized HRTFs to a specific listener based on that listener's anthropometry might be a practical method. This study first developed a new method to estimate interaural time differences (ITDs) using HRTFs. Then correlations between ITDs and anthropometric parameters were analyzed using the canonical correlation method. Results indicated that parameters relating to head size and to shoulder and ear positions are significant. Consequently, we attempted to express ITDs based on a listener's anthropometric data. In this process, the change of ITDs as a function of azimuth angle was parameterized as a sum of sine functions. The parameters were then analyzed using multiple regression analysis, in which the anthropometric parameters were used as explanatory variables. The predicted, or individualized, ITDs were installed in the non-individualized HRTFs to evaluate sound localization performance. Results showed that individualization of ITDs improved horizontal sound localization.
The purpose of the paper is to obtain insight into and provide practical advice for event-based conceptual modeling. We analyze a set of event concepts and use the results to formulate a conceptual event model that is used to identify guidelines for creation of dynamic process models and static information models. We characterize events as short-duration processes that have participants, consequences, and properties, and that may be modeled in terms of information structures. The conceptual event model is used to characterize a variety of event concepts and it is used to illustrate how events can be used to integrate dynamic modeling of processes and static modeling of information structures. The results are unique in the sense that no other general event concept has been used to unify a similar broad variety of seemingly incompatible event concepts. The general event concept can be used...
Braun, Alexander; Kuo, Chung-Yen; Shum, C. K.; Wu, Patrick; van der Wal, Wouter; Fotopoulos, Georgia
Glacial Isostatic Adjustment (GIA) modelling in North America relies on relative sea level information which is primarily obtained from areas far away from the uplift region. The lack of accurate geodetic observations in the Great Lakes region, which is located in the transition zone between uplift and subsidence due to the deglaciation of the Laurentide ice sheet, has prevented more detailed studies of this former margin of the ice sheet. Recently, observations of vertical crustal motion from improved GPS network solutions and combined tide gauge and satellite altimetry solutions have become available. This study compares these vertical motion observations with predictions obtained from 70 different GIA models. The ice sheet margin is distinct from the centre and far field of the uplift because the sensitivity of the GIA process towards Earth parameters such as mantle viscosity is very different. Specifically, the margin area is most sensitive to the uppermost mantle viscosity and allows for better constraints of this parameter. The 70 GIA models compared herein have different ice loading histories (ICE-3/4/5G) and Earth parameters including lateral heterogeneities. The root-mean-square differences between the 6 best models and the two sets of observations (tide gauge/altimetry and GPS) are 0.66 and 1.57 mm/yr, respectively. Both sets of independent observations are highly correlated and show a very similar fit to the models, which indicates their consistent quality. Therefore, both data sets can be considered as a means for constraining and assessing the quality of GIA models in the Great Lakes region and the former margin of the Laurentide ice sheet.
We present and discuss a modeling approach that supports event-based modeling of information and activity in information systems. Interacting human actors and IT-actors may carry out such activity. We use events to create meaningful relations between information structures and the related...
One of the research areas in RFID systems is tag anti-collision protocols: how to reduce identification time for a given number of tags in the field of an RFID reader. There are two types of tag anti-collision protocols for RFID systems: tree-based algorithms and slotted-aloha-based algorithms. Many anti-collision algorithms have been proposed in recent years, especially tree-based protocols. However, challenges remain in enhancing system throughput and stability, because the underlying technologies face performance limitations when network density is high. In particular, tree-based protocols suffer from long identification delays. Recently, the Hybrid Hyper Query Tree (H2QT) protocol, a tree-based approach, was proposed with the aim of speeding up tag identification in large-scale RFID systems. The main idea of H2QT is to track tag responses and try to predict the distribution of tag IDs in order to reduce collisions. In this paper, we propose a pre-detection tree-based algorithm, called the Adaptive Pre-Detection Broadcasting Query Tree algorithm (APDBQT), to avoid those unnecessary queries. Our proposed APDBQT protocol reduces not only collisions but also idle cycles, by using a pre-detection scheme and an adjustable slot-size mechanism. The simulation results show that the proposed technique provides superior performance in high-density environments. It is shown that APDBQT is effective in increasing system throughput and minimizing identification delay.
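For readers unfamiliar with tree-based anti-collision, the basic query-tree mechanism that H2QT and APDBQT refine can be sketched in a few lines. The toy simulation below (tag IDs are made up, and neither pre-detection nor slot-size adaptation is modeled) illustrates how a collision splits the query prefix and how idle cycles arise.

```python
from collections import deque

def query_tree_identify(tag_ids):
    """Identify binary tag IDs with the basic query-tree protocol:
    the reader queries a prefix, tags matching it respond, and a
    collision splits the prefix into '0' and '1' branches."""
    identified, queries, collisions, idles = [], 0, 0, 0
    frontier = deque([""])            # start with the empty prefix
    while frontier:
        prefix = frontier.popleft()
        queries += 1
        responders = [t for t in tag_ids if t.startswith(prefix)]
        if len(responders) == 0:
            idles += 1                # idle cycle: no tag matched
        elif len(responders) == 1:
            identified.append(responders[0])   # singleton: success
        else:
            collisions += 1           # collision: split the prefix
            frontier.append(prefix + "0")
            frontier.append(prefix + "1")
    return identified, queries, collisions, idles

tags = ["0010", "0111", "1100", "1101"]
ids, q, c, i = query_tree_identify(tags)
print(sorted(ids), q, c, i)
```

Idle cycles such as the query for prefix "10" above are exactly the wasted queries that a pre-detection scheme tries to avoid.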
Maejima, Hiroshi; Murase, Azusa; Sunahori, Hitoshi; Kanetada, Yuji; Otani, Takuya; Yoshimura, Osamu; Tobimatsu, Yoshiko
Reflecting the rapidly aging population, community-based interventions in the form of physical exercise have been introduced to promote the health of elderly persons. Many studies have focused on muscle strength in the lower leg as a potent indicator of the effect of physical exercise. The objective of this study was to assess the effect of long-term daily exercise on the neural command of lower leg muscle activation. Twenty-six community-dwelling elderly persons (13 men and 13 women; 69.8 +/- 0.5 years old) participated in this study. The daily exercise comprised walking for more than 30 min, stretching, muscle strengthening, and balance exercises, and was continued for three months. Muscle strength and surface electromyography of the tibialis anterior, rectus femoris, and biceps femoris were measured during maximum isometric voluntary contraction both before and after the intervention. The mean firing frequency of motor units was calculated based on fast Fourier transformation of the electromyography. As a result of the intervention, muscle strength increased significantly only in the biceps femoris, whereas the mean motor unit frequency decreased significantly in every muscle, indicating that motor unit firing at a lower frequency efficiently induces the same or greater strength compared with before the intervention. Thus, synchronization of motor units compensates for the lower frequency of motor unit firing to maintain muscular strength. In conclusion, long-term physical exercise in the elderly can modulate the neural adjustment of lower leg muscles to promote efficient output of muscle strength.
Background: Stroke poses a growing human and economic burden in South Africa. Excess sugar consumption, especially from sugar-sweetened beverages (SSBs), has been associated with increased obesity and stroke risk. Research shows that price increases for SSBs can influence consumption, and modelling evidence suggests that taxing SSBs has the potential to reduce obesity and related diseases. This study estimates the potential impact of an SSB tax on stroke-related mortality, costs and health-adjusted life years in South Africa. Methods: A proportional multi-state life table-based model was constructed in Microsoft Excel (2010). We used consumption data from the 2012 South African National Health and Nutrition Examination Survey, previously published own- and cross-price elasticities of SSBs, and energy balance equations to estimate changes in daily energy intake and BMI arising from increased SSB prices. Stroke relative risk and prevalent years lived with disability estimates from the Global Burden of Disease Study, and modelled disease epidemiology estimates from a previous study, were used to estimate the effect of the BMI changes on the burden of stroke. Results: Our model predicts that an SSB tax may avert approximately 72 000 deaths, 550 000 stroke-related health-adjusted life years and over ZAR5 billion (USD400 million) in health care costs over 20 years (USD296-576 million). Over 20 years, the number of incident stroke cases may be reduced by approximately 85 000 and prevalent cases by about 13 000. Conclusions: Fiscal policy has the potential, as part of a multi-faceted approach, to mitigate the growing burden of stroke in South Africa and contribute to the achievement of the target set by the Department of Health to reduce relative premature mortality (under 60 years) from non-communicable diseases by the year 2020.
Bishop, Christopher M
Several decades of research in the field of machine learning have resulted in a multitude of different algorithms for solving a broad range of problems. To tackle a new application, a researcher typically tries to map their problem onto one of these existing methods, often influenced by their familiarity with specific algorithms and by the availability of corresponding software implementations. In this study, we describe an alternative methodology for applying machine learning, in which a bespoke solution is formulated for each new application. The solution is expressed through a compact modelling language, and the corresponding custom machine learning code is then generated automatically. This model-based approach offers several major advantages, including the opportunity to create highly tailored models for specific scenarios, as well as rapid prototyping and comparison of a range of alternative models. Furthermore, newcomers to the field of machine learning do not have to learn about the huge range of traditional methods, but instead can focus their attention on understanding a single modelling environment. In this study, we show how probabilistic graphical models, coupled with efficient inference algorithms, provide a very flexible foundation for model-based machine learning, and we outline a large-scale commercial application of this framework involving tens of millions of users. We also describe the concept of probabilistic programming as a powerful software environment for model-based machine learning, and we discuss a specific probabilistic programming language called Infer.NET, which has been widely used in practical applications.
AlRukaibi, Fahad; AlKheder, Sharaf; Al-Rukaibi, Duaij; Al-Burait, Abdul-Aziz
Traditional transportation systems' management and operation mainly focused on improving traffic mobility and safety without imposing any environmental concerns. Transportation and environmental issues are interrelated and affected by the same parameters, especially at signalized intersections. Additionally, traffic congestion at signalized intersections is a major contributor to environmental problems through vehicle emissions, fuel consumption, and delay. Therefore, signalized intersection design and operation is an important lever for minimizing the impact on the environment. The design and operation of signalized intersections are highly dependent on the base saturation flow rate (BSFR). The Highway Capacity Manual (HCM) uses a base saturation flow rate of 1900 passenger cars/h/lane for areas with a population of 250,000 or more and a value of 1750 passenger cars/h/lane for less populated areas. The base saturation flow rate value in the HCM is derived from field data collected in developed countries. The value adopted in Kuwait is 1800 passenger cars/h/lane, which is used in this analysis as the basis for comparison. Because driver behavior in developed countries differs from that of drivers in Kuwait, an adjustment was made to the base saturation flow rate to represent Kuwait's traffic and environmental conditions. The reduction in fuel consumption and vehicle emissions after modifying the base saturation flow rate (BSFR increased by 12.45%) was about 34% on average. Direct field measurements of the saturation flow rate were used, while the air quality mobile lab was used to calculate emission rates.
This report describes the results of an investigation of techniques for using continuous simulation models as a basis for reasoning about physical systems, with emphasis on the diagnosis of system faults. It is assumed that a continuous simulation model of the properly operating system is available. Malfunctions are diagnosed by posing the question: how can we make the model behave like the observed system? The adjustments that must be made to the model to produce the observed behavior usually provide definitive clues to the nature of the malfunction. A novel application of Dijkstra's weakest-precondition predicate transformer is used to derive the preconditions for producing the required model behavior. To minimize the size of the search space, an envisionment generator based on interval mathematics was developed. In addition to its intended application, the ability to generate qualitative state spaces automatically from quantitative simulations proved to be a fruitful avenue of investigation in its own right. Implementations of the Dijkstra transform and the envisionment generator are reproduced in the Appendix.
Chen, Shi; Liao, Xu; Ma, Hongsheng; Zhou, Longquan; Wang, Xingzhou; Zhuang, Jiancang
The relative gravimeter, which generally uses a zero-length spring as its gravity sensor, remains the first choice for terrestrial gravity measurement because of its efficiency and low cost. Because the instrument drift rate can change with time and from meter to meter, estimating the drift rate normally requires returning to a base station, or a station of known gravity value, for repeated measurements at regular intervals of a few hours during a practical survey. For campaign-style gravity surveys over large regions, however, where stations are several to tens of kilometres apart, frequent returns for repeat measurements greatly reduce survey efficiency and are extremely time-consuming. In this paper, we propose a new gravity data adjustment method that estimates the meter drift by means of Bayesian statistical inference. In our approach, we assume the drift rate changes as a smooth function of time. Trade-off parameters are used to control the fitting residuals, and we employ Akaike's Bayesian Information Criterion (ABIC) to estimate these trade-off parameters. Comparison and analysis of simulated data between the classical and Bayesian adjustments show that our method is robust and adapts to irregular, non-linear meter drift. Finally, we applied this novel approach to real campaign gravity data from North China. Our adjustment method recovers the time-varying drift-rate function of each meter, and can also detect abnormal meter drift during a gravity survey. We also define an alternative error estimate for the inverted gravity value at each station on the basis of marginal distribution theory. Acknowledgment: This research is supported by the Science Foundation of the Institute of Geophysics, CEA, from the Ministry of Science and Technology of China (Nos. DQJB16A05; DQJB16B07), China National Special Fund for Earthquake
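The idea of letting a statistical criterion choose the drift model can be conveyed with a small sketch. The toy below fits polynomial drift curves of increasing order to synthetic base-station readings and selects the order with an AIC-style criterion. The paper's method uses a smooth non-parametric drift with trade-off parameters chosen by ABIC; this parametric, AIC-based version (all numbers invented) is a simplification for illustration only.

```python
import math

# Toy criterion-driven drift modelling: repeated readings at a base
# station sample the meter drift; fit polynomial drift models of
# increasing order and pick one with an AIC-style criterion.
times = [0.0, 1.5, 3.0, 4.5, 6.0, 7.5, 9.0]          # hours since start
noise = [0.001, -0.002, 0.002, -0.001, 0.0, 0.001, -0.002]
reads = [0.02 * t + 0.005 * t * t + e for t, e in zip(times, noise)]

def polyfit(xs, ys, order):
    """Least-squares polynomial fit via normal equations
    (Gaussian elimination with partial pivoting, stdlib only)."""
    k = order + 1
    A = [[sum(x ** (i + j) for x in xs) for j in range(k)] for i in range(k)]
    b = [sum(y * x ** i for x, y in zip(xs, ys)) for i in range(k)]
    for col in range(k):                       # forward elimination
        piv = max(range(col, k), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        b[col], b[piv] = b[piv], b[col]
        for r in range(col + 1, k):
            f = A[r][col] / A[col][col]
            for c in range(col, k):
                A[r][c] -= f * A[col][c]
            b[r] -= f * b[col]
    coef = [0.0] * k
    for r in range(k - 1, -1, -1):             # back substitution
        s = sum(A[r][c] * coef[c] for c in range(r + 1, k))
        coef[r] = (b[r] - s) / A[r][r]
    return coef

def aic(xs, ys, order):
    """AIC-style score: goodness of fit penalized by parameter count."""
    coef = polyfit(xs, ys, order)
    rss = sum((y - sum(cc * x ** i for i, cc in enumerate(coef))) ** 2
              for x, y in zip(xs, ys))
    n = len(xs)
    return n * math.log(rss / n) + 2 * (order + 1)

best = min(range(3), key=lambda o: aic(times, reads, o))
print(best)
```

With the synthetic quadratic drift above, the criterion prefers the quadratic model; ABIC plays the analogous role in the paper, trading data misfit against drift smoothness.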
Kemppinen, J; Lackner, F
The Compact Linear Collider (CLIC) is a 48 km long linear accelerator currently being studied at CERN. It is a high-luminosity electron-positron collider with an energy range of 0.5-3 TeV. CLIC is based on a two-beam technology in which a high-current drive beam transfers RF power to the main beam accelerating structures. The main beam is steered with quadrupole magnets. To reach the CLIC target luminosity, the main beam quadrupoles have to be actively pre-aligned within 17 µm in 5 degrees of freedom and actively stabilised at 1 nm in the vertical above 1 Hz. To reach the pre-alignment requirement as well as the rigidity required by nano-stabilisation, a system based on eccentric cam movers is proposed for the re-adjustment of the main beam quadrupoles. Validation of the technique against the stringent CLIC requirements was started with tests in one degree of freedom on an eccentric cam mover. This paper describes the dedicated mock-up as well as the tests and measurements carried out with it. Finally, the test results are present...
Wang, Ching-Yun; Tapsoba, Jean De Dieu; Duggan, Catherine; Campbell, Kristin L; McTiernan, Anne
In many biomedical studies, covariates of interest may be measured with error. Frequently, however, the quantiles of the exposure variable are used as covariates in a regression analysis. Because of measurement error in the continuous exposure variable, there can be misclassification in the quantiles of the exposure variable, which can lead to biased estimation of the association between the exposure variable and the outcome variable. Adjusting for misclassification is challenging when gold standard variables are not available. In this paper, we develop two regression calibration estimators to reduce bias in effect estimation. The first estimator is normal-likelihood-based. The second estimator is linearization-based, and it provides a simple and practical correction. Finite-sample performance is examined via a simulation study. We apply the methods to a four-arm randomized clinical trial that tested exercise and weight loss interventions in women aged 50-75 years.
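The replicate-based correction behind regression calibration can be sketched compactly. The toy below (all data invented) estimates the measurement-error variance from duplicate exposure measurements, forms the reliability ratio, and deflates the naive regression slope. It is a simplified linearization-style attenuation correction in the spirit of the abstract, not the estimators developed in the paper.

```python
# Toy regression calibration with replicate exposure measurements.
# All numbers are made up for illustration.
x_true = [1.0, 2.0, 3.0, 4.0, 5.0, 6.0, 7.0, 8.0]
w1 = [1.4, 1.7, 3.2, 3.9, 5.5, 5.8, 7.3, 7.6]   # replicate 1 (with error)
w2 = [0.8, 2.1, 2.6, 4.3, 4.7, 6.4, 6.9, 8.2]   # replicate 2 (with error)
y  = [2.6, 3.1, 3.4, 4.1, 4.4, 5.2, 5.4, 6.1]   # outcome ~ 2 + 0.5 x

w_bar = [(a + b) / 2 for a, b in zip(w1, w2)]   # averaged replicates

def mean(v):
    return sum(v) / len(v)

def cov(u, v):
    mu, mv = mean(u), mean(v)
    return sum((a - mu) * (b - mv) for a, b in zip(u, v)) / (len(u) - 1)

# Within-pair differences estimate the error variance: Var(W1-W2) = 2*Var(U),
# and the averaged replicate carries error variance Var(U)/2.
diffs = [a - b for a, b in zip(w1, w2)]
var_u = cov(diffs, diffs) / 2
reliability = (cov(w_bar, w_bar) - var_u / 2) / cov(w_bar, w_bar)

slope_naive = cov(w_bar, y) / cov(w_bar, w_bar)
slope_corrected = slope_naive / reliability      # undo attenuation

print(round(slope_naive, 3), round(slope_corrected, 3))
```

Dividing by the reliability ratio undoes the attenuation that measurement error induces in the naive slope; in a quantile-based analysis, the paper's estimators play the analogous role for misclassified categories.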
Kubota, Mutsuko; Shindo, Yukari; Kawaharada, Mariko
The objective of this study was to identify the items necessary for an outpatient care program based on the self-adjustment of insulin for type 1 diabetes patients. Two surveys based on the Delphi method were conducted. The survey participants were 41 certified diabetes nurses in Japan. An outpatient care program based on the self-adjustment of insulin was developed from the pertinent published work and expert opinions. The questionnaire, developed from this care program, contained a total of 87 items covering matters such as the establishment of prerequisites and a cooperative relationship, the basics of blood glucose pattern management, learning and practice sessions for the self-adjustment of insulin, the implementation of the self-adjustment of insulin, and feedback. Approval of an item was defined as agreement by at least 70% of participants. Participants agreed on all of the items in the first survey. Four new items were added, for a total of 91 items in the second survey, and participants agreed on the inclusion of 84 of them. The items necessary for a type 1 diabetes outpatient care program based on the self-adjustment of insulin were subsequently selected. This care program received fairly strong approval from certified diabetes nurses; however, it will be necessary to evaluate the program further in conjunction with intervention studies.
Zhang, Shuo; Liu, Shaochuang; Ma, Youqing; Qi, Chen; Ma, Hao; Yang, Huan
Chang'e-3 was China's first lunar soft-landing probe, composed of a lander and a lunar rover. Chang'e-3 successfully landed in the northwest of Mare Imbrium on 14 December 2013. The lunar rover carried out movement, imaging, and geological surveys after landing. The rover was equipped with a stereo vision system made up of the Navcam system, the mast mechanism, and an inertial measurement unit (IMU). The Navcam system is composed of two fixed-focal-length cameras, and the mast mechanism is a robot with three revolute joints. The stereo vision system was used to determine the position of the lunar rover, generate digital elevation models (DEMs) of the surrounding region, and plan the moving paths of the rover. The stereo vision system must be calibrated before use. A control field can be built to calibrate the stereo vision system in a laboratory on Earth. However, the parameters of the stereo vision system change after launch, orbital maneuvers, braking, and landing, so the system should be self-calibrated on the Moon. An integrated self-calibration method based on bundle block adjustment is proposed in this paper. The bundle block adjustment uses each bundle of rays as the basic adjustment unit, and the adjustment is implemented over the whole photogrammetric region. With the proposed method, the stereo vision system can be self-calibrated in the unknown lunar environment and all parameters can be estimated simultaneously. An experiment was conducted in a ground-based lunar simulation field. The proposed method was compared with other methods such as the CAHVOR method, the vanishing point method, the Denavit-Hartenberg method, the factorization method and the weighted least-squares method. The analyzed results proved that the accuracy of the proposed method was superior to those of the other methods. Finally, the proposed method was practically used to self-calibrate the
Hocking, Matthew C.; Lochman, John E.
This review paper examines the literature on psychosocial factors associated with adjustment to sickle cell disease and insulin-dependent diabetes mellitus in children through the framework of the transactional stress and coping (TSC) model. The transactional stress and coping model views adaptation to a childhood chronic illness as mediated by…
The burden of disease framework facilitates the assessment of the health impact of diseases through the use of summary measures of population health such as Disability-Adjusted Life Years (DALYs). However, calculating, interpreting and communicating the results of studies using this methodology poses a challenge. The aim of the Burden of Communicable Disease in Europe (BCoDE) project is to summarize the impact of communicable disease in the European Union and European Economic Area Member States (EU/EEA MS). To meet this goal, a user-friendly software tool (the BCoDE toolkit) was developed. This stand-alone application, written in C++, is open-access and freely available for download from the website of the European Centre for Disease Prevention and Control (ECDC). With the BCoDE toolkit, one can calculate DALYs by simply entering the age-group- and sex-specific number of cases for one or more of a selected set of 32 communicable diseases (CDs) and 6 healthcare-associated infections (HAIs). Disease progression models (i.e., outcome trees) for these communicable diseases were created following a thorough literature review of their disease progression pathways. The BCoDE toolkit runs Monte Carlo simulations of the input parameters and provides disease-specific results, including 95% uncertainty intervals, and permits comparisons between the different disease models entered. Results can be displayed as mean and median overall DALYs, DALYs per 100,000 population, and DALYs related to mortality vs. disability. Visualization options summarize complex epidemiological data, with the goal of improving communication and knowledge transfer for decision-making.
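The DALY arithmetic the toolkit automates can be sketched as follows. This is an illustration of the general method (YLD plus YLL, with Monte Carlo uncertainty on an input parameter), not the BCoDE C++ code; all disease parameters below are hypothetical:

```python
import random

def daly(cases, disability_weight, duration_years, deaths, life_expectancy_at_death):
    """DALY = YLD + YLL for one age/sex stratum."""
    yld = cases * disability_weight * duration_years   # years lived with disability
    yll = deaths * life_expectancy_at_death            # years of life lost
    return yld + yll

def daly_uncertainty(n_sims=10000, seed=42):
    """Monte Carlo over an uncertain disability weight, in the spirit of the
    toolkit's simulation of input parameters; returns mean and a 95% interval."""
    rng = random.Random(seed)
    sims = []
    for _ in range(n_sims):
        dw = rng.uniform(0.10, 0.30)  # hypothetical uncertainty range
        sims.append(daly(cases=1200, disability_weight=dw,
                         duration_years=0.5, deaths=3,
                         life_expectancy_at_death=40.0))
    sims.sort()
    mean = sum(sims) / n_sims
    lo, hi = sims[int(0.025 * n_sims)], sims[int(0.975 * n_sims)]
    return mean, lo, hi
```

Results per 100,000 population then follow by dividing the summed DALYs by the stratum population and scaling.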
Bluetooth Low Energy (BLE) and iBeacons have recently gained large interest for enabling various proximity-based application services. Given the ubiquitously deployed nature of Bluetooth devices, including mobile smartphones, using BLE and iBeacon technologies seemed a promising way forward. This work started off with the belief that this was true: iBeacons could provide us with the accuracy in proximity and distance estimation to enable and simplify the development of many previously difficult applications. However, our empirical studies with three different iBeacon devices from various vendors and two types of smartphone platforms prove that this is not the case. Signal strength readings vary significantly across iBeacon vendors, mobile platforms, environmental or deployment factors, and usage scenarios. This variability in signal strength naturally complicates the process of extracting an accurate location/proximity estimate in real environments. Our lessons on the limitations of the iBeacon technique led us to design a simple class attendance checking application by performing a simple form of geometric adjustments to compensate for the natural variations in beacon signal strength readings. We believe that the negative observations made in this work can provide future researchers with a reference on what level of performance to expect from iBeacon devices as they enter their system design phases.
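The distance estimation that this variability undermines is conventionally based on the log-distance path-loss model, sketched below. The calibration constant and path-loss exponent are illustrative; in practice both drift with vendor, platform and environment, which is exactly the problem the study documents:

```python
def estimate_distance(rssi_dbm, tx_power_dbm=-59.0, path_loss_exponent=2.0):
    """Log-distance path-loss model: d = 10 ** ((TxPower - RSSI) / (10 * n)).
    tx_power_dbm is the advertised reference RSSI at 1 m; the exponent n
    depends on the environment (2.0 is the free-space value)."""
    return 10 ** ((tx_power_dbm - rssi_dbm) / (10 * path_loss_exponent))

def calibrated_distance(rssi_dbm, offset_db=0.0, **kw):
    """A fixed per-device RSSI offset is one simple form of the kind of
    adjustment used to compensate for vendor/platform variation."""
    return estimate_distance(rssi_dbm + offset_db, **kw)
```

With the defaults, a reading of -59 dBm maps to 1 m and -79 dBm to 10 m; a 6 dB vendor offset alone roughly halves or doubles the estimate, illustrating the sensitivity reported above.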
Barlow, Jane; Smailagic, Nadja; Ferriter, Michael; Bennett, Cathy; Jones, Hannah
Emotional and behavioural problems in children are common. Research suggests that parenting has an important role to play in helping children to become well-adjusted, and that the first few months and years are especially important. Parenting programmes may have a role to play in improving the emotional and behavioural adjustment of infants and toddlers. This review is applicable to parents and carers of children up to three years eleven months, although some studies included children up to five years old. Objectives: a) to establish whether group-based parenting programmes are effective in improving the emotional and behavioural adjustment of children three years of age or less (i.e. maximum mean age of 3 years 11 months); b) to assess the role of parenting programmes in the primary prevention of emotional and behavioural problems. We searched CENTRAL, MEDLINE, EMBASE, CINAHL, PsycINFO, Sociofile, Social Science Citation Index, ASSIA, National Research Register (NRR) and ERIC. The searches were originally run in 2000 and then updated in 2007/8. We included randomised controlled trials of group-based parenting programmes that had used at least one standardised instrument to measure emotional and behavioural adjustment. The results for each outcome in each study are presented with 95% confidence intervals. Where appropriate, the results were combined in a meta-analysis using a random-effects model. Eight studies were included in the review. There were sufficient data from six studies to combine the results in a meta-analysis for parent reports, and from three studies to combine the results for independent assessments of children's behaviour post-intervention. In addition, there was sufficient information from three studies to conduct a meta-analysis of both parent-report and independent follow-up data. Both parent reports (SMD -0.25; CI -0.45 to -0.06) and independent observations (SMD -0.54; CI -0.84 to -0.23) of children's behaviour produced significant results favouring the
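The random-effects pooling behind the SMDs quoted above can be sketched with the standard DerSimonian-Laird estimator. This is a generic sketch, not the review's own software, and the study-level inputs below are invented for illustration:

```python
def pool_random_effects(effects, variances):
    """DerSimonian-Laird random-effects pooling of standardized mean
    differences; returns the pooled SMD and its 95% confidence interval."""
    w = [1.0 / v for v in variances]                       # fixed-effect weights
    fixed = sum(wi * ei for wi, ei in zip(w, effects)) / sum(w)
    q = sum(wi * (ei - fixed) ** 2 for wi, ei in zip(w, effects))  # heterogeneity
    df = len(effects) - 1
    c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - df) / c) if c > 0 else 0.0        # between-study variance
    w_star = [1.0 / (v + tau2) for v in variances]         # random-effects weights
    pooled = sum(wi * ei for wi, ei in zip(w_star, effects)) / sum(w_star)
    se = (1.0 / sum(w_star)) ** 0.5
    return pooled, pooled - 1.96 * se, pooled + 1.96 * se
```

A negative pooled SMD with a confidence interval excluding zero, as in the parent-report result above, indicates an effect favouring the intervention.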
Chen, Yang; Hao, Lina; Yang, Hui; Gao, Jinhai
Ionic polymer metal composite (IPMC), a new smart material, has attracted wide attention in the micromanipulation field. In this paper, a novel two-finger gripper which contains an IPMC actuator and an ultrasensitive force sensor is proposed and fabricated. The IPMC, as one finger of the gripper for mm-sized objects, can achieve gripping and releasing motion; the other finger works not only as a support finger but also as a force sensor. Using the feedback signal of the force sensor, this integrated actuating and sensing gripper can grip miniature objects at the millimeter scale. The Kriging model is used to describe the nonlinear characteristics of the IPMC for the first time, and a control scheme in which simultaneous perturbation stochastic approximation adjusts the parameters of a proportional-integral-derivative (PID) controller, together with a Kriging-predictor wavelet-filter compensator, is applied to track the gripping force of the gripper. High-precision force tracking in a foam-ball manipulation process is obtained on a semi-physical experimental platform, which demonstrates that this gripper for mm-sized objects can work well in manipulation applications.
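A Kriging model of the kind used here is a nonparametric surrogate that interpolates observed input-output pairs through a covariance model. The sketch below shows a minimal simple-kriging predictor with a Gaussian covariance; it is illustrative only (the paper's covariance model and fitting procedure are not specified here), and the data are invented:

```python
import math

def solve(a, b):
    """Naive Gaussian elimination with partial pivoting for K @ x = b."""
    n = len(a)
    m = [row[:] + [b[i]] for i, row in enumerate(a)]  # augmented matrix
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(m[r][col]))
        m[col], m[piv] = m[piv], m[col]
        for r in range(col + 1, n):
            f = m[r][col] / m[col][col]
            for c in range(col, n + 1):
                m[r][c] -= f * m[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (m[r][n] - sum(m[r][c] * x[c] for c in range(r + 1, n))) / m[r][r]
    return x

def kriging_predict(xs, ys, x_new, length=1.0, nugget=1e-9):
    """Simple-kriging prediction k_*^T K^{-1} y with a Gaussian covariance,
    the kind of surrogate one can fit to a nonlinear actuator response."""
    k = lambda a, b: math.exp(-((a - b) ** 2) / (2 * length ** 2))
    K = [[k(xi, xj) + (nugget if i == j else 0.0)
          for j, xj in enumerate(xs)] for i, xi in enumerate(xs)]
    weights = solve(K, [k(x_new, xi) for xi in xs])
    return sum(w * y for w, y in zip(weights, ys))
```

The predictor interpolates the training observations (up to the small nugget), which is what makes it usable as a one-step-ahead compensator inside a control loop.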
Van Der Wal, W.; Barnhoorn, A.; Stocchi, P.; Drury, M. R.; Wu, P. P.; Vermeersen, B. L.
Ice melting in Greenland and Antarctica can be estimated from GRACE satellite measurements. The largest source of error in these estimates is uncertainty in models for Glacial Isostatic Adjustment (GIA). GIA models that are used to correct the GRACE data have several shortcomings, including (i) mantle viscosity is varied only with depth, and (ii) stress-dependence of viscosity is ignored. Here we attempt to improve on these two issues, with the ultimate goal of providing more realistic GIA predictions in areas that are currently ice covered. The improved model is first tested in Fennoscandia, where GIA observations offer good coverage, before applying it to Greenland. Deformation laws for diffusion and dislocation creep in olivine are taken from a compilation of laboratory experiments. Temperature is obtained from two different sources: surface heat-flow maps as input for the heat transfer equation, and seismic velocity anomalies converted to upper mantle temperatures. Grain size and olivine water content are kept as free parameters. Surface loading is provided by an ice loading history that is constructed from constraints on past ice margins and input from climatology. The finite element model includes self-gravitation but not compressibility and background stresses. It is found that the viscosity in Fennoscandia changes in time by two orders of magnitude for a wet rheology with large grain size. The wet rheology provides the best fit to historic sea-level data. However, present-day uplift and gravity rates are too low for such a rheology. We apply a wet rheology to Greenland, and simulate a Little Ice Age (LIA) increase in thickness on top of the ICE-5G ice loading history. Preliminary results show a negative geoid rate of magnitude more than 0.5 mm/year due to the LIA increase in ice thickness in combination with the non-linear upper mantle rheology. More tests are necessary to determine the influence of mantle rheology on GIA model
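The stress- and grain-size-dependence of viscosity described above comes from combining diffusion creep (linear in stress, grain-size sensitive) with dislocation creep (power-law in stress). The sketch below shows the composite flow-law structure; the prefactors and activation energies are placeholders, not the published olivine values, and the effective-viscosity convention (eta = sigma / 2*eps_dot) is one common choice:

```python
import math

R = 8.314  # gas constant, J/(mol K)

def creep_rate(stress_mpa, A, n, grain_um, p, E_kj_mol, T_k):
    """Power-law creep: eps_dot = A * sigma^n * d^(-p) * exp(-E/(R*T))."""
    return (A * stress_mpa ** n * grain_um ** (-p)
            * math.exp(-E_kj_mol * 1e3 / (R * T_k)))

def effective_viscosity(stress_mpa, grain_um, T_k):
    """Composite diffusion + dislocation creep viscosity (placeholder
    constants). Diffusion creep: n=1, grain-size exponent p=3; dislocation
    creep: n=3.5, p=0 -- so viscosity falls with stress and with grain size."""
    eps_diff = creep_rate(stress_mpa, A=1e7, n=1.0, grain_um=grain_um,
                          p=3.0, E_kj_mol=375.0, T_k=T_k)
    eps_disl = creep_rate(stress_mpa, A=1e2, n=3.5, grain_um=grain_um,
                          p=0.0, E_kj_mol=530.0, T_k=T_k)
    return stress_mpa * 1e6 / (2.0 * (eps_diff + eps_disl))  # Pa s
```

Because dislocation creep scales with stress to the power 3.5, viscosity drops as glacially induced stresses rise and recovers as they relax, which is how a time-varying viscosity of the kind reported for Fennoscandia arises.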
Catalá-López, Ferrán; Fernández de Larrea-Baz, Nerea; Morant-Ginestar, Consuelo; Álvarez-Martín, Elena; Díaz-Guzmán, Jaime; Gènova-Maleras, Ricard
The aim of the present study was to determine the national burden of cerebrovascular diseases in the adult population of Spain. Cross-sectional, descriptive population-based study. We calculated the disability-adjusted life years (DALY) metric using country-specific data from national statistics and epidemiological studies to obtain representative outcomes for the Spanish population. DALYs were divided into years of life lost due to premature mortality (YLLs) and years of life lived with disability (YLDs). DALYs were estimated for the year 2008 by applying demographic structure by sex and age-groups, cause-specific mortality, morbidity data and new disability weights proposed in the recent Global Burden of Disease study. In the base case, neither YLLs nor YLDs were discounted or age-weighted. Uncertainty around DALYs was tested using sensitivity analyses. In Spain, cerebrovascular diseases generated 418,052 DALYs, comprising 337,000 (80.6%) YLLs and 81,052 (19.4%) YLDs. This accounts for 1,113 DALYs per 100,000 population (men: 1,197 and women: 1,033) and 3,912 per 100,000 in those over the age of 65 years (men: 4,427 and women: 2,033). Depending on the standard life table and choice of social values used for calculation, total DALYs varied by 15.3% and 59.9% below the main estimate. Estimates provided here represent a comprehensive analysis of the burden of cerebrovascular diseases at a national level. Prevention and control programmes aimed at reducing the disease burden merit further priority in Spain. Copyright © 2013 Elsevier España, S.L.U. All rights reserved.
This research thesis proposes a new formulation of the relativistic direct implicit method, based on the weak formulation of the wave equation, which is solved by means of a Newton algorithm. The first part of this thesis deals with the properties of explicit particle-in-cell (PIC) methods: properties and limitations of an explicit PIC code, linear analysis of a numerical plasma, the numerical heating phenomenon, the interest of a higher-order interpolation function, and presentation of two applications in high-density relativistic laser-plasma interaction. The second and main part of this report deals with adapting the direct implicit method to laser-plasma interaction: presentation of the state of the art, formulation of the direct implicit method, and resolution of the wave equation. The third part concerns various numerical and physical validations of the ELIXIRS code: the case of laser wave propagation in vacuum, demonstration of the adjustable damping which is a characteristic of the proposed algorithm, influence of space-time discretization on energy conservation, expansion of a thermal plasma into vacuum, two cases of beam-plasma instability in the relativistic regime, and a case of overcritical laser-plasma interaction
Finnamore, Helen; Le Couteur, James; Hickson, Mary; Busbridge, Mark; Whelan, Kevin; Shovlin, Claire L.
Background: Iron deficiency anemia remains a major global health problem. Higher iron demands provide the potential for a targeted preventative approach before anemia develops. The primary study objective was to develop and validate a metric that stratifies recommended dietary iron intake to compensate for patient-specific non-menstrual hemorrhagic losses. The secondary objective was to examine whether iron deficiency can be attributed to under-replacement of epistaxis (nosebleed) hemorrhagic iron losses in hereditary hemorrhagic telangiectasia (HHT). Methodology/Principal Findings: The hemorrhage-adjusted iron requirement (HAIR) sums the recommended dietary allowance and the iron required to replace additional quantified hemorrhagic losses, based on the pre-menopausal increment that compensates for menstrual losses (formula provided). In a study population of 50 HHT patients completing concurrent dietary and nosebleed questionnaires, 43/50 (86%) met their recommended dietary allowance, but only 10/50 (20%) met their HAIR. Higher HAIR was a powerful predictor of lower hemoglobin (p = 0.009), lower mean corpuscular hemoglobin content (p<0.001), lower log-transformed serum iron (p = 0.009), and higher log-transformed red cell distribution width (p<0.001). There was no evidence of generalised abnormalities in iron handling: ferritin and ferritin-squared explained 60% of the hepcidin variance (p<0.001), and the mean hepcidin/ferritin ratio was similar to reported controls. Iron supplement use increased the proportion of individuals meeting their HAIR and blunted associations between HAIR and hematinic indices. Once adjusted for supplement use, however, reciprocal relationships between HAIR and hemoglobin/serum iron persisted. Of 568 individuals using iron tablets, most reported problems completing the course. For patients with HHT, persistent anemia was reported three times more frequently if iron tablets caused diarrhea or needed to be stopped. Conclusions/Significance: HAIR values, providing an indication of individuals' iron requirements, may be a useful tool in the prevention, assessment and management of iron deficiency. Iron deficiency in HHT can be explained by under-replacement of nosebleed hemorrhagic iron losses. PMID:24146883
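The structure of the HAIR metric (recommended allowance plus a pro-rated allowance for quantified extra blood loss) can be sketched as below. The constants are illustrative assumptions (a 10 mg/day pre-menopausal increment scaled against roughly 40 mL/month of menstrual loss), not the paper's published formula:

```python
def hair_mg_per_day(rda_mg, extra_blood_loss_ml_per_month,
                    menstrual_increment_mg=10.0, menstrual_loss_ml=40.0):
    """Hemorrhage-adjusted iron requirement: the RDA plus extra dietary iron
    pro-rated from the pre-menopausal menstrual increment. The scaling
    constants here are hypothetical; the paper provides the exact formula."""
    extra = menstrual_increment_mg * (extra_blood_loss_ml_per_month / menstrual_loss_ml)
    return rda_mg + extra
```

Under these assumptions, a patient with no extra losses keeps the plain RDA, while nosebleed losses equivalent to one month's menstrual loss add the full pre-menopausal increment.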
Wu, C B; Huang, G H; Liu, Z P; Zhen, J L; Yin, J G
In this study, an inexact multistage stochastic mixed-integer programming (IMSMP) method was developed for supporting regional-scale energy system planning (ESP) associated with multiple uncertainties presented as discrete intervals, probability distributions and their combinations. An IMSMP-based energy system planning (IMSMP-ESP) model was formulated for Qingdao to demonstrate its applicability. Solutions that provide optimal patterns of energy resource generation, conversion, transmission and allocation, as well as facility capacity expansion schemes, have been obtained. The results can help local decision makers generate cost-effective energy system management schemes and reach a comprehensive tradeoff between economic objectives and environmental requirements. Moreover, taking the CO2 emissions scenarios mentioned in Part I into consideration, the anti-driving effect of carbon emissions on energy structure adjustment was studied based on the developed model and scenario analysis. Several suggestions can be drawn from the results: (a) to ensure the smooth realization of low-carbon and sustainable development, appropriate price controls and fiscal subsidies for high-cost energy resources should be considered by decision-makers; (b) compared with coal, natural gas utilization should be strongly encouraged in order to ensure that Qingdao can reach its peak carbon emissions in 2020; (c) to guarantee Qingdao's power supply security in the future, the construction of new power plants should be emphasised instead of enhancing the transmission capacity of grid infrastructure. Copyright © 2016 Elsevier Ltd. All rights reserved.
Walsh, Deirdre M J; Morrison, Todd G; Conway, Ronan J; Rogers, Eamonn; Sullivan, Francis J; Groarke, AnnMarie
Background: Post-traumatic growth (PTG) can be defined as positive change following a traumatic event. The current conceptualization of PTG encompasses five main dimensions; however, there is no dimension which accounts for the distinct effect of a physical trauma on PTG. The purpose of the present research was to test the role of PTG, physical post-traumatic growth (PPTG), resilience and mindfulness in predicting psychological and health-related adjustment. Method: Ethical approval was obtained from the relevant institutional ethics committees. Participants (N = 241), who were at least 1 year post prostate cancer treatment, were invited to complete a battery of questionnaires either through an online survey or a paper-and-pencil package received in the post. The sample ranged in age from 44 to 88 years (M = 64.02, SD = 7.76). Data were analysed using confirmatory factor analysis and structural equation modeling. Results: The physical post-traumatic growth inventory (P-PTGI) was used to evaluate the role of PPTG in predicting adjustment using structural equation modeling. P-PTGI predicted lower distress and improvement of quality of life, whereas the traditional PTG measure was linked with poor adjustment. The relationship between resilience and adjustment was found to be mediated by P-PTGI. Conclusion: Findings suggest the central role of PTG in the prostate cancer survivorship experience is enhanced by the inclusion of PPTG. Adjusting to a physical trauma such as illness (internal transgressor) is unlike a trauma with an external transgressor, as the physical trauma creates an entirely different framework for adjustment. The current study demonstrates the impact of PPTG on adjustment. This significantly adds to the theory of the development of PTG by highlighting the interplay of resilience with PTG, PPTG, and adjustment.
Horton, B. P.; Peltier, W. R.; Culver, S. J.; Drummond, R.; Engelhart, S. E.; Kemp, A. C.; Mallinson, D.; Thieler, E. R.; Riggs, S. R.; Ames, D. V.; Thomson, K. H.
We have synthesized new and existing relative sea-level (RSL) data to produce a quality-controlled, spatially comprehensive database from the North Carolina coastline. The RSL database consists of 54 sea-level index points that are quantitatively related to an appropriate tide level and assigned an error estimate, and a further 33 limiting dates that confine the maximum and minimum elevations of RSL. The temporal distribution of the index points is very uneven, with only five index points older than 4000 cal a BP, but the form of the Holocene sea-level trend is constrained by both terrestrial and marine limiting dates. The data illustrate RSL rapidly rising during the early and mid Holocene, from an observed elevation of -35.7 ± 1.1 m MSL at 11062-10576 cal a BP to -4.2 ± 0.4 m MSL at 4240-3592 cal a BP. We restricted comparisons between observations and predictions from the ICE-5G(VM2) with rotational feedback Glacial Isostatic Adjustment (GIA) model to the Late Holocene RSL (last 4000 cal a BP) because of the wealth of sea-level data during this time interval. The ICE-5G(VM2) model predicts significant spatial variations in RSL across North Carolina, thus we subdivided the observations into two regions. The model forecasts an increase in the rate of sea-level rise in Region 1 (Albemarle, Currituck, Roanoke, Croatan, and northern Pamlico sounds) compared to Region 2 (southern Pamlico, Core and Bogue sounds, and farther south to Wilmington). The observations show Late Holocene sea level rising at 1.14 ± 0.03 mm year⁻¹ and 0.82 ± 0.02 mm year⁻¹ in Regions 1 and 2, respectively. The ICE-5G(VM2) predictions capture the general temporal trend of the observations, although there is an apparent misfit for index points older than 2000 cal a BP. It is presently unknown whether these misfits are caused by possible tectonic uplift associated with the mid-Carolina Platform High or a flaw in the GIA model. A comparison of local tide gauge data with the Late Holocene RSL
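A Late Holocene rise rate of the kind quoted above is obtained by fitting a line through age-elevation index points, weighting each by its error estimate. The sketch below uses synthetic index points, not values from the database:

```python
def weighted_rate(ages_ka, rsl_m, errors_m):
    """Weighted least-squares slope of RSL elevation (m) against age (ka);
    returns the rise rate toward the present in mm/yr (1 m/ka = 1 mm/yr).
    Weights are inverse-variance from the index-point error estimates."""
    w = [1.0 / e ** 2 for e in errors_m]
    sw = sum(w)
    mx = sum(wi * x for wi, x in zip(w, ages_ka)) / sw      # weighted mean age
    my = sum(wi * y for wi, y in zip(w, rsl_m)) / sw        # weighted mean RSL
    num = sum(wi * (x - mx) * (y - my) for wi, x, y in zip(w, ages_ka, rsl_m))
    den = sum(wi * (x - mx) ** 2 for wi, x in zip(w, ages_ka))
    slope_m_per_ka = num / den   # negative when RSL rises toward the present
    return -slope_m_per_ka
```

RSL that is lower at older ages gives a negative slope against age, so the sign flip reports the rate of rise toward the present.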
Ellen A Struijk
BACKGROUND: Disability-Adjusted Life Years (DALYs) have the advantage that effects on total health, instead of on a specific disease incidence or mortality, can be estimated. Our aim was to address several methodological points related to the computation of DALYs at an individual level in a follow-up study. METHODS: DALYs were computed for 33,507 men and women aged 20-70 years when participating in the EPIC-NL study in 1993-7. DALYs are the sum of the Years Lost due to Disability (YLD) and the Years of Life Lost (YLL) due to premature mortality. Premature mortality was defined as death before the estimated date of individual Life Expectancy (LE). Different methods to compute LE were compared, as well as the effect of different follow-up periods, using a two-part model estimating the effect of smoking status on health as an example. RESULTS: During a mean follow-up of 12.4 years, there were 69,245 DALYs due to years lived with a disease or premature death. Current smokers had lost 1.28 healthy years of their life (1.28 DALYs, 95% CI 1.10; 1.46) compared to never-smokers. The outcome varied depending on the method used for estimating LE, the completeness of disease and mortality ascertainment and, notably, the percentage of extinction (duration of follow-up) of the cohort. CONCLUSION: We conclude that the use of DALYs in a cohort study is an appropriate way to assess total disease burden in relation to a determinant. The outcome is sensitive to the LE calculation method and the follow-up duration of the cohort.
Jia, Haomiao; Zack, Matthew M; Thompson, William W
Being classified as outside the normal range for body mass index (BMI) has been associated with increased risk for chronic health conditions, poor health-related quality of life (HRQOL), and premature death. To assess the impact of BMI on HRQOL and mortality, we compared quality-adjusted life expectancy (QALE) by BMI levels. We obtained HRQOL data from the 1993-2010 Behavioral Risk Factor Surveillance System and life table estimates from the National Center for Health Statistics national mortality files to estimate QALE among U.S. adults by BMI categories: underweight (BMI <18.5 kg/m²), normal weight (BMI 18.5-24.9 kg/m²), overweight (BMI 25.0-29.9 kg/m²), obese (BMI 30.0-34.9 kg/m²), and severely obese (BMI ≥35.0 kg/m²). In 2010 in the United States, the highest estimated QALE for adults at 18 years of age was 54.1 years, for individuals classified as normal weight. The two lowest QALE estimates were for those classified as either underweight (48.9 years) or severely obese (48.2 years). For individuals who were overweight or obese, the QALE estimates fell between those classified as normal weight (54.1 years) and severely obese (48.2 years). The difference in QALE between adults classified as normal weight and those classified as either overweight or obese was significantly higher among women than among men, irrespective of race/ethnicity. Using population-based data, we found significant differences in QALE loss by BMI category. These findings are valuable for setting national and state targets to reduce health risks associated with severe obesity, and could be used for cost-effectiveness evaluations of weight-reduction interventions.
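The QALE construction combines a life table with age-specific HRQOL weights: each person-year of expected survival is discounted by the mean quality-of-life score at that age. The sketch below is a minimal version with invented inputs, not the study's life tables:

```python
def qale_from_life_table(mortality, hrqol):
    """Quality-adjusted life expectancy at the starting age.
    mortality: age-specific probabilities of dying within each interval;
    hrqol: mean HRQOL weight (0-1) for each interval.
    Deaths are assumed to occur mid-interval on average."""
    alive, total = 1.0, 0.0
    for qx, w in zip(mortality, hrqol):
        person_years = alive * (1 - qx / 2)  # survivors plus half-credit for deaths
        total += person_years * w
        alive *= (1 - qx)
    return total
```

Holding mortality fixed, lower HRQOL weights at each age directly reduce QALE, which is how two BMI groups with similar survival can still differ in quality-adjusted life expectancy.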
Background: The purpose of this research was to study the effectiveness of quality-of-life-based group therapy on the marital adjustment, marital satisfaction and mood regulation of male substance abusers in Bushehr. Materials and Methods: In this quasi-experimental pre-test/post-test study with a control group, the sample was selected by cluster sampling from men referred to Bushehr addiction treatment clinics; 30 patients were randomly divided into an experimental and a control group of 15 individuals each. The instruments included the short version of the Marital Adjustment Questionnaire, the Marital Satisfaction Questionnaire and the Garnefski Emotional Regulation Scale, completed by the participants at the pre-test and post-test stages. The experimental group received quality-of-life-based group therapy in eight sessions, while the control group did not receive any treatment. Multivariate analysis of covariance was used for the statistical analysis of the data. Results: The results revealed that after the intervention there was a significant difference between the two groups in marital adjustment, marital satisfaction and emotional regulation (P<0.001). Marital adjustment, marital satisfaction and emotional regulation in the experimental group were significantly higher than in the control group at post-test. Conclusion: Treatment based on quality of life, formed from a combination of positive psychology and the cognitive-behavioral approach, can increase the marital adjustment, marital satisfaction and mood regulation of substance abusers.
Arslan-Özkan, İlkay; Okumuş, Hülya; Buldukoğlu, Kadriye
To investigate the effects of nursing care based on the Theory of Human Caring on distress caused by infertility, perceived self-efficacy and adjustment levels. Infertility leads to individual, familial and social problems. Nursing care standards for women affected by infertility have yet to emerge. A randomized controlled trial. This study was conducted from May 2010-February 2011, with 105 Turkish women with infertility (intervention group: 52, control group: 53). We collected data using the Infertility Distress Scale, the Turkish-Infertility Self Efficacy Scale Short Form and the Turkish-Fertility Adjustment Scale. The intervention group received nursing care based on the Theory of Human Caring. Data were analysed using t-tests, chi-square tests and intention-to-treat analyses. The intervention and control groups significantly differed with regard to infertility distress, self-efficacy and adjustment levels. The intervention group's mean self-efficacy score increased by seven points and adjustment score decreased by seven points (in a positive direction). In addition, there was a significant reduction in infertility distress scores in the intervention group, but there was no change in the control group. Nursing care based on the Theory of Human Caring decreased the negative impact of infertility in women receiving infertility treatment and increased self-efficacy and adjustment. © 2013 John Wiley & Sons Ltd.
Security testing aims at validating software system requirements related to security properties like confidentiality, integrity, authentication, authorization, availability, and non-repudiation. Although security testing techniques have been available for many years, there have been few approaches that allow for the specification of test cases at a higher level of abstraction, for enabling guidance on test identification and specification, as well as for automated test generation. Model-based security testing (MBST) is a relatively new field, dedicated especially to the systematic and efficient specification and documentation of security test objectives, security test cases and test suites, as well as to their automated or semi-automated generation. In particular, the combination of security modelling and test generation approaches is still a challenge in research and of high interest for industrial applications. MBST includes, e.g., security functional testing, model-based fuzzing, risk- and threat-oriented testing, and the usage of security test patterns. This paper provides a survey on MBST techniques and the related models, as well as samples of new methods and tools that are under development in the European ITEA2 project DIAMONDS.
Antonio M. G. Tommaselli
In order to facilitate the automation of the camera calibration process, a mathematical model using straight lines was developed, based on the equivalent-planes mathematical model. Parameter estimation for the developed model is achieved by the Least Squares Method with Conditions and Observations. The same method of adjustment was used to implement camera calibration with bundles, which is based on points. Experiments using simulated and real data have shown that the developed model based on straight lines gives results comparable to the conventional point-based method. Details concerning the mathematical development of the model and experiments with simulated and real data are presented, and the results of both methods of camera calibration, with straight lines and with points, are compared.
Colais, Paola; Fantini, Maria P; Fusco, Danilo; Carretta, Elisa; Stivanello, Elisa; Lenzi, Jacopo; Pieri, Giulia; Perucci, Carlo A
Caesarean section (CS) rate is a quality-of-health-care indicator frequently used at national and international level. The aim of this study was to assess whether adjustment for Robson's Ten Group Classification System (TGCS), and for clinical and socio-demographic variables of the mother and the fetus, is necessary for inter-hospital comparisons of CS rates. The study population includes 64,423 deliveries in Emilia-Romagna between January 1, 2003 and December 31, 2004, classified according to the TGCS. Poisson regression was used to estimate crude and adjusted hospital relative risks of CS compared to a reference category. Analyses were carried out in the overall population and separately according to the Robson groups (groups I, II, III, IV and V-X combined). Adjusted relative risks (RR) of CS were estimated using two risk-adjustment models: the first (M1) including the TGCS group as the only adjustment factor; the second (M2) including in addition demographic and clinical confounders identified using a stepwise selection procedure. Percentage variations between crude and adjusted RRs by hospital were calculated to evaluate the confounding effect of covariates. The percentage variations from crude to adjusted RR proved to be similar in the M1 and M2 models. However, stratified analyses by Robson's classification groups showed that residual confounding for clinical and demographic variables was present in groups I (nulliparous, single, cephalic, ≥37 weeks, spontaneous labour), III (multiparous, excluding previous CS, single, cephalic, ≥37 weeks, spontaneous labour) and IV (multiparous, excluding previous CS, single, cephalic, ≥37 weeks, induced or CS before labour), and to a minor extent in group II (nulliparous, single, cephalic, ≥37 weeks, induced or CS before labour). The TGCS is useful for inter-hospital comparison of CS rates, but
Yang, Bo; Wang, Mi; Xu, Wen; Li, Deren; Gong, Jianya; Pi, Yingdong
The potential of large-scale block adjustment (BA) without ground control points (GCPs) has long been a concern among photogrammetric researchers and is of guiding significance for global mapping. However, significant problems with the accuracy and efficiency of this method remain to be solved. In this study, we analyzed the effects of geometric errors on BA and then developed a step-wise BA method to conduct integrated processing of large-scale ZY-3 satellite images without GCPs. We first pre-processed the BA data by adopting a geometric calibration (GC) method based on the viewing-angle model to compensate for systematic errors, so that the BA input images were of good initial geometric quality. The second step was integrated BA without GCPs, in which a series of technical methods were used to solve bottleneck problems and ensure accuracy and efficiency. The BA model, based on virtual control points (VCPs), was constructed to address the rank-deficiency problem caused by the lack of absolute constraints. We then developed a parallel matching strategy to improve the efficiency of tie-point (TP) matching, and adopted a three-array data structure based on sparsity to relieve the storage and calculation burden of the high-order modified equation. Finally, we used the conjugate gradient method to improve the speed of solving the high-order equations. To evaluate the feasibility of the presented large-scale BA method, we conducted three experiments on real data collected by the ZY-3 satellite. The experimental results indicate that the presented method can effectively improve the geometric accuracies of ZY-3 satellite images. This study demonstrates the feasibility of large-scale mapping without GCPs.
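The conjugate-gradient solve of the high-order normal equations can be sketched as follows. This is a minimal dense stand-in, not the authors' implementation: the function name and the damped J^T J test system are illustrative, and a real system would use the sparse three-array storage the abstract describes.

```python
import numpy as np

def conjugate_gradient(A, b, tol=1e-10, max_iter=1000):
    """Solve A x = b for a symmetric positive-definite A.

    Stand-in for the CG solve of the block-adjustment normal equations."""
    x = np.zeros_like(b)
    r = b - A @ x            # residual
    p = r.copy()             # search direction
    rs_old = r @ r
    for _ in range(max_iter):
        Ap = A @ p
        alpha = rs_old / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs_old) * p
        rs_old = rs_new
    return x

# Small dense stand-in: J^T J plus damping is SPD, loosely mimicking the
# virtual-control-point constraint that removes the rank deficiency.
rng = np.random.default_rng(0)
J = rng.normal(size=(20, 5))
A = J.T @ J + 1e-3 * np.eye(5)
b = J.T @ rng.normal(size=20)
x = conjugate_gradient(A, b)
```

For an SPD system of dimension n, CG converges in at most n iterations in exact arithmetic, which is why it suits the very large sparse systems of GCP-free block adjustment.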
Camargo, C.T.M.; Madeira, A.A.; Pontedeiro, A.C.; Dominguez, L.
The recorded traces obtained from the net load trip test at the Angra I NPP yielded the opportunity to make fine adjustments to the ALMOD 3W2 code models. The changes are described and the results are compared against real plant data. (Author)
Web-based photo albums that support organizing and viewing users' images are widely used. These services provide a convenient solution for storing, editing and sharing images. In many cases, users attach geotags to the images in order to enable using them, e.g., in location-based applications on social networks. Our paper discusses a procedure that collects open-access images from a site frequently visited by tourists. Geotagged pictures showing a sight or tourist attraction are selected and processed in photogrammetric processing software that produces a 3D model of the captured object. For this investigation we selected three attractions in Budapest. To assess the geometrical accuracy, we used laser scanning as well as DSLR and smartphone photography to derive reference values for verifying the spatial model obtained from the web-album images. The investigation shows how detailed and accurate models can be derived using photogrammetric processing software, simply from images of the community, without visiting the site.
Schnider Thomas W
Abstract Background Propofol is widely used for both short-term anesthesia and long-term sedation. It has unusual pharmacokinetics because of its high lipid solubility. The standard approach to describing the pharmacokinetics is a multi-compartmental model. This paper presents the first detailed human physiologically based pharmacokinetic (PBPK) model for propofol. Methods PKQuest, a freely distributed software routine (http://www.pkquest.com), was used for all the calculations. The "standard human" PBPK parameters developed in previous applications are used. It is assumed that the blood and tissue binding is determined by simple partition into the tissue lipid, which is characterized by two previously determined sets of parameters: (1) the value of the propofol oil/water partition coefficient; (2) the lipid fraction in the blood and tissues. The model was fit to the individual experimental data of Schnider et al. (Anesthesiology, 1998; 88:1170), in which an initial bolus dose was followed 60 minutes later by a one-hour constant infusion. Results The PBPK model provides a good description of the experimental data over a large range of input dosage, subject age and fat fraction. Only one adjustable parameter (the liver clearance) is required to describe the constant-infusion phase for each individual subject. In order to fit the bolus-injection phase, for 10 of the 24 subjects it was necessary to assume that a fraction of the bolus dose was sequestered and then slowly released from the lungs (characterized by two additional parameters). The average weighted residual error (WRE) of the PBPK model fit to both the bolus and infusion phases was 15%; similar to the WRE for just the constant-infusion phase obtained by Schnider et al. using a 6-parameter NONMEM compartmental model. Conclusion A PBPK model using standard human parameters and a simple description of tissue binding provides a good description of human propofol kinetics. The major advantage of a
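The flow-limited tissue compartments underlying such a PBPK description can be illustrated with a single-tissue sketch. All parameter values below are hypothetical, and the partition coefficient P merely stands in for the lipid-partition assumption of the propofol model:

```python
import numpy as np

def flow_limited_tissue(C_art, Q, V, P, dt=0.01, T=100.0):
    """One flow-limited (perfusion-limited) tissue compartment of a PBPK
    model, integrated with forward Euler:

        dC_t/dt = (Q / V) * (C_art - C_t / P)

    P is the tissue:blood partition coefficient, which the propofol model
    ties to tissue lipid fraction. All numbers here are illustrative."""
    n = int(T / dt)
    C = np.zeros(n + 1)
    for i in range(n):
        C[i + 1] = C[i] + dt * (Q / V) * (C_art - C[i] / P)
    return C

# Constant arterial concentration: the tissue equilibrates to C_art * P,
# with time constant V * P / Q.
C = flow_limited_tissue(C_art=1.0, Q=0.5, V=2.0, P=4.0)
```

A full PBPK model chains many such compartments through the arterial and venous blood, which is why only a clearance term remained adjustable per subject in the study above.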
... 20 Employees' Benefits 2 2010-04-01 2010-04-01 false When will we determine your income-related... more recent tax year? 418.1201 Section 418.1201 Employees' Benefits SOCIAL SECURITY ADMINISTRATION... Recent Tax Year's Modified Adjusted Gross Income § 418.1201 When will we determine your income-related...
..., such as a copy of your Federal income tax return. If you would like us to use the revised or corrected... corrected information about a more recent tax year's modified adjusted gross income that we used due to your... 20 Employees' Benefits 2 2010-04-01 2010-04-01 false What should you do if our initial...
Zweig, Janine M.; Yahner, Jennifer; Dank, Meredith; Lachman, Pamela
Background: We examined whether substance use, psychosocial adjustment, and sexual experiences vary for teen dating violence victims by the type of violence in their relationships. We compared dating youth who reported no victimization in their relationships to those who reported being victims of intimate terrorism (dating violence involving one…
Shaikh, Shahid Ali; Tian, Gang; Shi, Zhanjie; Zhao, Wenke; Junejo, S. A.
Ground penetrating radar (GPR) is an efficient tool for subsurface geophysical investigations, particularly at shallow depths. Non-destructiveness, cost efficiency, and data reliability are the important factors that make it an ideal tool for shallow subsurface investigations. In the present study, variations in the central frequency of the transmitting and receiving GPR antennas (Tx-Rx) were analyzed, and frequency-band-adjustment match filters were fabricated and tested accordingly. Normally the frequency of both antennas is the same, whereas in this study we experimentally changed the Tx-Rx frequencies and deduced the response. Instead of the normally adopted three pairs, a total of nine Tx-Rx pairs were made from the 50 MHz, 100 MHz, and 200 MHz antennas. The experimental data were acquired at the designated near-surface geophysics test site of Zhejiang University, Hangzhou, China. After impulse-response analysis of the data acquired through conventional as well as varied Tx-Rx pairs, different swap effects were observed. The frequency band and exploration depth are influenced by the transmitting frequency rather than the receiving frequency. The impact of the receiving frequency was noticed on resolution; more noise was observed using the combination of high-frequency transmitting with low-frequency receiving. On the basis of these results we fabricated two frequency-band-adjustment match filters: the constant frequency transmitting (CFT) and the variable frequency transmitting (VFT) filters. By this principle, the lower- and higher-frequency components were matched and then incorporated with the intermediate one. This study thus reveals that a Tx-Rx combination of low-frequency transmitting with high-frequency receiving is a better choice. Moreover, both filters provide a better radargram than the raw one; the result of the VFT frequency-band-adjustment filter is
Peng, Yahui; Shen, Dinggang; Liao, Shu; Turkbey, Baris; Rais-Bahrami, Soroush; Wood, Bradford; Karademir, Ibrahim; Antic, Tatjana; Yousef, Ambereen; Jiang, Yulei; Pinto, Peter A; Choyke, Peter L; Oto, Aytekin
To determine whether prostate-specific antigen (PSA) levels adjusted by prostate and zonal volumes estimated from magnetic resonance imaging (MRI) improve the diagnosis of prostate cancer (PCa) and differentiation between patients who harbor high-Gleason-sum PCa and those without PCa. This retrospective study was Health Insurance Portability and Accountability Act (HIPAA)-compliant and approved by the Institutional Review Board of participating medical institutions. T2-weighted MR images were acquired for 61 PCa patients and 100 patients with elevated PSA but without PCa. Computer methods were used to segment prostate and zonal structures and to estimate the total prostate and central-gland (CG) volumes, which were then used to calculate CG volume fraction, PSA density, and PSA density adjusted by CG volume. These quantities were used to differentiate patients with and without PCa. Area under the receiver operating characteristic curve (AUC) was used as the figure of merit. The total prostate and CG volumes, CG volume fraction, and PSA density adjusted by the total prostate and CG volumes were statistically significantly different between patients with PCa and patients without PCa (P ≤ 0.007). AUC values for the total prostate and CG volumes, and PSA density adjusted by CG volume, were 0.68 ± 0.04, 0.68 ± 0.04, and 0.66 ± 0.04, respectively, and were significantly better than that of PSA (P < 0.02), for differentiation of PCa patients from patients without PCa. The total prostate and CG volumes estimated from T2-weighted MR images and PSA density adjusted by these volumes can improve the effectiveness of PSA for the diagnosis of PCa and differentiation of high-Gleason-sum PCa patients from patients without PCa. © 2015 Wiley Periodicals, Inc.
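As a rough illustration of the volume-adjusted PSA idea, the sketch below computes PSA density from hypothetical PSA values and central-gland volumes and scores the separation with a Mann-Whitney AUC. The numbers are invented for illustration and are not from the study:

```python
import numpy as np

def auc_mann_whitney(pos, neg):
    """AUC as the Mann-Whitney probability that a random positive case
    scores above a random negative case (ties count one half)."""
    pos, neg = np.asarray(pos, float), np.asarray(neg, float)
    gt = (pos[:, None] > neg[None, :]).sum()
    eq = (pos[:, None] == neg[None, :]).sum()
    return (gt + 0.5 * eq) / (len(pos) * len(neg))

# Invented values: PSA (ng/mL) and MRI-estimated central-gland volume (mL).
psa_cancer = np.array([8.0, 6.5, 9.1]); cg_cancer = np.array([20.0, 18.0, 25.0])
psa_benign = np.array([7.5, 6.0, 8.8]); cg_benign = np.array([45.0, 50.0, 40.0])

# PSA density adjusted by central-gland volume, as in the study design:
# dividing by volume separates groups the raw PSA values cannot.
auc_psa = auc_mann_whitney(psa_cancer, psa_benign)
auc_psad = auc_mann_whitney(psa_cancer / cg_cancer, psa_benign / cg_benign)
```

In this toy data the raw PSA distributions overlap while the density-adjusted values do not, mirroring (in exaggerated form) the modest AUC gain reported above.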
Chovanec, Josef; Serlingerova, Iveta; Greplova, Kristina
Background: We investigated the efficacy of circulating biomarkers together with histological grade and age to predict deep myometrial invasion (dMI) in endometrial cancer patients. Methods: HE4ren was developed by adjusting HE4 serum levels for decreased glomerular filtration rate, as quantified … based on single-institution data from 120 EC patients and validated against multicentric data from 379 EC patients. Results: In non-cancer individuals, serum HE4 levels increase log-linearly as glomerular filtration falls below eGFR = 90 ml/min/1.73 m². HE4ren, adjusting HE4 serum levels to decreased e…
Capitain, Olivier; Asevoaia, Andreaa; Boisdron-Celle, Michele; Poirier, Anne-Lise; Morel, Alain; Gamelin, Erick
To compare the efficacy and safety of pharmacokinetically (PK) guided fluorouracil (5-FU) dose adjustment vs. standard body-surface-area (BSA) dosing in a FOLFOX (folinic acid, fluorouracil, oxaliplatin) regimen in metastatic colorectal cancer (mCRC). A total of 118 patients with mCRC were administered individually determined, PK-adjusted 5-FU in first-line FOLFOX chemotherapy. The comparison arm consisted of 39 patients, also treated with FOLFOX, with 5-FU dosed by BSA. In the PK-adjusted arm, 5-FU was monitored during infusion, and the dose for the next cycle was based on a dose-adjustment chart to achieve a therapeutic area-under-the-curve range (5-FU ODPM Protocol). The objective response rate was 69.7% in the PK-adjusted arm, and median overall survival and median progression-free survival were 28 and 16 months, respectively. Among the patients who received traditional BSA dosing, the objective response rate was 46%, and overall survival and progression-free survival were 22 and 10 months, respectively. Grade 3/4 toxicity was 1.7% for diarrhea, 0.8% for mucositis, and 18% for neutropenia in the dose-monitored group; the rates were 12%, 15%, and 25%, respectively, in the BSA group. Efficacy and tolerability of PK-adjusted FOLFOX dosing were much higher than with traditional BSA dosing, in agreement with previous reports for 5-FU monotherapy PK-adjusted dosing. Analysis of these results suggests that PK-guided 5-FU therapy offers added value to combination therapy for mCRC. Copyright © 2012 Elsevier Inc. All rights reserved.
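The dose-adjustment idea can be sketched as a simple feedback rule that scales the next cycle's dose toward a target AUC window. This is not the published ODPM chart itself; the target window, step cap, and all numbers are illustrative:

```python
def next_dose(dose_mg, auc_measured, target=(20.0, 25.0), max_step=0.3):
    """One cycle of a hypothetical PK-guided dose-adjustment rule: if the
    measured 5-FU AUC (illustrative units) is outside the target window,
    scale the next dose toward the window midpoint, capping the relative
    change per cycle."""
    lo, hi = target
    if lo <= auc_measured <= hi:
        return dose_mg                      # within the therapeutic range
    factor = ((lo + hi) / 2.0) / auc_measured
    factor = max(1.0 - max_step, min(1.0 + max_step, factor))
    return dose_mg * factor

# Assume AUC proportional to dose for a given patient (linear PK).
k = 0.004            # hypothetical patient-specific AUC per mg
dose = 4000.0        # starting dose; under-exposed at AUC = 16
for _ in range(6):
    dose = next_dose(dose, k * dose)
```

Under these assumptions the dose climbs until the simulated AUC sits inside the therapeutic window, after which it is left unchanged, which is the behavior a dose-adjustment chart encodes in tabular form.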
Thompson, Sandra; Hiebert-Murphy, Diane; Trute, Barry
Based on the adjustment phase of the double ABC-X model of family stress (McCubbin and Patterson, 1983) this study examined the impact of parenting stress, positive appraisal of the impact of child disability on the family, and parental self-esteem on parental perceptions of family adjustment in families of children with disabilities. For mothers,…
Steffen, H.; Kaufmann, G.; Lampe, R.
During the last glacial maximum, a large ice sheet covered Scandinavia and depressed the earth's surface by several 100 m. In northern central Europe, mass redistribution in the upper mantle led to the development of a peripheral bulge, which has been subsiding since the beginning of deglaciation due to the viscoelastic behaviour of the mantle. We analyse relative sea-level (RSL) data of southern Sweden, Denmark, Germany, Poland and Lithuania to determine the lithospheric thickness and radial mantle viscosity structure for distinct regional RSL subsets. We load a 1-D Maxwell-viscoelastic earth model with a global ice-load history model of the last glaciation. We test two commonly used ice histories, RSES from the Australian National University and ICE-5G from the University of Toronto. Our results indicate that the lithospheric thickness varies, depending on the ice model used, between 60 and 160 km. The lowest values are found in the Oslo Graben area and at the western German Baltic Sea coast. In between, thickness increases by at least 30 km, tracing the Ringkøbing-Fyn High. In Poland and Lithuania, lithospheric thickness reaches up to 160 km. However, the latter values are not well constrained as the confidence regions are large. Upper-mantle viscosity is found to bracket [2-7] × 10²⁰ Pa s when using ICE-5G. Employing RSES, much higher values of 2 × 10²¹ Pa s are obtained for the southern Baltic Sea. Further investigations should evaluate whether this ice-model version and/or the RSL data need revision. We confirm that the lower-mantle viscosity in Fennoscandia can only be poorly resolved. The lithospheric structure inferred from RSES partly supports structural features of regional and global lithosphere models based on thermal or seismological data. While there is agreement in eastern Europe and southwest Sweden, the structure in an area from south of Norway to northern Germany shows large discrepancies for two of the tested lithosphere models. The lithospheric
Juan, Fang; Rongsheng, Wu
Energetics of geostrophic adjustment in rotating flow is examined in detail with a linear shallow water model. The initial unbalanced flow considered first falls under two classes. The first is similar to that adopted by Gill and is here referred to as a mass imbalance model, for the flow is initially motionless but with a sea surface displacement. The other is the same as that considered by Rossby and is referred to as a momentum imbalance model, since there is only a velocity perturbation in the initial field. The significant feature of the energetics of geostrophic adjustment for the above two extreme models is that although the energy conversion ratio has a large case-to-case variability for different initial conditions, its value is bounded below by 0 and above by 1/2. Based on the discussion of the above extreme models, the energetics of adjustment for an arbitrary initial condition is investigated. It is found that the characteristics of the energetics of geostrophic adjustment mentioned above are also applicable to adjustment of the general unbalanced flow, under the condition that the energy conversion ratio is redefined as the conversion ratio between the change of kinetic energy and potential energy of the deviational fields.
leaving students. It is a probabilistic model. In the next part of this article, two more models - an 'input/output model' used for production systems or economic studies and a 'discrete event simulation model' - are introduced. Aircraft Performance Model.
Jiang, Li; Lee, Donghoon; Yilmaz, Hakan; Stefanopoulou, Anna
Methods and systems for engine control optimization are provided. A first and a second operating condition of a vehicle engine are detected. An initial value is identified for a first and a second engine control parameter corresponding to a combination of the detected operating conditions according to a first and a second engine map look-up table. The initial values for the engine control parameters are adjusted based on a detected engine performance variable to cause the engine performance variable to approach a target value. A first and a second sensitivity of the engine performance variable are determined in response to changes in the engine control parameters. The first engine map look-up table is adjusted when the first sensitivity is greater than a threshold, and the second engine map look-up table is adjusted when the second sensitivity is greater than a threshold.
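The map-update logic described above might be sketched as follows; the map names, the (speed, load) keys, and the threshold value are hypothetical, not from the patent text:

```python
def update_maps(map1, map2, key, sens1, sens2, adj1, adj2, threshold=0.1):
    """Write converged control-parameter adjustments back into whichever
    engine-map look-up table showed sensitivity of the performance
    variable above the threshold. Illustrative names and threshold."""
    if abs(sens1) > threshold:
        map1[key] = map1[key] + adj1
    if abs(sens2) > threshold:
        map2[key] = map2[key] + adj2

# Hypothetical maps keyed by (speed rpm, load) operating-condition cells.
spark_map = {(2000, 0.5): 12.0}
cam_map = {(2000, 0.5): 30.0}
# The performance variable was sensitive to spark timing only, so only
# the first map is rewritten for this operating-condition cell.
update_maps(spark_map, cam_map, (2000, 0.5),
            sens1=0.4, sens2=0.02, adj1=1.5, adj2=-2.0)
```

Gating each table update on its own sensitivity, as the claim describes, avoids writing noise into a map whose parameter barely affected the measured performance variable.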
Ahm, Malte; Rasmussen, Michael Robdrup
Weather radar data used for urban drainage applications are traditionally adjusted to point ground references, e.g., rain gauges. However, the available rain gauge density for the adjustment is often low, which may lead to significant representativeness errors. Yet, in many urban catchments, rainfall is often measured indirectly through runoff sensors. This paper presents a method for weather-radar adjustment on the basis of runoff observations (Z-Q adjustment) as an alternative to the traditional Z-R adjustment on the basis of rain gauges. Data from a new monitoring station in Aalborg, Denmark, were used to evaluate the flow-based weather-radar adjustment method against the traditional rain-gauge adjustment. The evaluation was performed by comparing radar-modeled runoff to observed runoff. The methodology was tested both on an event basis and with multiple events combined. The results
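A minimal sketch of the flow-based (Z-Q) idea, assuming a single runoff coefficient in place of the paper's runoff model, is:

```python
def zq_adjustment_factor(radar_rain_mm, observed_runoff_m3, area_m2, runoff_coeff):
    """Event-scale bias factor for radar rainfall: the ratio of the
    rainfall depth implied by observed runoff to the radar-estimated
    depth. A stand-in for the paper's Z-Q adjustment; a real
    implementation would route rainfall through a runoff model rather
    than use one runoff coefficient."""
    radar_depth_mm = sum(radar_rain_mm)
    # runoff volume -> effective rainfall depth over the catchment, in mm
    implied_depth_mm = observed_runoff_m3 / (runoff_coeff * area_m2) * 1000.0
    return implied_depth_mm / radar_depth_mm

# Invented event: radar says 10 mm, runoff implies 12 mm.
event = [2.0, 5.0, 3.0]
factor = zq_adjustment_factor(event, observed_runoff_m3=1500.0,
                              area_m2=250_000.0, runoff_coeff=0.5)
adjusted = [factor * r for r in event]
```

The adjusted series preserves the radar's temporal pattern while matching the event volume implied by the runoff sensor, which is the same role the gauge plays in a conventional mean-field Z-R bias adjustment.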
Chaplin, James E.
An apparatus for increasing the efficiency of a conventional central space-heating system is disclosed. The temperature of a fluid heating medium is adjusted based on a measurement of the external temperature and a system parameter. The system parameter is periodically modified by a closed-loop process that monitors the operation of the heating system. This closed-loop process provides a heating-medium temperature value that is very near the optimum for energy efficiency.
Huang, Lam Opal; Infante-Rivard, Claire; Labbe, Aurélie
Transmission ratio distortion (TRD) is a phenomenon where parental transmission of the disease allele to the child does not follow the Mendelian inheritance ratio. TRD occurs in a sex-of-parent-specific or non-sex-of-parent-specific manner. An offset computed from the transmission probability of the minor allele in control-trios can be added to the loglinear model to adjust for TRD. Adjusting the model removes the inflation in the genotype relative risk (RR) estimate and Type 1 error introduced by non-sex-of-parent-specific TRD. We now propose to further extend this model to estimate an imprinting parameter. Some evidence suggests that more than 1% of all mammalian genes are imprinted. In the presence of imprinting, for example, the offspring inheriting an over-transmitted disease allele from the parent with a higher expression level in a neighboring gene is over-represented in the sample. TRD
Kashimura, Hiroki; Abe, Manabu; Watanabe, Shingo; Sekiya, Takashi; Ji, Duoying; Moore, John C.; Cole, Jason N. S.; Kravitz, Ben
This study evaluates the forcing, rapid adjustment, and feedback of net shortwave radiation at the surface in the G4 experiment of the Geoengineering Model Intercomparison Project by analysing outputs from six participating models. G4 involves injection of 5 Tg yr⁻¹ of SO2, a sulfate aerosol precursor, into the lower stratosphere from year 2020 to 2069 against a background scenario of RCP4.5. A single-layer atmospheric model for shortwave radiative transfer is used to estimate the direct forcing of solar radiation management (SRM), and the rapid adjustment and feedbacks from changes in the water vapour amount, cloud amount, and surface albedo (compared with RCP4.5). The analysis shows that the globally and temporally averaged SRM forcing ranges from -3.6 to -1.6 W m⁻², depending on the model. The sum of the rapid adjustments and feedback effects due to changes in the water vapour and cloud amounts increases the downwelling shortwave radiation at the surface by approximately 0.4 to 1.5 W m⁻² and hence weakens the effect of SRM by around 50 %. The surface albedo changes decrease the net shortwave radiation at the surface; the effect is locally strong (~ -4 W m⁻²) in snow and sea-ice melting regions, but minor for the global average. The analyses show that the results of the G4 experiment, which simulates sulfate geoengineering, include large inter-model variability both in the direct SRM forcing and in the shortwave rapid adjustment from change in the cloud amount, and imply a high uncertainty in modelled processes of sulfate aerosols and clouds.
Cichon, Bernardette; Ritz, Christian; Fabiansen, Christian
BACKGROUND: Biomarkers of iron status are affected by inflammation. In order to interpret them in individuals with inflammation, the use of correction factors (CFs) has been proposed. OBJECTIVE: The objective of this study was to investigate the use of regression models as an alternative to the CF … measured in serum. Generalized additive, quadratic, and linear models were used to model the relation between SF and sTfR as outcomes and CRP and AGP as categorical variables (model 1; equivalent to the CF approach), CRP and AGP as continuous variables (model 2), or CRP and AGP as continuous variables and morbidity covariates (model 3) as predictors. The predictive performance of the models was compared with the use of 10-fold cross-validation and quantified with the use of root mean square errors (RMSEs). SF and sTfR were adjusted with the use of regression coefficients from linear models. RESULTS
Heijnes, Dewi; van Joolingen, Wouter; Leenaars, Frank
We investigate the way students' reasoning about evolution can be supported by drawing-based modeling. We modified the drawing-based modeling tool SimSketch to allow for modeling evolutionary processes. In three iterations of development and testing, students in lower secondary education worked on creating an evolutionary model. After each iteration, the user interface and instructions were adjusted based on students' remarks and the teacher's observations. Students' conversations were analyzed for reasoning complexity as a measure of the efficacy of the modeling tool and the instructions. These findings were also used to compose a set of recommendations for teachers and curriculum designers for using and constructing models in the classroom. Our findings suggest that to stimulate scientific reasoning in students working with a drawing-based modeling tool, instruction about the tool and the domain should be integrated. In creating models, a sufficient level of scaffolding is necessary. Without appropriate scaffolds, students are not able to create the model. With too much scaffolding, students may show reasoning that incorrectly assigns external causes to behavior in the model.
Chen, I-Jun; Zhang, Hailun; Wei, Bingsi; Guo, Zeyao
This study aimed to evaluate the effects of the gender-role types and child-rearing gender-role attitudes of single parents, as well as their children's gender-role traits and family socio-economic status, on social adjustment. We recruited 458 pairs of single parents and their children aged 8-18 by purposive sampling. The research tools included the Family Socio-economic Status Questionnaire, Sex Role Scales, Parental Child-rearing Gender-role Attitude Scale and Social Adjustment Scale. The results indicated: (a) single mothers' and their daughters' feminine traits were both higher than their masculine traits, and sons' masculine traits were higher than their feminine traits; the most common gender-role type among single parents and their children was androgyny; children's gender-role types differed significantly depending on the raising parent, with the proportion of masculine traits among girls raised by single fathers significantly higher than among girls raised by single mothers; (b) family socio-economic status and single parents' gender-role types positively influenced parental child-rearing gender-role attitude, which in turn influenced the children's gender traits, and further affected children's social adjustment. © 2018 International Union of Psychological Science.
A computational tool for testing for a dose-related trend and/or a pairwise difference in the incidence of an occult tumor via an age-adjusted bootstrap-based poly-k test and the original poly-k test is presented in this paper. The poly-k test (Bailer and Portier, 1988) is a survival-adjusted Cochran-Armitage test, which achieves robustness to the effects of differential mortality across dose groups. The original poly-k test is asymptotically standard normal under the null hypothesis. However, the asymptotic normality is not valid if there is a deviation from the tumor-onset distribution that is assumed in this test. Our age-adjusted bootstrap-based poly-k test assesses the significance of assumed asymptotically normal tests and investigates the empirical distribution of the original poly-k test statistic using an age-adjusted bootstrap method. A tumor of interest is an occult tumor, for which the time to onset is not directly observable. Since most animal carcinogenicity studies are designed with a single terminal sacrifice, the present tool is applicable to rodent tumorigenicity assays that have a single terminal sacrifice. The tool takes input information from a user screen and reports testing results back to the screen through a user interface. The computational tool is implemented in C/C++ and is applied to analyze a real data set as an example. Our tool enables the FDA and the pharmaceutical industry to implement a statistical analysis of tumorigenicity data from animal bioassays via our age-adjusted bootstrap-based poly-k test and the original poly-k test, which has been adopted by the National Toxicology Program as its standard statistical test.
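A sketch of the original (asymptotically normal) poly-k statistic, following the Bailer-Portier weighting, is shown below. The toy bioassay data are invented, and the age-adjusted bootstrap described in the paper would resample this same statistic rather than rely on normality:

```python
from math import sqrt

def poly_k_trend(doses, groups, k=3.0):
    """Survival-adjusted Cochran-Armitage trend statistic (the poly-k
    test of Bailer and Portier, 1988). Each animal is a (time_on_study,
    has_tumor) pair; a tumor-free animal dying early contributes the
    fractional weight (t / t_max)**k to its adjusted group size.
    Returns the asymptotically standard-normal statistic."""
    t_max = max(t for g in groups for t, _ in g)
    n_adj, tumors = [], []
    for g in groups:
        n_adj.append(sum(1.0 if tum else (t / t_max) ** k for t, tum in g))
        tumors.append(sum(1 for _, tum in g if tum))
    N = sum(n_adj)
    p_bar = sum(tumors) / N
    num = sum(d * (x - n * p_bar) for d, x, n in zip(doses, tumors, n_adj))
    s2 = sum(n * d * d for d, n in zip(doses, n_adj)) \
        - sum(n * d for d, n in zip(doses, n_adj)) ** 2 / N
    return num / sqrt(p_bar * (1.0 - p_bar) * s2)

# Toy bioassay with a single terminal sacrifice at week 104: tumor
# incidence rises with dose, and a few animals die early tumor-free.
control = [(104, False)] * 10
low = [(104, False)] * 8 + [(60, False), (104, True)]
high = [(104, True)] * 4 + [(50, False)] + [(104, False)] * 5
z = poly_k_trend([0.0, 1.0, 2.0], [control, low, high])
```

With k = 3 (the NTP default for occult tumors), the fractional weights down-weight animals with little time at risk, which is the robustness to differential mortality mentioned above.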
TRICARE, the MHS provides direct care through more than 70 Military Hospitals/Medical Centers, 411 Medical Clinics, 417 Dental Clinics and over 100... of service provided: A - Inpatient Care, B - Ambulatory Care, C - Dental Care, D - Ancillary Services, E - Support Services, F - Special Programs, G - ... [remainder of record: fragment of a CPT-code table (97140 Manual therapy, 97150 Group therapeutic procedures, 97530 Therapeutic activities, 97535 Self-care management training, 97542 Wheelchair ...)]
Herbert, M.; Hoffman, R.; Johnson, A.; Osborn, J.
Robots and remote systems will play crucial roles in future decontamination and decommissioning (D&D) of nuclear facilities. Many of these facilities, such as uranium enrichment plants, weapons assembly plants, research and production reactors, and fuel recycling facilities, are dormant; there is also an increasing number of commercial reactors whose useful lifetime is nearly over. To reduce worker exposure to radiation and to the occupational and other hazards associated with D&D tasks, robots will execute much of the work agenda. Traditional teleoperated systems rely on human understanding (based on information gathered by remote viewing cameras) of the work environment to safely control the remote equipment. However, removing the operator from the work site substantially reduces his efficiency and effectiveness. To approach the productivity of a human worker, tasks will be performed telerobotically, in which many aspects of task execution are delegated to robot controllers and other software. This paper describes a system that semi-automatically builds a virtual world for remote D&D operations by constructing 3-D models of a robot's work environment. Planar and quadric surface representations of objects typically found in nuclear facilities are generated from laser rangefinder data with a minimum of human interaction. The surface representations are then incorporated into a task space model that can be viewed and analyzed by the operator, accessed by motion planning and robot safeguarding algorithms, and ultimately used by the operator to instruct the robot at a level much higher than teleoperation.
Background and Objectives: Orienting its approach on the Islamic viewpoint (Quran and Hadith), this study aimed to investigate the effectiveness of forgiveness therapy on the inclination to forgive and marital adjustment in affected women who referred to counseling centers in Tehran. Methods: This study was a semi-experimental research using a pretest-posttest design with a control group. The statistical population comprised women who had suffered unfaithfulness by their husbands and referred to counseling centers in Tehran in the summer of 2015. A sample of 30 women was selected in a purposive and convenient manner and assigned to test and control groups. After the pretest, which applied the Spanier Dyadic Adjustment Scale and the Ray et al. forgiveness scale, members of the test group attended nine 90-minute weekly sessions of forgiveness therapy based on the Islamic viewpoint. Finally, the posttest was conducted on both groups with the same tools. Results: Multivariate analysis of covariance showed that forgiveness therapy based on the Islamic viewpoint has a significant effect on increasing the willingness to forgive and marital adjustment in these women. Conclusion: The results showed that forgiveness therapy based on Islamic viewpoints could be applied in the design of therapeutic interventions.
Large chemical and biological systems such as fuel cells, ion channels, molecular motors, and viruses are of great importance to the scientific community and public health. Typically, these complex systems in conjunction with their aquatic environment pose a fabulous challenge to theoretical description, simulation, and prediction. In this work, we propose a differential geometry based multiscale paradigm to model complex macromolecular systems, and to put macroscopic and microscopic descriptions on an equal footing. In our approach, the differential geometry theory of surfaces and geometric measure theory are employed as a natural means to couple the macroscopic continuum mechanical description of the aquatic environment with the microscopic discrete atomistic description of the macromolecule. Multiscale free energy functionals, or multiscale action functionals are constructed as a unified framework to derive the governing equations for the dynamics of different scales and different descriptions. Two types of aqueous macromolecular complexes, ones that are near equilibrium and others that are far from equilibrium, are considered in our formulations. We show that generalized Navier-Stokes equations for the fluid dynamics, generalized Poisson equations or generalized Poisson-Boltzmann equations for electrostatic interactions, and Newton's equation for the molecular dynamics can be derived by the least action principle. These equations are coupled through the continuum-discrete interface whose dynamics is governed by potential driven geometric flows. Comparison is given to classical descriptions of the fluid and electrostatic interactions without geometric flow based micro-macro interfaces. The detailed balance of forces is emphasized in the present work. We further extend the proposed multiscale paradigm to micro-macro analysis of electrohydrodynamics, electrophoresis, fuel cells, and ion channels. We derive generalized Poisson-Nernst-Planck equations that are
Ricano Castillo, Juan Manuel; Palomares Gonzalez, Daniel [Instituto de Investigaciones Electricas, Cuernavaca (Mexico)
The recursive least squares technique is employed to obtain a multivariable model of the autoregressive moving-average type, needed for the design of a self-tuning multivariable controller. This article describes the technique employed and the results obtained in characterizing the model structure and estimating its parameters. Curves showing the rate of convergence toward the numerical values of the parameters are presented.
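The recursive least squares identification described above can be sketched as follows. The ARX model structure, forgetting factor, and simulated plant below are illustrative assumptions, not the model identified in the article:

```python
import numpy as np

def rls_identify(u, y, na=2, nb=2, lam=0.99):
    """Recursive least squares for an ARX model:
    y[k] = -a1*y[k-1] - a2*y[k-2] + b1*u[k-1] + b2*u[k-2]."""
    n = na + nb
    theta = np.zeros(n)      # parameter estimates [a1, a2, b1, b2]
    P = 1e4 * np.eye(n)      # covariance; large value = high initial uncertainty
    for k in range(max(na, nb), len(y)):
        # regressor: negated past outputs, then past inputs (most recent first)
        phi = np.concatenate([-y[k - na:k][::-1], u[k - nb:k][::-1]])
        K = P @ phi / (lam + phi @ P @ phi)       # gain vector
        theta = theta + K * (y[k] - phi @ theta)  # update with prediction error
        P = (P - np.outer(K, phi @ P)) / lam      # covariance update with forgetting
    return theta

# simulate a known second-order plant to watch the estimates converge
rng = np.random.default_rng(0)
u = rng.standard_normal(500)
y = np.zeros(500)
for k in range(2, 500):
    y[k] = 0.6 * y[k - 1] - 0.2 * y[k - 2] + 0.5 * u[k - 1] + 0.3 * u[k - 2]

theta = rls_identify(u, y)   # expect approximately [-0.6, 0.2, 0.5, 0.3]
```

In the noise-free case the estimates converge rapidly to the true plant parameters; with measurement noise the forgetting factor trades tracking speed against estimate variance.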
Herrera-López, Mauricio; Gómez-Ortiz, Olga; Ortega-Ruiz, Rosario; Jolliffe, Darrick; Romera, Eva M
(1) To examine the psychometric properties of the Basic Empathy Scale (BES) with Spanish adolescents, comparing a two- and a three-dimensional structure; (2) To analyse the relationship between three-dimensional empathy and social and normative adjustment in school. Transversal and ex post facto retrospective study. Confirmatory factor analysis, multifactorial invariance analysis and structural equation models were used. 747 students (51.3% girls) from Cordoba, Spain, aged 12-17 years (M=13.8; SD=1.21). The original two-dimensional structure was confirmed (cognitive empathy, affective empathy), but a three-dimensional structure showed better psychometric properties, highlighted by the good fit found in confirmatory factor analysis and adequate internal consistency values, measured with Cronbach's alpha and McDonald's omega. Composite reliability and average variance extracted showed better indices for the three-factor model. The research also showed evidence of measurement invariance across gender. All the factors of the final three-dimensional BES model were directly and significantly associated with social and normative adjustment, with cognitive empathy most strongly related. This research supports advances in neuroscience, developmental psychology and psychopathology through a three-dimensional version of the BES, which represents an improvement over the original two-factor model. The organisation of empathy into three factors aids the understanding of social and normative adjustment in adolescents, in which emotional disengagement favours adjusted peer relationships. Psychoeducational interventions aimed at improving the quality of social life in schools should target these components of empathy.
Mohammad Lagzian; Shamsoddin Nazemi; Fatemeh Dadmand
Assessing the success of information systems within organizations has been identified as one of the most critical subjects of information system management in both public and private organizations. It is therefore important to measure the success of information systems from the user's perspective. The purpose of the current study was to evaluate the degree of information system success using the adjusted DeLone and McLean model in the field of financial information systems (FIS) in an Iranian Univ...
This paper examines the equations of the dynamics and statics of an adjustable intermediate loop of a carbon dioxide heat pump station. The heat pump station is part of a combined heat supply system. The thermal capacity transferred from the low-potential heat source is controlled by changing the speed of liquid circulation in the loop and by changing the area of the heat-transmitting surface, both in the evaporator and in the intermediate heat exchanger, depending on an operating parameter such as external air temperature or wind speed.
Dilber, Daniel; Malcic, Ivan
The Aristotle basic complexity score and the risk adjustment in congenital cardiac surgery-1 method were developed and used to compare outcomes of congenital cardiac surgery. Both methods were used to compare the results of procedures performed on our patients in Croatian cardiosurgical centres with the results of procedures performed abroad. The study population consisted of all patients with congenital cardiac disease born to Croatian residents between 1 October, 2002 and 1 October, 2007 who underwent a cardiovascular operation during this period. Of the 556 operations, the Aristotle basic complexity score could be assigned to 553 operations and the risk adjustment in congenital cardiac surgery-1 method to 536 operations. Procedures were performed in two institutions in Croatia and seven institutions abroad. The average complexity of cardiac procedures performed in Croatia was significantly lower. With both systems, an increase in complexity is accompanied by an increase in mortality before discharge and in postoperative length of stay. Only after adjustment for complexity do marked differences in mortality and in the occurrence of postoperative complications emerge. Both the Aristotle basic complexity score and the risk adjustment in congenital cardiac surgery-1 method were predictive of in-hospital mortality as well as of prolonged postoperative length of stay, and can be used as tools in our country to evaluate a cardiosurgical model and recognise potential problems.
Olsen, Jakob Vesterlund; Henningsen, Arne
Based on a theoretical microeconomic model, we develop an empirical framework for analyzing the size and the timing of adjustment costs and investment utilization. We show that adjustment costs and investment utilization result in technical inefficiency, because adjustments require the use...
Fariba Kiani; Seyed Hakime Safavi Mirmahale; Elahe Saberyan; Mohammad Reza Reza Khodabakhsh
Kanstrén, T.; Piel, E.; Gross, H.G.
One of the single most important reasons that modeling and model-based testing are not yet common practice in industry is the perceived difficulty of bringing the models up to the level of detail and quality required for their automated processing. Models unleash their full potential only through
Andersen, Signe Hald; Hansen, Lars Gårn
Despite the strong and persistent influence of Gary Becker’s marriage model, the model does not completely explain the observed correlation between married women’s labor market participation and overall divorce rates. In this paper we show how a simple sociologically inspired extension of the model...... this aspect into Becker’s model, the model provides predictions of divorce rates and causes that fit more closely with empirical observations. (JEL: J1)...
Robins, Robert E.; Delisi, Donald P.
In Robins and Delisi (2008), a linear decay model, a new IGE model by Sarpkaya (2006), and a series of APA-based models were scored using data from three airports. This report is a guide to the APA-based models.
Zhao, Jun; Lu, Jun
In this paper we propose an efficient method to enhance contrast in real time in digital video streams by exploiting histogram variances and adaptively adjusting gamma curves. The proposed method aims to overcome the limitations of the conventional histogram equalization method, which often produces noisy, unrealistic effects in images. To improve visual quality, we use the gamma correction technique and choose different gamma curves according to the histogram variance of the images. With this scheme, the details of an image can be enhanced while the mean brightness level is kept. Experimental results demonstrate that our method is simple, efficient, and robust for both low- and high-dynamic scenes, and hence well suited for real-time, high-bit-depth video acquisition.
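The variance-driven gamma selection described above can be sketched roughly as follows. The variance threshold and the two gamma values are assumed placeholders for illustration, not the paper's tuned parameters:

```python
import numpy as np

def adaptive_gamma(frame, low_var_gamma=0.7, high_var_gamma=1.3, var_threshold=0.04):
    """Enhance contrast by picking a gamma curve from the histogram variance,
    while rescaling to keep the mean brightness level (illustrative sketch)."""
    img = frame.astype(np.float64) / 255.0
    hist, _ = np.histogram(img, bins=256, range=(0.0, 1.0))
    hist = hist / hist.sum()                      # normalized histogram
    centers = (np.arange(256) + 0.5) / 256
    mean = (hist * centers).sum()
    var = (hist * (centers - mean) ** 2).sum()    # histogram variance
    # flat, low-contrast histogram -> brightening gamma; spread histogram -> deepening gamma
    gamma = low_var_gamma if var < var_threshold else high_var_gamma
    out = img ** gamma
    out *= mean / max(out.mean(), 1e-8)           # preserve mean brightness
    return np.clip(out * 255.0, 0.0, 255.0).astype(np.uint8)

# a dark, zero-variance frame passes through with its brightness preserved
enhanced = adaptive_gamma(np.full((8, 8), 40, dtype=np.uint8))
```

Per-frame cost is a single histogram plus a power lookup, which is what makes the approach viable for real-time streams.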
Fernando Augusto de Souza
The aim of this research was to evaluate the influence of the number and position of nutrient levels used in dose-response trials on the estimation of the optimal level (OL) and on the goodness of fit of the following models: quadratic polynomial (QP), exponential (EXP), linear response plateau (LRP) and quadratic response plateau (QRP). Data came from dose-response trials performed at FCAV-Unesp Jaboticabal, considering homogeneity of variances and normal distribution. The fit of the models was evaluated with the following statistics: adjusted coefficient of determination (R²adj), coefficient of variation (CV) and the sum of squared deviations (SSD). For the QP and EXP models, small changes in the placement and distribution of the levels caused large changes in the estimation of the OL. The LRP model was strongly influenced by the absence or presence of a level between the response and stabilization phases (the change from the straight line to the plateau). The QRP model needed more levels in the response phase, and the last level in the stabilization phase, to estimate the plateau correctly. It was concluded that the OL and the fit of the models depend on the positioning and the number of the levels and on the specific characteristics of each model, but levels placed near the true requirement and not too widely spaced are better for estimating the OL.
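The LRP fit and its sensitivity to the breakpoint can be illustrated with a minimal sketch. The grid-search fitting procedure and the dose-response data below are hypothetical illustrations, not the trial's method or data:

```python
import numpy as np

def fit_lrp(x, y):
    """Fit a linear response plateau y = a + b*min(x, x0) by scanning
    candidate breakpoints x0 (simple grid-search sketch; a real analysis
    would use non-linear least squares)."""
    best = None
    for x0 in np.linspace(x.min() + 1e-6, x.max(), 200):
        xc = np.minimum(x, x0)                      # clamp levels at the plateau
        A = np.column_stack([np.ones_like(x), xc])  # intercept + slope design
        coef, *_ = np.linalg.lstsq(A, y, rcond=None)
        ssd = float(np.sum((A @ coef - y) ** 2))    # sum of squared deviations
        if best is None or ssd < best[0]:
            best = (ssd, coef[0], coef[1], x0)
    ssd, a, b, x0 = best
    return a, b, x0, ssd

# hypothetical dose-response data: linear rise, then a plateau near level 2
levels = np.array([0.0, 0.5, 1.0, 1.5, 2.0, 2.5, 3.0])
resp = np.array([1.0, 1.5, 2.0, 2.5, 2.9, 3.0, 3.0])
a, b, x0, ssd = fit_lrp(levels, resp)   # x0 estimates the optimal level (OL)
```

Removing or shifting the level nearest the breakpoint moves the estimated x0 noticeably, which is exactly the sensitivity the study reports for the LRP model.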
CTE has two prominent components: the pathophysiology that is detected in the brain postmortem and the symptomology that is present in the interval between retirement and end of life. CTE symptomology has been noted to include memory difficulties, aggression, depression, explosivity, and executive dysfunction at early stages progressing to problems with attention, mood swings, visuospatial difficulties, confusion, progressive dementia, and suicidality (e.g. McKee et al. (2012), Omalu et al. (2010a-c), McKee et al. (2009)). There are a number of assumptions embedded within the current CTE literature: The first is the assumption that CTE symptomology reported by athletes and their families is the product of the pathophysiology change detected post-mortem (e.g. McKee et al. (2009)). At present, there is little scientific evidence to suggest that all CTE symptomology is the product of CTE pathophysiology. It has been assumed that CTE pathophysiology causes CTE symptomology (Meehan et al. (2015), Iverson et al. (2016)) but this link has never been scientifically validated. The purpose of the present work is to provide a multi-factorial theoretical framework to account for the symptomology reported by some athletes who sustain neurotrauma during their careers that will lead to a more systematic approach to understanding post-career symptomology. There is significant overlap between the case reports of athletes with post-mortem diagnoses of CTE, and symptom profiles of those with a history of substance use, chronic pain, and athlete career transition stress. The athlete post-career adjustment (AP-CA) model is intended to explain some of the symptoms that athletes experience at the end of their careers or during retirement. The AP-CA model consists of four elements: neurotrauma, chronic pain, substance use, and career transition stress. Based on the existing literature, it is clear that any one of the four elements of the AP-CA model can account for a significant number of
Manturov, G.; Semenov, M.; Seregin, A.; Lykova, L.
The BFS-62 critical experiments are currently used as a benchmark for verification of IPPE codes and nuclear data, which have been used in the study of loading a significant amount of Pu in fast reactors. The BFS-62 experiments were performed at the BFS-2 critical facility of IPPE (Obninsk). The experimental program was arranged in such a way that the effect of replacing the uranium dioxide blanket by a steel reflector, as well as the effect of replacing UOX by MOX, on the main characteristics of the reactor model was studied. A wide experimental program was fulfilled in the core, including measurements of criticality (keff), spectral indices, radial and axial fission rate distributions, control rod mock-up worth, sodium void reactivity effect (SVRE) and some other important nuclear physics parameters. A series of 4 BFS-62 critical assemblies was designed for studying the changes in BN-600 reactor physics from the existing state to a hybrid core. All the assemblies model the reactor state prior to refueling, i.e. with all control rod mock-ups withdrawn from the core. The following items are chosen for analysis in this report: description of the critical assembly BFS-62-3A as the 3rd assembly in a series of 4 BFS critical assemblies studying the BN-600 reactor with a MOX-UOX hybrid zone and steel reflector; development of a 3D homogeneous calculation model for the BFS-62-3A critical experiment as the mock-up of the BN-600 reactor with hybrid zone and steel reflector; evaluation of the measured nuclear physics parameters keff and SVRE (sodium void reactivity effect); preparation of adjusted equivalent measured values for keff and SVRE. The main series of calculations was performed using the 3D HEX-Z diffusion code TRIGEX in 26 groups, with the ABBN-93 cross-section set. In addition, precise calculations were made, in 299 groups and Ps-approximation in scattering, with the Monte Carlo code MMKKENO and the discrete ordinate code TWODANT. All calculations are based on the common system
How well parameterization can improve gross primary production (GPP) estimation using the MODerate-resolution Imaging Spectroradiometer (MODIS) algorithm has rarely been investigated. We adjusted the parameters in the algorithm for 21 selected eddy-covariance flux towers representing nine typical plant functional types (PFTs). We then compared these estimates with those of the MOD17A2 product, with those produced by the MODIS algorithm with default parameters from the Biome Property Look-Up Table, and with those of a two-leaf Farquhar model. The results indicate that optimizing the maximum light use efficiency (εmax) in the algorithm improves GPP estimation, especially for deciduous vegetation, though it cannot compensate for the underestimation during summer caused by the one-leaf upscaling strategy. Adding a soil water factor to the algorithm does not significantly affect performance, but it makes the adjusted εmax more robust for sites with the same PFT and among different PFTs. Even with adjusted parameters, neither one-leaf nor two-leaf models capture seasonal photosynthetic dynamics; we therefore suggest that further improvement in GPP estimation requires taking into consideration seasonal variations of the key parameters and variables.
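The MODIS light-use-efficiency logic referred to above can be sketched as follows. The ramp limits and εmax value are illustrative BPLUT-style numbers, not the adjusted parameters from this study, and the units are simplified for the example:

```python
def ramp(x, lo, hi):
    """Linear ramp scalar between 0 and 1, as used for the algorithm's
    attenuation scalars."""
    if x <= lo:
        return 0.0
    if x >= hi:
        return 1.0
    return (x - lo) / (hi - lo)

def gpp_lue(eps_max, fpar, sw_rad, tmin, vpd,
            tmin_lo=-8.0, tmin_hi=11.4, vpd_lo=650.0, vpd_hi=3100.0):
    """GPP via the light-use-efficiency logic: eps_max, down-regulated by
    cold temperature and high vapour pressure deficit, times absorbed PAR."""
    par = 0.45 * sw_rad                    # PAR taken as ~45% of shortwave radiation
    f_t = ramp(tmin, tmin_lo, tmin_hi)     # cold-temperature attenuation (0..1)
    f_v = 1.0 - ramp(vpd, vpd_lo, vpd_hi)  # high-VPD attenuation (1..0)
    return eps_max * f_t * f_v * fpar * par

gpp = gpp_lue(eps_max=0.001, fpar=0.8, sw_rad=25000.0, tmin=10.0, vpd=1000.0)
```

Adjusting εmax per site, as the study does, simply rescales the whole product, which is why it cannot fix seasonal shape errors caused by the one-leaf upscaling.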
White, Jules; Gray, Jeff; Schmidt, Douglas C.
Aspect-oriented modeling (AOM) is a promising technique for untangling the concerns of complex enterprise software systems. AOM decomposes the crosscutting concerns of a model into separate models that can be woven together to form a composite solution model. In many domains, such as multi-tiered e-commerce web applications, separating concerns is much easier than deducing the proper way to weave the concerns back together into a solution model. For example, modeling the types and sizes of caches that can be leveraged by a Web application is much easier than deducing the optimal way to weave the caches back into the solution architecture to achieve high system throughput.
Owing to its effectiveness in terms of data representation and the quality of modeling results, hydrological models are usually embedded in a Geographical Information System (GIS) environment to simulate various parameters attributed to a selected catchment. GIS is a complex technology highly suitable for spatial-temporal data analyses and information extraction.
Yoo, Yang Gyeong; Lee, In Soo
Self-esteem and school adjustment of children in the lower grades of primary school, the beginning stage of school life, are closely related to the development of personality, mental health and character. Therefore, the present study aimed to verify the effect of a school-based Maum Meditation program, as a personality education program, on children in the lower grades of primary school. The results showed that the experimental group, to which the Maum Meditation program was applied, had significant improvements in self-esteem and school adjustment compared to the control group. In conclusion, since the study provides significant evidence that the Maum Meditation program had positive effects on self-esteem and school adjustment of children in the early stage of primary school, it is suggested to actively employ Maum Meditation as a school-based meditation program for promoting the mental health of children in the early school ages, the stage in which personalities and habits are formed. PMID:23777717
A rule-based decision making model is designed in the G2 environment. A theoretical and methodological frame for the model is composed and motivated. The rule-based decision making model is based on object-oriented modelling, knowledge engineering and decision theory. The idea of the safety objective tree is utilized. Advanced rule-based methodologies are applied. A general decision making model, the 'decision element', is constructed. The strategy planning of the decision element is based on, e.g., value theory and utility theory. A hypothetical process model is built to give input data for the decision element. The basic principle of the object model in decision making is division into tasks. Probability models are used in characterizing component availabilities. Bayes' theorem is used to recalculate the probability figures when new information is obtained. The model includes simple learning features to save the solution path. A decision analytic interpretation is given to the decision making process. (author)
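The Bayes'-theorem recalculation of component availability mentioned above can be illustrated with a minimal sketch; the diagnostic-test probabilities used here are hypothetical:

```python
def bayes_update(prior_avail, p_pass_given_avail, p_pass_given_failed, passed):
    """Posterior probability that a component is available, given a test result.

    Applies Bayes' theorem: P(A|T) = P(T|A) P(A) / P(T)."""
    if passed:
        num = p_pass_given_avail * prior_avail
        den = num + p_pass_given_failed * (1.0 - prior_avail)
    else:
        num = (1.0 - p_pass_given_avail) * prior_avail
        den = num + (1.0 - p_pass_given_failed) * (1.0 - prior_avail)
    return num / den

# prior availability 0.9; the diagnostic passes 95% of available components
# but also 20% of failed ones (false pass)
post_pass = bayes_update(0.9, 0.95, 0.20, passed=True)    # belief rises
post_fail = bayes_update(0.9, 0.95, 0.20, passed=False)   # belief drops sharply
```

Chaining such updates as new test results arrive is exactly the recalculation step the model performs when new information is obtained.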
Kor-Anantakul, Ounjai; Suntharasaj, Thitima; Suwanrath, Chitkasaem; Hanprasertpong, Tharangrut; Pranpanus, Savitree; Pruksanusak, Ninlapa; Janwadee, Suthiraporn; Geater, Alan
To establish normative weight-adjusted models for the median levels of first trimester serum biomarkers for trisomy 21 screening in southern Thai women, and to compare these reference levels with Caucasian-specific and northern Thai models. A cross-sectional study was conducted in 1,150 normal singleton pregnancies to determine serum pregnancy-associated plasma protein-A (PAPP-A) and free β-human chorionic gonadotropin (β-hCG) concentrations in women from southern Thailand. The predicted median values were compared with published equations for Caucasians and northern Thai women. The best-fitting regression equations for the expected median serum levels of PAPP-A (mIU/L) and free β-hCG (ng/mL) according to maternal weight (Wt in kg) and gestational age (GA in days) were: [Formula: see text] and [Formula: see text]. Both equations were selected with a statistically significant contribution. Compared with the Caucasian model, the median values of PAPP-A were higher and the median values of free β-hCG were lower in the southern Thai women. Compared with the northern Thai models, the median values of both biomarkers were lower in southern Thai women. The study successfully developed maternal-weight- and gestational-age-adjusted median normative models to convert PAPP-A and free β-hCG levels into their Multiple of the Median equivalents in southern Thai women. These models confirmed ethnic differences.
Péter Przemyslaw Ujma
Sleep spindles are frequently studied for their relationship with state and trait cognitive variables, and they are thought to play an important role in sleep-related memory consolidation. Due to their frequent occurrence in NREM sleep, the detection of sleep spindles is only feasible using automatic algorithms, many of which are available. We compared subject averages of the spindle parameters computed by a fixed-frequency (11-13 Hz for slow spindles, 13-15 Hz for fast spindles) automatic detection algorithm and by the individual adjustment method (IAM), which uses individual frequency bands for sleep spindle detection. Fast spindle duration and amplitude are strongly correlated between the two algorithms, but there is little overlap in fast spindle density and in slow spindle parameters in general. The agreement between fixed and manually determined sleep spindle frequencies is limited, especially in the case of slow spindles. This is the most likely reason for the poor agreement between the two detection methods on slow spindle parameters. Our results suggest that while various algorithms may reliably detect fast spindles, a more sophisticated algorithm primed to individual spindle frequencies is necessary for the detection of slow spindles, as well as of individual variations in the number of spindles in general.
A. K. M. Arifuzzman
An ultralow current sensor system based on the Izhikevich neuron model is presented in this paper. The Izhikevich neuron model was chosen for its superior computational efficiency and greater biological plausibility over other well-known neuron spiking models. Of the many biological neuron spiking features, regular spiking, chattering, and neostriatal spiny projection spiking have been reproduced by adjusting the parameters associated with the model. This paper also presents a modified interpretation of the regular spiking feature, in which the firing pattern is similar to regular spiking but with an improved dynamic range. The sensor current ranges between 2 pA and 8 nA, with linearity values between 0.9665 and 0.9989 for the different spiking features. The efficacy of the sensor system in detecting minute currents, along with its high linearity, makes it very suitable for biomedical applications.
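A minimal simulation of the Izhikevich model shows how the (a, b, c, d) parameters select the spiking features mentioned above. The parameter values follow the commonly published regular-spiking and chattering presets; the current-sensor front end itself is not modeled here:

```python
def izhikevich(a, b, c, d, I=10.0, dt=0.25, steps=4000):
    """Simulate one Izhikevich neuron and return spike times (ms).

    Dynamics: v' = 0.04*v^2 + 5*v + 140 - u + I,  u' = a*(b*v - u);
    on v >= 30 mV the neuron spikes and resets: v <- c, u <- u + d."""
    v, u = -65.0, b * -65.0
    spikes = []
    for k in range(steps):
        # integrate v in two half-steps for numerical stability (common practice)
        v += 0.5 * dt * (0.04 * v * v + 5.0 * v + 140.0 - u + I)
        v += 0.5 * dt * (0.04 * v * v + 5.0 * v + 140.0 - u + I)
        u += dt * a * (b * v - u)
        if v >= 30.0:                 # spike: record time and reset
            spikes.append(k * dt)
            v, u = c, u + d
    return spikes

regular = izhikevich(0.02, 0.2, -65.0, 8.0)   # regular spiking preset
chatter = izhikevich(0.02, 0.2, -50.0, 2.0)   # chattering (bursting) preset
```

Only the reset target c and the recovery jump d differ between the two presets, which is what makes the model attractive for a low-cost hardware implementation: one circuit, several firing patterns.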
Whitehouse, Pippa L.; Bentley, Michael J.; Milne, Glenn A.; King, Matt A.; Thomas, Ian D.
We present a glacial isostatic adjustment (GIA) model for Antarctica. This is driven by a new deglaciation history that has been developed using a numerical ice-sheet model, and is constrained to fit observations of past ice extent. We test the sensitivity of the GIA model to uncertainties in the deglaciation history, and seek earth model parameters that minimize the misfit of model predictions to relative sea-level observations from Antarctica. We find that the relative sea-level predictions are fairly insensitive to changes in lithospheric thickness and lower mantle viscosity, but show high sensitivity to changes in upper mantle viscosity, and constrain this value (95 per cent confidence) to lie in the range 0.8-2.0 × 10²¹ Pa s. Significant misfits at several sites may be due to errors in the deglaciation history, or unmodelled effects of lateral variations in Earth structure. When we compare our GIA model predictions with elastic-corrected GPS uplift rates we find that the predicted rates are biased high (weighted mean bias = 1.8 mm yr⁻¹) and there is a weighted root-mean-square (WRMS) error of 2.9 mm yr⁻¹. In particular, our model systematically over-predicts uplift rates in the Antarctic Peninsula, and we attempt to address this by adjusting the Late Holocene loading history in this region, within the bounds of uncertainty of the deglaciation model. Using this adjusted model the weighted mean bias improves from 1.8 to 1.2 mm yr⁻¹, and the WRMS error is reduced to 2.3 mm yr⁻¹, compared with 4.9 mm yr⁻¹ for ICE-5G v1.2 and 5.0 mm yr⁻¹ for IJ05. Finally, we place spatially variable error bars on our GIA uplift rate predictions, taking into account uncertainties in both the deglaciation history and modelled Earth viscosity structure. This work provides a new GIA correction for the GRACE data in Antarctica, thus permitting more accurate constraints to be placed on current ice-mass change.
Breno Rodrigues Mendes
This study generated individual-tree non-linear models from a differential equation and evaluated their goodness of fit in expressing basal area growth. The database comes from the continuous forest inventory of clonal Eucalyptus spp. plantations provided by the Aracruz Cellulose Company, located in the Brazilian coastal region, in the states of Bahia and Espirito Santo. Model precision was verified by the likelihood ratio test, by the mean square error (MSE) and by graphical residual analysis. The results showed that the complete model with 3 parameters, developed from the original single-regressor model, was superior to the other models owing to the inclusion of stand-level variables such as clone, total height (HT), dominant height (HD), quadratic diameter (Dg), basal area (G), site index (IS) and density (N), generating a new model called Complete Model III. The improvement in precision was highly significant when compared to the other models. Consequently, this model provides information with a high degree of precision and accuracy for forest company planning.
Trejos, Ana María; Reyes, Lizeth; Bahamon, Marly Johana; Alarcón, Yolima; Gaviria, Gladys
A study in five Colombian cities in 2006 confirms the findings of other international studies: the majority of HIV-positive children do not know their diagnosis, and caregivers are reluctant to give this information because they believe the news will cause emotional distress to the child. The primary purpose of this study was therefore to validate a disclosure model. We implemented a clinical model, referred to as "DIRE", which hypothetically had normalizing effects on psychological adjustment and adherence to antiretroviral treatment of HIV-seropositive children, using a quasi-experimental design. Tests were administered (a questionnaire assessing patterns of disclosure and non-disclosure of the HIV/AIDS diagnosis to children, given to health professionals and participating caregivers; the Family APGAR; the EuroQol EQ-5D; the MOS Social Support Survey; a questionnaire on HIV/AIDS treatment information; and the Child Behavior Checklist CBCL/6-18 adapted for Latinos) before and after implementation of the model to 31 children (n=31), 30 caregivers (n=30) and 41 health professionals. Data processing was performed using the Statistical Package for the Social Sciences, version 21, applying nonparametric (Friedman) and parametric (Student's t) tests. No significant differences were found in adherence to treatment (p=0.392); in psychological adjustment, significant positive differences were found at follow-ups compared to baseline at 2 weeks (p=0.001), 3 months (p=0.000) and 6 months (p=0.000). The clinical model demonstrated effectiveness in normalizing psychological adjustment and maintaining treatment compliance. The process also generated confidence in caregivers and health professionals in this difficult task.
Yoo, Hyung Chol; Miller, Matthew J; Yip, Pansy
There is limited research examining psychological correlates of a uniquely racialized experience of the model minority stereotype faced by Asian Americans. The present study examined the factor structure and fit of the only published measure of the internalization of the model minority myth, the Internalization of the Model Minority Myth Measure (IM-4; Yoo et al., 2010), with a sample of 155 Asian American high school adolescents. We also examined the link between internalization of the model minority myth types (i.e., myth associated with achievement and myth associated with unrestricted mobility) and psychological adjustment (i.e., affective distress, somatic distress, performance difficulty, academic expectations stress), and the potential moderating effect of academic performance (cumulative grade point average). Results suggested the 2-factor model of the IM-4 had an acceptable fit to the data and supported the factor structure using confirmatory factor analyses. Internalizing the model minority myth of achievement related positively to academic expectations stress; however, internalizing the model minority myth of unrestricted mobility related negatively to academic expectations stress, both controlling for gender and academic performance. Finally, academic performance moderated the model minority myth associated with unrestricted mobility and affective distress link and the model minority myth associated with achievement and performance difficulty link. These findings highlight the complex ways in which the model minority myth relates to psychological outcomes.
Most systems involve parameters and variables, which are random variables due to uncertainties. Probabilistic methods are powerful in modelling such systems. In this second part, we describe probabilistic models and Monte Carlo simulation along with 'classical' matrix methods and differential equations as most real ...
A familiar example of a feedback loop is the business model in which part of the output or profit is fed back as input or additional capital - for instance, a company may choose to reinvest 10% of the profit for expansion of the business. Such simple models, like ..... would help scientists, engineers and managers towards better.
To meet regulatory requirements, spectral unfolding codes must not only provide reliable estimates for spectral parameters, but must also be able to determine the uncertainties associated with these parameters. The newer codes, which are more appropriately called adjustment codes, use the least squares principle to determine estimates and uncertainties. The principle is simple and straightforward, but there are several different mathematical models to describe the unfolding problem. In addition to a sound mathematical model, ease of use and range of options are important considerations in the construction of adjustment codes. Based on these considerations, a least squares adjustment code for neutron spectrum unfolding was constructed some time ago and tentatively named LSL
Jungwirth, Patrick; Badawy, Abdel-Hameed
We present an introduction to model based design. Model based design is a visual representation, generally a block diagram, used to model and incrementally develop a complex system. It is a commonly used design methodology for digital signal processing, control systems, and embedded systems. The philosophy of model based design is to solve a problem one step at a time, an approach comparable to a series of steps converging to a solution. A block diagram simulation tool allows a design to be simulated with real-world measurement data. For example, if an analog control system is being upgraded to a digital control system, the analog sensor input signals can be recorded, the digital control algorithm can be simulated with the real-world sensor data, and the output of the simulated digital control system can then be compared to that of the old analog control system. Model based design can be compared to Agile software development: the Agile goal is to develop working software in incremental steps, with progress measured in completed and tested code units; in model based design, progress is measured in completed and tested blocks. We present a concept for a video game controller and then use model based design to iterate the design towards a working system. We also describe a model based design effort to develop an OS Friendly Microprocessor Architecture based on RISC-V.
Young, Patricia A.
Recent trends reveal that models of culture aid in mapping the design and analysis of information and communication technologies. Therefore, models of culture are powerful tools to guide the building of instructional products and services. This research examines the construction of the culture based model (CBM), a model of culture that evolved…
Schmidt, Silvio; Kemfert, Claudia; Hoeppe, Peter
Economic losses caused by tropical cyclones have increased dramatically. Historical changes in losses are a result of meteorological factors (changes in the incidence of severe cyclones, whether due to natural climate variability or as a result of human activity) and socio-economic factors (increased prosperity and a greater tendency for people to settle in exposed areas). This paper aims to isolate the socio-economic effects and ascertain the potential impact of climate change on this trend. Storm losses for the period 1950-2005 have been adjusted to the value of capital stock in 2005 so that any remaining trend cannot be ascribed to socio-economic developments. For this, we introduce a new approach to adjusting losses based on the change in capital stock at risk. Storm losses are mainly determined by the intensity of the storm and the material assets, such as property and infrastructure, located in the region affected. We therefore adjust the losses to exclude increases in the capital stock of the affected region. No trend is found for the period 1950-2005 as a whole. In the period 1971-2005, since the beginning of a trend towards increased intense cyclone activity, losses excluding socio-economic effects show an increase of 4% per annum. This increase must therefore be at least due to the impact of natural climate variability but, more likely than not, also due to anthropogenic forcings.
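The capital-stock adjustment described above amounts to a simple rescaling of each historical loss by the ratio of reference-year to event-year capital stock in the affected region. A minimal sketch, with hypothetical figures (the paper's actual capital-stock series is not reproduced in this abstract):

```python
def normalize_loss(loss, capital_stock_event_year, capital_stock_ref_year):
    """Scale a historical storm loss to reference-year (e.g. 2005) values of
    capital stock at risk, so that any remaining trend in the adjusted series
    cannot be ascribed to socio-economic growth."""
    return loss * capital_stock_ref_year / capital_stock_event_year

# A loss of 100 (monetary units) in a year when the affected region held
# 40 units of capital stock, expressed relative to a reference year with
# 120 units of capital stock:
adjusted = normalize_loss(100.0, 40.0, 120.0)
print(adjusted)  # 300.0
```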
Helena M. Hauss
Experiments that directly test larval fish individual-based model (IBM) growth predictions are uncommon, since it is difficult to simultaneously measure all relevant metabolic and behavioural attributes. We compared observed and modelled somatic growth of larval herring (Clupea harengus) in short-term (50 degree-day) laboratory trials conducted at 7 and 13°C, in which larvae were either unfed or fed ad libitum on different prey sizes (~100 to 550 µm copepods, Acartia tonsa). The larval specific growth rate (SGR, % DW d-1) was generally overestimated by the model, especially for larvae foraging on large prey items. Model parameterisations were adjusted to explore the effect of (1) temporal variability in foraging of individuals, and (2) reduced assimilation efficiency due to rapid gut evacuation at high feeding rates. With these adjustments, the model described larval growth well across temperatures, prey sizes, and larval sizes. Although the experiments performed verified the growth model, variability in growth and foraging behaviour among larvae shows that it is necessary to measure both the physiology and feeding behaviour of the same individual. This is a challenge for experimentalists but will ultimately yield the most valuable data to adequately model environmental impacts on the survival and growth of marine fish early life stages.
Kim, Byeongchang; Lee, Jinsik; Lee, Gary Geunbae
One of the enduring problems in developing a high-quality TTS (text-to-speech) system is pitch contour generation. Taking language-specific knowledge into account, an adjusted Fujisaki model for a Korean TTS system is introduced, along with refined machine learning features. The results of quantitative and qualitative evaluations show the validity of our system: the accuracy of phrase command prediction is 0.8928; the correlations of the predicted amplitudes of a phrase command and an accent command are 0.6644 and 0.6002, respectively; and our method achieved a "fair" level of naturalness (3.6) on a MOS scale for generated F0 curves.
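The standard Fujisaki formulation that such an adjusted model builds on superposes a baseline value, phrase components, and accent components in the log-F0 domain. A minimal sketch of that standard decomposition; all parameter values below are illustrative defaults, not the paper's Korean-specific fits:

```python
import math

def phrase_comp(t, alpha=2.0):
    # Impulse response of the phrase control mechanism: Gp(t) = a^2 t e^(-a t)
    return alpha**2 * t * math.exp(-alpha * t) if t >= 0 else 0.0

def accent_comp(t, beta=20.0, gamma=0.9):
    # Step response of the accent control mechanism, ceiling-limited at gamma
    return min(1.0 - (1.0 + beta * t) * math.exp(-beta * t), gamma) if t >= 0 else 0.0

def fujisaki_lnf0(t, fb, phrases, accents):
    """ln F0(t) = ln Fb + sum of phrase components + sum of accent components.
    phrases: list of (onset_time, amplitude);
    accents: list of (onset_time, offset_time, amplitude)."""
    val = math.log(fb)
    val += sum(ap * phrase_comp(t - t0) for t0, ap in phrases)
    val += sum(aa * (accent_comp(t - t1) - accent_comp(t - t2))
               for t1, t2, aa in accents)
    return val

# F0 at t = 0.5 s, base frequency 120 Hz, one phrase and one accent command:
f0 = math.exp(fujisaki_lnf0(0.5, 120.0, [(0.0, 0.5)], [(0.2, 0.6, 0.4)]))
```

A TTS front end would predict the command onsets and amplitudes (the quantities whose prediction accuracy the abstract reports) and then evaluate this superposition to generate the F0 curve.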
Moisan, Emmanuel; Charbonnier, Pierre; Foucher, Philippe; Grussenmeyer, Pierre; Guillemin, Samuel; Koehl, Mathieu
In this paper, we focus on the construction of a full 3D model of a canal tunnel by combining terrestrial laser (for its above-water part) and sonar (for its underwater part) scans collected from static acquisitions. The modeling of such a structure is challenging because the sonar device is used in a narrow environment that induces many artifacts. Moreover, the location and the orientation of the sonar device are unknown. In our approach, sonar data are first simultaneously denoised and meshed. Then, above- and under-water point clouds are co-registered to generate directly the full 3D model of the canal tunnel. Faced with the lack of overlap between both models, we introduce a robust algorithm that relies on geometrical entities and partially-immersed targets, which are visible in both the laser and sonar point clouds. A full 3D model, visually promising, of the entrance of a canal tunnel is obtained. The analysis of the method raises several improvement directions that will help with obtaining more accurate models, in a more automated way, in the limits of the involved technology.
National Aeronautics and Space Administration — The innovation is the systematic "building-block" validation of CFD/turbulence models employing a GUI driven CFD code (RPFM) and existing as well as new data sets to...
Models become much more efficient and effective when coupled with knowledge design advisors (slide fragments list CAD, fit, machine motion, KanBan trigger, tolerance-based enterprise geometry, kinematics, control, physics, and planning system models).
Sun, W; Jiang, M; Yin, F [Duke University Medical Center, Durham, NC (United States)
Purpose: Dynamic tracking of moving organs, such as lung and liver tumors, under radiation therapy requires prediction of organ motion prior to delivery. The displacement of a moving organ can change considerably because the respiratory pattern varies between periods. This study aims to reduce the influence of those changes using adjustable training signals and multi-layer perceptron neural networks (ASMLP). Methods: Respiratory signals obtained using a Real-time Position Management (RPM) device were used for this study. The ASMLP uses two multi-layer perceptron neural networks (MLPs) to infer the respiration position alternately, with the training sample updated over time. First, a Savitzky-Golay finite impulse response smoothing filter was applied to smooth the respiratory signal. Second, two identical MLPs were developed to estimate the respiratory position from its previous positions separately. Weights and thresholds were updated to minimize network errors according to the Levenberg-Marquardt optimization algorithm through backward propagation. Finally, MLP 1 was used to predict the 120-150 s respiration positions using the 0-120 s training signals; at the same time, MLP 2 was trained using the 30-150 s signals and then used to predict the 150-180 s positions. The respiration position was predicted in this way until the signal was finished. Results: In this experiment, the two methods were used to predict 2.5 minutes of respiratory signals. For predicting 1 s ahead of response time, the correlation coefficient was improved from 0.8250 (MLP method) to 0.8856 (ASMLP method). In addition, a 30% improvement in mean absolute error between MLP (0.1798 on average) and ASMLP (0.1267 on average) was achieved. For predicting 2 s ahead of response time, the correlation coefficient was improved from 0.61415 to 0.7098, and the mean absolute error of the MLP method (0.3111 on average) was reduced by 35% using the ASMLP method (0.2020 on average). Conclusion: The preliminary results
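The alternating, sliding-window training scheme can be sketched with a linear AR(2) predictor standing in for each MLP. This is a deliberate simplification under stated assumptions: the study itself trains multi-layer perceptrons with Levenberg-Marquardt on RPM signals, while here a least-squares autoregressive fit and a synthetic sinusoid merely illustrate the window-shifting idea:

```python
import math

def fit_ar2(signal):
    # Least-squares fit of x[t] ~ a*x[t-1] + b*x[t-2] via 2x2 normal equations.
    s11 = s12 = s22 = r1 = r2 = 0.0
    for t in range(2, len(signal)):
        x1, x2, y = signal[t - 1], signal[t - 2], signal[t]
        s11 += x1 * x1; s12 += x1 * x2; s22 += x2 * x2
        r1 += x1 * y;   r2 += x2 * y
    det = s11 * s22 - s12 * s12
    return (r1 * s22 - r2 * s12) / det, (r2 * s11 - r1 * s12) / det

def predict_ahead(model, history, steps):
    # Recursively extend the signal from its last two observed samples.
    a, b = model
    out = list(history[-2:])
    for _ in range(steps):
        out.append(a * out[-1] + b * out[-2])
    return out[2:]

# Synthetic "respiratory" signal: a slow sinusoid (one sample per time step).
signal = [math.sin(0.1 * t) for t in range(200)]

# Alternate two models over shifting training windows, as in ASMLP:
m1 = fit_ar2(signal[0:120])    # trained on samples 0-119, predicts 120-149
m2 = fit_ar2(signal[30:150])   # trained on samples 30-149, predicts 150-179
pred = predict_ahead(m1, signal[:120], 30) + predict_ahead(m2, signal[:150], 30)
```

Each model only ever predicts a segment just beyond its own (recently shifted) training window, which is what lets the scheme adapt when the breathing pattern drifts.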
Langseth, Brian J.; Jones, Michael L.; Riley, Stephen C.
Ecopath with Ecosim (EwE) is a widely used modeling tool in fishery research and management. Ecopath requires a mass-balanced snapshot of a food web at a particular point in time, which Ecosim then uses to simulate changes in biomass over time. Initial inputs to Ecopath, including estimates for biomasses, production to biomass ratios, consumption to biomass ratios, and diets, rarely produce mass balance, and thus ad hoc changes to inputs are required to balance the model. There has been little previous research into whether ad hoc changes made to achieve mass balance affect Ecosim simulations. We constructed an EwE model for the offshore community of Lake Huron and balanced the model using four contrasting but realistic methods. The four balancing methods were based on two contrasting approaches: in the first approach, production of unbalanced groups was increased by increasing either biomass or the production to biomass ratio, while in the second approach, consumption of predators on unbalanced groups was decreased by decreasing either biomass or the consumption to biomass ratio. We compared six simulation scenarios based on three alternative assumptions about the extent to which mortality rates of prey can change in response to changes in predator biomass (i.e., vulnerabilities) under perturbations to either fishing mortality or environmental production. Changes in simulated biomass values over time were used in a principal components analysis to assess the comparative effect of balancing method, vulnerabilities, and perturbation types. Vulnerabilities explained the most variation in biomass, followed by the type of perturbation. Choice of balancing method explained little of the overall variation in biomass. Under scenarios where changes in predator biomass caused large changes in mortality rates of prey (i.e., high vulnerabilities), variation in biomass was greater than when changes in predator biomass caused only small changes in mortality rates of prey (i.e., low
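Mass balance in Ecopath reduces, per group, to requiring that the ecotrophic efficiency EE (the fraction of production accounted for by predation and catch) not exceed 1. A toy check with hypothetical values illustrates the levers the two balancing approaches described above adjust:

```python
def ecotrophic_efficiency(B, PB, predation, catch=0.0):
    """EE = (predation + catch) / production, where production = B * (P/B).
    A group is balanced when EE <= 1; EE > 1 means more of the group is
    consumed than is produced."""
    return (predation + catch) / (B * PB)

# Hypothetical prey group and single predator (not Lake Huron values):
B, PB = 10.0, 1.2                      # prey biomass, production/biomass ratio
pred_B, pred_QB, diet = 4.0, 3.0, 0.8  # predator biomass, Q/B, diet fraction
predation = pred_B * pred_QB * diet    # consumption of this prey = 9.6

ee = ecotrophic_efficiency(B, PB, predation)  # 9.6 / 12.0 = 0.8 -> balanced
```

Approach 1 in the abstract raises the denominator (increase B or P/B of the unbalanced group); approach 2 lowers the numerator (decrease a predator's B or Q/B). Either moves EE below 1.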
Wadsworth, Martha E; Rindlaub, Laura; Hurwich-Reiss, Eliana; Rienks, Shauna; Bianco, Hannah; Markman, Howard J
This study tests key tenets of the Adaptation to Poverty-related Stress Model. This model (Wadsworth, Raviv, Santiago, & Etter, 2011 ) builds on Conger and Elder's family stress model by proposing that primary control coping and secondary control coping can help reduce the negative effects of economic strain on parental behaviors central to the family stress model, namely, parental depressive symptoms and parent-child interactions, which together can decrease child internalizing and externalizing problems. Two hundred seventy-five co-parenting couples with children between the ages of 1 and 18 participated in an evaluation of a brief family strengthening intervention, aimed at preventing economic strain's negative cascade of influence on parents, and ultimately their children. The longitudinal path model, analyzed at the couple dyad level with mothers and fathers nested within couple, showed very good fit, and was not moderated by child gender or ethnicity. Analyses revealed direct positive effects of primary control coping and secondary control coping on mothers' and fathers' depressive symptoms. Decreased economic strain predicted more positive father-child interactions, whereas increased secondary control coping predicted less negative mother-child interactions. Positive parent-child interactions, along with decreased parent depression and economic strain, predicted child internalizing and externalizing over the course of 18 months. Multiple-group models analyzed separately by parent gender revealed, however, that child age moderated father effects. Findings provide support for the adaptation to poverty-related stress model and suggest that prevention and clinical interventions for families affected by poverty-related stress may be strengthened by including modules that address economic strain and efficacious strategies for coping with strain.
Ahonen, Tero; Tamminen, Jussi; Ahola, Jero; Viholainen, Juha; Aranto, Niina; Kestilae, Juha
Pumps are widely used in industry, and they account for 20% of the industrial electricity consumption. Since the speed variation is often the most energy-efficient method to control the head and flow rate of a centrifugal pump, frequency converters are used with induction motor-driven pumps. Although a frequency converter can estimate the operational state of an induction motor without external measurements, the state of a centrifugal pump or other load machine is not typically considered. The pump is, however, usually controlled on the basis of the required flow rate or output pressure. As the pump operational state can be estimated with a general model having adjustable parameters, external flow rate or pressure measurements are not necessary to determine the pump flow rate or output pressure. Hence, external measurements could be replaced with an adjustable model for the pump that uses estimates of the motor operational state. Besides control purposes, modelling the pump operation can provide useful information for energy auditing and optimization purposes. In this paper, two model-based methods for pump operation estimation are presented. Factors affecting the accuracy of the estimation methods are analyzed. The applicability of the methods is verified by laboratory measurements and tests in two pilot installations. Test results indicate that the estimation methods can be applied to the analysis and control of pump operation. The accuracy of the methods is sufficient for auditing purposes, and the methods can inform the user if the pump is driven inefficiently.
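One common form of such an adjustable pump model combines the nominal QH characteristic curve with the affinity laws (Q scales with speed n, H with n²), so that flow can be estimated from the frequency converter's speed and head estimates without external meters. The sketch below uses hypothetical curve coefficients; the paper's actual estimation methods are not reproduced in this abstract:

```python
import math

def flow_from_head(head, speed, n0=1450.0, a=30.0, b=0.02):
    """Estimate flow rate from head and rotational speed.
    Nominal curve at speed n0: H = a - b*Q^2 (a, b are hypothetical
    coefficients from the pump's characteristic curve).
    Affinity laws give, at speed n: H = a*(n/n0)^2 - b*Q^2,
    hence Q = sqrt((a*(n/n0)^2 - H) / b)."""
    k = (speed / n0) ** 2
    return math.sqrt((a * k - head) / b)

# At nominal speed with 20 m of head: Q = sqrt((30 - 20) / 0.02) ~ 22.36
q = flow_from_head(head=20.0, speed=1450.0)
```

In practice the frequency converter supplies the speed and torque/head estimates, so this model replaces a physical flow meter, which is exactly the substitution the abstract argues for.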
Kolovos, Spyros; Bosmans, Judith E.; Riper, Heleen
eligible if they used a health economic model with quality-adjusted life-years or disability-adjusted life-years as an outcome measure. Data related to various methodological characteristics were extracted from the included studies. The available modelling techniques were evaluated based on 11 predefined ..., and DES models in seven. Conclusion: There were substantial methodological differences between the studies. Since the individual history of each patient is important for the prognosis of depression, DES and ISM simulation methods may be more appropriate than the others for a pragmatic representation...
The control system of a doubly-fed adjustable-speed pumped-storage hydropower plant needs phase-locked loops (PLLs) to obtain the phase angle of the grid voltage. The main drawback of a comb-filter-based phase-locked loop (CF-PLL) is its slow dynamic response. This paper presents a modified comb-filter-based phase-locked loop (MCF-PLL) that improves the pole-zero pattern of the comb filter, and gives a parameter-setting method for the controller based on the discrete model of the MCF-PLL. To improve the disturbance rejection of the MCF-PLL when the power grid's frequency changes, this paper also proposes a frequency-adaptive modified comb-filter-based phase-locked loop (FAMCF-PLL) and its digital implementation scheme. Experimental results show that the FAMCF-PLL has good steady-state and dynamic performance under distorted grid conditions. Furthermore, the FAMCF-PLL can lock onto the phase angle of the grid voltage when applied to a doubly-fed adjustable-speed pumped-storage hydropower experimental platform.
The growing complexity of software and the demand for shorter time to market are two important challenges facing today's IT industry. These challenges demand increases in both the productivity and quality of software. Model-based testing is a promising technique for meeting them. Traceability modeling is a key issue and challenge in model-based testing: relationships between the different models help to navigate from one model to another, and to trace back to the respective requirements and the design model when a test fails. In this paper, we present an approach for bridging the gaps between the different models in model-based testing. We propose a relation definition markup language (RDML) for defining the relationships between models.
Among international trade models, only firm-based trade models explain a firm's actions and behavior in world trade. Firm-based trade models focus on the trade behavior of the individual firms that actually conduct intra-industry trade, and can genuinely explain the globalization process. These approaches also encompass multinational corporations, supply chains, and outsourcing. Our paper aims to explain and analyze Turkish exports in the context of firm-based trade models. We use UNCTAD data on exports in the SITC Rev. 3 categorization to analyze total exports and 255 products, and to calculate the intensive and extensive margins of Turkish firms.
Jónsdóttir, Kristjana Ýr; Schmiegel, Jürgen; Jensen, Eva Bjørn Vedel
In the present paper, we give a condensed review, for the nonspecialist reader, of a new modelling framework for spatio-temporal processes, based on Lévy theory. We show the potential of the approach in stochastic geometry and spatial statistics by studying Lévy-based growth modelling of planar objects. The growth models considered are spatio-temporal stochastic processes on the circle. As a by-product, flexible new models for space-time covariance functions on the circle are provided. An application of the Lévy-based growth models to tumour growth is discussed.
To establish normative weight-adjusted models for the median levels of first trimester serum biomarkers for trisomy 21 screening in southern Thai women, and to compare these reference levels with Caucasian-specific and northern Thai models. A cross-sectional study was conducted in 1,150 women with normal singleton pregnancies to determine serum pregnancy-associated plasma protein-A (PAPP-A) and free β-human chorionic gonadotropin (β-hCG) concentrations in women from southern Thailand. The predicted median values were compared with published equations for Caucasian and northern Thai women. The best-fitting regression equations for the expected median serum levels of PAPP-A (mIU/L) and free β-hCG (ng/mL) according to maternal weight (Wt, in kg) and gestational age (GA, in days) were: [Formula: see text] and [Formula: see text]. Both equations were selected with a statistically significant contribution (p < 0.05). Compared with the Caucasian model, the median values of PAPP-A were higher and the median values of free β-hCG were lower in the southern Thai women; compared with the northern Thai models, the median values of both biomarkers were lower in southern Thai women. The study has successfully developed maternal-weight- and gestational-age-adjusted median normative models to convert PAPP-A and free β-hCG levels into their Multiple of the Median equivalents in southern Thai women. These models confirmed ethnic differences.
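Converting a measured marker level to a Multiple of the Median (MoM) is a simple ratio against the weight- and GA-adjusted expected median. Since the paper's fitted equations are not reproduced in the abstract ("[Formula: see text]"), the regression form and coefficients below are placeholders for illustration only:

```python
import math

def expected_median(weight_kg, ga_days, c0=2.0, c_w=-0.01, c_ga=0.02):
    """Hypothetical log-linear median model (NOT the paper's coefficients):
    log10(median) = c0 + c_w * Wt + c_ga * GA."""
    return 10 ** (c0 + c_w * weight_kg + c_ga * ga_days)

def mom(observed, weight_kg, ga_days):
    # Multiple of the Median: observed level / expected median for this
    # maternal weight and gestational age.
    return observed / expected_median(weight_kg, ga_days)
```

A sample whose level equals the model's expected median yields MoM = 1.0 by construction; screening software then flags MoM values far from 1 relative to population-specific cut-offs, which is why population-specific median models matter.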
National Aeronautics and Space Administration — Within systems health management, prognostics focuses on predicting the remaining useful life of a system. In the model-based prognostics paradigm, physics-based...
Řezáč, M.; Kůrka, A.; Růžička, Vlastimil; Heneberg, P.
Vol. 70, No. 5 (2015), pp. 645-666. ISSN 0006-3088. Grant - others: MZe (CZ) RO0415. Institutional support: RVO:60077344. Keywords: evidence-based conservation * extinction risk * invertebrate surveys. Subject RIV: EG - Zoology. Impact factor: 0.719, year: 2015
Modelling Deterministic Systems. N K Srinivasan graduated from the Indian Institute of Science and obtained his doctorate from Columbia University, New York. He has taught in several universities, and later did system analysis, wargaming and simulation for defence. His other areas of interest are reliability engineering...
Cogburn, Courtney D.; Chavous, Tabbye M.; Griffin, Tiffany M.
The present study examined school-based racial and gender discrimination experiences among African American adolescents in Grade 8 (n = 204 girls; n = 209 boys). A primary goal was exploring gender variation in frequency of both types of discrimination and associations of discrimination with academic and psychological functioning among girls and boys. Girls and boys did not vary in reported racial discrimination frequency, but boys reported more gender discrimination experiences. Multiple reg...
Mohammad Ali Cheraghi
Delirium is the most common problem in patients in intensive care units, and prevention of delirium is more important than treatment. The aim of this study is to determine the effect of a NICE-adjusted multifactorial intervention to prevent delirium in open heart surgery patients. Methods: This was a quasi-experimental study of 88 patients (44 in each group) undergoing open heart surgery in the intensive care unit of Imam Khomeini Hospital, Tehran. Subjects in the usual-care group were assessed only for the incidence of delirium: patients were followed twice a day by the researcher, using the CAM-ICU questionnaire, from the second to the fifth postoperative day. After sampling in this group was complete, the incidence of delirium in the intervention group was examined in the same manner, except that multifactorial interventions based on the NICE guidelines, as modified by the researcher, were implemented from the second to the fifth day and followed up on each shift. The CPOT and Pittsburgh Sleep assessment tools were used to assess pain and sleep quality in the intervention group. Data analysis was done using the SPSS software, version 16; a t-test, a chi-square test, and Fisher's exact test were carried out. Results: The incidence of delirium was 42.5% in the control group and 22.5% in the intervention group, showing that the incidence of delirium in hospitalized open heart surgery patients was significantly reduced after the multifactorial intervention based on the adjusted NICE guidelines. Conclusion: The NICE-adjusted multifactorial intervention for the prevention of delirium in cardiac surgery patients significantly reduced the incidence of delirium in these patients. Using this method as a comprehensive and reliable alternative for preventing delirium in patients hospitalized in the heart surgery ward is therefore recommended.
Peng, Yingwei; Taylor, Jeremy M G
Model diagnosis, an important issue in statistical modeling, has not yet been addressed adequately for cure models. We focus on mixture cure models in this work and propose some residual-based methods to examine the fit of the mixture cure model, particularly the fit of the latency part of the mixture cure model. The new methods extend the classical residual-based methods to the mixture cure model. Numerical work shows that the proposed methods are capable of detecting lack-of-fit of a mixture cure model, particularly in the latency part, such as outliers, improper covariate functional form, or nonproportionality in hazards if the proportional hazards assumption is employed in the latency part. The methods are illustrated with two real data sets that were previously analyzed with mixture cure models. © 2016, The International Biometric Society.
Assessing the success of information systems within organizations has been identified as one of the most critical subjects of information system management in both public and private organizations. It is therefore important to measure the success of information systems from the user's perspective. The purpose of the current study was to evaluate the degree of information system success using the adjusted DeLone and McLean model in the field of financial information systems (FIS) at an Iranian university. The relationships among the dimensions in an extended systems success measurement framework were tested. Data were collected by questionnaire from end-users of a financial information system at Ferdowsi University of Mashhad. The adjusted DeLone and McLean model contained five variables (system quality, information quality, system use, user satisfaction, and individual impact). The results revealed that system quality was a significant predictor of system use, user satisfaction, and individual impact. Information quality was also a significant predictor of user satisfaction and individual impact, but not of system use. System use and user satisfaction were positively related to individual impact. The influence of user satisfaction on system use was insignificant.
Jesús M. Almendros-Jiménez
Model Driven Engineering (MDE) is an emerging approach to software engineering. MDE emphasizes the construction of models from which the implementation should be derived by applying model transformations. The Ontology Definition Meta-model (ODM) has been proposed as a profile for UML models of the Web Ontology Language (OWL). In this context, transformations of UML models can be mapped into ODM/OWL transformations. On the other hand, model validation is a crucial task in model transformation. Meta-modeling permits giving a syntactic structure to source and target models; however, semantic requirements also have to be imposed on them. A given transformation is sound when source and target models fulfill the syntactic and semantic requirements. In this paper, we present an approach for model validation in ODM-based transformations. Adopting a logic-programming-based transformational approach, we show how it is possible to transform and validate models. Properties to be validated range from structural and semantic requirements of models (pre- and postconditions) to properties of the transformation (invariants). The approach has been applied to a well-known example of model transformation: the Entity-Relationship (ER) to Relational Model (RM) transformation.
Tanoshima, Reo; Bournissen, Facundo Garcia; Tanigawara, Yusuke; Kristensen, Judith H; Taddio, Anna; Ilett, Kenneth F; Begg, Evan J; Wallach, Izhar; Ito, Shinya
Population pharmacokinetic (pop PK) modelling can be used for PK assessment of drugs in breast milk. However, complex mechanistic modelling of a parent and an active metabolite using both blood and milk samples is challenging. We aimed to develop a simple predictive pop PK model for milk concentration-time profiles of a parent and a metabolite, using data on fluoxetine (FX) and its active metabolite, norfluoxetine (NFX), in milk. Using a previously published data set of drug concentrations in milk from 25 women treated with FX, a pop PK model predictive of milk concentration-time profiles of FX and NFX was developed. Simulation was performed with the model to generate FX and NFX concentration-time profiles in milk of 1000 mothers. This milk concentration-based pop PK model was compared with the previously validated plasma/milk concentration-based pop PK model of FX. Milk FX and NFX concentration-time profiles were described reasonably well by a one compartment model with a FX-to-NFX conversion coefficient. Median values of the simulated relative infant dose on a weight basis (sRID: weight-adjusted daily doses of FX and NFX through breastmilk to the infant, expressed as a fraction of therapeutic FX daily dose per body weight) were 0.028 for FX and 0.029 for NFX. The FX sRID estimates were consistent with those of the plasma/milk-based pop PK model. A predictive pop PK model based on only milk concentrations can be developed for simultaneous estimation of milk concentration-time profiles of a parent (FX) and an active metabolite (NFX). © 2014 The British Pharmacological Society.
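The parent-to-metabolite structure described above (a one compartment model with a conversion coefficient) has a standard closed-form solution for first-order kinetics. The sketch below uses that generic textbook form with hypothetical rate constants, not the paper's fitted FX/NFX population parameters:

```python
import math

def parent_metabolite(t, dose, k_p=0.1, k_m=0.05, f=0.3):
    """Closed-form solution of the linear system
        dP/dt = -k_p * P,            P(0) = dose
        dM/dt =  f * k_p * P - k_m * M,  M(0) = 0
    where f plays the role of a parent-to-metabolite conversion coefficient.
    All parameter values are illustrative placeholders."""
    P = dose * math.exp(-k_p * t)
    M = dose * f * k_p / (k_m - k_p) * (math.exp(-k_p * t) - math.exp(-k_m * t))
    return P, M
```

Simulating many virtual mothers (as in the paper's 1000-subject simulation) would amount to drawing individual parameter sets from the population distributions and evaluating such profiles to summarize exposure measures like the relative infant dose.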
Assessing the goodness-of-fit (GOF) for intricate parametric spatial point process models is important for many application fields. When the probability density of the statistic of the GOF test is intractable, a commonly used procedure is the Monte Carlo GOF test. Additionally, if the data comprise a single dataset, a popular version of the test plugs a parameter estimate in the hypothesized parametric model to generate data for the Monte Carlo GOF test. In this case, the test is invalid because the resulting empirical level does not reach the nominal level. In this article, we propose a method consisting of nested Monte Carlo simulations which has the following advantages: the bias of the resulting empirical level of the test is eliminated, hence the empirical levels can always reach the nominal level, and information about inhomogeneity of the data can be provided. We theoretically justify our testing procedure using Taylor expansions and demonstrate that it is correctly sized through various simulation studies. In our first data application, we discover, in agreement with Illian et al., that Phlebocarya filifolia plants near Perth, Australia, can follow a homogeneous Poisson clustered process that provides insight into the propagation mechanism of these plants. In our second data application, we find, in contrast to Diggle, that a pairwise interaction model provides a good fit to the micro-anatomy data of amacrine cells designed for analyzing the developmental growth of immature retina cells in rabbits. This article has supplementary material online. © 2013 American Statistical Association, Institute of Mathematical Statistics, and Interface Foundation of North America.
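The nesting idea — calibrating the plug-in Monte Carlo p-value against p-values obtained by re-running the whole fit-and-test procedure on data simulated from the fitted model — can be sketched on a toy example. The sketch below uses a Poisson dispersion test on count data rather than a spatial point process; the statistic, the sampler, and the simulation counts are illustrative assumptions, not the authors' procedure.

```python
import math
import random
import statistics

def poisson(lam, rng):
    # Knuth's multiplication algorithm for Poisson samples (fine for small lambda)
    limit, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= rng.random()
        if p <= limit:
            return k
        k += 1

def dispersion(xs):
    # variance-to-mean ratio; approximately 1 under a Poisson model
    m = statistics.mean(xs)
    return statistics.pvariance(xs) / m if m else 0.0

def mc_pvalue(data, n_sim, rng):
    # plug-in Monte Carlo p-value: simulate from the model fitted to `data`
    lam = statistics.mean(data)
    t_obs = dispersion(data)
    hits = sum(
        dispersion([poisson(lam, rng) for _ in data]) >= t_obs
        for _ in range(n_sim)
    )
    return (hits + 1) / (n_sim + 1)

def nested_mc_pvalue(data, n_outer=39, n_inner=39, seed=1):
    # outer level: repeat the entire plug-in test on datasets simulated from
    # the fitted model, then rank the observed p-value among those p-values
    rng = random.Random(seed)
    lam = statistics.mean(data)
    p_obs = mc_pvalue(data, n_inner, rng)
    p_sims = [
        mc_pvalue([poisson(lam, rng) for _ in data], n_inner, rng)
        for _ in range(n_outer)
    ]
    rank = sum(p <= p_obs for p in p_sims)
    return (rank + 1) / (n_outer + 1)
```

Because the outer simulations re-estimate the parameter each time, the calibration step absorbs the bias that plugging in a single estimate introduces — the point the article makes for spatial point processes.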
Wessels, P.W.; Basten, T.G.H.; Eerden, F.J.M. van der
In this paper the approach for an acoustical model based monitoring network is demonstrated. This network is capable of reconstructing a noise map, based on the combination of measured sound levels and an acoustic model of the area. By pre-calculating the sound attenuation within the network the
McKean, John R.; Johnson, Donn; Taylor, R. Garth
An alternate travel cost model is applied to an on-site sample to estimate the value of flat water recreation on the impounded lower Snake River. Four contiguous reservoirs would be eliminated if the dams are breached to protect endangered Pacific salmon and steelhead trout. The empirical method applies truncated negative binomial regression with adjustment for endogenous stratification. The two-stage decision model assumes that recreationists allocate their time among work and leisure prior to deciding among consumer goods. The allocation of time and money among goods in the second stage is conditional on the predetermined work time and income. The second stage is a disequilibrium labor market which also applies if employers set work hours or if recreationists are not in the labor force. When work time is either predetermined, fixed by contract, or nonexistent, recreationists must consider separate prices and budgets for time and money.
In this paper we present a model-based version management system. A version management system (VMS), a branch of software configuration management (SCM), aims to provide a controlling mechanism for the evolution of software artifacts created during the software development process. Controlling the evolution requires many activities, such as construction and creation of versions, identification of differences between versions, conflict detection, and merging. Traditional VMS systems are file-based and consider software systems as a set of text files. File-based VMS systems are not adequate for performing software configuration management activities, such as version control, on software artifacts produced in earlier phases of the software life cycle. New challenges of model differencing, merging, and evolution control arise when using models as the central artifact. The goal of this work is to present a generic framework for model-based VMS which can be used to overcome the problems of traditional file-based VMS systems and provide model versioning services. (author)
Burrows, Wesley; Doherty, John
Use of a proxy model in gradient-based calibration and uncertainty analysis of a complex groundwater model with large run times and problematic numerical behaviour is described. The methodology is general, and can be used with models of all types. The proxy model is based on a series of analytical functions that link all model outputs used in the calibration process to all parameters requiring estimation. In enforcing history-matching constraints during the calibration and post-calibration uncertainty analysis processes, the proxy model is run for the purposes of populating the Jacobian matrix, while the original model is run when testing parameter upgrades; the latter process is readily parallelized. Use of a proxy model in this fashion dramatically reduces the computational burden of complex model calibration and uncertainty analysis. At the same time, the effect of model numerical misbehaviour on calculation of local gradients is mitigated, thus allowing access to the benefits of gradient-based analysis where lack of integrity in finite-difference derivatives calculation would otherwise have impeded such access. Construction of a proxy model, its subsequent use in calibration of a complex model, and the analysis of uncertainties of predictions made by that model are implemented in the PEST suite.
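The division of labour described above — a cheap proxy fills the Jacobian while the expensive model evaluates the residuals for each parameter upgrade — can be sketched with a toy Gauss-Newton calibration. Everything here is illustrative: the "expensive" model is just an exponential-decay function, the proxy happens to coincide with it exactly, and the small damping term is an arbitrary choice, not PEST's algorithm.

```python
import math

TIMES = [float(t) for t in range(10)]

def expensive_model(p):
    # stand-in for a long-running simulator: y(t) = a * exp(-b * t)
    a, b = p
    return [a * math.exp(-b * t) for t in TIMES]

# the proxy: a cheap analytical surrogate, used only to compute derivatives
proxy_model = expensive_model

def finite_diff_jacobian(f, p, h=1e-6):
    # forward differences on the (cheap) proxy; J[j][i] = d out_j / d param_i
    base = f(p)
    cols = []
    for i in range(len(p)):
        q = list(p)
        q[i] += h
        cols.append([(y2 - y1) / h for y1, y2 in zip(base, f(q))])
    return [[cols[i][j] for i in range(len(p))] for j in range(len(base))]

def solve2(A, b):
    # closed-form solution of a 2x2 linear system
    det = A[0][0] * A[1][1] - A[0][1] * A[1][0]
    return [(b[0] * A[1][1] - b[1] * A[0][1]) / det,
            (A[0][0] * b[1] - A[1][0] * b[0]) / det]

def calibrate(obs, p0, iters=50, mu=1e-6):
    p = list(p0)
    for _ in range(iters):
        J = finite_diff_jacobian(proxy_model, p)              # proxy fills Jacobian
        r = [o - m for o, m in zip(obs, expensive_model(p))]  # real model residuals
        # damped normal equations: (J^T J + mu I) dp = J^T r
        JTJ = [[sum(J[k][i] * J[k][j] for k in range(len(J)))
                + (mu if i == j else 0.0) for j in range(2)] for i in range(2)]
        JTr = [sum(J[k][i] * r[k] for k in range(len(J))) for i in range(2)]
        step = solve2(JTJ, JTr)
        p = [pi + si for pi, si in zip(p, step)]
    return p
```

In the real workflow the expensive-model runs that test each upgrade are the ones worth parallelizing, exactly as the abstract notes.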
Sera, Dezso; Teodorescu, Remus; Rodriguez, Pedro
This work presents the construction of a model for a PV panel using the single-diode five-parameter model, based exclusively on data-sheet parameters. The model takes into account the series and parallel (shunt) resistance of the panel. The equivalent circuit and the basic equations of the PV cell...
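The single-diode five-parameter equation, I = Iph − I0·(exp((V + I·Rs)/(n·Ns·Vt)) − 1) − (V + I·Rs)/Rsh, is implicit in the current I and is commonly solved iteratively. Below is a minimal Newton-iteration sketch; every parameter value is a made-up placeholder, not extracted from any datasheet.

```python
import math

def pv_current(V, Iph=8.0, I0=1e-9, Rs=0.3, Rsh=200.0, n=1.3,
               Ns=60, T=298.15):
    """Solve the implicit single-diode equation for panel current at voltage V.

    Iph: photocurrent, I0: diode saturation current, Rs/Rsh: series/shunt
    resistance, n: ideality factor, Ns: cells in series. All values are
    illustrative placeholders.
    """
    k, q = 1.380649e-23, 1.602176634e-19
    Vt = Ns * n * k * T / q          # modified thermal voltage of the string
    I = Iph                           # initial guess: the photocurrent
    for _ in range(100):              # Newton iteration on f(I) = 0
        x = (V + I * Rs) / Vt
        f = Iph - I0 * (math.exp(x) - 1.0) - (V + I * Rs) / Rsh - I
        df = -I0 * math.exp(x) * Rs / Vt - Rs / Rsh - 1.0
        I_next = I - f / df
        if abs(I_next - I) < 1e-12:
            return I_next
        I = I_next
    return I
```

Sweeping V from zero upward traces the familiar I-V curve: near-constant current up to the knee, then a steep drop toward the open-circuit voltage.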
Model-based design allows teams to start the design process from a high-level model that is gradually refined through abstraction levels to ultimately yield a prototype. This book describes the main facets of heterogeneous system design. It focuses on multi-core methodological issues, real-time analysis, and modeling and validation
Curtin, Katherine B; Norris, Deborah
The Fear-Avoidance Model of Chronic Pain proposed by Vlaeyen and Linton states that individuals enter a cycle of chronic pain due to predisposing psychological factors, such as negative affectivity, negative appraisal, or anxiety sensitivity. They do not, however, address the closely related concept of anxious rumination. Although Vlaeyen and Linton suggest cognitive-behavioral treatment methods for chronic pain patients who exhibit pain-related fear, they do not consider mindfulness treatments. This cross-sectional study investigated the relationship between chronic musculoskeletal pain (CMP), ruminative anxiety, and mindfulness to determine whether (1) ruminative anxiety is a risk factor for developing chronic pain and (2) mindfulness is a potential treatment for breaking the cycle of chronic pain. Middle-aged adults ages 35-50 years (N=201) with self-reported CMP were recruited online. Participants completed standardized questionnaires assessing elements of chronic pain, anxiety, and mindfulness. Ruminative anxiety was positively correlated with pain catastrophizing, pain-related fear and avoidance, pain interference, and pain severity but negatively correlated with mindfulness. A high ruminative anxiety level predicted significantly higher levels of chronic pain elements and a significantly lower level of mindfulness. Mindfulness significantly predicted variance (R²) in chronic pain and anxiety outcomes. Pain severity, ruminative anxiety, pain catastrophizing, pain-related fear and avoidance, and mindfulness significantly predicted 70.0% of the variance in pain interference, with pain severity, ruminative anxiety, and mindfulness being unique predictors. The present study provides insight into the strength and direction of the relationships between ruminative anxiety, mindfulness, and chronic pain in a CMP population, demonstrating the unique associations between specific mindfulness factors and chronic pain elements. It is possible that ruminative anxiety and mindfulness should be
Sánchez-Maroño, Noelia; Fontenla-Romero, Oscar; Polhill, J; Craig, Tony; Bajo, Javier; Corchado, Juan
Using the O.D.D. (Overview, Design concepts, Detail) protocol, this title explores the role of agent-based modeling in predicting the feasibility of various approaches to sustainability. The chapters incorporated in this volume consist of real case studies to illustrate the utility of agent-based modeling and complexity theory in discovering a path to more efficient and sustainable lifestyles. The topics covered within include: households' attitudes toward recycling, designing decision trees for representing sustainable behaviors, negotiation-based parking allocation, auction-based traffic signal control, and others. This selection of papers will be of interest to social scientists who wish to learn more about agent-based modeling as well as experts in the field of agent-based modeling.
Luciana Spica Almilia
This study examines the effect of overconfidence and experience on increasing or reducing the information order effect in investment decision making. Subject criteria in this research are: professional investors (who have knowledge and experience in the field of investment and the stock market) and nonprofessional investors (who have knowledge in the field of investment and the stock market). Based on these criteria, the subjects in this research include accounting students, capital market professionals, and investors. This research uses a 2 x 2 between-subjects experimental method, conducted web-based. The characteristic of the individual (high confidence or low confidence) is measured by a calibration test. The independent variables used in this research consist of two active (manipulated) independent variables: (1) pattern of information presentation (step by step or end of sequence); and (2) presentation order (good news - bad news or bad news - good news). The dependent variable in this research is the revision of the investment decision made by the research subjects. Participants in this study were 78 nonprofessional investors and 48 professional investors. The results are consistent with the prediction that individuals who have a high level of confidence tend to ignore the information available; as a consequence, individuals with a high level of confidence are spared from information order effects. Keywords: step by step, end of sequence, investment judgement, overconfidence, experimental method
Kantola, I. B.; Blanc-Betes, E.; Gomez-Casanovas, N.; Masters, M. D.; Bernacchi, C.; DeLucia, E. H.
Increased variability and intensity of precipitation in the Midwest agricultural belt due to climate change is a major concern. The success of perennial bioenergy crops in replacing maize for bioethanol production is dependent on sustained yields that exceed maize, and the marketing of perennial crops often emphasizes the resilience of perennial agriculture to climate stressors. Land conversion from maize for bioethanol to Miscanthus x giganteus (miscanthus) increases yields and annual evapotranspiration rates (ET). However, establishment of miscanthus also increases biome water use efficiency (the ratio between net ecosystem productivity after harvest and ET), due to greater belowground biomass in miscanthus than in maize or soybean. In 2012, a widespread drought reduced the yield of 5-year-old miscanthus plots in central Illinois by 36% compared to the previous two years. Eddy covariance data indicated continued soil water deficit during the hydrologically-normal growing season in 2013 and miscanthus yield failed to rebound as expected, lagging behind pre-drought yields by an average of 53% over the next three years. In early 2014, nitrogen fertilizer was applied to half of mature (7-year-old) miscanthus plots in an effort to improve yields. In plots with annual post-emergence application of 60 kg ha-1 of urea, peak biomass was 29% greater than unfertilized miscanthus in 2014, and 113% greater in 2015, achieving statistically similar yields to the pre-drought average. Regional-scale models of perennial crop productivity use 30-year climate averages that are inadequate for predicting long-term effects of short-term extremes on perennial crops. Modeled predictions of perennial crop productivity incorporating repeated extreme weather events, observed crop response, and the use of management practices to mitigate water deficit demonstrate divergent effects on predicted yields.
Agapi I. Doulgeraki
The emergence of methicillin-resistant Staphylococcus aureus (MRSA) in food has provoked great concern about the presence of MRSA in associated foodstuffs. Although MRSA is often detected in various retail meat products, it seems that food handlers are more strongly associated with this type of food contamination. Thus, it can easily be postulated that any food could be contaminated with this pathogen in an industrial environment or in a household and cause food poisoning. To this end, the effect of rocket (Eruca sativa) extract on MRSA growth and proteome was examined in the present study. This goal was achieved through a comparative study of the proteome of MRSA strain COL, cultivated in rocket extract versus the standard Luria-Bertani growth medium. The obtained results showed that MRSA was able to grow in rocket extract. In addition, proteome analysis using the 2-DE method showed that MRSA strain COL takes advantage of the sugar-, lipid-, and vitamin-rich substrate in the liquid rocket extract, although its growth was delayed in rocket extract compared to Luria-Bertani medium. This work could initiate further research on bacterial metabolism in plant-based media and defense mechanisms against plant-derived antibacterials.
Probst, Christian W.; Hansen, René Rydhof
Identifying provenance of data provides insights to the origin of data and intermediate results, and has recently gained increased interest due to data-centric applications. In this work we extend a data-centric system view with actors handling the data and policies restricting actions. This extension is based on provenance analysis performed on system models. System models have been introduced to model and analyse spatial and organisational aspects of organisations, to identify, e.g., potential insider threats. Both the models and analyses are naturally modular; models can be combined to bigger models, and the analyses adapt accordingly. Our approach extends provenance both with the origin of data, the actors and processes involved in the handling of data, and policies applied while doing so. The model and corresponding analyses are based on a formal model of spatial and organisational...
Narayanan, Martina K; Nærde, Ane
While there is substantial empirical work on maternal depression, less is known about how mothers' and fathers' depressive symptoms compare in their association with child behavior problems in early childhood. In particular, few studies have examined unique relationships in the postpartum period by controlling for the other parent, or looked at longitudinal change in either parent's depressive symptoms across the first years of life as a predictor of child problems. We examined depressive symptoms in parents at 6, 12, 24, 36 and 48 months following childbirth, and child behavior problems at 48 months. Linear growth curve analysis was used to model parents' initial levels and changes in symptoms across time and their associations with child outcomes. Mothers' depressive symptoms at 6 months predicted behavior problems at 48 months for all syndrome scales, while fathers' did not. Estimates for mothers' symptoms were significantly stronger on all subscales. Change in fathers' depressive symptoms over time was a significantly larger predictor of child aggressive behavior than corresponding change in mothers'. No interaction effects between parents' symptoms on behavior problems appeared, and few child gender differences. Child behavior was assessed once, precluding tests for bidirectional effects. We only looked at linear change in parental symptoms. Mothers' postpartum depressive symptoms are a stronger predictor for early child behavior problems than fathers'. Change in fathers' depressive symptoms across this developmental period was uniquely and strongly associated with child aggressive problems, and should therefore be addressed in future research and clinical practice. Copyright © 2016 Elsevier B.V. All rights reserved.
Department of Housing and Urban Development — The Department of Housing and Urban Development establishes the rent adjustment factors - called Annual Adjustment Factors (AAFs) - on the basis of Consumer Price...
Bandini, Marco; Pompe, Raisa S; Marchioni, Michele; Tian, Zhe; Gandaglia, Giorgio; Fossati, Nicola; Tilki, Derya; Graefen, Markus; Montorsi, Francesco; Shariat, Shahrokh F; Briganti, Alberto; Saad, Fred; Karakiewicz, Pierre I
Contemporary data regarding the effect of local treatment (LT) vs. non-local treatment (NLT) on cancer-specific mortality (CSM) in elderly men with localized prostate cancer (PCa) are lacking. Hence, we evaluated CSM rates in a large population-based cohort of men with cT1-T2 PCa according to treatment type. Within the SEER database (2004-2014), we identified 44,381 men ≥ 75 years with cT1-T2 PCa. Radical prostatectomy and radiotherapy patients were matched and the resulting cohort (LT) was subsequently matched with NLT patients. Cumulative incidence and competing risks regression (CRR) tested CSM according to treatment type. Analyses were repeated after Gleason grade group (GGG) stratification: I (3 + 3), II (3 + 4), III (4 + 3), IV (8), and V (9-10). Overall, 4715 (50.0%) and 4715 (50.0%) men, respectively, underwent NLT and LT. Five- and 7-year CSM rates for, respectively, NLT vs. LT patients were 3.0 and 5.4% vs. 1.5 and 2.1% for GGG II, 4.5 and 7.2% vs. 2.5 and 2.8% for GGG III, 7.1 and 10.0% vs. 3.5 and 5.1% for GGG IV, and 20.0 and 26.5% vs. 5.4 and 9.3% for GGG V patients. Separate multivariable CRR also showed higher CSM rates in NLT patients with GGG II [hazard ratio (HR) 3.3], GGG III (HR 2.6), GGG IV (HR 2.4) and GGG V (HR 2.6), but not in GGG I patients (p = 0.5). Despite advanced age, LT provides clinically meaningful and statistically significant benefit relative to NLT. Such benefit applied exclusively to GGG II-V patients, but not to GGG I patients.
Fürst, Matthias A; Durey, Maëlle; Nash, David R
Social insect colonies are like fortresses, well protected and rich in shared stored resources. This makes them ideal targets for exploitation by predators, parasites and competitors. Colonies of Myrmica rubra ants are sometimes exploited by the parasitic butterfly Maculinea alcon. Maculinea alcon gains access to the ants' nests by mimicking their cuticular hydrocarbon recognition cues, which allows the parasites to blend in with their host ants. Myrmica rubra may be particularly susceptible to exploitation in this fashion as it has large, polydomous colonies with many queens and a very viscous population structure. We studied the mutual aggressive behaviour of My. rubra colonies based on predictions for recognition effectiveness. Three hypotheses were tested: first, that aggression increases with distance (geographical, genetic and chemical); second, that the more queens present in a colony and therefore the less-related workers within a colony, the less aggressively they will behave; and that colonies facing parasitism will be more aggressive than colonies experiencing less parasite pressure. Our results confirm all these predictions, supporting flexible aggression behaviour in Myrmica ants depending on context.
Reichle, Rolf; Koster, Randal; DeLannoy, Gabrielle; Forman, Barton; Liu, Qing; Mahanama, Sarith; Toure, Ally
The Modern-Era Retrospective analysis for Research and Applications (MERRA) is a state-of-the-art reanalysis that provides, in addition to atmospheric fields, global estimates of soil moisture, latent heat flux, snow, and runoff for 1979-present. This study introduces a supplemental and improved set of land surface hydrological fields ('MERRA-Land') generated by replaying a revised version of the land component of the MERRA system. Specifically, the MERRA-Land estimates benefit from corrections to the precipitation forcing with the Global Precipitation Climatology Project pentad product (version 2.1) and from revised parameters in the rainfall interception model, changes that effectively correct for known limitations in the MERRA land surface meteorological forcings. The skill (defined as the correlation coefficient of the anomaly time series) in land surface hydrological fields from MERRA and MERRA-Land is assessed here against observations and compared to the skill of the state-of-the-art ERA-Interim reanalysis. MERRA-Land and ERA-Interim root zone soil moisture skills (against in situ observations at 85 US stations) are comparable and significantly greater than that of MERRA. Throughout the northern hemisphere, MERRA and MERRA-Land agree reasonably well with in situ snow depth measurements (from 583 stations) and with snow water equivalent from an independent analysis. Runoff skill (against naturalized stream flow observations from 15 basins in the western US) of MERRA and MERRA-Land is typically higher than that of ERA-Interim. With a few exceptions, the MERRA-Land data appear more accurate than the original MERRA estimates and are thus recommended for those interested in using MERRA output for land surface hydrological studies.
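The skill metric defined above — the correlation coefficient of the anomaly time series — removes each series' mean seasonal cycle before correlating. A minimal sketch follows; monthly data and the 12-step climatology are assumptions for illustration, and the toy series in the usage test are invented.

```python
import math
import statistics

def anomaly_correlation(model, obs, period=12):
    """Skill as the correlation of anomaly time series.

    Each series' mean seasonal cycle (climatology over `period` steps,
    e.g. 12 for monthly data) is subtracted first, so a bias in the
    seasonal cycle itself does not affect the score.
    """
    def anomalies(xs):
        clim = [statistics.mean(xs[i::period]) for i in range(period)]
        return [x - clim[i % period] for i, x in enumerate(xs)]

    a, b = anomalies(model), anomalies(obs)
    ma, mb = statistics.mean(a), statistics.mean(b)
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    var_a = sum((x - ma) ** 2 for x in a)
    var_b = sum((y - mb) ** 2 for y in b)
    return cov / math.sqrt(var_a * var_b)
```

A consequence worth noting: a reanalysis with an offset or rescaled seasonal cycle but the right interannual variability still scores near 1, which is exactly why anomaly correlation is preferred for comparing products with different climatologies.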
Sun, Kai-Wei; Li, Ran; Zhang, Guo-Feng
This paper investigates a four-stroke quantum heat engine based on the Tavis-Cummings model. The cycle of the heat engine is similar to the Otto cycle in classical thermodynamics. The relationships between the output power as well as the cycle efficiency and the external physical system parameters are given. Under this condition, the entanglement behavior of the system is studied. The system can show considerable entanglement by strictly controlling the relevant parameters. Unlike common two-level quantum heat engines, the efficiency is a function of temperature, showing interesting and unexpected phenomena. Several ways to adjust engine properties by external parameters are proposed, with which the output power and efficiency can be optimized. The heat engine model exhibits high efficiency and output power with the participation of a small number of photons; both decay rapidly as the number of photons increases in the entangled region, but show interesting behaviors in the non-entangled region of photon numbers.
Zukhi, Mohd Zhafri Bin Mohd; Hussain, Azham
Emoticons are popular among distributed collective interaction users for expressing their emotions, gestures, and actions. Emoticons have been shown to help avoid misunderstanding of messages, save attention, and improve communication among different native speakers. However, despite the benefits that emoticons can provide, research on emoticons from a cultural perspective is still lacking. As emoticons are crucial in global communication, culture should be one of the extensively researched aspects of distributed collective interaction. Therefore, this study attempts to explore and develop a model for cultural-based emoticons. Three cultural models that have been used in Human-Computer Interaction were studied: the Hall culture model, the Trompenaars and Hampden-Turner culture model, and the Hofstede culture model. The dimensions from these three models will be used in developing the proposed cultural-based emoticon model.
The book integrates agent-based modeling and network science. It is divided into three parts, namely, foundations, primary dynamics on and of social networks, and applications. The book begins with the network origin of agent-based models, known as cellular automata, and introduces a number of classic models, such as Schelling's segregation model and Axelrod's spatial game. The essence of the foundation part is the network-based agent-based models in which agents follow network-based decision rules. Under the influence of the substantial progress in network science in the late 1990s, these models have been extended from using lattices to using small-world networks, scale-free networks, etc. The book also shows that modern network science, mainly driven by game-theorists and sociophysicists, has inspired agent-based social scientists to develop alternative formation algorithms, known as agent-based social networks. The book reviews a number of pioneering and representative models in this family. Upon the gi...
Rodríguez, María Soledad; Tinajero, Carolina; Páramo, María Fernanda
Transition to university is a multifactorial process to which scarce consideration has been given in Spain, despite this being one of the countries with the highest rates of academic failure and attrition within the European Union. The present study proposes an empirical model for predicting Spanish students' academic achievement at university by considering pre-entry characteristics, perceived social support and adaptation to university, in a sample of 300 traditional first-year university students. The findings of the path analysis showed that pre-university achievement and academic and personal-emotional adjustment were direct predictors of academic achievement. Furthermore, gender, parents' education and family support were indirect predictors of academic achievement, mediated by pre-university grades and adjustment to university. The current findings provide evidence that academic achievement in first-year Spanish students is the cumulative effect of pre-entry characteristics and process variables, key factors that should be taken into account in designing intervention strategies involving families and that establish stronger links between research findings and university policies.
Cummings, E Mark; Schermerhorn, Alice C; Merrilees, Christine E; Goeke-Morey, Marcie C; Shirlow, Peter; Cairns, Ed
Moving beyond simply documenting that political violence negatively impacts children, we tested a social-ecological hypothesis for relations between political violence and child outcomes. Participants were 700 mother-child (M = 12.1 years, SD = 1.8) dyads from 18 working-class, socially deprived areas in Belfast, Northern Ireland, including single- and two-parent families. Sectarian community violence was associated with elevated family conflict and children's reduced security about multiple aspects of their social environment (i.e., family, parent-child relations, and community), with links to child adjustment problems and reductions in prosocial behavior. By comparison, and consistent with expectations, links with negative family processes, child regulatory problems, and child outcomes were less consistent for nonsectarian community violence. Support was found for a social-ecological model for relations between political violence and child outcomes among both single- and two-parent families, with evidence that emotional security and adjustment problems were more negatively affected in single-parent families. The implications for understanding social ecologies of political violence and children's functioning are discussed.
Nizamani, Sarwat; Memon, Nasrullah
In this paper we present a model for email authorship identification (EAI) by employing a Cluster-based Classification (CCM) technique. Traditionally, stylometric features have been successfully employed in various authorship analysis tasks; we extend the traditional feature-set to include some...... reveal that the proposed CCM-based email authorship identification model, along with the proposed feature set, outperforms the state-of-the-art support vector machine (SVM)-based models, as well as the models proposed by Iqbal et al. [1, 2]. The proposed model attains an accuracy rate of 94% for 10...... authors, 89% for 25 authors, and 81% for 50 authors, respectively on Enron data set, while 89.5% accuracy has been achieved on authors' constructed real email data set. The results on Enron data set have been achieved on quite a large number of authors as compared to the models proposed by Iqbal et al. [1...
As an important starting point for optimizing the structure of agricultural products and implementing green production methods, the direction of orchard management development is directly related to the success of "supply side" reform in the fruit industry in China. However, in the context of the progressive aging of the rural labor force, is the older labor force still capable of the high labor intensity and fine cultivation management needed, such as pruning, or of maintaining or improving the application efficiency of fertilizers? In this paper, based on micro-production data of peach farmers in Jiangsu Province, we explore the influence of aging on the management of fruit trees and further introduce fruit tree management into the production function to analyze the effects of different orchard management methods on fertilizer efficiency. The results show that with the increase of labor force age, although the total labor investment of aged farmer households has somewhat increased, significant differences exist in the distribution of labor investment between the different production processes due to their different labor demands. In technical stages that demand good physical capabilities, such as pruning and flower/fruit thinning, elderly farmers invest significantly less labor than younger ones, and this relative shortfall further reduces the marginal output of their chemical and organic fertilizers. Foreseeably, the aging of the rural labor force will have a negative impact on fertilizer efficiency, cost-cutting, and profit-making in the fruit and nut industries, which share the same management methods for pruning and flower/fruit thinning. Therefore, this paper offers relevant policy recommendations for the optimization of production tools, expansion of operation scale, and development of socialized services for the fruit industry, etc.
Model-based development aims to facilitate the development of embedded control systems by emphasizing the separation of the design level from the implementation level. Model-based design involves the use of multiple models that represent different views of a system, with different semantics of abstract system descriptions. Usually, in mechatronic systems, design proceeds by iterating model construction, model analysis, and model transformation. In a MATLAB/Simulink model, plant and controller behavior is simulated using graphical blocks that represent mathematical and logical constructs and process flow, after which software code is generated. A Simulink model is a representation of the design or implementation of a physical system that satisfies a set of requirements. A software component-based system aims to organize system architecture and behavior as a means of computation, communication and constraints, using computational blocks and aggregates for both discrete and continuous behavior, and different interconnection and execution disciplines for event-based and time-based controllers, in order to meet demands for more functionality at lower cost and under competing constraints. COMDES (Component-based Design of Software for Distributed Embedded Systems) is such a component-based system framework, developed by the software engineering group of the Mads Clausen Institute for Product Innovation (MCI), University of Southern Denmark. Once specified, the software model has to be analyzed. One way of doing that is to integrate the model back into Simulink S-functions via wrapper files and use Simulink's extensive simulation features, thus allowing an early exploration of the possible design choices across multiple disciplines. The paper describes a safe translation of a restricted set of MATLAB/Simulink blocks to COMDES software components, for both continuous and discrete behavior, and the transformation of the software system into the S
Business process modelling is the way business processes are expressed, and it is the foundation of business process analysis, reengineering, reorganization and optimization. It can not only help enterprises achieve internal information-system integration and reuse, but also help them collaborate externally. Starting from the basic Petri net, this paper adds time and cost factors to form an extended generalized stochastic Petri net, which is a formal description of the business process. A semi-formalized business process modelling algorithm based on Petri nets is proposed. Finally, a case from a logistics company shows that the modelling algorithm is correct and effective.
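The token-game semantics that such a time- and cost-extended Petri net builds on can be sketched in a few lines. The class names, annotations, and the toy logistics process below are illustrative assumptions, not the paper's actual model:

```python
# Minimal sketch of a Petri net whose transitions carry time and cost
# annotations (illustrative only; the paper's extended generalized
# stochastic Petri net is richer than this).

class Transition:
    def __init__(self, name, inputs, outputs, time=0.0, cost=0.0):
        self.name, self.inputs, self.outputs = name, inputs, outputs
        self.time, self.cost = time, cost

class PetriNet:
    def __init__(self, marking):
        self.marking = dict(marking)   # place -> token count
        self.elapsed = 0.0             # accumulated process time
        self.total_cost = 0.0          # accumulated process cost

    def enabled(self, t):
        return all(self.marking.get(p, 0) >= n for p, n in t.inputs.items())

    def fire(self, t):
        if not self.enabled(t):
            raise ValueError(f"{t.name} is not enabled")
        for p, n in t.inputs.items():
            self.marking[p] -= n
        for p, n in t.outputs.items():
            self.marking[p] = self.marking.get(p, 0) + n
        self.elapsed += t.time
        self.total_cost += t.cost

# Toy logistics process: receive -> pack -> ship
receive = Transition("receive", {"orders": 1}, {"received": 1}, time=1.0, cost=2.0)
pack    = Transition("pack",    {"received": 1}, {"packed": 1},  time=2.0, cost=5.0)
ship    = Transition("ship",    {"packed": 1},   {"shipped": 1}, time=3.0, cost=8.0)

net = PetriNet({"orders": 1})
for t in (receive, pack, ship):
    net.fire(t)
print(net.marking["shipped"], net.elapsed, net.total_cost)  # 1 6.0 15.0
```

Firing a transition consumes input tokens, produces output tokens, and accumulates the annotated time and cost; this is the basic mechanism that the stochastic extension generalizes.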
Peng, Tao; Wang, Wei; Rohde, Gustavo K; Murphy, Robert F
Biological shape modeling is an essential task that is required for systems biology efforts to simulate complex cell behaviors. Statistical learning methods have been used to build generative shape models based on reconstructive shape parameters extracted from microscope image collections. However, such parametric modeling approaches are usually limited to simple shapes and easily-modeled parameter distributions. Moreover, to maximize the reconstruction accuracy, significant effort is required to design models for specific datasets or patterns. We have therefore developed an instance-based approach to model biological shapes within a shape space built upon diffeomorphic measurement. We also designed a recursive interpolation algorithm to probabilistically synthesize new shape instances using the shape space model and the original instances. The method is quite generalizable and therefore can be applied to most nuclear, cell and protein object shapes, in both 2D and 3D.
Marian, Nicolae; Top, Søren
to be analyzed. One way of doing that is to integrate in wrapper files the model back into Simulink S-functions, and use its extensive simulation features, thus allowing an early exploration of the possible design choices over multiple disciplines. The paper describes a safe translation of a restricted set......Model based development aims to facilitate the development of embedded control systems by emphasizing the separation of the design level from the implementation level. Model based design involves the use of multiple models that represent different views of a system, having different semantics...... constructs and process flow, then software code is generated. A Simulink model is a representation of the design or implementation of a physical system that satisfies a set of requirements. A software component-based system aims to organize system architecture and behaviour as a means of computation...
Peltier, W. R.; Vettoretti, G.; Argus, D. F.
Global models of the glacial isostatic adjustment (GIA) process are designed to fit a wide range of geophysical and geomorphological observations that simultaneously constrain the internal viscoelastic structure of Earth's interior and the history of grounded ice thickness variations that have occurred over the most recent ice-age cycle of the Late Quaternary. The most recent refinement of the ICE-NG (VMX) series of such global models from the University of Toronto, ICE-6G_C (VM5a), has recently been slightly modified in its Antarctic component to produce a "_D" version of the structure. This version has been chosen to provide the boundary conditions for the next round of model-data inter-comparisons in the context of the international Paleoclimate Modeling Inter-comparison Project (PMIP). The output of PMIP will contribute to the Sixth Assessment Report (AR6) of the Intergovernmental Panel on Climate Change, which is now under way. A highly significant test of the utility of this latest model has recently been performed, focused upon the Dansgaard-Oeschger oscillation that was the primary source of climate variability during Marine Isotope Stage 3 (MIS3) of the most recent glacial cycle. By introducing the surface boundary conditions for paleotopography and paleobathymetry, land-sea mask and surface albedo into the NCAR CESM1 coupled climate model configured at full one-degree-by-one-degree CMIP5 resolution, together with the appropriate trace gas and orbital insolation forcing, we show that the millennium-timescale Dansgaard-Oeschger oscillation naturally develops following spin-up of the model into the glacial state.
National Aeronautics and Space Administration — The program focuses on turbulence modeling enhancements for predicting high-speed rocket base flows. A key component of the effort is the collection of high-fidelity...
Conventional hard automation, such as a linkage-based or a cam-driven system, provides high-speed capability and repeatability but not the flexibility required in many industrial applications. Conventional mechanisms, which are typically single-degree-of-freedom systems, are being increasingly replaced by multi-degree-of-freedom, multi-actuator systems driven by logic controllers. Although this new trend in sophistication provides greatly enhanced flexibility, there are many instances where the flexibility needs are exaggerated and the associated complexity is unnecessary. Traditional mechanism-based hard automation, on the other hand, can neither fulfill multi-task requirements nor be cost-effective, mainly due to a lack of methods and tools to design-in flexibility. This dissertation attempts to bridge this technological gap by developing Adjustable Robotic Mechanisms (ARMs), or 'programmable mechanisms', as a middle ground between high-speed hard automation and expensive serial jointed-arm robots. This research introduces the concept of adjustable robotic mechanisms for cost-effective manufacturing automation. A generalized analytical synthesis technique has been developed to support the computational design of ARMs, laying the theoretical foundation for the synthesis of adjustable mechanisms. The synthesis method developed in this dissertation, called generalized adjustable dyad and triad synthesis, advances the well-known Burmester theory in kinematics to a new level. While this method provides planar solutions, a novel patented scheme is utilized for converting prescribed three-dimensional motion specifications into sets of planar projections. This provides an analytical and computational tool for designing adjustable mechanisms that satisfy multiple sets of three-dimensional motion specifications. Several design issues were addressed, including adjustable parameter identification, branching defects, and mechanical errors. An efficient mathematical scheme for
Universities. The book continues by describing, analyzing and showing how NEWGIBM was implemented in SMEs in different industrial companies/networks. Based on this effort, the researchers try to describe and analyze the current context, experience of NEWGIBM and finally the emerging scenarios of NEWGIBM...... The NEWGIBM Cases Show? The Strategy Concept in Light of the Increased Importance of Innovative Business Models Successful Implementation of Global BM Innovation Globalisation Of ICT Based Business Models: Today And In 2020...
Wan, Jiang; Zabaras, Nicholas
Model reduction techniques have been widely used in modeling of high-dimensional stochastic input in uncertainty quantification tasks. However, the probabilistic modeling of random variables projected into reduced-order spaces presents a number of computational challenges. Due to the curse of dimensionality, the underlying dependence relationships between these random variables are difficult to capture. In this work, a probabilistic graphical model based approach is employed to learn the dependence by running a number of conditional independence tests using observation data. Thus a probabilistic model of the joint PDF is obtained and the PDF is factorized into a set of conditional distributions based on the dependence structure of the variables. The estimation of the joint PDF from data is then transformed to estimating conditional distributions under reduced dimensions. To improve the computational efficiency, a polynomial chaos expansion is further applied to represent the random field in terms of a set of standard random variables. This technique is combined with both linear and nonlinear model reduction methods. Numerical examples are presented to demonstrate the accuracy and efficiency of the probabilistic graphical model based stochastic input models. - Highlights: • Data-driven stochastic input models without the assumption of independence of the reduced random variables. • The problem is transformed to a Bayesian network structure learning problem. • Examples are given in flows in random media
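One building block named in the abstract above, the polynomial chaos expansion of a random input in terms of standard Gaussian variables, can be sketched as follows. The lognormal test input and the helper function are illustrative assumptions, not the authors' code:

```python
import math

import numpy as np
from numpy.polynomial.hermite_e import hermegauss, hermeval

def pce_coefficients(f, order, quad_pts=40):
    """Project f(xi), xi ~ N(0,1), onto probabilists' Hermite polynomials
    He_k via Gauss-Hermite quadrature: c_k = E[f * He_k] / E[He_k^2]."""
    x, w = hermegauss(quad_pts)          # nodes/weights for weight exp(-x^2/2)
    w = w / np.sqrt(2.0 * np.pi)         # normalize so the weights sum to 1
    fx = f(x)
    coeffs = []
    for k in range(order + 1):
        he_k = hermeval(x, [0.0] * k + [1.0])   # He_k evaluated at the nodes
        coeffs.append(np.sum(w * fx * he_k) / math.factorial(k))  # E[He_k^2] = k!
    return np.array(coeffs)

# Lognormal test input Y = exp(xi): the exact coefficients are e^(1/2) / k!
c = pce_coefficients(np.exp, order=6)
exact = np.exp(0.5) / np.array([math.factorial(k) for k in range(7)])
print(np.max(np.abs(c - exact)))
```

Once an input is represented this way, sampling the standard Gaussian germ and evaluating the truncated series reproduces the random field, which is the step the paper combines with the learned graphical-model factorization.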
Aoyama, Hideaki; Chakrabarti, Bikas; Chakraborti, Anirban; Ghosh, Asim
The primary goal of this book is to present the research findings and conclusions of physicists, economists, mathematicians and financial engineers working in the field of "Econophysics" who have undertaken agent-based modelling, comparison with empirical studies and related investigations. Most standard economic models assume the existence of the representative agent, who is “perfectly rational” and applies the utility maximization principle when taking action. One reason for this is the desire to keep models mathematically tractable: no tools are available to economists for solving non-linear models of heterogeneous adaptive agents without explicit optimization. In contrast, multi-agent models, which originated from statistical physics considerations, allow us to go beyond the prototype theories of traditional economics involving the representative agent. This book is based on the Econophys-Kolkata VII Workshop, at which many such modelling efforts were presented. In the book, leading researchers in the...
The handbook offers the first comprehensive reference guide to the interdisciplinary field of model-based reasoning. It highlights the role of models as mediators between theory and experimentation, and as educational devices, as well as their relevance in testing hypotheses and explanatory functions. The Springer Handbook merges philosophical, cognitive and epistemological perspectives on models with the more practical needs related to the application of this tool across various disciplines and practices. The result is a unique, reliable source of information that guides readers toward an understanding of different aspects of model-based science, such as the theoretical and cognitive nature of models, as well as their practical and logical aspects. The inferential role of models in hypothetical reasoning, abduction and creativity once they are constructed, adopted, and manipulated for different scientific and technological purposes is also discussed. Written by a group of internationally renowned experts in ...
Witus, Gary; Weathersby, Marshall
Visual target discrimination has occurred when the observer can say "I see a target THERE!" and can designate the target location. Target discrimination occurs when a perceived shape is sufficiently similar to one or more of the instances the observer has been trained on. Marr defined vision as "knowing what is where by seeing." Knowing "what" requires prior knowledge. Target discrimination requires model-based visual processing. Model-based signature metrics attempt to answer the question "to what extent does the target in the image resemble a training image?" Model-based signature metrics attempt to represent the effects of high-level top-down visual cognition, in addition to low-level bottom-up effects. Recent advances in realistic 3D target rendering and computer-vision object recognition have made model-based signature metrics more practical. The human visual system almost certainly does NOT use the same processing algorithms as computer-vision object recognition, but some processing elements and the overall effects are similar. It remains to be determined whether model-based metrics explain the variance in human performance. The purpose of this paper is to explain and illustrate the model-based approach to signature metrics.
We investigate the use of an extension of rule-based modelling for cellular signalling to create a structured space of model variants. This enables the incremental development of rule sets that start from simple mechanisms and which, by a gradual increase in agent and rule resolution, evolve into more detailed descriptions.
Gurbuz, Havva Gulay; Tekinerdogan, Bedir
Testing safety-critical systems is crucial, since a failure or malfunction may result in death or serious injury to people, damage to equipment, or harm to the environment. An important challenge in testing is the derivation of test cases that can identify the potential faults. Model-based testing adopts models of a
Model-based diagnosis is an area of abductive inference that uses a system model, together with observations about system behavior, to isolate sets of faulty components (diagnoses) that explain the observed behavior, according to some minimality criterion. This thesis presents greedy approximation
Augustijn-Beckers, Petronella; Doldersum, Tom; Useya, Juliana; Augustijn, Dionysius C.M.
This paper introduces a spatially explicit agent-based simulation model for micro-scale cholera diffusion. The model simulates both an environmental reservoir of naturally occurring V. cholerae bacteria and hyperinfectious V. cholerae. The objective of the research is to test whether runoff from open refuse
Henningsen, Arne; Fabricius, Ole; Olsen, Jakob Vesterlund
Based on a theoretical microeconomic model, we econometrically estimate investment utilization, adjustment costs, and technical efficiency in Danish pig farms based on a large unbalanced panel dataset. As our theoretical model indicates that adjustment costs are caused both by increased inputs...
In this paper, a new model-to-image framework to automatically align a single airborne image with existing 3D building models using geometric hashing is proposed. As a prerequisite for various applications such as data fusion, object tracking, change detection and texture mapping, the proposed registration method is used to determine accurate exterior orientation parameters (EOPs) of a single image. This model-to-image matching process consists of three steps: 1) feature extraction, 2) similarity measurement and matching, and 3) adjustment of the EOPs of the single image. For feature extraction, we propose two types of matching cues: edged corner points, representing the saliency of building corner points with their associated edges, and contextual relations among the edged corner points within an individual roof. These matching features are extracted from both the 3D building models and the single airborne image. A set of matched corners is found with a given proximity measure through geometric hashing, and optimal matches are then finally determined by maximizing a matching cost encoding contextual similarity between matching candidates. The final matched corners are used to adjust the EOPs of the single airborne image by the least-squares method based on collinearity equations. The results show that acceptable accuracy of the single image's EOPs is achievable with the proposed registration approach, as an alternative to the labour-intensive manual registration process.
With such excellent properties as nonlinear stiffness, adjustable vehicle height, and good vibration resistance, hydropneumatic suspension (HS) has been increasingly applied to heavy and engineering vehicles. Traditional modeling methods are still confined to simple models that do not take many factors into consideration. A hydropneumatic suspension model based on fractional order (HSM-FO) is built, exploiting the advantages of fractional-order (FO) calculus in viscoelastic material modeling and considering the mechanical properties of the multiphase medium of the HS. A detailed calculation method is then proposed based on the Oustaloup filtering approximation algorithm. The HSM-FO is implemented in Matlab/Simulink, and comparison among the fractional-order simulation curve, the integer-order curve, and real experimental data proves the feasibility and validity of the HSM-FO. The damping force of the suspension system under different fractional orders is also studied. At the end of the paper, several conclusions concerning the HSM-FO are drawn from analysis of the simulations.
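The Oustaloup filtering approximation mentioned above replaces the fractional operator s^alpha by a band-limited rational transfer function. A minimal sketch of the standard recursive formula follows; the fitting band, approximation order, and checking frequencies are illustrative assumptions:

```python
import numpy as np

def oustaloup(alpha, wb=0.01, wh=100.0, N=4):
    """Zeros, poles and gain of the standard Oustaloup recursive
    approximation of s**alpha over the frequency band [wb, wh] rad/s."""
    k = np.arange(-N, N + 1)
    zeros = wb * (wh / wb) ** ((k + N + 0.5 * (1 - alpha)) / (2 * N + 1))
    poles = wb * (wh / wb) ** ((k + N + 0.5 * (1 + alpha)) / (2 * N + 1))
    return zeros, poles, wh ** alpha

def freq_response(zeros, poles, gain, w):
    """Evaluate the rational approximation at s = j*w."""
    s = 1j * np.asarray(w, dtype=float)
    H = gain * np.ones_like(s)
    for z, p in zip(zeros, poles):
        H = H * (s + z) / (s + p)
    return H

alpha = 0.5
z, p, g = oustaloup(alpha)
w = np.logspace(-1, 1, 5)          # frequencies well inside the band
H = freq_response(z, p, g, w)
print(np.abs(H) / w ** alpha)      # close to 1 across the band
```

Inside the fitting band the magnitude of the rational approximation tracks the ideal |(jw)^alpha| = w^alpha closely, which is what makes the operator usable inside an ordinary Simulink-style simulation of the fractional-order suspension model.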
The bystander effect model of Brenner and Sachs fitted to lung cancer data in 11 cohorts of underground miners, and equivalence of fit of a linear relative risk model with adjustment for attained age and age at exposure
Little, M P
Bystander effects following exposure to α-particles have been observed in many experimental systems, and imply that linearly extrapolating low dose risks from high dose data might materially underestimate risk. Brenner and Sachs (2002 Int. J. Radiat. Biol. 78 593-604; 2003 Health Phys. 85 103-8) have recently proposed a model of the bystander effect which they use to explain the inverse dose rate effect observed for lung cancer in underground miners exposed to radon daughters. In this paper we fit the model of the bystander effect proposed by Brenner and Sachs to 11 cohorts of underground miners, taking account of the covariance structure of the data and the period of latency between the development of the first pre-malignant cell and clinically overt cancer. We also fitted a simple linear relative risk model, with adjustment for age at exposure and attained age. The methods that we use for fitting both models are different from those used by Brenner and Sachs, in particular taking account of the covariance structure, which they did not, and omitting certain unjustifiable adjustments to the miner data. The fit of the original model of Brenner and Sachs (with 0 y period of latency) is generally poor, although it is much improved by assuming a 5 or 6 y period of latency from the first appearance of a pre-malignant cell to cancer. The fit of this latter model is equivalent to that of a linear relative risk model with adjustment for age at exposure and attained age. In particular, both models are capable of describing the observed inverse dose rate effect in this data set
Knudsen, Thomas; Andersen, Rune Carbuhn
We present a filtering method for digital terrain models (DTMs). The method is based on mathematical morphological filtering within gradient (slope) defined domains. The intention of the filtering procedure is to improve the cartographic quality of height contours generated from a DTM based on ...
E. Quaeghebeur (Erik); G. de Cooman; F. Hermans (Felienne)
We develop a framework for modelling and reasoning with uncertainty based on accept and reject statements about gambles. It generalises the frameworks found in the literature based on statements of acceptability, desirability, or favourability and clarifies their relative position. Next
Sasgen, Ingo; Martín-Español, Alba; Horvath, Alexander; Klemann, Volker; Petrie, Elizabeth J.; Wouters, Bert; Horwath, Martin; Pail, Roland; Bamber, Jonathan L.; Clarke, Peter J.; Konrad, Hannes; Wilson, Terry; Drinkwater, Mark R.
The poorly known correction for the ongoing deformation of the solid Earth caused by glacial isostatic adjustment (GIA) is a major uncertainty in determining the mass balance of the Antarctic ice sheet from measurements of satellite gravimetry and, to a lesser extent, satellite altimetry. In the past decade, much progress has been made in consistently modeling ice sheet and solid Earth interactions; however, forward-modeling solutions of GIA in Antarctica remain uncertain due to the sparsity of constraints on the ice sheet evolution, as well as the Earth's rheological properties. An alternative approach towards estimating GIA is the joint inversion of multiple satellite data - namely, satellite gravimetry, satellite altimetry and GPS, which reflect, with different sensitivities, trends in recent glacial changes and GIA. Crucial to the success of this approach is the accuracy of the space-geodetic data sets. Here, we present reprocessed rates of surface-ice elevation change (Envisat/Ice, Cloud, and land Elevation Satellite, ICESat; 2003-2009), gravity field change (Gravity Recovery and Climate Experiment, GRACE; 2003-2009) and bedrock uplift (GPS; 1995-2013). The data analysis is complemented by forward modeling of viscoelastic response functions to disc load forcing, allowing us to relate GIA-induced surface displacements with gravity changes for different rheological parameters of the solid Earth. The data and modeling results presented here are available in the PANGAEA database (https://doi.org/10.1594/PANGAEA.875745). The data sets are the input streams for the joint inversion estimate of present-day ice-mass change and GIA, focusing on Antarctica. However, the methods, code and data provided in this paper can be used to solve other problems, such as volume balances of the Antarctic ice sheet, or can be applied to other geographical regions in the case of the viscoelastic response functions. This paper
Tokuda, T; Jaakkola, H; Yoshida, N
Because of our ever-increasing use of and reliance on technology and information systems, information modelling and knowledge bases continue to be important topics in the academic communities concerned with data handling and computer science. As the information itself becomes more complex, so do the levels of abstraction and the databases themselves. This book is part of the series Information Modelling and Knowledge Bases, which concentrates on a variety of themes in the important domains of conceptual modeling, design and specification of information systems, and multimedia information modelin
Halle, Lars Halvard; Nicaise, Johannes
Presenting the first systematic treatment of the behavior of Néron models under ramified base change, this book can be read as an introduction to various subtle invariants and constructions related to Néron models of semi-abelian varieties, motivated by concrete research problems and complemented...... on Néron component groups, Edixhoven’s filtration and the base change conductor of Chai and Yu, and we study these invariants using various techniques such as models of curves, sheaves on Grothendieck sites and non-archimedean uniformization. We then apply our results to the study of motivic zeta functions...
In order to reach robust and simplified yet accurate prediction models, energy-based principles are commonly used in many fields of acoustics, especially in building acoustics. This includes simple energy flow models, the framework of statistical energy analysis (SEA), as well as more elaborate...... principles such as, e.g., wave intensity analysis (WIA). The European standards for building acoustic predictions, the EN 12354 series, are based on energy flow and SEA principles. In the present paper, different energy-based prediction models are discussed and critically reviewed. Special attention is placed...... on underlying basic assumptions, such as diffuse fields, high modal overlap, resonant fields being dominant, etc., and the consequences of these in terms of limitations in the theory and in the practical use of the models....
42 CFR 422.310 Risk adjustment data. (a) Definition of risk adjustment data. Risk adjustment data are all data that are used in the development and application of a risk adjustment payment model. (b) Data... (Title 42, Public Health, vol. 3, revised 2010-10-01)
With the application of simulation technology to large-scale, multi-field problems, multi-domain unified modeling has become an effective way to solve them. This paper introduces several basic methods and advantages of multidisciplinary modeling, and focuses on simulation based on the Modelica language. Modelica/MWorks is newly developed simulation software featuring an object-oriented, non-causal language for modeling large, multi-domain systems, which makes models easier to grasp, develop and maintain. This article presents a single-degree-of-freedom mechanical vibration system built in MWorks using Modelica's connection mechanism. This multi-domain modeling approach is simple and feasible, offers high reusability, stays closer to the physical system, and has many other advantages.
Shahabpoor, Erfan; Pavic, Aleksandar; Racic, Vitomir
The interaction of walking people with large vibrating structures, such as footbridges and floors, in the vertical direction is an important yet challenging phenomenon to describe mathematically. Several models have been proposed in the literature to simulate the interaction of stationary people with vibrating structures. However, research on moving (walking) human models, explicitly identified for vibration serviceability assessment of civil structures, is still sparse. In this study, the results of a comprehensive set of FRF-based modal tests were used, in which over a hundred test subjects walked in different group sizes and walking patterns on a test structure. An agent-based model was used to simulate discrete traffic-structure interactions. The occupied-structure modal parameters found in the tests were used to identify the parameters of a walking individual's single-degree-of-freedom (SDOF) mass-spring-damper model using a 'reverse engineering' methodology. The analysis of the results suggested that a normal distribution with an average of μ = 2.85 Hz and standard deviation of σ = 0.34 Hz can describe the human SDOF model natural frequency. Similarly, a normal distribution with μ = 0.295 and σ = 0.047 can describe the human model damping ratio. Compared to previous studies, the agent-based modelling methodology proposed in this paper offers significant flexibility in simulating multi-pedestrian walking traffic, external forces, and different mechanisms of human-structure and human-environment interaction at the same time.
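The reported distributions can be turned into sampled SDOF walker models directly. In the sketch below, the modal mass of 70 kg is an assumption (the study identifies frequency and damping ratio, not mass):

```python
import numpy as np

rng = np.random.default_rng(0)

MU_F, SIG_F = 2.85, 0.34        # natural frequency [Hz], from the study
MU_Z, SIG_Z = 0.295, 0.047      # damping ratio, from the study

def sample_walker(m=70.0):
    """Draw one SDOF walker model (f [Hz], zeta, stiffness k [N/m],
    damping c [N s/m]); the modal mass m is an assumed value."""
    f = rng.normal(MU_F, SIG_F)
    zeta = rng.normal(MU_Z, SIG_Z)
    wn = 2.0 * np.pi * f                     # natural circular frequency [rad/s]
    return f, zeta, m * wn ** 2, 2.0 * zeta * m * wn

samples = np.array([sample_walker() for _ in range(20000)])
f_mean, zeta_mean = samples[:, 0].mean(), samples[:, 1].mean()
print(round(f_mean, 2), round(zeta_mean, 3))
```

Each draw yields one agent's mass-spring-damper parameters, so a crowd of any size can be populated by repeated sampling, in the spirit of the agent-based traffic simulation described above.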
the conceptual model on which it is based. In this study, a number of model structural shortcomings were identified, such as a lack of dissolved phosphorus transport via infiltration excess overland flow, potential discrepancies in the particulate phosphorus simulation and a lack of spatial granularity. (4) Conceptual challenges, as conceptual models on which predictive models are built are often outdated, having not kept up with new insights from monitoring and experiments. For example, soil solution dissolved phosphorus concentration in INCA-P is determined by the Freundlich adsorption isotherm, which could potentially be replaced using more recently-developed adsorption models that take additional soil properties into account. This checklist could be used to assist in identifying why model performance may be poor or unreliable. By providing a model evaluation framework, it could help prioritise which areas should be targeted to improve model performance or model credibility, whether that be through using alternative calibration techniques and statistics, improved data collection, improving or simplifying the model structure or updating the model to better represent current understanding of catchment processes.
This paper explores and ranks the key performance indicators of multi-criteria decision-making in the process of selecting renewable energy sources (RES). Different categories of factors (e.g., political, legal, technological, economic and financial, sociocultural, and physical) are crucial for the analysis of such projects. In this paper, we apply the fuzzy analytic hierarchy process (fuzzy AHP), a mathematical method, in order to analyze the main criteria for such projects, which include the environment, the organizational management structure, the project participants, and the participants' relationship with the performance indicators. In order of ranking, the indicators are the following: time, costs, quality, monitoring of the project's sustainability, user feedback, and users' health and safety. The aim of this paper is to point out the necessity of creating an adjustable model for renewable energy projects in order to proceed with the sustainable development of the southeastern part of Serbia. This model should guide the creation process for such a project, with the aim of increasing its energy efficiency.
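The priority-vector step at the core of AHP can be sketched with a crisp pairwise-comparison matrix; the paper uses the fuzzy extension (triangular fuzzy judgements), and the matrix of judgements below is a hypothetical example:

```python
import numpy as np

criteria = ["time", "cost", "quality"]
# Hypothetical reciprocal pairwise-comparison matrix on Saaty's 1-9 scale
A = np.array([
    [1.0,     3.0,     5.0],
    [1 / 3.0, 1.0,     3.0],
    [1 / 5.0, 1 / 3.0, 1.0],
])

# Priority weights = normalized dominant eigenvector of A
eigvals, eigvecs = np.linalg.eig(A)
i = np.argmax(eigvals.real)
w = np.abs(eigvecs[:, i].real)
w = w / w.sum()

# Consistency check (random index RI = 0.58 for n = 3)
lam_max = eigvals.real[i]
CI = (lam_max - len(A)) / (len(A) - 1)
CR = CI / 0.58
ranking = sorted(zip(criteria, w), key=lambda t: -t[1])
print(ranking, round(CR, 3))
```

A consistency ratio CR below 0.1 is conventionally taken to mean the judgements are acceptably consistent; the fuzzy variant applies the same idea to fuzzy-number judgements before defuzzifying the weights.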
This paper deals with a novel scheme for microclimate control in historical exhibition rooms, inhibiting moisture sorption phenomena that are inadmissible from the preventive conservation point of view. The impact of air humidity is the most significant harmful exposure for a great deal of the cultural heritage kept in remote historical buildings. Leaving the interior temperature to run almost its spontaneous yearly cycle, the proposed non-linear model-based control protects exhibits from harmful variations in moisture content by compensating for temperature drifts with an adequate adjustment of the air humidity. Implemented in a medieval interior since 1999, the proposed microclimate control has proved capable of permanently maintaining a constant, desirable moisture content in organic and porous materials in the interior of the building.
Chang, Dah-Chung; Wu, Wen-Rong
The adaptive contrast enhancement (ACE) algorithm is a widely used image enhancement method that needs a contrast gain to adjust the high-frequency components of an image. In the literature, the gain is usually either inversely proportional to the local standard deviation (LSD) or a constant, but these choices cause two problems in practical applications: noise over-enhancement and ringing artifacts. In this paper a new gain is developed, based on Hunt's Gaussian image model, to prevent these two defects. The new gain is a nonlinear function of the LSD and has the desired characteristic of emphasizing the LSD regions in which details are concentrated. We have applied the new ACE algorithm to chest x-ray images, and the simulations show the effectiveness of the proposed algorithm.
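The structure of an ACE filter with a nonlinear, LSD-dependent gain can be sketched as follows. The particular saturating gain curve and window parameters below are illustrative stand-ins for the gain the paper derives from Hunt's Gaussian image model:

```python
import numpy as np
from numpy.lib.stride_tricks import sliding_window_view

def ace(image, win=7, g_max=3.0, sigma0=5.0):
    """Adaptive contrast enhancement: out = mean + gain * (x - mean).
    The gain is a nonlinear, saturating function of the local standard
    deviation (LSD): near 1 in flat/noisy areas, bounded at strong edges."""
    x = np.asarray(image, dtype=float)
    pad = win // 2
    xp = np.pad(x, pad, mode="reflect")
    windows = sliding_window_view(xp, (win, win))
    mean = windows.mean(axis=(-2, -1))                 # local mean
    lsd = windows.std(axis=(-2, -1))                   # local standard deviation
    gain = 1.0 + (g_max - 1.0) * lsd / (lsd + sigma0)  # nonlinear in LSD
    return mean + gain * (x - mean)

img = np.zeros((16, 16))
img[:, 8:] = 100.0                  # a vertical step edge
out = ace(img)
print(out[0, 0], out.min(), out.max())
```

Because the gain tends to 1 where the LSD vanishes, flat regions pass through unchanged, while the bounded gain at large LSD limits the overshoot that an LSD-inverse gain would produce at edges.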
Su, Jianxun; Lu, Yao; Zhang, Hui; Li, Zengrui; (Lamar) Yang, Yaoqing; Che, Yongxing; Qi, Kainan
In this paper, an ultra-wideband, wide-angle, and polarization-insensitive metasurface is designed, fabricated, and characterized for suppressing specular electromagnetic wave reflection, or backward radar cross section (RCS). A square ring structure is chosen as the basic meta-atom. A new physical mechanism based on size adjustment of the basic meta-atoms is proposed for ultra-wideband manipulation of electromagnetic (EM) waves. Based on a hybrid array pattern synthesis (APS) and particle swarm optimization (PSO) algorithm, the selection and distribution of the basic meta-atoms are optimized simultaneously to obtain ultra-wideband diffusion scattering patterns. The metasurface achieves excellent RCS reduction over an ultra-wide frequency range under x- and y-polarized normal incidence. The newly proposed mechanism greatly extends the bandwidth of the RCS reduction. Simulation and experimental results show that the metasurface achieves ultra-wideband, polarization-insensitive specular reflection reduction for both normal and wide-angle incidence. The proposed methodology opens up a new route for realizing ultra-wideband diffusion scattering of EM waves, which is important for stealth and other microwave applications in the future.
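A particle swarm optimizer of the kind used for the meta-atom distribution step can be sketched as follows. The objective here is a toy sphere function standing in for the actual scattering-pattern cost, and the coefficients are conventional PSO defaults rather than the paper's settings.

```python
import random

def pso(cost, dim, n_particles=20, iters=60, seed=1, lo=-5.0, hi=5.0):
    """Plain global-best PSO with position clamping (standard coefficients)."""
    rng = random.Random(seed)
    w, c1, c2 = 0.72, 1.49, 1.49
    xs = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(n_particles)]
    vs = [[0.0] * dim for _ in range(n_particles)]
    pbest = [x[:] for x in xs]
    pcost = [cost(x) for x in xs]
    g = min(range(n_particles), key=lambda i: pcost[i])
    gbest, gcost = pbest[g][:], pcost[g]
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                vs[i][d] = (w * vs[i][d]
                            + c1 * r1 * (pbest[i][d] - xs[i][d])
                            + c2 * r2 * (gbest[d] - xs[i][d]))
                xs[i][d] = min(hi, max(lo, xs[i][d] + vs[i][d]))
            c = cost(xs[i])
            if c < pcost[i]:
                pbest[i], pcost[i] = xs[i][:], c
                if c < gcost:
                    gbest, gcost = xs[i][:], c
    return gbest, gcost

# Toy stand-in for the diffusion-pattern cost: minimize the sphere function.
best, best_cost = pso(lambda x: sum(v * v for v in x), dim=4)
```

In the paper's setting the cost would instead score the synthesized array pattern (APS), with each particle encoding a candidate meta-atom layout.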
Owing to advantages such as ease of interpretation, completeness through mitigation of occluded areas, and system accessibility, aerial oblique images have found their place in numerous civil applications. For these applications, however, high-quality orientation data are essential. A fully automatic tie-point extraction procedure is developed to precisely orient large blocks of oblique aerial images, in which a refined ASIFT algorithm and a window-based multiple-viewing image matching (WMVM) method are combined. In this approach, the WMVM method is based on the concept of multi-image matching guided from object space and allows reconstruction of 3D objects by matching all available images simultaneously; a square correlation window in the reference image can be correlated with windows of different size, shape, and orientation in the search images. Another key algorithm, a combined bundle adjustment method with gross-error detection and removal that can simultaneously orient the oblique and nearly vertical images, is then presented. Finally, through experiments with real oblique images over several test areas, the performance and accuracy of the proposed method are studied and presented.
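The correlation-window matching at the heart of such tie-point extraction can be sketched in a simplified 1-D form. The actual WMVM method correlates 2-D windows of varying size, shape, and orientation across multiple images; the profiles below are hypothetical.

```python
def ncc(a, b):
    """Normalized cross-correlation of two equal-length windows."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    num = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    da = sum((x - ma) ** 2 for x in a) ** 0.5
    db = sum((y - mb) ** 2 for y in b) ** 0.5
    return num / (da * db) if da and db else 0.0

def best_match(ref, search, w):
    """Slide a window of width w along 'search'; return the offset with
    the highest correlation to the reference window, plus its score."""
    scores = [ncc(ref, search[o:o + w]) for o in range(len(search) - w + 1)]
    best = max(range(len(scores)), key=lambda o: scores[o])
    return best, scores[best]

# Hypothetical 1-D intensity profiles: the reference pattern reappears,
# slightly distorted, at offset 2 of the search profile.
ref = [10, 50, 90, 50, 10]
search = [0, 0, 12, 52, 91, 49, 11, 0, 0]
offset, score = best_match(ref, search, len(ref))
```

Because NCC normalizes out local mean and contrast, the slightly distorted copy still scores close to 1.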
Kwon, Hyun-Han; So, Byung-Jin; Kim, Seong-Hyeon; Kim, Byung-Seop
The Smart Water Grid (SWG) concept has emerged globally over the last decade and has gained significant recognition in South Korea. In particular, there has been growing interest in water demand forecasting and optimal pump operation, which has led to various studies on energy saving and improvement of water supply reliability. Existing water demand forecasting models fall into two groups in terms of how they model and predict behavior in time series. One group considers embedded patterns such as seasonality, periodicity, and trends; the other comprises autoregressive models using short-memory Markovian processes (Emmanuel et al., 2012). The main disadvantage of these models is that the predictability of water demand at the sub-daily scale is limited because the system is nonlinear. In this regard, this study aims to develop a nonlinear ensemble model for hourly water demand forecasting that allows us to estimate uncertainties across different model classes. The proposed model consists of two parts. One is a multi-model scheme based on a combination of independent prediction models. The other is a cross-validation scheme, the bagging approach introduced by Breiman (1996), used to derive weighting factors for the individual models. The individual forecasting models used in this study are a linear regression model, polynomial regression, multivariate adaptive regression splines (MARS), and a support vector machine (SVM). The concepts are demonstrated through application to observations from water plants at several locations in South Korea. Keywords: water demand, nonlinear model, ensemble forecasting model, uncertainty. Acknowledgements: This subject is supported by the Korea Ministry of Environment under the "Projects for Developing Eco-Innovation Technologies" (GT-11-G-02-001-6).
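The bagging-based weighting of independent forecasting models can be sketched as follows, with two deliberately simple stand-in models (ordinary least squares and a constant-mean predictor) fitted to synthetic hourly demand. The data, the stand-in models, and the inverse-bootstrap-MSE weighting rule are illustrative assumptions, not the study's exact procedure.

```python
import random

def fit_linear(xs, ys):
    """Ordinary least squares for y = a + b*x; returns a predictor."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    a = my - b * mx
    return lambda x: a + b * x

def fit_mean(xs, ys):
    """Trivial baseline: always predict the sample mean."""
    m = sum(ys) / len(ys)
    return lambda x: m

def bagged_weights(models, xs, ys, n_boot=200, seed=0):
    """Weight each fitted model by its inverse mean bootstrap MSE."""
    rng = random.Random(seed)
    inv = []
    for m in models:
        mses = []
        for _ in range(n_boot):
            idx = [rng.randrange(len(xs)) for _ in range(len(xs))]
            mses.append(sum((ys[i] - m(xs[i])) ** 2 for i in idx) / len(idx))
        inv.append(1.0 / (sum(mses) / n_boot))
    s = sum(inv)
    return [v / s for v in inv]

# Hypothetical hourly demand: linear trend plus alternating noise.
hours = list(range(24))
demand = [100 + 2.0 * h + ((-1) ** h) * 1.5 for h in hours]
models = [fit_linear(hours, demand), fit_mean(hours, demand)]
w = bagged_weights(models, hours, demand)
forecast = lambda x: sum(wi * m(x) for wi, m in zip(w, models))
```

The trend-following model earns nearly all the weight, so the ensemble forecast tracks the trend while the weights themselves quantify relative model skill.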
Nine out of 10 American adults believe Jesus was a real person, and almost two-thirds have made a commitment to Jesus Christ. Research further supports that spiritual beliefs and religious practices influence overall health and well-being. Christian nurses need a practice model that helps them serve as kingdom nurses. This article introduces the Agape Model, based on the agape love and characteristics of Christ, with which Christian nurses may align their practice to provide Christ-centered care.
Oliver, T. Emerson; Anzalone, Evan; Geohagan, Kevin; Bernard, Bill; Park, Thomas
The SLS Program chose to implement a Model-based Design and Model-based Requirements approach for managing component design information and system requirements. This approach differs from previous large-scale design efforts at Marshall Space Flight Center where design documentation alone conveyed information required for vehicle design and analysis and where extensive requirements sets were used to scope and constrain the design. The SLS Navigation Team has been responsible for the Program-controlled Design Math Models (DMMs) which describe and represent the performance of the Inertial Navigation System (INS) and the Rate Gyro Assemblies (RGAs) used by Guidance, Navigation, and Controls (GN&C). The SLS Navigation Team is also responsible for the navigation algorithms. The navigation algorithms are delivered for implementation on the flight hardware as a DMM. For the SLS Block 1-B design, the additional GPS Receiver hardware is managed as a DMM at the vehicle design level. This paper provides a discussion of the processes and methods used to engineer, design, and coordinate engineering trades and performance assessments using SLS practices as applied to the GN&C system, with a particular focus on the Navigation components. These include composing system requirements, requirements verification, model development, model verification and validation, and modeling and analysis approaches. The Model-based Design and Requirements approach does not reduce the effort associated with the design process versus previous processes used at Marshall Space Flight Center. Instead, the approach takes advantage of overlap between the requirements development and management process, and the design and analysis process by efficiently combining the control (i.e. the requirement) and the design mechanisms. The design mechanism is the representation of the component behavior and performance in design and analysis tools. The focus in the early design process shifts from the development and
An adjustable microchip holder for holding a microchip is provided having a plurality of displaceable interconnection pads for connecting the connection holes of a microchip with one or more external devices or equipment. The adjustable microchip holder can fit different sizes of microchips with ...
The milk price paid by a cooperative institution to the farmer does not fully cover the production cost, even though dairy farmers face various risks and uncertainties in conducting their business. The highest risk in the milk supply chain lies in the activities at the farm. This study was designed to formulate a model for calculating the milk price at the farmer's level based on risk. Risks that occur on farms include the risks of cow breeding, sanitation, health care, cattle feed management, milking, and milk sales. The research was conducted at farms in the West Java region. There were five main stages in the preparation of this model: (1) identification and analysis of influential factors, (2) development of a conceptual model, (3) structural analysis and determination of production costs, (4) calculation of production costs with risk factors, and (5) a risk-based milk pricing model. This research establishes a relationship between the risks on smallholder dairy farms and the production costs to be incurred by the farmers. A formulation of the risk adjustment factor for the variable costs of production on dairy farms was also obtained. The difference between production costs with risk and total production costs without risk was about 8% to 10%. It could be concluded that the basic milk price proposed based on this research was around IDR 4,250-IDR 4,350/L for ownership of 3 to 4 cows. Entering this risk value into the calculation of production costs is expected to increase farmer income.
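The risk-loading idea can be illustrated with a toy calculation. The per-activity premiums and the base cost below are hypothetical, chosen only so that the total loading falls in the 8-10% range reported in the study; the actual risk adjustment formulation is not reproduced here.

```python
# Hypothetical base variable cost per liter (IDR) and per-activity risk
# premiums expressed as fractions of that cost.
base_variable_cost = 3900.0

risk_premiums = {
    "breeding": 0.02,
    "sanitation": 0.01,
    "health_care": 0.02,
    "feed_management": 0.02,
    "milking_and_sales": 0.02,
}

loading = sum(risk_premiums.values())               # total risk loading
cost_with_risk = base_variable_cost * (1 + loading)  # risk-adjusted cost/L
```

With these illustrative numbers the loading is 9% and the risk-adjusted cost lands at IDR 4,251/L, inside the proposed IDR 4,250-4,350/L band.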
Halle, Lars Halvard; Nicaise, Johannes
Presenting the first systematic treatment of the behavior of Néron models under ramified base change, this book can be read as an introduction to various subtle invariants and constructions related to Néron models of semi-abelian varieties, motivated by concrete research problems and complemented with explicit examples. Néron models of abelian and semi-abelian varieties have become an indispensable tool in algebraic and arithmetic geometry since Néron introduced them in his seminal 1964 paper. Applications range from the theory of heights in Diophantine geometry to Hodge theory. We focus specifically on Néron component groups, Edixhoven's filtration and the base change conductor of Chai and Yu, and we study these invariants using various techniques such as models of curves, sheaves on Grothendieck sites and non-archimedean uniformization. We then apply our results to the study of motivic zeta functions...
Peuker, Frank; Maufroy, Christophe; Seyfarth, André
The dynamics of the center of mass (CoM) in the sagittal plane in humans and animals during running is well described by the spring-loaded inverted pendulum (SLIP). With appropriate parameters, SLIP running patterns are stable, and these models can recover from perturbations without the need for corrective strategies, such as the application of additional forces. Rather, it is sufficient to adjust the leg to a fixed angle relative to the ground. In this work, we consider the extension of the SLIP to three dimensions (3D SLIP) and investigate feed-forward strategies for leg adjustment during the flight phase. As in the SLIP model, the leg is placed at a fixed angle. We extend the scope of possible reference axes from only fixed horizontal and vertical axes to include the CoM velocity vector as a movement-related reference, resulting in six leg-adjustment strategies. Only leg-adjustment strategies that include the CoM velocity vector produced stable running and large parameter domains of stability. The ability of the model to recover from perturbations along the direction of motion (directional stability) depended on the strategy for lateral leg adjustment. Specifically, asymptotic and neutral directional stability was observed for strategies based on the global reference axis and the velocity vector, respectively. Additional features of velocity-based leg adjustment are running at arbitrarily low speed (kinetic energy) and the emergence of large domains of stable 3D running that are smoothly transferred to 2D SLIP stability and even to 1D SLIP hopping. One of the additional leg-adjustment strategies represented a large convex region of parameters where stable and robust hopping and running patterns exist. Therefore, this strategy is a promising candidate for implementation in engineering applications such as robots. In a preliminary comparison, the model predictions were in good agreement with the experimental data, suggesting that the 3D SLIP is an
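The difference between global-axis and velocity-vector leg adjustment can be sketched as pure flight-phase geometry: the leg direction is obtained by tilting a horizontal reference direction downward by a fixed angle. This parameterization is an illustrative simplification, not the paper's exact definition of its six strategies.

```python
import math

def leg_direction_velocity_based(v, alpha):
    """Unit leg vector: tilt the horizontal CoM velocity direction downward
    by angle alpha (movement-related reference). v = (vx, vy) horizontal."""
    vx, vy = v
    h = math.hypot(vx, vy)
    ex, ey = (vx / h, vy / h) if h > 0 else (1.0, 0.0)
    return (math.cos(alpha) * ex, math.cos(alpha) * ey, -math.sin(alpha))

def leg_direction_global(alpha, heading=0.0):
    """Same tilt, but referenced to a fixed global axis (heading in rad)."""
    return (math.cos(alpha) * math.cos(heading),
            math.cos(alpha) * math.sin(heading),
            -math.sin(alpha))

# With motion along +x, the two strategies place the leg identically ...
v = (4.0, 0.0)
a = math.radians(68.0)   # hypothetical fixed leg angle
d_vel = leg_direction_velocity_based(v, a)
d_glob = leg_direction_global(a)
# ... but after a lateral velocity perturbation only the velocity-based
# leg follows the new direction of motion; the global-axis leg does not.
v_pert = (4.0, 0.8)
d_vel_p = leg_direction_velocity_based(v_pert, a)
```

This is the geometric reason the velocity-based strategies can react to lateral perturbations in a feed-forward way, while global-axis placement ignores them.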
Pub. Co. 1985. Castillo, J.M. Aproximación mediante procedimientos de Inteligencia Artificial al planeamiento táctico [An approach to tactical planning using Artificial Intelligence procedures]. Doctoral Thesis... been developed under the same conceptual model and using similar Artificial Intelligence tools. We use four different stimulus/response agents in... The conceptual model is built on the basis of agent theory. To implement the different agents, we have used Artificial Intelligence techniques such
Olsen, Mikkel Damgaard; Herskind, Anna; Nielsen, Jens Bo
Even though motion tracking is a widely used technique to analyze and measure human movements, only a few studies focus on motion tracking of infants. In recent years, a number of studies have emerged focusing on analyzing the motion patterns of infants using computer vision. Most of these studies... a model that resembles the body surface of an infant, where the model is based on simple geometric shapes and a hierarchical skeleton model.
Jawad Alkhateeb; Khaled Musa
The quality of software is essential to corporations in making their commercial software. Good or poor software quality plays an important role in systems such as embedded systems, real-time systems, and control systems, which figure prominently in human life. Software products and commercial off-the-shelf software are usually programmed based on a software quality model. In the software engineering field, each quality model contains a set of attributes or characteristics that drives i...
S. Q. Wan
The inverse problem of using the information in historical data to estimate model errors is one of the frontier research topics in science. In this study, we investigate such a problem using the classic Lorenz (1963) equation as a prediction model and the Lorenz equation with a periodic evolutionary function as an accurate representation of reality to generate "observational data."
On the basis of the intelligent features of evolutionary modeling (EM), including self-organization, self-adaptation, and self-learning, the dynamic information contained in historical data can be identified and extracted automatically by computer. Thereby, a new approach to estimating model errors based on EM is proposed in the present paper. Numerical tests demonstrate the ability of the new approach to correct model structural errors; in effect, it combines statistics and dynamics to a certain extent.
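A twin-experiment setup of the kind described, a Lorenz (1963) prediction model versus a "reality" with an added periodic term, can be sketched as follows. The sinusoidal forcing form, its amplitude, and all numerical settings are illustrative assumptions, not the paper's exact evolutionary function.

```python
import math

def lorenz(t, s, forcing=0.0):
    """Lorenz (1963) system; 'forcing' adds a periodic term to the first
    equation as a stand-in for the structural model error (illustrative)."""
    x, y, z = s
    sigma, rho, beta = 10.0, 28.0, 8.0 / 3.0
    return (sigma * (y - x) + forcing * math.sin(t),
            x * (rho - z) - y,
            x * y - beta * z)

def rk4_step(f, t, s, dt, **kw):
    """Classic fourth-order Runge-Kutta step for a tuple-valued state."""
    k1 = f(t, s, **kw)
    k2 = f(t + dt / 2, tuple(si + dt / 2 * ki for si, ki in zip(s, k1)), **kw)
    k3 = f(t + dt / 2, tuple(si + dt / 2 * ki for si, ki in zip(s, k2)), **kw)
    k4 = f(t + dt, tuple(si + dt * ki for si, ki in zip(s, k3)), **kw)
    return tuple(si + dt / 6 * (a + 2 * b + 2 * c + d)
                 for si, a, b, c, d in zip(s, k1, k2, k3, k4))

dt, steps = 0.01, 500
truth = model = (1.0, 1.0, 1.0)
for n in range(steps):
    t = n * dt
    truth = rk4_step(lorenz, t, truth, dt, forcing=2.0)   # "reality"
    model = rk4_step(lorenz, t, model, dt, forcing=0.0)   # imperfect model
error = sum((a - b) ** 2 for a, b in zip(truth, model)) ** 0.5
```

The growing gap `error` between the two trajectories is exactly the signal an EM-based procedure would mine from historical data to reconstruct the missing forcing term.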
We discuss two image-based 3D modeling methods based on a multiresolution evolution of a volumetric function's level set. In the first method, the role of the level set implosion is to fuse ("sew" and "stitch") together several partial reconstructions (depth maps) into a closed model. In the second, the level set's implosion is steered directly by the texture mismatch between views. Both solutions share the characteristic of operating in an adaptive multiresolution fashion, in order to boost computational efficiency and robustness.
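A minimal level-set evolution of the kind both methods rely on can be sketched on a grid: a circular front contracts ("implodes") under a constant inward normal speed via phi_t + F|grad phi| = 0. Central differences suffice for this smooth signed-distance example; real implementations, like the paper's, use more careful schemes and multiresolution.

```python
import math

def evolve(phi, F, dt, h):
    """One explicit step of phi_t + F*|grad phi| = 0 on a square grid
    (boundary values held fixed; the front stays well inside)."""
    n = len(phi)
    new = [row[:] for row in phi]
    for i in range(1, n - 1):
        for j in range(1, n - 1):
            gx = (phi[i + 1][j] - phi[i - 1][j]) / (2 * h)
            gy = (phi[i][j + 1] - phi[i][j - 1]) / (2 * h)
            new[i][j] = phi[i][j] - dt * F * math.hypot(gx, gy)
    return new

h = 0.1
xs = [h * (i - 20) for i in range(41)]        # grid from -2 to 2, xs[20] = 0
R0 = 1.0
phi = [[math.hypot(x, y) - R0 for y in xs] for x in xs]  # signed distance

F, dt = -0.5, 0.05     # negative normal speed => implosion (front shrinks)
for _ in range(10):
    phi = evolve(phi, F, dt, h)
# After t = 0.5 the zero level set should sit near radius R0 + F*t = 0.75.
```

In the paper the constant speed F is replaced by data terms (depth-map fusion or texture mismatch), but the evolution machinery is the same.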