WorldWideScience

Sample records for multiple model method

  1. MFAM: Multiple Frequency Adaptive Model-Based Indoor Localization Method.

    Tuta, Jure; Juric, Matjaz B

    2018-03-24

    This paper presents MFAM (Multiple Frequency Adaptive Model-based localization method), a novel model-based indoor localization method that is capable of using multiple wireless signal frequencies simultaneously. It utilizes an indoor architectural model and the physical properties of wireless signal propagation through objects and space. The motivation for developing a multiple-frequency localization method lies in future Wi-Fi standards (e.g., 802.11ah) and the growing number of wireless signals present in buildings (e.g., Wi-Fi, Bluetooth, ZigBee, etc.). Current indoor localization methods mostly rely on a single wireless signal type and often require many devices to achieve the necessary accuracy. MFAM utilizes multiple wireless signal types and improves localization accuracy over the use of a single frequency. It continuously monitors signal propagation through space and adapts the model according to changes indoors. Using multiple signal sources lowers the required number of access points for a specific signal type while utilizing signals already present indoors. Due to the unavailability of 802.11ah hardware, we have evaluated the proposed method with similar signals; we have used 2.4 GHz Wi-Fi and 868 MHz HomeMatic home automation signals. We have performed the evaluation in a modern two-bedroom apartment and measured a mean localization error of 2.0 to 2.3 m and a median error of 2.0 to 2.2 m. Based on our evaluation results, using two different signals improves localization accuracy by 18% in comparison to the 2.4 GHz Wi-Fi-only approach. Additional signals would improve the accuracy even further. We have shown that MFAM provides better accuracy than competing methods, while having several advantages for real-world usage.
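
    The record above does not spell out MFAM's algorithm, so the following is only a minimal sketch of the model-based RSSI localization idea it builds on: each anchor's received power is predicted with a log-distance path-loss model (one exponent per frequency band), and the position minimizing the residual against the measured RSSI is picked by grid search. All anchor positions, powers and exponents below are invented for illustration.

      # Minimal model-based RSSI localization sketch (not MFAM itself).
      import numpy as np

      anchors = np.array([[0.0, 0.0], [8.0, 0.0], [0.0, 6.0]])  # anchor x, y (m)
      p0 = np.array([-40.0, -30.0, -40.0])  # RSSI at 1 m per anchor (dBm)
      n_exp = np.array([2.2, 1.8, 2.2])     # path-loss exponent per frequency

      def predicted_rssi(pos):
          d = np.linalg.norm(anchors - pos, axis=1).clip(1e-3)
          return p0 - 10.0 * n_exp * np.log10(d)

      rng = np.random.default_rng(1)
      true_pos = np.array([3.0, 2.0])
      measured = predicted_rssi(true_pos) + rng.normal(0.0, 1.0, 3)  # noisy RSSI

      # Grid search over the floor plan for the best-fitting position.
      xs, ys = np.meshgrid(np.linspace(0, 8, 81), np.linspace(0, 6, 61))
      grid = np.stack([xs.ravel(), ys.ravel()], axis=1)
      errors = [np.sum((predicted_rssi(g) - measured) ** 2) for g in grid]
      print("estimated position:", grid[int(np.argmin(errors))])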

  2. MFAM: Multiple Frequency Adaptive Model-Based Indoor Localization Method

    Jure Tuta

    2018-03-01

    Full Text Available This paper presents MFAM (Multiple Frequency Adaptive Model-based localization method), a novel model-based indoor localization method that is capable of using multiple wireless signal frequencies simultaneously. It utilizes an indoor architectural model and the physical properties of wireless signal propagation through objects and space. The motivation for developing a multiple-frequency localization method lies in future Wi-Fi standards (e.g., 802.11ah) and the growing number of wireless signals present in buildings (e.g., Wi-Fi, Bluetooth, ZigBee, etc.). Current indoor localization methods mostly rely on a single wireless signal type and often require many devices to achieve the necessary accuracy. MFAM utilizes multiple wireless signal types and improves localization accuracy over the use of a single frequency. It continuously monitors signal propagation through space and adapts the model according to changes indoors. Using multiple signal sources lowers the required number of access points for a specific signal type while utilizing signals already present indoors. Due to the unavailability of 802.11ah hardware, we have evaluated the proposed method with similar signals; we have used 2.4 GHz Wi-Fi and 868 MHz HomeMatic home automation signals. We have performed the evaluation in a modern two-bedroom apartment and measured a mean localization error of 2.0 to 2.3 m and a median error of 2.0 to 2.2 m. Based on our evaluation results, using two different signals improves localization accuracy by 18% in comparison to the 2.4 GHz Wi-Fi-only approach. Additional signals would improve the accuracy even further. We have shown that MFAM provides better accuracy than competing methods, while having several advantages for real-world usage.

  3. Decreasing Multicollinearity: A Method for Models with Multiplicative Functions.

    Smith, Kent W.; Sasaki, M. S.

    1979-01-01

    A method is proposed for overcoming the problem of multicollinearity in multiple regression equations where multiplicative independent terms are entered. The method is not a ridge regression solution. (JKS)
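
    The one-line abstract gives no detail, so the snippet below only demonstrates the standard non-ridge remedy in this spirit: mean-centering the component variables before forming their product sharply reduces the correlation between a main effect and the multiplicative term (synthetic data; not necessarily the authors' exact procedure).

      # Effect of mean-centering on collinearity of a multiplicative term.
      import numpy as np

      rng = np.random.default_rng(0)
      x = rng.normal(5.0, 1.0, 1000)  # nonzero means make x and x*z collinear
      z = rng.normal(3.0, 1.0, 1000)

      raw_product = x * z
      centered_product = (x - x.mean()) * (z - z.mean())

      print("corr(x, x*z), raw     :", np.corrcoef(x, raw_product)[0, 1])
      print("corr(x, x*z), centered:", np.corrcoef(x, centered_product)[0, 1])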

  4. Pursuing the method of multiple working hypotheses for hydrological modeling

    Clark, M.P.; Kavetski, D.; Fenicia, F.

    2011-01-01

    Ambiguities in the representation of environmental processes have manifested themselves in a plethora of hydrological models, differing in almost every aspect of their conceptualization and implementation. The current overabundance of models is symptomatic of an insufficient scientific understanding

  5. A prediction method based on wavelet transform and multiple models fusion for chaotic time series

    Zhongda, Tian; Shujiang, Li; Yanhong, Wang; Yi, Sha

    2017-01-01

    In order to improve the prediction accuracy of chaotic time series, a prediction method based on wavelet transform and multiple-model fusion is proposed. The chaotic time series is decomposed and reconstructed by wavelet transform, yielding approximation components and detail components. According to the different characteristics of each component, a least squares support vector machine (LSSVM) is used as the predictive model for the approximation components, with an improved free search algorithm used to optimize the predictive model parameters, while an autoregressive integrated moving average (ARIMA) model is used as the predictive model for the detail components. The predictions of the multiple models are fused by the Gauss–Markov algorithm; the error variance of the fused result is smaller than that of any single model, so prediction accuracy is improved. The method is evaluated on two typical chaotic time series, the Lorenz and Mackey–Glass series. The simulation results show that the prediction method in this paper achieves better prediction accuracy.
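
    The fusion step named above is the classical Gauss–Markov (inverse-variance) combination of unbiased predictors: weight each model by the reciprocal of its error variance, so the fused variance never exceeds that of the best single model. A minimal sketch, with made-up variances standing in for the LSSVM and ARIMA validation errors:

      # Gauss-Markov (inverse-variance) fusion of two unbiased predictions.
      import numpy as np

      def gauss_markov_fuse(preds, variances):
          v = np.asarray(variances, dtype=float)
          w = (1.0 / v) / np.sum(1.0 / v)      # weights sum to one
          fused = np.dot(w, preds)
          fused_var = 1.0 / np.sum(1.0 / v)    # <= min(variances)
          return fused, fused_var

      pred, var = gauss_markov_fuse(preds=[1.02, 0.95], variances=[0.04, 0.09])
      print(pred, var)  # fused variance ~0.028 < best single variance 0.04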

  6. A versatile method for confirmatory evaluation of the effects of a covariate in multiple models

    Pipper, Christian Bressen; Ritz, Christian; Bisgaard, Hans

    2012-01-01

    Modern epidemiology often requires testing of the effect of a covariate on multiple end points from the same study. However, popular state-of-the-art methods for multiple testing require the tests to be evaluated within the framework of a single model unifying all end points. This severely limits ... to provide a fine-tuned control of the overall type I error in a wide range of epidemiological experiments where in reality no other useful alternative exists. The methodology proposed is applied to a multiple-end-point study of the effect of neonatal bacterial colonization on development of childhood asthma.

  7. Balancing precision and risk: should multiple detection methods be analyzed separately in N-mixture models?

    Tabitha A Graves

    Full Text Available Using multiple detection methods can increase the number, kind, and distribution of individuals sampled, which may increase accuracy and precision and reduce cost of population abundance estimates. However, when variables influencing abundance are of interest, if individuals detected via different methods are influenced by the landscape differently, separate analysis of multiple detection methods may be more appropriate. We evaluated the effects of combining two detection methods on the identification of variables important to local abundance using detections of grizzly bears with hair traps (systematic) and bear rubs (opportunistic). We used hierarchical abundance models (N-mixture models) with separate model components for each detection method. If both methods sample the same population, the use of either data set alone should (1) lead to the selection of the same variables as important and (2) provide similar estimates of relative local abundance. We hypothesized that the inclusion of 2 detection methods versus either method alone should (3) yield more support for variables identified in single method analyses (i.e., fewer variables and models with greater weight), and (4) improve precision of covariate estimates for variables selected in both separate and combined analyses because sample size is larger. As expected, joint analysis of both methods increased precision as well as certainty in variable and model selection. However, the single-method analyses identified different variables and the resulting predicted abundances had different spatial distributions. We recommend comparing single-method and jointly modeled results to identify the presence of individual heterogeneity between detection methods in N-mixture models, along with consideration of detection probabilities, correlations among variables, and tolerance to risk of failing to identify variables important to a subset of the population. The benefits of increased precision should be weighed ...
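
    For readers new to the model class: an N-mixture model treats site abundance N as a latent Poisson variable and each method's counts as binomial draws given N, so the likelihood marginalizes over N. A single-site sketch with two detection methods follows; the counts, rates and truncation bound are invented, and the paper's hierarchical, covariate-driven model is far richer.

      # Single-site N-mixture likelihood with two detection methods.
      import numpy as np
      from scipy.stats import poisson, binom

      def site_likelihood(counts_by_method, p_by_method, lam, K=100):
          """Marginalize latent abundance N from the largest count up to K."""
          like = 0.0
          for N in range(max(max(c) for c in counts_by_method), K + 1):
              term = poisson.pmf(N, lam)
              for counts, p in zip(counts_by_method, p_by_method):
                  term *= np.prod(binom.pmf(counts, N, p))
              like += term
          return like

      # Hair-trap and rub-tree counts over three visits (illustrative).
      print(site_likelihood([[2, 3, 1], [1, 0, 2]], [0.30, 0.15], lam=8.0))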

  8. Multiple flood vulnerability assessment approach based on fuzzy comprehensive evaluation method and coordinated development degree model.

    Yang, Weichao; Xu, Kui; Lian, Jijian; Bin, Lingling; Ma, Chao

    2018-05-01

    Flood is a serious challenge that increasingly affects residents as well as policymakers, and flood vulnerability assessment is becoming increasingly relevant around the world. The purpose of this study is to develop an approach that reveals the relationship between exposure, sensitivity and adaptive capacity for better flood vulnerability assessment, based on the fuzzy comprehensive evaluation method (FCEM) and the coordinated development degree model (CDDM). The approach is organized into three parts: establishment of an index system; assessment of exposure, sensitivity and adaptive capacity; and multiple flood vulnerability assessment. A hydrodynamic model and statistical data are employed for the establishment of the index system; FCEM is used to evaluate exposure, sensitivity and adaptive capacity; and CDDM is applied to express the relationship of the three components of vulnerability. Six multiple flood vulnerability types and four levels are proposed to assess flood vulnerability from multiple perspectives. The approach is then applied to assess the spatial pattern of flood vulnerability in Hainan's eastern area, China. Based on the results of the multiple flood vulnerability assessment, a decision-making process for rational allocation of limited resources is proposed and applied to the study area. The study shows that multiple flood vulnerability assessment can evaluate vulnerability more completely and help decision makers make better-informed, more comprehensive decisions. In summary, this study provides a new way for flood vulnerability assessment and disaster prevention decision making.
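
    The FCEM step at the core of this approach composes an index weight vector with a fuzzy membership matrix to grade each vulnerability component. A minimal sketch (weights, memberships and level labels are invented; the CDDM coupling of the three component scores is not shown):

      # Core of fuzzy comprehensive evaluation: compose weights with memberships.
      import numpy as np

      w = np.array([0.5, 0.3, 0.2])        # weights of three exposure indices
      R = np.array([[0.1, 0.6, 0.3, 0.0],  # membership of index 1 in 4 levels
                    [0.0, 0.2, 0.5, 0.3],
                    [0.4, 0.4, 0.2, 0.0]])
      b = w @ R                            # fuzzy evaluation vector
      levels = ["low", "medium", "high", "very high"]
      print(dict(zip(levels, b.round(3))), "->", levels[int(np.argmax(b))])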

  9. An Application of Robust Method in Multiple Linear Regression Model toward Credit Card Debt

    Amira Azmi, Nur; Saifullah Rusiman, Mohd; Khalid, Kamil; Roslan, Rozaini; Sufahani, Suliadi; Mohamad, Mahathir; Salleh, Rohayu Mohd; Hamzah, Nur Shamsidah Amir

    2018-04-01

    A credit card is a convenient alternative to cash or cheques, and an essential component of electronic and internet commerce. In this study, the researchers attempt to determine the relationship and significant variables between credit card debt and demographic variables such as age, household income, education level, years with current employer, years at current address, debt-to-income ratio and other debt. The data cover information on 850 customers. Three methods were applied to the credit card debt data: multiple linear regression (MLR) models, MLR models with the least quartile difference (LQD) method and MLR models with the mean absolute deviation method. After comparing the three methods, it was found that the MLR model with the LQD method was the best model, with the lowest value of mean square error (MSE). According to the final model, years with current employer, years at current address, household income in thousands and debt-to-income ratio are positively associated with the amount of credit debt, while age, level of education and other debt are negatively associated with it. This study may serve as a reference for banks using robust methods, so that they can better understand their options and choose the one best aligned with their goals for inference regarding credit card debt.

  10. Towards methodical modelling: Differences between the structure and output dynamics of multiple conceptual models

    Knoben, Wouter; Woods, Ross; Freer, Jim

    2016-04-01

    Conceptual hydrologic models consist of a certain arrangement of stores, fluxes and transformation functions that describe spatial and temporal dynamics, depending on the modeller's choices and intended use. They have the advantages of being computationally efficient, being relatively easy model structures to reconfigure and having relatively low input data demands. This makes them well-suited for large-scale and large-sample hydrology, where appropriately representing the dominant hydrologic functions of a catchment is a main concern. Given these requirements, the number of parameters in the model cannot be too high, to avoid equifinality and identifiability issues. This limits the number and complexity of dominant hydrologic processes the model can represent. Specific purposes and places thus require a specific model, and this has led to an abundance of conceptual hydrologic models. No structured overview of these models exists and there is no clear method to select appropriate model structures for different catchments. This study is a first step towards creating an overview of the elements that make up conceptual models, which may later assist a modeller in finding an appropriate model structure for a given catchment. To this end, this study brings together over 30 past and present conceptual models. The reviewed model structures are simply different configurations of three basic model elements (stores, fluxes and transformation functions), depending on the hydrologic processes the models are intended to represent. Differences also exist in the inner workings of the stores, fluxes and transformations, i.e. the mathematical formulations that describe each model element's intended behaviour. We investigate the hypothesis that different model structures can produce similar behavioural simulations. This can clarify the overview of model elements by grouping elements which are similar, which can improve model structure selection.

  11. Study on validation method for femur finite element model under multiple loading conditions

    Guan, Fengjiao; Zhang, Guanjun; Liu, Jie; Wang, Shujing; Luo, Xu

    2018-03-01

    Acquisition of accurate and reliable constitutive parameters for biological tissue materials is beneficial for improving the biological fidelity of a finite element (FE) model and predicting impact damage more effectively. In this paper, a femur FE model was established under multiple loading conditions with diverse impact positions. Then, based on the sequential response surface method and genetic algorithms, material parameter identification was transformed into a multi-response optimization problem. Finally, the simulation results successfully coincided with the force-displacement curves obtained from numerous experiments, and the computational accuracy and efficiency of the entire inverse calculation process were enhanced. The method effectively reduces the computation time of the inverse identification of material parameters, while the parameters obtained achieve higher accuracy.
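
    As a rough illustration of such an inverse identification loop (not the paper's sequential response surface implementation), the sketch below recovers two pseudo material parameters by minimizing the mismatch between "simulated" and "measured" force-displacement curves over three loading conditions; a toy analytic response stands in for the femur FE solver, and SciPy's differential evolution stands in for the genetic algorithm.

      # Inverse parameter identification against multi-condition curves.
      import numpy as np
      from scipy.optimize import differential_evolution

      disp = np.linspace(0.0, 5.0, 20)  # displacement (mm)

      def simulate(params, condition):
          E, k = params                                   # pseudo parameters
          return condition * E * disp / (1.0 + k * disp)  # toy force curve (N)

      true = (2.0, 0.3)
      experiments = {c: simulate(true, c) for c in (0.8, 1.0, 1.2)}  # 3 positions

      def cost(params):
          return sum(np.sum((simulate(params, c) - f) ** 2)
                     for c, f in experiments.items())

      res = differential_evolution(cost, bounds=[(0.1, 10.0), (0.0, 2.0)], seed=1)
      print("identified parameters:", res.x)  # should approach (2.0, 0.3)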

  12. A location-based multiple point statistics method: modelling the reservoir with non-stationary characteristics

    Yin Yanshu

    2017-12-01

    Full Text Available In this paper, a location-based multiple point statistics method is developed to model a non-stationary reservoir. The proposed method characterizes the relationship between the sedimentary pattern and the deposit location using the relative central position distance function, which alleviates the requirement that the training image and the simulated grids have the same dimension. The weights in every direction of the distance function can be changed to characterize the reservoir heterogeneity in various directions. The local integral replacements of data events, structured random path, distance tolerance and multi-grid strategy are applied to reproduce the sedimentary patterns and obtain a more realistic result. This method is compared with the traditional Snesim method using a synthesized 3-D training image of Poyang Lake and a reservoir model of Shengli Oilfield in China. The results indicate that the new method can reproduce the non-stationary characteristics better than the traditional method and is more suitable for simulation of delta-front deposits. These results show that the new method is a powerful tool for modelling a reservoir with non-stationary characteristics.

  13. A composite state method for ensemble data assimilation with multiple limited-area models

    Matthew Kretschmer

    2015-04-01

    Full Text Available Limited-area models (LAMs) allow high-resolution forecasts to be made for geographic regions of interest when resources are limited. Typically, boundary conditions for these models are provided through one-way boundary coupling from a coarser resolution global model. Here, data assimilation is considered in a situation in which a global model supplies boundary conditions to multiple LAMs. The data assimilation method presented combines information from all of the models to construct a single ‘composite state’, on which data assimilation is subsequently performed. The analysis composite state is then used to form the initial conditions of the global model and all of the LAMs for the next forecast cycle. The method is tested by using numerical experiments with simple, chaotic models. The results of the experiments show that there is a clear forecast benefit to allowing LAM states to influence one another during the analysis. In addition, adding LAM information at analysis time has a strong positive impact on global model forecast performance, even at points not covered by the LAMs.

  14. Neutron source multiplication method

    Clayton, E.D.

    1985-01-01

    Extensive use has been made of neutron source multiplication in thousands of measurements of critical masses and configurations and in subcritical neutron-multiplication measurements in situ that provide data for criticality prevention and control in nuclear materials operations. There is continuing interest in developing reliable methods for monitoring the reactivity, or k_eff, of plant operations, but the required measurements are difficult to carry out and interpret on the far subcritical configurations usually encountered. The relationship between neutron multiplication and reactivity is briefly discussed and data presented to illustrate problems associated with the absolute measurement of neutron multiplication and reactivity in subcritical systems. A number of curves of inverse multiplication have been selected from a variety of experiments showing variations observed in multiplication during the course of critical and subcritical experiments where different methods of reactivity addition were used, with different neutron source detector position locations. Concern is raised regarding the meaning and interpretation of k_eff as might be measured in a far subcritical system because of the modal effects and spectrum differences that exist between the subcritical and critical systems. Because of this, the calculation of k_eff identical with unity for the critical assembly, although necessary, may not be sufficient to assure safety margins in calculations pertaining to far subcritical systems. Further study is needed on the interpretation and meaning of k_eff in the far subcritical system.
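
    The workhorse behind such measurements is the inverse-multiplication (1/M) curve: the multiplication of a source-driven assembly grows roughly as 1/(1 - k_eff), so 1/M plotted against fissile mass falls toward zero at criticality, and a linear extrapolation predicts the critical mass. A minimal sketch with invented count rates (the abstract's caveat applies: far-subcritical extrapolations need careful interpretation):

      # Inverse-multiplication (1/M) extrapolation to a predicted critical mass.
      import numpy as np

      mass = np.array([10.0, 15.0, 20.0, 25.0])         # fissile mass (kg)
      counts = np.array([200.0, 300.0, 600.0, 1800.0])  # detector count rates
      c0 = 100.0                                        # source-only count rate

      inv_m = c0 / counts                               # 1/M at each loading
      slope, intercept = np.polyfit(mass, inv_m, 1)     # linear extrapolation
      print("predicted critical mass:", -intercept / slope, "kg")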

  15. Linking landscape characteristics to local grizzly bear abundance using multiple detection methods in a hierarchical model

    Graves, T.A.; Kendall, Katherine C.; Royle, J. Andrew; Stetz, J.B.; Macleod, A.C.

    2011-01-01

    Few studies link habitat to grizzly bear Ursus arctos abundance and these have not accounted for the variation in detection or spatial autocorrelation. We collected and genotyped bear hair in and around Glacier National Park in northwestern Montana during the summer of 2000. We developed a hierarchical Markov chain Monte Carlo model that extends the existing occupancy and count models by accounting for (1) spatially explicit variables that we hypothesized might influence abundance; (2) separate sub-models of detection probability for two distinct sampling methods (hair traps and rub trees) targeting different segments of the population; (3) covariates to explain variation in each sub-model of detection; (4) a conditional autoregressive term to account for spatial autocorrelation; (5) weights to identify most important variables. Road density and per cent mesic habitat best explained variation in female grizzly bear abundance; spatial autocorrelation was not supported. More female bears were predicted in places with lower road density and with more mesic habitat. Detection rates of females increased with rub tree sampling effort. Road density best explained variation in male grizzly bear abundance and spatial autocorrelation was supported. More male bears were predicted in areas of low road density. Detection rates of males increased with rub tree and hair trap sampling effort and decreased over the sampling period. We provide a new method to (1) incorporate multiple detection methods into hierarchical models of abundance; (2) determine whether spatial autocorrelation should be included in final models. Our results suggest that the influence of landscape variables is consistent between habitat selection and abundance in this system.

  16. Method of experimental and theoretical modeling for multiple pressure tube rupture for RBMK reactor

    Medvedeva, N.Y.; Goldstein, R.V.; Burrows, J.A.

    2001-01-01

    The rupture of single RBMK reactor channels has occurred at a number of stations with a variety of initiating events. It is assumed in RBMK safety cases that the force of the escaping fluid will not cause neighbouring channels to break; this assumption has not been justified. A chain reaction of tube breaks could over-pressurise the reactor cavity, leading to catastrophic failure of the containment. To validate the claims of the RBMK safety cases, the Electrogorsk Research and Engineering Centre, with the participation of experts from the Institute of Mechanics of RAS, has developed a method of interacting multiscale physical and mathematical modelling of the coupled thermophysical and hydro-gas-dynamic processes and of the deformation and break processes that cause and/or accompany potential failures in design-basis and beyond-design-basis RBMK reactor accidents. To realise the method, a set of rigs, physical and mathematical models and specialized computer codes is being created. This article sets out an experimental philosophy and programme for achieving this objective, to resolve the question of the credibility of multiple fuel channel rupture in RBMK. (author)

  17. Methods for significance testing of categorical covariates in logistic regression models after multiple imputation: power and applicability analysis

    Eekhout, I.; Wiel, M.A. van de; Heymans, M.W.

    2017-01-01

    Background. Multiple imputation is a recommended method to handle missing data. For significance testing after multiple imputation, Rubin’s Rules (RR) are easily applied to pool parameter estimates. In a logistic regression model, to consider whether a categorical covariate with more than two levels

  18. Improving multiple-point-based a priori models for inverse problems by combining Sequential Simulation with the Frequency Matching Method

    Cordua, Knud Skou; Hansen, Thomas Mejer; Lange, Katrine

    In order to move beyond simplified covariance-based a priori models, which are typically used for inverse problems, more complex multiple-point-based a priori models have to be considered. By means of marginal probability distributions 'learned' from a training image, sequential simulation has proven to be an efficient way of obtaining multiple realizations that honor the same multiple-point statistics as the training image. The frequency matching method provides an alternative way of formulating multiple-point-based a priori models. In this strategy the pattern frequency distributions (i.e. marginals) of the training image and a subsurface model are matched in order to obtain a solution with the same multiple-point statistics as the training image. Sequential Gibbs sampling is a simulation strategy that provides an efficient way of applying sequential simulation based algorithms as a priori ...
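
    A toy version of the frequency-matching idea with invented binary images: tabulate the frequencies of all 2x2 patterns in the training image and in a candidate model, then score the model by the distance between the two histograms (the actual method embeds such a distance in the a priori term of an inverse problem).

      # Pattern-frequency mismatch between a training image and a model.
      import numpy as np

      def pattern_hist(img):
          h = np.zeros(16)  # 2**(2*2) possible binary 2x2 patterns
          for i in range(img.shape[0] - 1):
              for j in range(img.shape[1] - 1):
                  patch = img[i:i + 2, j:j + 2].ravel()
                  h[int("".join(map(str, patch)), 2)] += 1
          return h / h.sum()

      rng = np.random.default_rng(2)
      training = (rng.random((50, 50)) < 0.30).astype(int)
      model = (rng.random((50, 50)) < 0.45).astype(int)  # wrong facies fraction
      dist = np.abs(pattern_hist(training) - pattern_hist(model)).sum()  # L1
      print("pattern-frequency mismatch:", dist)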

  19. Hydrostratigraphic modelling using multiple-point statistics and airborne transient electromagnetic methods

    Barfod, Adrian; Straubhaar, Julien; Høyer, Anne-Sophie

    2017-01-01

    ... the incorporation of elaborate datasets and provides a framework for stochastic hydrostratigraphic modelling. This paper focuses on comparing three MPS methods: snesim, DS and iqsim. The MPS methods are tested and compared on a real-world hydrogeophysical survey from Kasted in Denmark, which covers an area of 45 km2. The comparison of the stochastic hydrostratigraphic MPS models is carried out in an elaborate scheme of visual inspection, mathematical similarity and consistency with boreholes. Using the Kasted survey data, a practical example for modelling new survey areas is presented. A cognitive ...

  20. Hydrologic evaluation of a Mediterranean watershed using the SWAT model with multiple PET estimation methods

    The Penman-Monteith method suggested by the Food Agricultural Organization in the Irrigation and drainage paper 56 (FAO-56 P-M) was used to evaluate surface runoff and sediment yield predictions by the Soil and Water Assessment Tool (SWAT) model at the outlet of an experimental watershed in Sicily. ...

  1. Stepwise multiple regression method of greenhouse gas emission modeling in the energy sector in Poland.

    Kolasa-Wiecek, Alicja

    2015-04-01

    The energy sector in Poland is the source of 81% of greenhouse gas (GHG) emissions. Poland, among other European Union countries, occupies a leading position with regard to coal consumption, and the Polish energy sector actively participates in efforts to reduce GHG emissions to the atmosphere through a gradual decrease of the share of coal in the fuel mix and the development of renewable energy sources. All evidence which completes the knowledge about issues related to GHG emissions is a valuable source of information. The article presents the results of modeling the GHG emissions generated by the energy sector in Poland. For a better understanding of the quantitative relationship between total consumption of primary energy and greenhouse gas emissions, a multiple stepwise regression model was applied. The modeling results for CO2 emissions demonstrate a strong relationship (0.97) with the hard coal consumption variable, and the model's fit to the actual data is high, with a coefficient of determination of 95%. The backward stepwise regression model for CH4 emissions indicated hard coal (0.66), peat and fuel wood (0.34), and solid waste fuels as well as other sources (-0.64) as the most important variables, with an adequate adjusted coefficient of determination of R2 = 0.90. For N2O emission modeling the obtained coefficient of determination is low, at 43%; a significant variable influencing the amount of N2O emissions is peat and fuel wood consumption.
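
    A minimal sketch of the backward stepwise flavor used for the CH4 model: fit the full multiple regression, repeatedly drop the least significant predictor, and stop when every remaining p-value clears a threshold. The data and the 0.05 cut-off are illustrative, not the paper's.

      # Backward stepwise elimination by p-value (synthetic data).
      import numpy as np
      import statsmodels.api as sm

      rng = np.random.default_rng(3)
      X = rng.normal(size=(100, 4))
      y = 2.0 * X[:, 0] - 1.5 * X[:, 1] + rng.normal(size=100)  # 2 real effects

      cols = list(range(X.shape[1]))
      while True:
          fit = sm.OLS(y, sm.add_constant(X[:, cols])).fit()
          pvals = fit.pvalues[1:]               # skip the intercept
          worst = int(np.argmax(pvals))
          if pvals[worst] < 0.05 or len(cols) == 1:
              break
          cols.pop(worst)                       # drop least significant term
      print("retained predictors:", cols, "R2:", round(fit.rsquared, 3))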

  2. Beating Heart Motion Accurate Prediction Method Based on Interactive Multiple Model: An Information Fusion Approach

    Xie, Weihong; Yu, Yang

    2017-01-01

    Robot-assisted motion compensated beating heart surgery has the advantage over conventional Coronary Artery Bypass Grafting (CABG) in terms of reduced trauma to the surrounding structures, which leads to shortened recovery time. The severe nonlinear and diverse nature of irregular heart rhythms causes enormous difficulty for the robot in meeting clinical requirements, especially under arrhythmias. In this paper, we propose a fusion prediction framework based on an Interactive Multiple Model (IMM) estimator, allowing each model to cover a distinguishing feature of the heart motion in the underlying dynamics. We find that, in the normal state, the nonlinearity of the heart motion with slow time-variant changes dominates the beating process. When an arrhythmia occurs, the irregularity mode, i.e., fast uncertainties with random patterns, becomes the leading factor of the heart motion. We deal with the prediction problem in the case of arrhythmias by estimating the state with two behavior modes which can adaptively “switch” from one to the other. Also, we employ the signal quality index to adaptively determine the switch transition probability in the framework of IMM. We conduct comparative experiments with four distinct datasets to evaluate the proposed approach. The test results indicate that the new approach reduces prediction errors significantly. PMID:29124062

  3. Beating Heart Motion Accurate Prediction Method Based on Interactive Multiple Model: An Information Fusion Approach

    Fan Liang

    2017-01-01

    Full Text Available Robot-assisted motion compensated beating heart surgery has the advantage over conventional Coronary Artery Bypass Grafting (CABG) in terms of reduced trauma to the surrounding structures, which leads to shortened recovery time. The severe nonlinear and diverse nature of irregular heart rhythms causes enormous difficulty for the robot in meeting clinical requirements, especially under arrhythmias. In this paper, we propose a fusion prediction framework based on an Interactive Multiple Model (IMM) estimator, allowing each model to cover a distinguishing feature of the heart motion in the underlying dynamics. We find that, in the normal state, the nonlinearity of the heart motion with slow time-variant changes dominates the beating process. When an arrhythmia occurs, the irregularity mode, i.e., fast uncertainties with random patterns, becomes the leading factor of the heart motion. We deal with the prediction problem in the case of arrhythmias by estimating the state with two behavior modes which can adaptively “switch” from one to the other. Also, we employ the signal quality index to adaptively determine the switch transition probability in the framework of IMM. We conduct comparative experiments with four distinct datasets to evaluate the proposed approach. The test results indicate that the new approach reduces prediction errors significantly.
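
    For concreteness, here is a compact two-model IMM on a scalar track: a "quiet" and an "agile" random-walk model differ only in process noise; each step mixes the model-conditioned estimates, runs one Kalman update per model, and reweights the mode probabilities by the measurement likelihoods. This illustrates the IMM mechanism only, not the paper's heart-motion models or its signal-quality-driven transition probabilities; all noise values are invented.

      # Two-model IMM with scalar Kalman filters (random-walk dynamics).
      import numpy as np

      Q = [0.01, 1.0]                  # process noise per model
      Rm = 0.25                        # measurement noise
      P_trans = np.array([[0.95, 0.05], [0.05, 0.95]])  # mode transitions
      mu = np.array([0.5, 0.5])        # mode probabilities
      x = np.zeros(2)                  # per-model state means
      P = np.ones(2)                   # per-model state variances

      rng = np.random.default_rng(4)
      truth = np.cumsum(rng.normal(0.0, 0.4, 50))
      for z in truth + rng.normal(0.0, np.sqrt(Rm), 50):
          c = P_trans.T @ mu                            # predicted mode probs
          w = (P_trans * mu[:, None]) / c[None, :]      # mixing weights w[i, j]
          x0 = w.T @ x                                  # mixed means
          P0 = np.array([np.sum(w[:, j] * (P + (x - x0[j]) ** 2))
                         for j in range(2)])            # mixed variances
          like = np.empty(2)
          for j in range(2):                            # per-model KF step
              pp = P0[j] + Q[j]                         # predicted variance
              s = pp + Rm                               # innovation variance
              like[j] = np.exp(-0.5 * (z - x0[j]) ** 2 / s) / np.sqrt(2 * np.pi * s)
              gain = pp / s
              x[j] = x0[j] + gain * (z - x0[j])
              P[j] = (1.0 - gain) * pp
          mu = like * c
          mu /= mu.sum()                                # updated mode probs
      print("fused estimate:", np.dot(mu, x), "mode probabilities:", mu)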

  4. Multiple methods for multiple futures: Integrating qualitative scenario planning and quantitative simulation modeling for natural resource decision making

    Symstad, Amy J.; Fisichelli, Nicholas A.; Miller, Brian W.; Rowland, Erika; Schuurman, Gregor W.

    2017-01-01

    Scenario planning helps managers incorporate climate change into their natural resource decision making through a structured “what-if” process of identifying key uncertainties and potential impacts and responses. Although qualitative scenarios, in which ecosystem responses to climate change are derived via expert opinion, often suffice for managers to begin addressing climate change in their planning, this approach may face limits in resolving the responses of complex systems to altered climate conditions. In addition, this approach may fall short of the scientific credibility managers often require to take actions that differ from current practice. Quantitative simulation modeling of ecosystem response to climate conditions and management actions can provide this credibility, but its utility is limited unless the modeling addresses the most impactful and management-relevant uncertainties and incorporates realistic management actions. We use a case study to compare and contrast management implications derived from qualitative scenario narratives and from scenarios supported by quantitative simulations. We then describe an analytical framework that refines the case study’s integrated approach in order to improve applicability of results to management decisions. The case study illustrates the value of an integrated approach for identifying counterintuitive system dynamics, refining understanding of complex relationships, clarifying the magnitude and timing of changes, identifying and checking the validity of assumptions about resource responses to climate, and refining management directions. Our proposed analytical framework retains qualitative scenario planning as a core element because its participatory approach builds understanding for both managers and scientists, lays the groundwork to focus quantitative simulations on key system dynamics, and clarifies the challenges that subsequent decision making must address.

  5. A Waterline Extraction Method from Remote Sensing Image Based on Quad-tree and Multiple Active Contour Model

    YU Jintao

    2016-09-01

    Full Text Available After the characteristics of the geodesic active contour model (GAC), the Chan-Vese model (CV) and the local binary fitting model (LBF) are analyzed, and the active contour model based on regions and edges is combined with an image segmentation method based on quad-trees, a waterline extraction method based on a quad-tree and multiple active contour models is proposed in this paper. Firstly, the method provides an initial contour according to the quad-tree segmentation. Secondly, a new signed pressure force (SPF) function based on the global image statistics of the CV model and the local image statistics of the LBF model is defined, and the edge stopping function (ESF) is replaced by the proposed SPF function, which solves problems such as evolution stopping in advance and excessive evolution. Finally, the selective binary and Gaussian filtering level set method is used to avoid reinitialization and regularization and to improve evolution efficiency. The experimental results show that this method can effectively extract weak edges and seriously concave edges, and offers sub-pixel accuracy, high efficiency and reliability for waterline extraction.

  6. QSAR Modeling of COX -2 Inhibitory Activity of Some Dihydropyridine and Hydroquinoline Derivatives Using Multiple Linear Regression (MLR) Method.

    Akbari, Somaye; Zebardast, Tannaz; Zarghi, Afshin; Hajimahdi, Zahra

    2017-01-01

    COX-2 inhibitory activities of some 1,4-dihydropyridine and 5-oxo-1,4,5,6,7,8-hexahydroquinoline derivatives were modeled by quantitative structure-activity relationship (QSAR) analysis using the stepwise multiple linear regression (SW-MLR) method. The built model was robust and predictive, with correlation coefficients (R2) of 0.972 and 0.531 for the training and test groups, respectively. The quality of the model was evaluated by leave-one-out (LOO) cross-validation (LOO correlation coefficient Q2 of 0.943) and Y-randomization. We also employed a leverage approach to define the applicability domain of the model. Based on the QSAR model results, the COX-2 inhibitory activity of the selected data set correlated with the BEHm6 (highest eigenvalue n. 6 of Burden matrix/weighted by atomic masses), Mor03u (signal 03/unweighted) and IVDE (mean information content on the vertex degree equality) descriptors derived from the compound structures.
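
    The LOO Q2 reported above is easy to reproduce for any MLR model: refit with each compound held out, predict it, and compare the pooled squared prediction error with the total variance. A sketch with synthetic descriptors (not the paper's data):

      # Leave-one-out Q2 for a multiple linear regression model.
      import numpy as np
      from sklearn.linear_model import LinearRegression
      from sklearn.model_selection import LeaveOneOut, cross_val_predict

      rng = np.random.default_rng(5)
      X = rng.normal(size=(40, 3))  # 40 compounds, 3 descriptors
      y = 1.2 * X[:, 0] - 0.8 * X[:, 2] + rng.normal(0.0, 0.3, 40)  # activity

      pred = cross_val_predict(LinearRegression(), X, y, cv=LeaveOneOut())
      q2 = 1.0 - np.sum((y - pred) ** 2) / np.sum((y - y.mean()) ** 2)
      print("LOO Q2:", round(q2, 3))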

  7. A physics based method for combining multiple anatomy models with application to medical simulation.

    Zhu, Yanong; Magee, Derek; Ratnalingam, Rishya; Kessel, David

    2009-01-01

    We present a physics based approach to the construction of anatomy models by combining components from different sources; different image modalities, protocols, and patients. Given an initial anatomy, a mass-spring model is generated which mimics the physical properties of the solid anatomy components. This helps maintain valid spatial relationships between the components, as well as the validity of their shapes. Combination can be either replacing/modifying an existing component, or inserting a new component. The external forces that deform the model components to fit the new shape are estimated from Gradient Vector Flow and Distance Transform maps. We demonstrate the applicability and validity of the described approach in the area of medical simulation, by showing the processes of non-rigid surface alignment, component replacement, and component insertion.

  8. Optimization and modeling of spot welding parameters with simultaneous multiple response consideration using multi objective Taguchi method and RSM

    Muhammad, Norasiah; Manurung, Yupiter H.P.; Hafidzi, Mohammad; Abas, Sunhaji Kiyai; Tham, Ghalib; Haruman, Esa [Universiti Teknologi MARA (UiTM), Selangor (Malaysia)]

    2012-08-15

    This paper presents an alternative method to optimize the process parameters of resistance spot welding (RSW) with respect to weld zone development. The optimization approach considers multiple quality characteristics simultaneously, namely the weld nugget and the heat affected zone (HAZ), using the multi objective Taguchi method (MTM). The experimental study was conducted on plates of 1.5 mm thickness under different welding currents, weld times and hold times. The optimum welding parameters were investigated using the Taguchi method with an L9 orthogonal array. The optimum value was analyzed by means of MTM, which involves the calculation of the total normalized quality loss (TNQL) and the multi signal-to-noise ratio (MSNR). Significance levels of the welding parameters were further obtained using analysis of variance (ANOVA). Furthermore, a first order model for predicting weld zone development is derived using response surface methodology (RSM). Based on the experimental confirmation test, the proposed method can be effectively applied to estimate the size of the weld zone, which can be used to enhance and optimize welding performance in RSW and other applications.

  9. Optimization and modeling of spot welding parameters with simultaneous multiple response consideration using multi objective Taguchi method and RSM

    Muhammad, Norasiah; Manurung, Yupiter H.P.; Hafidzi, Mohammad; Abas, Sunhaji Kiyai; Tham, Ghalib; Haruman, Esa

    2012-01-01

    This paper presents an alternative method to optimize the process parameters of resistance spot welding (RSW) with respect to weld zone development. The optimization approach considers multiple quality characteristics simultaneously, namely the weld nugget and the heat affected zone (HAZ), using the multi objective Taguchi method (MTM). The experimental study was conducted on plates of 1.5 mm thickness under different welding currents, weld times and hold times. The optimum welding parameters were investigated using the Taguchi method with an L9 orthogonal array. The optimum value was analyzed by means of MTM, which involves the calculation of the total normalized quality loss (TNQL) and the multi signal-to-noise ratio (MSNR). Significance levels of the welding parameters were further obtained using analysis of variance (ANOVA). Furthermore, a first order model for predicting weld zone development is derived using response surface methodology (RSM). Based on the experimental confirmation test, the proposed method can be effectively applied to estimate the size of the weld zone, which can be used to enhance and optimize welding performance in RSW and other applications.
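
    A sketch of the MTM bookkeeping named above, with invented measurements standing in for nugget and HAZ sizes: compute a per-trial quality loss for each response (larger-the-better for the nugget, smaller-the-better for the HAZ), normalize each by its worst trial, combine with response weights into the TNQL, and convert to the MSNR used to rank trials. The equal weights are an assumption.

      # Total normalized quality loss (TNQL) and multi S/N ratio (MSNR).
      import numpy as np

      # Replicated measurements per trial: rows = trials, cols = replicates.
      nugget = np.array([[5.1, 5.3], [6.0, 5.8], [6.4, 6.6]])  # larger-better
      haz = np.array([[1.9, 2.1], [1.6, 1.5], [1.8, 1.7]])     # smaller-better

      loss_nugget = np.mean(1.0 / nugget ** 2, axis=1)  # larger-the-better loss
      loss_haz = np.mean(haz ** 2, axis=1)              # smaller-the-better loss

      def normalized(loss):
          return loss / loss.max()                      # scale by worst trial

      w = (0.5, 0.5)                                    # response weights
      tnql = w[0] * normalized(loss_nugget) + w[1] * normalized(loss_haz)
      msnr = -10.0 * np.log10(tnql)
      print("best trial:", int(np.argmax(msnr)) + 1, "MSNR:", msnr.round(2))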

  10. Multiple model analysis with discriminatory data collection (MMA-DDC): A new method for improving measurement selection

    Kikuchi, C.; Ferre, P. A.; Vrugt, J. A.

    2011-12-01

    Hydrologic models are developed, tested, and refined based on the ability of those models to explain available hydrologic data. The optimization of model performance based upon mismatch between model outputs and real world observations has been extensively studied. However, identification of plausible models is sensitive not only to the models themselves - including model structure and model parameters - but also to the location, timing, type, and number of observations used in model calibration. Therefore, careful selection of hydrologic observations has the potential to significantly improve the performance of hydrologic models. In this research, we seek to reduce prediction uncertainty through optimization of the data collection process. A new tool - multiple model analysis with discriminatory data collection (MMA-DDC) - was developed to address this challenge. In this approach, multiple hydrologic models are developed and treated as competing hypotheses. Potential new data are then evaluated on their ability to discriminate between competing hypotheses. MMA-DDC is well-suited for use in recursive mode, in which new observations are continuously used in the optimization of subsequent observations. This new approach was applied to a synthetic solute transport experiment, in which ranges of parameter values constitute the multiple hydrologic models, and model predictions are calculated using likelihood-weighted model averaging. MMA-DDC was used to determine the optimal location, timing, number, and type of new observations. From comparison with an exhaustive search of all possible observation sequences, we find that MMA-DDC consistently selects observations which lead to the highest reduction in model prediction uncertainty. We conclude that using MMA-DDC to evaluate potential observations may significantly improve the performance of hydrologic models while reducing the cost associated with collecting new data.
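
    The core selection rule can be sketched simply: treat candidate observations as points where the competing models make predictions, and choose the candidate where the likelihood-weighted spread of those predictions is largest. The toy breakthrough-curve model, parameter hypotheses and weights below are invented; the paper's implementation is recursive and richer.

      # Pick the most discriminating observation among candidates.
      import numpy as np

      times = np.linspace(0.0, 10.0, 101)      # candidate sample times

      def concentration(t, v):                 # toy transport model
          return np.exp(-0.5 * (t - 5.0 / v) ** 2)

      velocities = [0.8, 1.0, 1.25]            # competing model hypotheses
      weights = np.array([0.2, 0.5, 0.3])      # current model likelihoods

      preds = np.array([concentration(times, v) for v in velocities])
      mean = weights @ preds                   # likelihood-weighted prediction
      spread = weights @ (preds - mean) ** 2   # weighted prediction variance
      print("most discriminating sample time:", times[int(np.argmax(spread))])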

  11. Formulation of an explicit-multiple-time-step time integration method for use in a global primitive equation grid model

    Chao, W. C.

    1982-01-01

    With appropriate modifications, a recently proposed explicit-multiple-time-step scheme (EMTSS) is incorporated into the UCLA model. In this scheme, the linearized terms in the governing equations that generate the gravity waves are split into different vertical modes. Each mode is integrated with an optimal time step, and at periodic intervals these modes are recombined. The other terms are integrated with a time step dictated by the CFL condition for low-frequency waves. This large time step requires a special modification of the advective terms in the polar region to maintain stability. Test runs for 72 h show that EMTSS is a stable, efficient and accurate scheme.

  12. MUSIDH, multiple use of simulated demographic histories, a novel method to reduce computation time in microsimulation models of infectious diseases.

    Fischer, E A J; De Vlas, S J; Richardus, J H; Habbema, J D F

    2008-09-01

    Microsimulation of infectious diseases requires simulation of many life histories of interacting individuals. In particular, relatively rare infections such as leprosy need to be studied in very large populations. Computation time increases disproportionally with the size of the simulated population. We present a novel method, MUSIDH, an acronym for multiple use of simulated demographic histories, to reduce computation time. Demographic history refers to the processes of birth, death and all other demographic events that should be unrelated to the natural course of an infection; the method thus applies to non-fatal infections. MUSIDH attaches a fixed number of infection histories to each demographic history, and these infection histories interact as if they were the infection histories of separate individuals. With two examples, mumps and leprosy, we show that the method can give a factor-of-50 reduction in computation time at the cost of a small loss in precision. The largest reductions are obtained for rare infections with complex demographic histories.

  13. Multiple predictor smoothing methods for sensitivity analysis

    Helton, Jon Craig; Storlie, Curtis B.

    2006-01-01

    The use of multiple predictor smoothing methods in sampling-based sensitivity analyses of complex models is investigated. Specifically, sensitivity analysis procedures based on smoothing methods employing the stepwise application of the following nonparametric regression techniques are described: (1) locally weighted regression (LOESS), (2) additive models, (3) projection pursuit regression, and (4) recursive partitioning regression. The indicated procedures are illustrated with both simple test problems and results from a performance assessment for a radioactive waste disposal facility (i.e., the Waste Isolation Pilot Plant). As shown by the example illustrations, the use of smoothing procedures based on nonparametric regression techniques can yield more informative sensitivity analysis results than can be obtained with more traditional sensitivity analysis procedures based on linear regression, rank regression or quadratic regression when nonlinear relationships between model inputs and model predictions are present.
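
    A minimal example of the first of these ideas (not the papers' full stepwise procedure): LOESS-smooth the output against one input at a time and rank inputs by the fraction of output variance the smooth explains; unlike linear or rank regression, this picks up non-monotonic effects. Data are synthetic.

      # LOESS-based sensitivity ranking of model inputs.
      import numpy as np
      from statsmodels.nonparametric.smoothers_lowess import lowess

      rng = np.random.default_rng(6)
      X = rng.uniform(-1.0, 1.0, size=(500, 3))
      y = np.sin(3.0 * X[:, 0]) + 0.1 * X[:, 1] + rng.normal(0.0, 0.1, 500)

      for i in range(X.shape[1]):
          smooth = lowess(y, X[:, i], frac=0.3, return_sorted=False)
          explained = 1.0 - np.var(y - smooth) / np.var(y)
          print(f"input {i}: variance explained by LOESS = {explained:.2f}")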

  14. Multiple predictor smoothing methods for sensitivity analysis.

    Helton, Jon Craig; Storlie, Curtis B.

    2006-08-01

    The use of multiple predictor smoothing methods in sampling-based sensitivity analyses of complex models is investigated. Specifically, sensitivity analysis procedures based on smoothing methods employing the stepwise application of the following nonparametric regression techniques are described: (1) locally weighted regression (LOESS), (2) additive models, (3) projection pursuit regression, and (4) recursive partitioning regression. The indicated procedures are illustrated with both simple test problems and results from a performance assessment for a radioactive waste disposal facility (i.e., the Waste Isolation Pilot Plant). As shown by the example illustrations, the use of smoothing procedures based on nonparametric regression techniques can yield more informative sensitivity analysis results than can be obtained with more traditional sensitivity analysis procedures based on linear regression, rank regression or quadratic regression when nonlinear relationships between model inputs and model predictions are present.

  15. Geometrical model of multiple production

    Chikovani, Z.E.; Jenkovszky, L.L.; Kvaratshelia, T.M.; Struminskij, B.V.

    1988-01-01

    The relation between geometrical and KNO-scaling and their violation is studied in a geometrical model of multiple production of hadrons. Predictions concerning the behaviour of correlation coefficients at future accelerators are given

  16. Why Did the Bear Cross the Road? Comparing the Performance of Multiple Resistance Surfaces and Connectivity Modeling Methods

    Samuel A. Cushman

    2014-12-01

    Full Text Available There have been few assessments of the performance of alternative resistance surfaces, and little is known about how connectivity modeling approaches differ in their ability to predict organism movements. In this paper, we evaluate the performance of four connectivity modeling approaches applied to two resistance surfaces in predicting the locations of highway crossings by American black bears in the northern Rocky Mountains, USA. We found that a resistance surface derived directly from movement data greatly outperformed a resistance surface produced from analysis of genetic differentiation, despite their heuristic similarities. Our analysis also suggested differences in the performance of different connectivity modeling approaches. Factorial least cost paths appeared to slightly outperform other methods on the movement-derived resistance surface, but had very poor performance on the resistance surface obtained from multi-model landscape genetic analysis. Cumulative resistant kernels appeared to offer the best combination of high predictive performance and sensitivity to differences in resistance surface parameterization. Our analysis highlights that even when two resistance surfaces include the same variables and have a high spatial correlation of resistance values, they may perform very differently in predicting animal movement and population connectivity.

  17. Multiple Linear Regression Modeling To Predict the Stability of Polymer-Drug Solid Dispersions: Comparison of the Effects of Polymers and Manufacturing Methods on Solid Dispersion Stability.

    Fridgeirsdottir, Gudrun A; Harris, Robert J; Dryden, Ian L; Fischer, Peter M; Roberts, Clive J

    2018-03-29

    Solid dispersions can be a successful way to enhance the bioavailability of poorly soluble drugs. Here 60 solid dispersion formulations were produced using ten chemically diverse, neutral, poorly soluble drugs, three commonly used polymers, and two manufacturing techniques, spray-drying and melt extrusion. Each formulation underwent a six-month stability study at accelerated conditions, 40 °C and 75% relative humidity (RH). Significant differences in times to crystallization (onset of crystallization) were observed between both the different polymers and the two processing methods. Stability from zero days to over one year was observed. The extensive experimental data set obtained from this stability study was used to build multiple linear regression models to correlate physicochemical properties of the active pharmaceutical ingredients (API) with the stability data. The purpose of these models is to indicate which combination of processing method and polymer carrier is most likely to give a stable solid dispersion. Six quantitative mathematical multiple linear regression-based models were produced based on selection of the most influential independent physical and chemical parameters from a set of 33 possible factors, one model for each combination of polymer and processing method, with good predictability of stability. Three general rules are proposed from these models for the formulation development of suitably stable solid dispersions. Namely, increased stability is correlated with increased glass transition temperature (Tg) of solid dispersions, as well as decreased number of H-bond donors and increased molecular flexibility (such as rotatable bonds and ring count) of the drug molecule.

  18. Modeling Methods

    Healy, Richard W.; Scanlon, Bridget R.

    2010-01-01

    Simulation models are widely used in all types of hydrologic studies, and many of these models can be used to estimate recharge. Models can provide important insight into the functioning of hydrologic systems by identifying factors that influence recharge. The predictive capability of models can be used to evaluate how changes in climate, water use, land use, and other factors may affect recharge rates. Most hydrological simulation models, including watershed models and groundwater-flow models, are based on some form of water-budget equation, so the material in this chapter is closely linked to that in Chapter 2. Empirical models that are not based on a water-budget equation have also been used for estimating recharge; these models generally take the form of simple estimation equations that define annual recharge as a function of precipitation and possibly other climatic data or watershed characteristics.Model complexity varies greatly. Some models are simple accounting models; others attempt to accurately represent the physics of water movement through each compartment of the hydrologic system. Some models provide estimates of recharge explicitly; for example, a model based on the Richards equation can simulate water movement from the soil surface through the unsaturated zone to the water table. Recharge estimates can be obtained indirectly from other models. For example, recharge is a parameter in groundwater-flow models that solve for hydraulic head (i.e. groundwater level). Recharge estimates can be obtained through a model calibration process in which recharge and other model parameter values are adjusted so that simulated water levels agree with measured water levels. The simulation that provides the closest agreement is called the best fit, and the recharge value used in that simulation is the model-generated estimate of recharge.

  19. Analysis and Modeling for China’s Electricity Demand Forecasting Using a Hybrid Method Based on Multiple Regression and Extreme Learning Machine: A View from Carbon Emission

    Yi Liang

    2016-11-01

    Full Text Available The power industry is the main battlefield of CO2 emission reduction, and it plays an important role in the implementation and development of the low carbon economy. Forecasting electricity demand can provide a scientific basis for the country to formulate a power industry development strategy and further promote the sustained, healthy and rapid development of the national economy. Under the goal of a low-carbon economy, medium and long term electricity demand forecasting has very important practical significance. In this paper, a new hybrid electricity demand model framework is characterized as follows: firstly, the grey relation degree (GRD) is integrated with the induced ordered weighted harmonic averaging operator (IOWHA) to propose a new weight determination method for hybrid forecasting models that uses forecasting accuracy as the induced variable; secondly, the proposed weight determination method is utilized to construct an optimal hybrid forecasting model based on an extreme learning machine (ELM) forecasting model and a multiple regression (MR) model; thirdly, three scenarios in line with the level of realization of various carbon emission targets are discussed and the effect of a low-carbon economy on future electricity demand is dynamically simulated. The resulting findings show that the proposed hybrid model outperformed the individual forecasting models, particularly in reducing overall instability. In addition, the development of a low-carbon economy will increase the demand for electricity and have an impact on the adjustment of the electricity demand structure.

  20. A new statistical method for transfer coefficient calculations in the framework of the general multiple-compartment model of transport for radionuclides in biological systems.

    Garcia, F; Arruda-Neto, J D; Manso, M V; Helene, O M; Vanin, V R; Rodriguez, O; Mesa, J; Likhachev, V P; Filho, J W; Deppman, A; Perez, G; Guzman, F; de Camargo, S P

    1999-10-01

    A new and simple statistical procedure (STATFLUX) for the calculation of transfer coefficients of radionuclide transport to animals and plants is proposed. The method is based on the general multiple-compartment model, which uses a system of linear equations involving geometrical volume considerations. By using experimentally available curves of radionuclide concentrations versus time, for each animal compartment (organs), flow parameters were estimated by employing a least-squares procedure, whose consistency is tested. Some numerical results are presented in order to compare the STATFLUX transfer coefficients with those from other works and experimental data.
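
    The underlying model class is a system of linear compartment equations, dq/dt = A(k) q, whose rate constants are fitted to measured concentration curves by least squares. A two-compartment sketch with invented rates and synthetic "measurements" follows (STATFLUX itself adds geometrical volume considerations and consistency tests).

      # Least-squares fit of transfer coefficients in a 2-compartment model.
      import numpy as np
      from scipy.integrate import odeint
      from scipy.optimize import least_squares

      t = np.linspace(0.0, 30.0, 16)  # days

      def curves(k):
          k12, k21, k10 = k           # transfer and elimination rates (1/day)
          def rhs(q, _t):
              return [-(k12 + k10) * q[0] + k21 * q[1],
                      k12 * q[0] - k21 * q[1]]
          return odeint(rhs, [1.0, 0.0], t)  # unit intake into compartment 1

      true_k = [0.30, 0.05, 0.10]
      data = curves(true_k) + np.random.default_rng(7).normal(0.0, 0.01, (16, 2))

      fit = least_squares(lambda k: (curves(k) - data).ravel(),
                          x0=[0.1, 0.1, 0.1], bounds=(0.0, 1.0))
      print("estimated transfer coefficients:", fit.x.round(3))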

  1. A new statistical method for transfer coefficient calculations in the framework of the general multiple-compartment model of transport for radionuclides in biological systems

    Garcia, F.; Manso, M.V.; Rodriguez, O.; Mesa, J.; Arruda-Neto, J.D.T.; Helene, O.M.; Vanin, V.R.; Likhachev, V.P.; Pereira Filho, J.W.; Deppman, A.; Perez, G.; Guzman, F.; Camargo, S.P. de

    1999-01-01

    A new and simple statistical procedure (STATFLUX) for the calculation of transfer coefficients of radionuclide transport to animals and plants is proposed. The method is based on the general multiple-compartment model, which uses a system of linear equations involving geometrical volume considerations. By using experimentally available curves of radionuclide concentrations versus time, for each animal compartment (organs), flow parameters were estimated by employing a least-squares procedure, whose consistency is tested. Some numerical results are presented in order to compare the STATFLUX transfer coefficients with those from other works and experimental data. (author)

  2. A multiple regression method for genomewide association studies ...

    Bujun Mei

    2018-06-07

    Similar to the typical genomewide association tests using LD ... the new approach performed validly when the multiple regression based on linkage method was employed. ... the model, two groups of scenarios were simulated.

  3. A note on the relationships between multiple imputation, maximum likelihood and fully Bayesian methods for missing responses in linear regression models.

    Chen, Qingxia; Ibrahim, Joseph G

    2014-07-01

    Multiple Imputation, Maximum Likelihood and Fully Bayesian methods are the three most commonly used model-based approaches in missing data problems. Although it is easy to show that when the responses are missing at random (MAR), the complete case analysis is unbiased and efficient, the aforementioned methods are still commonly used in practice for this setting. To examine the performance of and relationships between these three methods in this setting, we derive and investigate small sample and asymptotic expressions of the estimates and standard errors, and fully examine how these estimates are related for the three approaches in the linear regression model when the responses are MAR. We show that when the responses are MAR in the linear model, the estimates of the regression coefficients using these three methods are asymptotically equivalent to the complete case estimates under general conditions. One simulation and a real data set from a liver cancer clinical trial are given to compare the properties of these methods when the responses are MAR.
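
    The asymptotic equivalence is easy to see numerically. The following Python sketch (numpy assumed) compares the complete-case estimate with a deliberately simplified multiple-imputation estimate (an improper imputation that skips the posterior draw of the parameters); with responses MAR given x, the pooled coefficients land close to the complete-case ones. All data and the m = 20 imputations are hypothetical.

        import numpy as np

        rng = np.random.default_rng(2)
        n, m = 500, 20
        x = rng.normal(size=n)
        y = 1.0 + 2.0 * x + rng.normal(size=n)
        miss = rng.random(n) < 1 / (1 + np.exp(-(x - 0.5)))   # MAR: depends on x only

        def ols(xv, yv):
            X = np.column_stack([np.ones_like(xv), xv])
            beta, *_ = np.linalg.lstsq(X, yv, rcond=None)
            sigma2 = np.sum((yv - X @ beta) ** 2) / (len(yv) - 2)
            return beta, sigma2

        beta_cc, sigma2 = ols(x[~miss], y[~miss])             # complete-case fit

        betas = []
        for _ in range(m):
            # improper MI for brevity: beta_cc and sigma2 are reused,
            # with no posterior draw of the parameters
            y_imp = y.copy()
            y_imp[miss] = (beta_cc[0] + beta_cc[1] * x[miss]
                           + rng.normal(0, np.sqrt(sigma2), miss.sum()))
            betas.append(ols(x, y_imp)[0])
        print("complete case:", beta_cc)
        print("MI (pooled):  ", np.mean(betas, axis=0))       # nearly identical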

  4. Why did the bear cross the road? Comparing the performance of multiple resistance surfaces and connectivity modeling methods

    Samuel A. Cushman; Jesse S. Lewis; Erin L. Landguth

    2014-01-01

    There have been few assessments of the performance of alternative resistance surfaces, and little is known about how connectivity modeling approaches differ in their ability to predict organism movements. In this paper, we evaluate the performance of four connectivity modeling approaches applied to two resistance surfaces in predicting the locations of highway...

  5. Optimization of breeding methods when introducing multiple ...

    Optimization of breeding methods when introducing multiple resistance genes from American to Chinese wheat. JN Qi, X Zhang, C Yin, H Li, F Lin. Abstract. Stripe rust is one of the most destructive diseases of wheat worldwide. Growing resistant cultivars with resistance genes is the most effective method to control this ...

  6. In house validation of a high resolution mass spectrometry Orbitrap-based method for multiple allergen detection in a processed model food.

    Pilolli, Rosa; De Angelis, Elisabetta; Monaci, Linda

    2018-02-13

    In recent years, mass spectrometry (MS) has been establishing its role in the development of analytical methods for multiple allergen detection, but most analyses are being carried out on low-resolution mass spectrometers such as triple quadrupole or ion traps. In this investigation, performance provided by a high resolution (HR) hybrid quadrupole-Orbitrap™ MS platform for the multiple allergens detection in processed food matrix is presented. In particular, three different acquisition modes were compared: full-MS, targeted-selected ion monitoring with data-dependent fragmentation (t-SIM/dd2), and parallel reaction monitoring. In order to challenge the HR-MS platform, the sample preparation was kept as simple as possible, limited to a 30-min ultrasound-aided protein extraction followed by clean-up with disposable size exclusion cartridges. Selected peptide markers tracing for five allergenic ingredients namely skim milk, whole egg, soy flour, ground hazelnut, and ground peanut were monitored in home-made cookies chosen as model processed matrix. Timed t-SIM/dd2 was found the best choice as a good compromise between sensitivity and accuracy, accomplishing the detection of 17 peptides originating from the five allergens in the same run. The optimized method was validated in-house through the evaluation of matrix and processing effects, recoveries, and precision. The selected quantitative markers for each allergenic ingredient provided quantification of 60-100 μg ingred /g allergenic ingredient/matrix in incurred cookies.

  7. Hybrid multiple criteria decision-making methods

    Zavadskas, Edmundas Kazimieras; Govindan, K.; Antucheviciene, Jurgita

    2016-01-01

    Formal decision-making methods can be used to help improve the overall sustainability of industries and organisations. Recently, there has been a great proliferation of works aggregating sustainability criteria by using diverse multiple criteria decision-making (MCDM) techniques. A number of revi...

  8. Multiple Shooting and Time Domain Decomposition Methods

    Geiger, Michael; Körkel, Stefan; Rannacher, Rolf

    2015-01-01

    This book offers a comprehensive collection of the most advanced numerical techniques for the efficient and effective solution of simulation and optimization problems governed by systems of time-dependent differential equations. The contributions present various approaches to time domain decomposition, focusing on multiple shooting and parareal algorithms.  The range of topics covers theoretical analysis of the methods, as well as their algorithmic formulation and guidelines for practical implementation. Selected examples show that the discussed approaches are mandatory for the solution of challenging practical problems. The practicability and efficiency of the presented methods is illustrated by several case studies from fluid dynamics, data compression, image processing and computational biology, giving rise to possible new research topics.  This volume, resulting from the workshop Multiple Shooting and Time Domain Decomposition Methods, held in Heidelberg in May 2013, will be of great interest to applied...

  9. INTEGRATED FUSION METHOD FOR MULTIPLE TEMPORAL-SPATIAL-SPECTRAL IMAGES

    H. Shen

    2012-08-01

    Full Text Available Data fusion techniques have been widely researched and applied in the remote sensing field. In this paper, an integrated fusion method for remotely sensed images is presented. Unlike existing methods, the proposed method is able to integrate the complementary information in multiple temporal-spatial-spectral images. In order to represent and process the images in one unified framework, two general image observation models are first presented, and then the maximum a posteriori (MAP) framework is used to set up the fusion model. The gradient descent method is employed to solve for the fused image. The efficacy of the proposed method is validated using simulated images.
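
    A toy version of the MAP fusion step can be sketched as follows (Python with numpy assumed): two observations of the same scene, one noisy at full resolution and one clean at half resolution, are fused by gradient descent on a quadratic data-plus-smoothness functional. The 2x2 block-averaging degradation model, the regularization weight and the step size are illustrative choices, not the paper's observation models.

        import numpy as np

        def fuse_map(y_hi, y_lo, lam=0.1, steps=200, lr=0.2):
            # Gradient descent on a toy MAP functional
            #   J(x) = ||y_hi - x||^2 + ||y_lo - D(x)||^2 + lam * ||grad x||^2
            # where D is 2x2 block averaging (a crude spatial degradation model).
            x = y_hi.copy()
            for _ in range(steps):
                g = 2 * (x - y_hi)                          # full-resolution data term
                down = x.reshape(x.shape[0]//2, 2, x.shape[1]//2, 2).mean(axis=(1, 3))
                g += 2 * np.kron(down - y_lo, np.ones((2, 2))) / 4.0  # D^T(Dx - y_lo)
                lap = (np.roll(x, 1, 0) + np.roll(x, -1, 0)
                       + np.roll(x, 1, 1) + np.roll(x, -1, 1) - 4 * x)
                g -= 2 * lam * lap                          # smoothness prior
                x -= lr * g
            return x

        rng = np.random.default_rng(3)
        truth = np.outer(np.sin(np.linspace(0, 3, 16)), np.cos(np.linspace(0, 3, 16)))
        y_hi = truth + rng.normal(0, 0.2, truth.shape)      # noisy, full resolution
        y_lo = truth.reshape(8, 2, 8, 2).mean(axis=(1, 3))  # clean, half resolution
        fused = fuse_map(y_hi, y_lo)
        print(np.abs(fused - truth).mean(), np.abs(y_hi - truth).mean())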

  10. Multivariate analysis: models and method

    Sanz Perucha, J.

    1990-01-01

    Data treatment techniques are increasingly used since computer methods result of wider access. Multivariate analysis consists of a group of statistic methods that are applied to study objects or samples characterized by multiple values. A final goal is decision making. The paper describes the models and methods of multivariate analysis

  11. Multiple time-scale methods in particle simulations of plasmas

    Cohen, B.I.

    1985-01-01

    This paper surveys recent advances in the application of multiple time-scale methods to particle simulation of collective phenomena in plasmas. These methods dramatically improve the efficiency of simulating low-frequency kinetic behavior by allowing the use of a large timestep, while retaining accuracy. The numerical schemes surveyed provide selective damping of unwanted high-frequency waves and preserve numerical stability in a variety of physics models: electrostatic, magneto-inductive, Darwin and fully electromagnetic. The paper reviews hybrid simulation models, the implicit moment-equation method, the direct implicit method, orbit averaging, and subcycling
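
    The basic idea of subcycling, one of the surveyed schemes, can be conveyed in a few lines (Python assumed): the fast orbit motion is advanced with a small substep while the slowly varying field is updated only once per large step. The force law, rates and units below are invented for illustration.

        import numpy as np

        dt_field = 0.5          # large timestep for the slow field evolution
        n_sub = 25              # orbit substeps per field step
        omega_c = 5.0           # fast oscillation frequency (hypothetical units)

        x, v = 1.0, 0.0
        E = 0.1                 # slowly varying field amplitude
        for step in range(100):
            dt = dt_field / n_sub
            for _ in range(n_sub):
                # symplectic-Euler push of the fast motion with the small step
                v += dt * (-omega_c**2 * x + E)
                x += dt * v
            # slow field update once per large step (toy relaxation law)
            E += dt_field * (-0.01 * E)
        print(x, v, E)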

  12. Multiple Indicator Stationary Time Series Models.

    Sivo, Stephen A.

    2001-01-01

    Discusses the propriety and practical advantages of specifying multivariate time series models in the context of structural equation modeling for time series and longitudinal panel data. For time series data, the multiple indicator model specification improves on classical time series analysis. For panel data, the multiple indicator model…

  13. Modeling Multiple Causes of Carcinogenesis

    Jones, T D

    1999-01-24

    An array of epidemiological results and databases on test animals indicate that risk of cancer and atherosclerosis can be up- or down-regulated by diet through a range of 200%. Other factors contribute incrementally and include the natural terrestrial environment and various human activities that jointly produce complex exposures to endotoxin-producing microorganisms, ionizing radiations, and chemicals. Ordinary personal habits and simple physical irritants have been demonstrated to affect the immune response and risk of disease. There tends to be poor statistical correlation of long-term risk with single-agent exposures incurred throughout working careers. However, Agency recommendations for control of hazardous exposures to humans have been substance-specific instead of contextually realistic, even though there is consistent evidence for common mechanisms of toxicological and carcinogenic action. That behavior seems to be best explained by molecular stresses from cellular oxygen metabolism and phagocytosis of antigenic invasion, as well as breakdown of normal metabolic compounds associated with homeostatic- and injury-related renewal of cells. There is continually mounting evidence that marrow stroma, comprised largely of monocyte-macrophages and fibroblasts, is important to phagocytic and cytokinetic response, but the complex action of the immune process is difficult to infer from first-principle logic or biomarkers of toxic injury. The many diverse database studies all seem to implicate two important processes, i.e., the univalent reduction of molecular oxygen and the breakdown of arginine, an amino acid, by hydrolysis or digestion of protein, which is attendant to normal antigen-antibody action. This behavior indicates that protection guidelines and risk coefficients should be context-dependent to include reference considerations of the composite action of parameters that mediate oxygen metabolism. A logic of this type permits the realistic common-scale modeling of

  14. Method for measuring multiple scattering corrections between liquid scintillators

    Verbeke, J.M., E-mail: verbeke2@llnl.gov; Glenn, A.M., E-mail: glenn22@llnl.gov; Keefer, G.J., E-mail: keefer1@llnl.gov; Wurtz, R.E., E-mail: wurtz1@llnl.gov

    2016-07-21

    A time-of-flight method is proposed to experimentally quantify the fractions of neutrons scattering between scintillators. An array of scintillators is characterized in terms of crosstalk with this method by measuring a californium source, for different neutron energy thresholds. The spectral information recorded by the scintillators can be used to estimate the fractions of neutrons multiple scattering. With the help of a correction to Feynman's point model theory to account for multiple scattering, these fractions can in turn improve the mass reconstruction of fissile materials under investigation.
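
    A minimal version of the coincidence-counting idea might look like the following Python sketch (numpy assumed): detector-B events falling within a short time window of a detector-A event are counted as candidate scatter events. The window width, timestamps and the 5% injected crosstalk rate are hypothetical, and a real analysis would also subtract accidental coincidences.

        import numpy as np

        def crosstalk_fraction(times_a, times_b, window=50.0):
            # Fraction of detector-A counts with a detector-B count within a short
            # coincidence window: a simple proxy for the multiple-scattering
            # (crosstalk) fraction measured by time of flight.
            times_b = np.sort(times_b)
            idx = np.clip(np.searchsorted(times_b, times_a), 1, len(times_b) - 1)
            nearest = np.minimum(np.abs(times_b[idx] - times_a),
                                 np.abs(times_b[idx - 1] - times_a))
            return float(np.mean(nearest <= window))

        rng = np.random.default_rng(4)
        a = np.sort(rng.uniform(0, 1e6, 2000))           # A timestamps (ns)
        mask = rng.random(a.size) < 0.05                 # 5% produce a correlated B hit
        b = np.concatenate([a[mask] + rng.uniform(2, 20, mask.sum()),
                            rng.uniform(0, 1e6, 300)])   # plus uncorrelated B counts
        print(crosstalk_fraction(a, b))                  # ~0.05 plus accidentals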

  15. Multiple Model Approaches to Modelling and Control,

    on the ease with which prior knowledge can be incorporated. It is interesting to note that researchers in Control Theory, Neural Networks, Statistics, Artificial Intelligence and Fuzzy Logic have more or less independently developed very similar modelling methods, calling them Local Model Networks, Operating…, and allows direct incorporation of high-level and qualitative plant knowledge into the model. These advantages have proven to be very appealing for industrial applications, and the practical, intuitively appealing nature of the framework is demonstrated in chapters describing applications of local methods… to problems in the process industries, biomedical applications and autonomous systems. The successful application of the ideas to demanding problems is already encouraging, but creative development of the basic framework is needed to better allow the integration of human knowledge with automated learning…

  16. Experiences of nurse practitioners and medical practitioners working in collaborative practice models in primary healthcare in Australia - a multiple case study using mixed methods.

    Schadewaldt, Verena; McInnes, Elizabeth; Hiller, Janet E; Gardner, Anne

    2016-07-29

    In 2010 policy changes were introduced to the Australian healthcare system that granted nurse practitioners access to the public health insurance scheme (Medicare) subject to a collaborative arrangement with a medical practitioner. These changes facilitated nurse practitioner practice in primary healthcare settings. This study investigated the experiences and perceptions of nurse practitioners and medical practitioners who worked together under the new policies and aimed to identify enablers of collaborative practice models. A multiple case study of five primary healthcare sites was undertaken, applying mixed methods research. Six nurse practitioners, 13 medical practitioners and three practice managers participated in the study. Data were collected through direct observations, documents and semi-structured interviews as well as questionnaires including validated scales to measure the level of collaboration, satisfaction with collaboration and beliefs in the benefits of collaboration. Thematic analysis was undertaken for qualitative data from interviews, observations and documents, followed by deductive analysis whereby thematic categories were compared to two theoretical models of collaboration. Questionnaire responses were summarised using descriptive statistics. Using the scale measurements, nurse practitioners and medical practitioners reported high levels of collaboration, were highly satisfied with their collaborative relationship and strongly believed that collaboration benefited the patient. The three themes developed from qualitative data showed a more complex and nuanced picture: 1) Structures such as government policy requirements and local infrastructure disadvantaged nurse practitioners financially and professionally in collaborative practice models; 2) Participants experienced the influence and consequences of individual role enactment through the co-existence of overlapping, complementary, traditional and emerging roles, which blurred perceptions of

  17. The STATFLUX code: a statistical method for calculation of flow and set of parameters, based on the Multiple-Compartment Biokinetical Model

    Garcia, F.; Mesa, J.; Arruda-Neto, J. D. T.; Helene, O.; Vanin, V.; Milian, F.; Deppman, A.; Rodrigues, T. E.; Rodriguez, O.

    2007-03-01

    The code STATFLUX, implementing a new and simple statistical procedure for the calculation of transfer coefficients in radionuclide transport to animals and plants, is proposed. The method is based on the general multiple-compartment model, which uses a system of linear equations involving geometrical volume considerations. Flow parameters were estimated by employing two different least-squares procedures: derivative and Gauss-Marquardt methods, with the available experimental data of radionuclide concentrations as the input functions of time. The solution of the inverse problem, which relates a given set of flow parameters to the time evolution of concentration functions, is achieved via a Monte Carlo simulation procedure.
    Program summary. Title of program: STATFLUX. Catalogue identifier: ADYS_v1_0. Program summary URL: http://cpc.cs.qub.ac.uk/summaries/ADYS_v1_0. Program obtainable from: CPC Program Library, Queen's University of Belfast, N. Ireland. Licensing provisions: none. Computer for which the program is designed and others on which it has been tested: micro-computer with Intel Pentium III, 3.0 GHz. Installation: Laboratory of Linear Accelerator, Department of Experimental Physics, University of São Paulo, Brazil. Operating system: Windows 2000 and Windows XP. Programming language used: Fortran-77 as implemented in Microsoft Fortran 4.0. NOTE: Microsoft Fortran includes non-standard features which are used in this program; standard Fortran compilers such as g77, f77, ifort and NAG95 are not able to compile the code, and therefore it has not been possible for the CPC Program Library to test the program. Memory required to execute with typical data: 8 MB of RAM and 100 MB of hard disk. No. of bits in a word: 16. No. of lines in distributed program, including test data, etc.: 6912. No. of bytes in distributed program, including test data, etc.: 229541. Distribution format: tar.gz. Nature of the physical problem: the investigation of transport mechanisms for

  18. An Intuitionistic Multiplicative ORESTE Method for Patients’ Prioritization of Hospitalization

    Cheng Zhang

    2018-04-01

    Full Text Available The tension brought about by sickbeds is a common and intractable issue in public hospitals in China due to the large population. Assigning the order of hospitalization of patients is difficult because of complex patient information such as disease type, emergency degree, and severity. It is critical to rank the patients taking full account of various factors. However, most of the evaluation criteria for hospitalization are qualitative, and classical ranking methods cannot derive the detailed relations between patients based on these criteria. Motivated by this, a comprehensive multiple criteria decision making method named the intuitionistic multiplicative ORESTE (organisation, rangement et synthèse de données relationnelles, in French) was proposed to handle the problem. Both the subjective and objective weights of criteria were considered in the proposed method. To do so, first, considering the vagueness of human perceptions towards the alternatives, an intuitionistic multiplicative preference relation model is applied to represent the experts' preferences over the pairwise alternatives with respect to the predetermined criteria. Then, a correlation coefficient-based weight determining method is developed to derive the objective weights of criteria; this method can overcome the biased results caused by highly correlated criteria. Afterwards, we improved the general ranking method ORESTE by introducing a new score function which considers both the subjective and objective weights of criteria. An intuitionistic multiplicative ORESTE method was then developed and further illustrated by a case study concerning the prioritization of patients.

  19. Case studies: Soil mapping using multiple methods

    Petersen, Hauke; Wunderlich, Tina; Hagrey, Said A. Al; Rabbel, Wolfgang; Stümpel, Harald

    2010-05-01

    Soil is a non-renewable resource with fundamental functions like filtering (e.g. water), storing (e.g. carbon), transforming (e.g. nutrients) and buffering (e.g. contamination). Degradation of soils is by now a well-known fact not only to scientists; decision makers in politics have also accepted it as a serious problem for several environmental aspects. National and international authorities have already worked out preservation and restoration strategies for soil degradation, though how to put these strategies into practice is still a matter of active research. Common to all strategies, a description of soil state and dynamics is required as a base step. This includes collecting information from soils with methods ranging from direct soil sampling to remote applications. At an intermediate scale, mobile geophysical methods are applied, with the advantage of fast working progress but the disadvantage of site-specific calibration and interpretation issues. In the framework of the iSOIL project, we present here some case studies of soil mapping performed using multiple geophysical methods. We present examples of combined field measurements with EMI, GPR, magnetic and gamma-spectrometric techniques carried out with the mobile multi-sensor system of Kiel University (GER). Depending on soil type and actual environmental conditions, different methods yield information of different quality. By applying diverse methods we want to figure out which methods, or combinations of methods, give the most reliable information concerning soil state and properties. To investigate the influence of varying material we performed mapping campaigns on field sites with sandy, loamy and loessy soils. Classification of measured or derived attributes shows not only the lateral variability but also gives hints of variation in the vertical distribution of soil material. For all soils, of course, soil water content can be a critical factor concerning a successful

  20. Multiple Imputation of Predictor Variables Using Generalized Additive Models

    de Jong, Roel; van Buuren, Stef; Spiess, Martin

    2016-01-01

    The sensitivity of multiple imputation methods to deviations from their distributional assumptions is investigated using simulations, where the parameters of scientific interest are the coefficients of a linear regression model, and values in predictor variables are missing at random. The

  1. The Multiple Intelligences Teaching Method and Mathematics ...

    The Multiple Intelligences teaching approach has evolved and been embraced widely especially in the United States. The approach has been found to be very effective in changing situations for the better, in the teaching and learning of any subject especially mathematics. Multiple Intelligences teaching approach proposes ...

  2. Walking path-planning method for multiple radiation areas

    Liu, Yong-kuo; Li, Meng-kun; Peng, Min-jun; Xie, Chun-li; Yuan, Cheng-qian; Wang, Shuang-yu; Chao, Nan

    2016-01-01

    Highlights: • A radiation environment modeling method is designed. • A path-evaluating method and a segmented path-planning method are proposed. • A path-planning simulation platform for radiation environments is built. • The method avoids being misled by the minimum dose path of a single area. - Abstract: In this paper, building on a minimum-dose path-searching method, a walking path-planning method for multiple radiation areas was designed, solving the minimum-dose path problem within a single area and finding the minimum-dose path in the whole space. A path-planning simulation platform was built using the C# programming language and the DirectX engine. The simulation platform was used in simulations dealing with virtual nuclear facilities. Simulation results indicated that the walking path-planning method is effective in providing safety for people walking in nuclear facilities.
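
    The single-area building block, a minimum-dose path search, is essentially a shortest-path problem on a dose-weighted grid; a hedged Python sketch using Dijkstra's algorithm is given below. The grid, dose rates and 4-neighbour moves are illustrative assumptions, not the paper's radiation-environment model.

        import heapq
        import numpy as np

        def min_dose_path(dose_rate, start, goal):
            # Dijkstra search on a grid where the cost of entering a cell is its
            # dose rate times a constant walking time, i.e. the accumulated dose.
            rows, cols = dose_rate.shape
            dist = np.full((rows, cols), np.inf)
            dist[start] = dose_rate[start]
            prev, heap = {}, [(dose_rate[start], start)]
            while heap:
                d, (r, c) = heapq.heappop(heap)
                if (r, c) == goal:
                    break
                if d > dist[r, c]:
                    continue
                for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                    nr, nc = r + dr, c + dc
                    if 0 <= nr < rows and 0 <= nc < cols:
                        nd = d + dose_rate[nr, nc]
                        if nd < dist[nr, nc]:
                            dist[nr, nc] = nd
                            prev[nr, nc] = (r, c)
                            heapq.heappush(heap, (nd, (nr, nc)))
            path, node = [goal], goal
            while node != start:
                node = prev[node]
                path.append(node)
            return path[::-1], dist[goal]

        rng = np.random.default_rng(5)
        field = rng.uniform(0.1, 1.0, (8, 8))
        field[2:6, 3:5] = 50.0                      # a high-radiation area to avoid
        path, dose = min_dose_path(field, (0, 0), (7, 7))
        print(dose); print(path)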

  3. Multiple centroid method to evaluate the adaptability of alfalfa genotypes

    Moysés Nascimento

    2015-02-01

    Full Text Available This study aimed to evaluate the efficiency of multiple centroids for studying the adaptability of alfalfa genotypes (Medicago sativa L.). In this method, the genotypes are compared with ideotypes defined by the bisegmented regression model, according to the researcher's interest. Thus, genotype classification is carried out as determined by the objective of the researcher and the proposed recommendation strategy. Despite the great potential of the method, it needs to be evaluated in a biological context (with real data). To this end, we used data on the evaluation of dry matter production of 92 alfalfa cultivars, with 20 cuttings, from an experiment in randomized blocks with two repetitions carried out from November 2004 to June 2006. The multiple centroid method proved efficient for classifying alfalfa genotypes. Moreover, it produced no ambiguous indications, provided that ideotypes were defined according to the researcher's interest, facilitating data interpretation.
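
    The classification step can be sketched compactly (Python with numpy assumed): genotypes are assigned to the nearest ideotype centroid in the space of environment-wise performance. The four classical centroids below are placeholders; the multiple centroid method of the paper defines additional ideotypes according to the recommendation strategy.

        import numpy as np

        def classify_by_centroids(responses, centroids, labels):
            # assign each genotype to the nearest ideotype centroid
            # (Euclidean distance over environment-wise performance)
            d = np.linalg.norm(responses[:, None, :] - centroids[None, :, :], axis=2)
            return [labels[i] for i in d.argmin(axis=1)]

        # columns: mean yield in unfavorable / favorable environments (standardized)
        rng = np.random.default_rng(6)
        genotypes = rng.normal(0, 1, (10, 2))
        centroids = np.array([[ 1.5,  1.5],    # general adaptability
                              [ 1.5, -1.5],    # adapted to unfavorable environments
                              [-1.5,  1.5],    # adapted to favorable environments
                              [-1.5, -1.5]])   # poorly adapted
        labels = ["general", "unfavorable", "favorable", "poor"]
        print(classify_by_centroids(genotypes, centroids, labels))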

  4. A crack growth evaluation method for interacting multiple cracks

    Kamaya, Masayuki

    2003-01-01

    When stress corrosion cracking or corrosion fatigue occurs, multiple cracks are frequently initiated in the same area. According to Section XI of the ASME Boiler and Pressure Vessel Code, multiple cracks are considered as a single combined crack in crack growth analysis if the specified conditions are satisfied. In crack growth processes, however, no prescription for the interference between multiple cracks is given in this code. The JSME Post-Construction Code, issued in May 2000, prescribes the conditions of crack coalescence in the crack growth process. This study aimed to extend this prescription to more general cases. A simulation model was applied to simulate the crack growth process, taking into account the interference between two cracks. This model made it possible to analyze multiple crack growth behaviors for many cases (e.g. different relative positions and lengths) that could not be studied by experiment alone. Based on these analyses, a new crack growth analysis method was suggested for taking into account the interference between multiple cracks. (author)

  5. Multiplicity Control in Structural Equation Modeling

    Cribbie, Robert A.

    2007-01-01

    Researchers conducting structural equation modeling analyses rarely, if ever, control for the inflated probability of Type I errors when evaluating the statistical significance of multiple parameters in a model. In this study, the Type I error control, power and true model rates of familywise and false discovery rate controlling procedures were…

  6. Multiple predictor smoothing methods for sensitivity analysis: Description of techniques

    Storlie, Curtis B.; Helton, Jon C.

    2008-01-01

    The use of multiple predictor smoothing methods in sampling-based sensitivity analyses of complex models is investigated. Specifically, sensitivity analysis procedures based on smoothing methods employing the stepwise application of the following nonparametric regression techniques are described: (i) locally weighted regression (LOESS), (ii) additive models, (iii) projection pursuit regression, and (iv) recursive partitioning regression. Then, in the second and concluding part of this presentation, the indicated procedures are illustrated with both simple test problems and results from a performance assessment for a radioactive waste disposal facility (i.e., the Waste Isolation Pilot Plant). As shown by the example illustrations, the use of smoothing procedures based on nonparametric regression techniques can yield more informative sensitivity analysis results than can be obtained with more traditional sensitivity analysis procedures based on linear regression, rank regression or quadratic regression when nonlinear relationships between model inputs and model predictions are present
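
    As a small illustration of why smoothing-based procedures help (Python with numpy and statsmodels assumed), the sketch below scores each input by the variance explained by a LOESS smooth; a strongly nonlinear input gets a high score even where plain linear correlation would understate it. The data and the frac setting are hypothetical.

        import numpy as np
        from statsmodels.nonparametric.smoothers_lowess import lowess

        rng = np.random.default_rng(7)
        n = 400
        x1, x2 = rng.uniform(-1, 1, (2, n))
        y = np.sin(3 * x1) + 0.1 * x2 + rng.normal(0, 0.1, n)   # nonlinear in x1

        def lowess_r2(x, y):
            # variance explained by a LOESS smooth of y on a single predictor
            fit = lowess(y, x, frac=0.3, return_sorted=False)
            return 1 - np.var(y - fit) / np.var(y)

        print("LOESS R^2, x1:", lowess_r2(x1, y))    # large: nonlinear effect captured
        print("LOESS R^2, x2:", lowess_r2(x2, y))    # small
        # linear correlation would understate x1's importance:
        print("linear r^2, x1:", np.corrcoef(x1, y)[0, 1] ** 2)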

  7. Multiple predictor smoothing methods for sensitivity analysis: Example results

    Storlie, Curtis B.; Helton, Jon C.

    2008-01-01

    The use of multiple predictor smoothing methods in sampling-based sensitivity analyses of complex models is investigated. Specifically, sensitivity analysis procedures based on smoothing methods employing the stepwise application of the following nonparametric regression techniques are described in the first part of this presentation: (i) locally weighted regression (LOESS), (ii) additive models, (iii) projection pursuit regression, and (iv) recursive partitioning regression. In this, the second and concluding part of the presentation, the indicated procedures are illustrated with both simple test problems and results from a performance assessment for a radioactive waste disposal facility (i.e., the Waste Isolation Pilot Plant). As shown by the example illustrations, the use of smoothing procedures based on nonparametric regression techniques can yield more informative sensitivity analysis results than can be obtained with more traditional sensitivity analysis procedures based on linear regression, rank regression or quadratic regression when nonlinear relationships between model inputs and model predictions are present

  8. SDG and qualitative trend based model multiple scale validation

    Gao, Dong; Xu, Xin; Yin, Jianjin; Zhang, Hongyu; Zhang, Beike

    2017-09-01

    Verification, Validation and Accreditation (VV&A) is a key technology of simulation and modelling. Traditional model validation methods are weak in completeness: validation is carried out at a single scale and depends on human experience. An SDG (Signed Directed Graph) and qualitative-trend-based multiple scale validation is therefore proposed. First, the SDG model is built and qualitative trends are added to it. Then, complete testing scenarios are produced by positive inference. The multiple scale validation is carried out by comparing the testing scenarios with outputs of the simulation model at different scales. Finally, the effectiveness is proved by carrying out validation for a reactor model.

  9. Measurement of subcritical multiplication by the interval distribution method

    Nelson, G.W.

    1985-01-01

    The prompt decay constant or the subcritical neutron multiplication may be determined by measuring the distribution of the time intervals between successive neutron counts. The distribution data is analyzed by least-squares fitting to a theoretical distribution function derived from a point reactor probability model. Published results of measurements with one- and two-detector systems are discussed. Data collection times are shorter, and statistical errors are smaller the nearer the system is to delayed critical. Several of the measurements indicate that a shorter data collection time and higher accuracy are possible with the interval distribution method than with the Feynman variance method
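
    The flavour of the method can be conveyed with a toy Python sketch (numpy/scipy assumed): simulated count times containing correlated bursts are reduced to inter-event intervals, the interval histogram is fitted with a two-exponential model, and the faster decay rate is read off as the prompt-decay estimate. The simulation and the fit model are crude stand-ins for the point-reactor probability model of the paper.

        import numpy as np
        from scipy.optimize import curve_fit

        # simulate count times: Poisson background plus correlated short bursts,
        # a crude stand-in for neutron multiplication chains (times in us)
        rng = np.random.default_rng(8)
        base = np.cumsum(rng.exponential(100.0, 5000))
        burst_parents = base[rng.random(base.size) < 0.2]
        bursts = np.concatenate([p + rng.exponential(5.0, 3) for p in burst_parents])
        times = np.sort(np.concatenate([base, bursts]))

        intervals = np.diff(times)
        hist, edges = np.histogram(intervals, bins=60, range=(0, 300))
        centers = 0.5 * (edges[:-1] + edges[1:])

        # for a multiplying system the interval density deviates from a single
        # exponential; fit two exponentials and read off the faster decay rate
        def model(t, a, b, lam, r):
            return a * np.exp(-lam * t) + b * np.exp(-r * t)

        p0 = [hist.max() * 0.2, hist.max(), 0.1, 0.005]
        popt, _ = curve_fit(model, centers, hist, p0=p0, maxfev=20000)
        print("prompt decay constant estimate:", max(popt[2], popt[3]))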

  10. Multiple Response Regression for Gaussian Mixture Models with Known Labels.

    Lee, Wonyul; Du, Ying; Sun, Wei; Hayes, D Neil; Liu, Yufeng

    2012-12-01

    Multiple response regression is a useful regression technique to model multiple response variables using the same set of predictor variables. Most existing methods for multiple response regression are designed for modeling homogeneous data. In many applications, however, one may have heterogeneous data where the samples are divided into multiple groups. Our motivating example is a cancer dataset where the samples belong to multiple cancer subtypes. In this paper, we consider modeling the data coming from a mixture of several Gaussian distributions with known group labels. A naive approach is to split the data into several groups according to the labels and model each group separately. Although it is simple, this approach ignores potential common structures across different groups. We propose new penalized methods to model all groups jointly in which the common and unique structures can be identified. The proposed methods estimate the regression coefficient matrix, as well as the conditional inverse covariance matrix of response variables. Asymptotic properties of the proposed methods are explored. Through numerical examples, we demonstrate that both estimation and prediction can be improved by modeling all groups jointly using the proposed methods. An application to a glioblastoma cancer dataset reveals some interesting common and unique gene relationships across different cancer subtypes.

  11. Basic thinking patterns and working methods for multiple DFX

    Andreasen, Mogens Myrup; Mortensen, Niels Henrik

    1997-01-01

    This paper attempts to describe the theory and methodologies behind DFX and the linking of multiple DFXs together. The contribution is an articulation of basic thinking patterns and a description of some working methods for handling multiple DFX.

  12. Computationally efficient dynamic modeling of robot manipulators with multiple flexible-links using acceleration-based discrete time transfer matrix method

    Zhang, Xuping; Sørensen, Rasmus; RahbekIversen, Mathias

    2018-01-01

    This paper presents a novel and computationally efficient modeling method for the dynamics of flexible-link robot manipulators. In this method, a robot manipulator is decomposed into components/elements. The component/element dynamics is established using Newton–Euler equations, and then is linearized based on the acceleration-based state vector. The transfer matrices for each type of component/element are developed, and used to establish the system equations of a flexible robot manipulator by concatenating the state vector from the base to the end-effector. With this strategy, the size… manipulators, and only involves calculating and transferring component/element dynamic equations that have small size. Numerical simulations and experimental testing of flexible-link manipulators are conducted to validate the proposed methodologies.

  13. Multiple attenuation to reflection seismic data using Radon filter and Wave Equation Multiple Rejection (WEMR) method

    Erlangga, Mokhammad Puput [Geophysical Engineering, Institut Teknologi Bandung, Ganesha Street no.10 Basic Science B Buliding fl.2-3 Bandung, 40132, West Java Indonesia puput.erlangga@gmail.com (Indonesia)

    2015-04-16

    Separation between signal and noise, incoherent or coherent, is important in seismic data processing. Even after processing, coherent noise can remain mixed with the primary signal; multiple reflections are one kind of coherent noise. In this research, we processed seismic data to attenuate multiple reflections in both synthetic and real seismic data from Mentawai. There are several methods to attenuate multiple reflections; one of them is the Radon filter method, which discriminates between primary and multiple reflections in the τ-p domain based on the moveout difference between them. However, when the moveout difference is too small, the Radon filter is not sufficient to attenuate the multiples, and it also produces artifacts on the gathers. In addition to the Radon filter, we used the Wave Equation Multiple Rejection (WEMR) method to attenuate long-period multiple reflections. The WEMR method attenuates long-period multiples based on wave-equation inversion: from the inversion of the wave equation and the magnitude of the seismic wave amplitude observed at the free surface, we obtain the water-bottom reflectivity, which is used to eliminate the multiple reflections. Since the WEMR method does not depend on the moveout difference, it can be applied to seismic data with small moveout differences such as the Mentawai data, where the small moveout difference is caused by the restricted far offset of only 705 meters. We compared the multiple-free stacked data after processing with the Radon filter and with WEMR. The conclusion is that the WEMR method attenuates long-period multiple reflections better than the Radon filter method on the real (Mentawai) seismic data.

  14. Design of Xen Hybrid Multiple Policy Model

    Sun, Lei; Lin, Renhao; Zhu, Xianwei

    2017-10-01

    Virtualization technology has attracted more and more attention. As a popular open-source virtualization tool, Xen is used more and more frequently, and XSM, the Xen security model, has also drawn widespread concern. XSM does not establish a safety status classification, and it uses the virtual machine as the managed object, making Dom0 a unique administrative domain that violates the principle of least privilege. To address these issues, we design a hybrid multiple policy model named SV_HMPMD that organically integrates several single security policy models, including DTE, RBAC and BLP. It can fulfill the confidentiality and integrity requirements of a security model and apply different granularity to different domains. To improve BLP's practicability, the model introduces multi-level security labels; to divide privileges in detail, it combines DTE with RBAC; and to avoid excessive privilege, it limits the privileges of Dom0.

  15. On multiple level-set regularization methods for inverse problems

    DeCezaro, A; Leitão, A; Tai, X-C

    2009-01-01

    We analyze a multiple level-set method for solving inverse problems with piecewise constant solutions. This method corresponds to an iterated Tikhonov method for a particular Tikhonov functional Gα based on TV–H¹ penalization. We define generalized minimizers for our Tikhonov functional and establish an existence result. Moreover, we prove convergence and stability results for the proposed Tikhonov method. A multiple level-set algorithm is derived from the first-order optimality conditions for the Tikhonov functional Gα, similarly to the iterated Tikhonov method. The proposed multiple level-set method is tested on an inverse potential problem. Numerical experiments show that the method is able to recover multiple objects as well as multiple contrast levels.

  16. Multiple model cardinalized probability hypothesis density filter

    Georgescu, Ramona; Willett, Peter

    2011-09-01

    The Probability Hypothesis Density (PHD) filter propagates the first-moment approximation to the multi-target Bayesian posterior distribution while the Cardinalized PHD (CPHD) filter propagates both the posterior likelihood of (an unlabeled) target state and the posterior probability mass function of the number of targets. Extensions of the PHD filter to the multiple model (MM) framework have been published and were implemented either with a Sequential Monte Carlo or a Gaussian Mixture approach. In this work, we introduce the multiple model version of the more elaborate CPHD filter. We present the derivation of the prediction and update steps of the MMCPHD particularized for the case of two target motion models and proceed to show that in the case of a single model, the new MMCPHD equations reduce to the original CPHD equations.

  17. Multiple network interface core apparatus and method

    Underwood, Keith D [Albuquerque, NM; Hemmert, Karl Scott [Albuquerque, NM

    2011-04-26

    A network interface controller and network interface control method comprising providing a single integrated circuit as a network interface controller and employing a plurality of network interface cores on the single integrated circuit.

  18. Multiple tag labeling method for DNA sequencing

    Mathies, R.A.; Huang, X.C.; Quesada, M.A.

    1995-07-25

    A DNA sequencing method is described which uses single lane or channel electrophoresis. Sequencing fragments are separated in the lane and detected using a laser-excited, confocal fluorescence scanner. Each set of DNA sequencing fragments is separated in the same lane and then distinguished using a binary coding scheme employing only two different fluorescent labels. Also described is a method of using radioisotope labels. 5 figs.

  19. Hesitant fuzzy methods for multiple criteria decision analysis

    Zhang, Xiaolu

    2017-01-01

    The book offers a comprehensive introduction to methods for solving multiple criteria decision making and group decision making problems with hesitant fuzzy information. It reports on the authors’ latest research, as well as on others’ research, providing readers with a complete set of decision making tools, such as hesitant fuzzy TOPSIS, hesitant fuzzy TODIM, hesitant fuzzy LINMAP, hesitant fuzzy QUALIFEX, and the deviation modeling approach with heterogeneous fuzzy information. The main focus is on decision making problems in which the criteria values and/or the weights of criteria are not expressed in crisp numbers but are more suitable to be denoted as hesitant fuzzy elements. The largest part of the book is devoted to new methods recently developed by the authors to solve decision making problems in situations where the available information is vague or hesitant. These methods are presented in detail, together with their application to different type of decision-making problems. All in all, the book ...

  20. Multiple time scale methods in tokamak magnetohydrodynamics

    Jardin, S.C.

    1984-01-01

    Several methods are discussed for integrating the magnetohydrodynamic (MHD) equations in tokamak systems on other than the fastest time scale. The dynamical grid method for simulating ideal MHD instabilities utilizes a natural nonorthogonal time-dependent coordinate transformation based on the magnetic field lines. The coordinate transformation is chosen to be free of the fast time scale motion itself, and to yield a relatively simple scalar equation for the total pressure, P = p + B²/2μ₀, which can be integrated implicitly to average over the fast time scale oscillations. Two methods are described for the resistive time scale. The zero-mass method uses a reduced set of two-fluid transport equations obtained by expanding in the inverse magnetic Reynolds number, and in the small ratio of perpendicular to parallel mobilities and thermal conductivities. The momentum equation becomes a constraint equation that forces the pressure and magnetic fields and currents to remain in force balance equilibrium as they evolve. The large mass method artificially scales up the ion mass and viscosity, thereby reducing the severe time scale disparity between wavelike and diffusionlike phenomena, but not changing the resistive time scale behavior. Other methods addressing the intermediate time scales are discussed

  1. Predictive performance models and multiple task performance

    Wickens, Christopher D.; Larish, Inge; Contorer, Aaron

    1989-01-01

    Five models that predict how performance of multiple tasks will interact in complex task scenarios are discussed. The models are shown in terms of the assumptions they make about human operator divided attention. The different assumptions about attention are then empirically validated in a multitask helicopter flight simulation. It is concluded from this simulation that the most important assumption relates to the coding of demand level of different component tasks.

  2. Parametric modeling for damped sinusoids from multiple channels

    Zhou, Zhenhua; So, Hing Cheung; Christensen, Mads Græsbøll

    2013-01-01

    The problem of parametric modeling for noisy damped sinusoidal signals from multiple channels is addressed. Utilizing the shift invariance property of the signal subspace, the number of distinct sinusoidal poles in the multiple channels is first determined. With the estimated number, the distinct frequencies and damping factors are then computed with the multi-channel weighted linear prediction method. The estimated sinusoidal poles are then matched to each channel according to the extreme value theory of distribution of random fields. Simulations are performed to show the performance advantages of the proposed multi-channel sinusoidal modeling methodology compared with existing methods.
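
    For a single channel, the linear-prediction idea can be sketched as follows (Python with numpy assumed): the prediction coefficients are solved in least squares and the roots of the prediction polynomial give pole estimates, whose angle and log-magnitude yield frequency and damping. This is plain single-channel linear prediction, not the paper's multi-channel weighted variant, and all signal parameters below are invented.

        import numpy as np

        def estimate_poles(x, p):
            # linear prediction: solve x[n] = a1*x[n-1] + ... + ap*x[n-p] in
            # least squares, then take the roots of the prediction polynomial
            N = len(x)
            A = np.column_stack([x[p - k - 1:N - k - 1] for k in range(p)])
            a, *_ = np.linalg.lstsq(A, x[p:], rcond=None)
            return np.roots(np.concatenate([[1.0], -a]))

        d_true, f_true, n = 0.01, 0.12, 200
        t = np.arange(n)
        rng = np.random.default_rng(9)
        x = np.exp(-d_true * t) * np.cos(2 * np.pi * f_true * t) + rng.normal(0, 0.01, n)

        poles = estimate_poles(x, p=2)
        z = poles[np.argmax(poles.imag)]          # pole in the upper half plane
        print("freq:", np.angle(z) / (2 * np.pi), "damping:", -np.log(np.abs(z)))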

  3. A Multiple Model Prediction Algorithm for CNC Machine Wear PHM

    Huimin Chen

    2011-01-01

    Full Text Available The 2010 PHM data challenge focuses on the remaining useful life (RUL estimation for cutters of a high speed CNC milling machine using measurements from dynamometer, accelerometer, and acoustic emission sensors. We present a multiple model approach for wear depth estimation of milling machine cutters using the provided data. The feature selection, initial wear estimation and multiple model fusion components of the proposed algorithm are explained in details and compared with several alternative methods using the training data. The final submission ranked #2 among professional and student participants and the method is applicable to other data driven PHM problems.

  4. A Bayesian Hierarchical Model for Relating Multiple SNPs within Multiple Genes to Disease Risk

    Lewei Duan

    2013-01-01

    Full Text Available A variety of methods have been proposed for studying the association of multiple genes thought to be involved in a common pathway for a particular disease. Here, we present an extension of a Bayesian hierarchical modeling strategy that allows for multiple SNPs within each gene, with external prior information at either the SNP or gene level. The model involves variable selection at the SNP level through latent indicator variables and Bayesian shrinkage at the gene level towards a prior mean vector and covariance matrix that depend on external information. The entire model is fitted using Markov chain Monte Carlo methods. Simulation studies show that the approach is capable of recovering many of the truly causal SNPs and genes, depending upon their frequency and size of their effects. The method is applied to data on 504 SNPs in 38 candidate genes involved in DNA damage response in the WECARE study of second breast cancers in relation to radiotherapy exposure.

  5. Methods for monitoring multiple gene expression

    Berka, Randy [Davis, CA; Bachkirova, Elena [Davis, CA; Rey, Michael [Davis, CA

    2012-05-01

    The present invention relates to methods for monitoring differential expression of a plurality of genes in a first filamentous fungal cell relative to expression of the same genes in one or more second filamentous fungal cells using microarrays containing Trichoderma reesei ESTs or SSH clones, or a combination thereof. The present invention also relates to computer readable media and substrates containing such array features for monitoring expression of a plurality of genes in filamentous fungal cells.

  6. Methods for monitoring multiple gene expression

    Berka, Randy; Bachkirova, Elena; Rey, Michael

    2013-10-01

    The present invention relates to methods for monitoring differential expression of a plurality of genes in a first filamentous fungal cell relative to expression of the same genes in one or more second filamentous fungal cells using microarrays containing Trichoderma reesei ESTs or SSH clones, or a combination thereof. The present invention also relates to computer readable media and substrates containing such array features for monitoring expression of a plurality of genes in filamentous fungal cells.

  7. Use of Multiple Linear Regression Method for Modelling Seasonal Changes in Stable Isotopes of 18O and 2H in 30 Ponds in Gilan Province

    M.A. Mousavi Shalmani

    2014-08-01

    Full Text Available In order to assess water quality and characterize seasonal variation in 18O and 2H in relation to different chemical and physiographical parameters, and to model the effective parameters, a study was conducted during 2010 to 2011 in 30 different ponds in the north of Iran. Samples were collected in three different seasons and analysed for chemical and isotopic components. The data show that the highest amounts of δ18O and δ2H were recorded in the summer (-1.15‰ and -12.11‰) and the lowest in the winter (-7.50‰ and -47.32‰, respectively). The data also reveal a significant increase in d-excess during spring and summer in ponds 20, 21, 22, 24, 25 and 26. We can conclude that residual surface runoff (from upper lands) is an important source of water transferring soluble salts into these ponds; in this respect, a high retention time may be the main reason for the movement of light isotopes into the ponds. This made the d-excess of pond 12 even greater in summer than in winter, and could be an acceptable explanation for ponds 25 and 26 (Siyahkal county), which had the highest d-excess and the lowest δ18O and δ2H. It seems that light water pumped from groundwater wells, with a minor source of salt (originating from deep percolation of sea water) into the ponds, may be another reason for the significant decrease in the heavy isotopes of water (18O and 2H) in ponds 2, 12, 14 and 25 from spring to summer. The overall conclusions of the multiple linear regression test indicate that, firstly, of the 30 variables under investigation only a few can be used to identify changes in 18O and 2H. Secondly, among the variables studied, phytoplankton content was a common factor for the interpretation of 18O and 2H during spring and summer, as well as over the total period (a year). Thirdly, sampling of water in the spring is recommended for 18O and 2H interpretation compared with other seasons. This is because

  8. Explaining clinical behaviors using multiple theoretical models

    Eccles Martin P

    2012-10-01

    the five surveys. For the predictor variables, the mean construct scores were above the mid-point on the scale with median values across the five behaviors generally being above four out of seven and the range being from 1.53 to 6.01. Across all of the theories, the highest proportion of the variance explained was always for intention and the lowest was for behavior. The Knowledge-Attitudes-Behavior Model performed poorly across all behaviors and dependent variables; CSSRM also performed poorly. For TPB, SCT, II, and LT across the five behaviors, we predicted median R2 of 25% to 42.6% for intention, 6.2% to 16% for behavioral simulation, and 2.4% to 6.3% for behavior. Conclusions: We operationalized multiple theories measuring across five behaviors. Continuing challenges that emerge from our work are: better specification of behaviors; better operationalization of theories; how best to appropriately extend the range of theories; further assessment of the value of theories in different settings and groups; exploring the implications of these methods for the management of chronic diseases; and moving to experimental designs to allow an understanding of behavior change.

  9. Fuzzy multiple objective decision making methods and applications

    Lai, Young-Jou

    1994-01-01

    In the last 25 years, the fuzzy set theory has been applied in many disciplines such as operations research, management science, control theory, artificial intelligence/expert system, etc. In this volume, methods and applications of crisp, fuzzy and possibilistic multiple objective decision making are first systematically and thoroughly reviewed and classified. This state-of-the-art survey provides readers with a capsule look into the existing methods, and their characteristics and applicability to analysis of fuzzy and possibilistic programming problems. To realize practical fuzzy modelling, it presents solutions for real-world problems including production/manufacturing, location, logistics, environment management, banking/finance, personnel, marketing, accounting, agriculture economics and data analysis. This book is a guided tour through the literature in the rapidly growing fields of operations research and decision making and includes the most up-to-date bibliographical listing of literature on the topi...

  10. Fuzzy multiple attribute decision making methods and applications

    Chen, Shu-Jen

    1992-01-01

    This monograph is intended for an advanced undergraduate or graduate course as well as for researchers, who want a compilation of developments in this rapidly growing field of operations research. This is a sequel to our previous works: "Multiple Objective Decision Making--Methods and Applications: A state-of-the-Art Survey" (No.164 of the Lecture Notes); "Multiple Attribute Decision Making--Methods and Applications: A State-of-the-Art Survey" (No.186 of the Lecture Notes); and "Group Decision Making under Multiple Criteria--Methods and Applications" (No.281 of the Lecture Notes). In this monograph, the literature on methods of fuzzy Multiple Attribute Decision Making (MADM) has been reviewed thoroughly and critically, and classified systematically. This study provides readers with a capsule look into the existing methods, their characteristics, and applicability to the analysis of fuzzy MADM problems. The basic concepts and algorithms from the classical MADM methods have been used in the development of the f...

  11. Explaining clinical behaviors using multiple theoretical models.

    Eccles, Martin P; Grimshaw, Jeremy M; MacLennan, Graeme; Bonetti, Debbie; Glidewell, Liz; Pitts, Nigel B; Steen, Nick; Thomas, Ruth; Walker, Anne; Johnston, Marie

    2012-10-17

    , the mean construct scores were above the mid-point on the scale with median values across the five behaviors generally being above four out of seven and the range being from 1.53 to 6.01. Across all of the theories, the highest proportion of the variance explained was always for intention and the lowest was for behavior. The Knowledge-Attitudes-Behavior Model performed poorly across all behaviors and dependent variables; CSSRM also performed poorly. For TPB, SCT, II, and LT across the five behaviors, we predicted median R2 of 25% to 42.6% for intention, 6.2% to 16% for behavioral simulation, and 2.4% to 6.3% for behavior. We operationalized multiple theories measuring across five behaviors. Continuing challenges that emerge from our work are: better specification of behaviors, better operationalization of theories; how best to appropriately extend the range of theories; further assessment of the value of theories in different settings and groups; exploring the implications of these methods for the management of chronic diseases; and moving to experimental designs to allow an understanding of behavior change.

  12. Adaptive Active Noise Suppression Using Multiple Model Switching Strategy

    Quanzhen Huang

    2017-01-01

    Full Text Available Active noise suppression for applications where the system response varies with time is a difficult problem. The computational burden of existing control algorithms with online identification is heavy and can easily cause control system instability. A new active noise control algorithm is proposed in this paper by employing a multiple model switching strategy for the varying secondary path; the computation is significantly reduced. Firstly, a noise control system modeling method is proposed for duct-like applications. Then a multiple model adaptive control algorithm is proposed with a new multiple model switching strategy based on the filtered-u least mean square (FULMS) algorithm. Finally, the proposed algorithm was implemented on a Texas Instruments digital signal processor (DSP) TMS320F28335, and real-time experiments were done to test the proposed algorithm against the FULMS algorithm with online identification. Experimental verification tests show that the proposed algorithm is effective, with good noise suppression performance.
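
    A much-simplified view of model switching might look like the sketch below (Python with numpy assumed): each pre-identified secondary-path model in a bank drives a filtered-x LMS controller for one block, and the model achieving the lowest residual power is kept. The paths, the model bank, the step size and the block-wise criterion are all illustrative assumptions, cruder than the paper's FULMS-based switching logic.

        import numpy as np

        s_true = np.array([0.0, 0.9, 0.4])                 # actual secondary path (FIR)
        bank = [np.array([1.0, 0.2, 0.0]),                 # pre-identified model bank
                np.array([0.0, 0.9, 0.4]),                 # (one member matches s_true)
                np.array([-0.5, 0.5, 0.1])]
        L, mu, block = 8, 2e-3, 1500

        def run_block(s_hat, w, seed):
            # filtered-x LMS with a fixed secondary-path model;
            # returns the updated control filter and the residual power
            rng = np.random.default_rng(seed)
            x = rng.normal(0, 1, block)                    # reference noise
            d = np.convolve(x, [0.5, -0.3, 0.1])[:block]   # primary path to error mic
            xb, yb, fxb = np.zeros(L), np.zeros(len(s_true)), np.zeros(L)
            e2 = 0.0
            for i in range(block):
                xb = np.roll(xb, 1); xb[0] = x[i]
                yb = np.roll(yb, 1); yb[0] = w @ xb        # anti-noise output
                e = d[i] + s_true @ yb                     # measured residual
                fxb = np.roll(fxb, 1); fxb[0] = s_hat @ xb[:len(s_hat)]
                w = w - mu * e * fxb                       # filtered-x LMS step
                e2 += e * e
            return w, e2 / block

        # naive switching: try each candidate model for one block, then keep
        # the model that achieved the lowest residual power
        results = [run_block(s, np.zeros(L), seed=k) for k, s in enumerate(bank)]
        best = int(np.argmin([p for _, p in results]))
        print("selected model:", best, "residual powers:", [p for _, p in results])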

  13. Model Correction Factor Method

    Christensen, Claus; Randrup-Thomsen, Søren; Morsing Johannesen, Johannes

    1997-01-01

    The model correction factor method is proposed as an alternative to traditional polynomial-based response surface techniques in structural reliability, considering a computationally time-consuming limit state procedure as a 'black box'. The class of polynomial functions is replaced by a limit… An advantage of the model correction factor method is that, in its simpler form not using gradient information on the original limit state function, or only using this information once, a drastic reduction of the number of limit state evaluations is obtained, together with good approximations of the reliability. Methods…

  14. Optimization of Inventories for Multiple Companies by Fuzzy Control Method

    Kawase, Koichi; Konishi, Masami; Imai, Jun

    2008-01-01

    In this research, fuzzy control theory is applied to the inventory control of the supply chain between multiple companies. The proposed control method deals with the amount of inventories expressing the supply chain between multiple companies. Referring to past demand and tardiness, inventory amounts of raw materials are determined by fuzzy inference. Appropriate inventory control becomes possible by optimizing the fuzzy control gain using the SA method. The variation of ...

  15. Multiple regression and beyond an introduction to multiple regression and structural equation modeling

    Keith, Timothy Z

    2014-01-01

    Multiple Regression and Beyond offers a conceptually oriented introduction to multiple regression (MR) analysis and structural equation modeling (SEM), along with analyses that flow naturally from those methods. By focusing on the concepts and purposes of MR and related methods, rather than the derivation and calculation of formulae, this book introduces material to students more clearly, and in a less threatening way. In addition to illuminating content necessary for coursework, the accessibility of this approach means students are more likely to be able to conduct research using MR or SEM--and more likely to use the methods wisely. Covers both MR and SEM, while explaining their relevance to one another Also includes path analysis, confirmatory factor analysis, and latent growth modeling Figures and tables throughout provide examples and illustrate key concepts and techniques For additional resources, please visit: http://tzkeith.com/.

  16. Application of Multiple Evaluation Models in Brazil

    Rafael Victal Saliba

    2008-07-01

    Full Text Available Based on two different samples, this article tests the performance of a number of Value Drivers commonly used by finance practitioners for evaluating companies, through simple cross-section regression models which estimate the parameters associated with each Value Driver, denominated Market Multiples. We diagnose the behavior of several multiples in the period 1994-2004, with an outlook also on the particularities of the economic activities performed by the sample companies (and their impacts on performance), through a subsequent analysis with segregation of the sample companies by sectors. Extrapolating the simple multiples valuation standards of analysts at the main financial institutions in Brazil, we find that adjusting the ratio formulation to allow for an intercept does not provide satisfactory results in terms of pricing-error reduction. The results found, in spite of evidencing certain relative and absolute superiority among the multiples, may not be generically representative, given the limitations of the samples.
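
    The intercept question raised here can be reproduced on synthetic data. The sketch below, with invented earnings and prices for hypothetical firms, fits both the plain multiple P = beta * E and the intercept-adjusted formulation P = alpha + beta * E by ordinary least squares and compares their pricing errors.

```python
import numpy as np

rng = np.random.default_rng(1)

# Illustrative cross-section: earnings E and prices P for 100 hypothetical firms.
E = rng.uniform(1, 20, 100)
P = 8.0 * E + rng.normal(0, 5, 100)            # "true" multiple of 8, plus noise

# Multiple without intercept: P = beta * E (beta plays the role of the market multiple).
beta = (E @ P) / (E @ E)

# Formulation adjusted to allow an intercept: P = alpha + beta * E.
X = np.column_stack([np.ones_like(E), E])
alpha_i, beta_i = np.linalg.lstsq(X, P, rcond=None)[0]

for name, pred in [("no intercept", beta * E),
                   ("with intercept", alpha_i + beta_i * E)]:
    print(name, "RMS pricing error:", np.sqrt(np.mean((P - pred) ** 2)))
```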

  17. Multiple histogram method and static Monte Carlo sampling

    Inda, M.A.; Frenkel, D.

    2004-01-01

    We describe an approach to use multiple-histogram methods in combination with static, biased Monte Carlo simulations. To illustrate this, we computed the force-extension curve of an athermal polymer from multiple histograms constructed in a series of static Rosenbluth Monte Carlo simulations. From
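
    The "static, biased Monte Carlo" ingredient of this record, Rosenbluth chain growth, is easy to sketch. Below is a minimal Python illustration for a 2-D self-avoiding walk: each chain is grown step by step among the free lattice neighbours, and the accumulated Rosenbluth weight corrects the bias when averaging observables. This illustrates the sampling idea only, not the authors' multiple-histogram construction.

```python
import random

def grow_chain(n_steps):
    """Grow one self-avoiding walk; return its endpoint and Rosenbluth weight."""
    pos, occupied = (0, 0), {(0, 0)}
    weight = 1.0
    for _ in range(n_steps):
        nbrs = [(pos[0] + dx, pos[1] + dy)
                for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1))]
        free = [p for p in nbrs if p not in occupied]
        if not free:
            return None, 0.0                 # dead end: zero weight
        weight *= len(free)                  # Rosenbluth weight factor
        pos = random.choice(free)
        occupied.add(pos)
    return pos, weight

# Weight-averaged estimate of the mean squared end-to-end distance, 20-step walks.
num, den = 0.0, 0.0
for _ in range(20000):
    end, w = grow_chain(20)
    if w > 0.0:
        num += w * (end[0] ** 2 + end[1] ** 2)
        den += w
print("<R^2> (Rosenbluth estimate):", num / den)
```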

  18. Medicare capitation model, functional status, and multiple comorbidities: model accuracy

    Noyes, Katia; Liu, Hangsheng; Temkin-Greener, Helena

    2012-01-01

    Objective: This study examined the financial implications of the CMS Hierarchical Condition Categories (HCC) risk-adjustment model on Medicare payments for individuals with comorbid chronic conditions. Study Design: The study used 1992-2000 data from the Medicare Current Beneficiary Survey and corresponding Medicare claims. The pairs of comorbidities were formed based on prior evidence about possible synergy between these conditions and activities of daily living (ADL) deficiencies, and included heart disease and cancer, lung disease and cancer, stroke and hypertension, stroke and arthritis, congestive heart failure (CHF) and osteoporosis, diabetes and coronary artery disease, and CHF and dementia. Methods: For each beneficiary, we calculated the actual Medicare cost ratio as the ratio of the individual's annualized costs to the mean annual Medicare cost of all people in the study. The actual Medicare cost ratios, by ADLs, were compared to the HCC ratios under the CMS-HCC payment model. Using multivariate regression models, we tested whether having the identified pairs of comorbidities affects the accuracy of CMS-HCC model predictions. Results: The CMS-HCC model underpredicted Medicare capitation payments for patients with hypertension, lung disease, congestive heart failure and dementia. The difference between the actual costs and predicted payments was partially explained by beneficiary functional status and less-than-optimal adjustment for these chronic conditions. Conclusions: Information about beneficiary functional status should be incorporated in reimbursement models, since underpaying providers for caring for populations with multiple comorbidities may create severe disincentives for managed care plans to enroll such individuals and to appropriately manage their complex and costly conditions. PMID:18837646

  19. Multiple model adaptive control with mixing

    Kuipers, Matthew

    Despite the remarkable theoretical accomplishments and successful applications of adaptive control, the field is not sufficiently mature to solve challenging control problems requiring strict performance and safety guarantees. Towards addressing these issues, a novel deterministic multiple-model adaptive control approach called adaptive mixing control is proposed. In this approach, adaptation comes from a high-level system called the supervisor that mixes into feedback a number of candidate controllers, each finely tuned to a subset of the parameter space. The mixing signal, the supervisor's output, is generated by estimating the unknown parameters and, at every instant of time, calculating the contribution level of each candidate controller based on certainty equivalence. The proposed architecture provides two characteristics relevant to solving stringent, performance-driven applications. First, the full suite of linear time-invariant control tools is available. A disadvantage of conventional adaptive control is its restriction to utilizing only those control laws whose solutions can be feasibly computed in real time, such as model-reference and pole-placement type controllers. Because its candidate controllers are computed offline, the proposed approach suffers no such restriction. Second, the supervisor's output is smooth and does not necessarily depend on explicit a priori knowledge of the disturbance model. These characteristics can lead to improved performance by avoiding the unnecessary switching and chattering behaviors associated with some other multiple-model adaptive control approaches. The stability and robustness properties of the adaptive scheme are analyzed. It is shown that the mean-square regulation error is of the order of the modeling error. And when the parameter estimate converges to its true value, which is guaranteed if a persistence of excitation condition is satisfied, the adaptive closed-loop system converges exponentially fast to a closed
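
    The supervisor's mixing step can be illustrated with a toy example. In the sketch below, candidate controllers are tuned to regions of a one-dimensional parameter space and their outputs are blended with smooth weights centred on the current parameter estimate; the normalized Gaussian weighting rule is an illustrative stand-in for the certainty-equivalence rule described above, and all numbers are invented.

```python
import numpy as np

centers = np.array([-1.0, 0.0, 1.0])     # parameter-space regions the candidates cover
gains = np.array([2.0, 1.0, 0.5])        # candidate (static) feedback gains

def mixed_gain(theta_hat, width=0.4):
    """Blend candidate controllers with smooth weights around the estimate."""
    w = np.exp(-((theta_hat - centers) / width) ** 2)
    w /= w.sum()                          # mixing signal: smooth, sums to one
    return w @ gains                      # blended controller gain

for th in (-1.0, -0.3, 0.5, 1.0):
    print(f"theta_hat={th:+.1f} -> mixed gain {mixed_gain(th):.3f}")
```

    Because the weights vary continuously with the estimate, the blended controller avoids the hard switching (and associated chattering) that the abstract contrasts this approach against.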

  20. Multiple independent identification decisions: a method of calibrating eyewitness identifications.

    Pryke, Sean; Lindsay, R C L; Dysart, Jennifer E; Dupuis, Paul

    2004-02-01

    Two experiments (N = 147 and N = 90) explored the use of multiple independent lineups to identify a target seen live. In Experiment 1, simultaneous face, body, and sequential voice lineups were used. In Experiment 2, sequential face, body, voice, and clothing lineups were used. Both studies demonstrated that multiple identifications (by the same witness) from independent lineups of different features are highly diagnostic of suspect guilt (G. L. Wells & R. C. L. Lindsay, 1980). The number of suspect and foil selections from multiple independent lineups provides a powerful method of calibrating the accuracy of eyewitness identification. Implications for use of current methods are discussed. (© 2004 APA, all rights reserved)

  1. TRAC methods and models

    Mahaffy, J.H.; Liles, D.R.; Bott, T.F.

    1981-01-01

    The numerical methods and physical models used in the Transient Reactor Analysis Code (TRAC) versions PD2 and PF1 are discussed. Particular emphasis is placed on TRAC-PF1, the version specifically designed to analyze small-break loss-of-coolant accidents

  2. Double-multiple streamtube model for Darrieus wind turbines

    Paraschivoiu, I.

    1981-01-01

    An analytical model is proposed for calculating the rotor performance and aerodynamic blade forces of Darrieus wind turbines with curved blades. The method of analysis uses a multiple-streamtube model divided into two parts: one modeling the upstream half-cycle of the rotor and the other the downstream half-cycle. The upwind and downwind components of the induced velocities at each level of the rotor were obtained using the principle of two actuator disks in tandem. Variation of the induced velocities in the two parts of the rotor produces larger forces in the upstream zone and smaller forces in the downstream zone. Comparisons of the overall rotor performance with previous methods and field test data show the significant improvement obtained with the present model. The calculations were made using the computer code CARDAA developed at IREQ. The double-multiple streamtube model presented has two major advantages: it requires much less computer time than the three-dimensional vortex model and is more accurate than the multiple-streamtube model in predicting the aerodynamic blade loads.
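
    The tandem actuator-disk construction can be summarized by the standard double-multiple streamtube velocity relations; this is a sketch of the textbook form, with u and u' the upstream and downstream interference factors, not the paper's full blade-element iteration.

```latex
\begin{align*}
  V_{\mathrm{up}}   &= u\,V_\infty                    && \text{velocity at the upstream actuator disk}\\
  V_e               &= (2u - 1)\,V_\infty             && \text{equilibrium velocity between the disks}\\
  V_{\mathrm{down}} &= u'\,V_e = u'(2u - 1)\,V_\infty && \text{velocity at the downstream actuator disk}
\end{align*}
```

    Since u < 1, the downstream disk sees a reduced inflow, which is why the model predicts larger blade forces on the upstream half-cycle.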

  3. A Multiple Testing of the ABC Method and the Development of a Second-Generation Model. Part II, Test Results and an Analysis of Recall Ratio.

    Altmann, Berthold

    After a brief summary of the test program (described more fully in LI 000 318), the statistical results, tabulated as overall "ABC (approach by concept) relevance ratios" and "ABC recall figures," are presented and reviewed. An abstract model developed in accordance with Max Weber's "Idealtypus" ("Die Objektivitaet…

  4. Examining the interrelationships among students' personological characteristics, attitudes toward the Unified Modeling Language, self-efficacy, and multiple intelligences with respect to student achievement in a software design methods course

    Stewart-Iles, Gail Marie

    The purpose of this study was to investigate the interrelationships among students' demographics, attitudes toward the Unified Modeling Language (UML), general self-efficacy, and multiple intelligence (MI) profiles, and the use of UML to develop software. The dependent measures were course grades and course project scores. The study was grounded in problem solving theory, self-efficacy theory, and multiple intelligence theory. The sample was an intact class of 18 students who took the junior-level Software Design Methods course, CSE 3421, at Florida Institute of Technology in the Spring 2008 semester. The course incorporated instruction in UML with Java. Attitudes were measured by a researcher-modified instrument derived from the Computer Laboratory Survey by Newby and Fisher, and self-efficacy was measured by the Generalized Self-Efficacy Scale developed by Schwarzer and Jerusalem. MI profiles, which were the proportions of Gardner's eight intelligences, were determined from Shearer's Multiple Intelligence Developmental Assessment Scales. Results from a hierarchical multiple regression analysis showed that the collective set of MI profiles was significant, but none of the individual intelligences was significant on its own. The study's findings supported what one would expect to find relative to problem solving theory, but contradicted self-efficacy theory. The findings also supported Gardner's concept that multiple intelligences must be considered as an integral unit, and the importance of not focusing on an individual intelligence. The findings imply that self-efficacy is not a major consideration for a software design methods class that requires a transition to a problem solving strategy, and suggest that the instructor was instrumental in fostering positive attitudes toward UML. Recommendations for practice include (1) teachers should not be concerned with focusing on a single intelligence simply because they believe one intelligence might be more aligned to a

  5. Upscaling permeability for three-dimensional fractured porous rocks with the multiple boundary method

    Chen, Tao; Clauser, Christoph; Marquart, Gabriele; Willbrand, Karen; Hiller, Thomas

    2018-02-01

    Upscaling the permeability of grid blocks is crucial for groundwater models. A novel upscaling method for three-dimensional fractured porous rocks is presented. The objective of the study was to compare this method with the commonly used Oda upscaling method and the volume averaging method. First, the multiple boundary method and its computational framework were defined for three-dimensional stochastic fracture networks. Then, the different upscaling methods were compared for a set of rotated fractures, for tortuous fractures, and for two discrete fracture networks. The results computed by the multiple boundary method are comparable with those of the other two methods and best fit the analytical solution for a set of rotated fractures. The errors in flow rate of the equivalent fracture model decrease when the multiple boundary method is used. Furthermore, the errors of the equivalent fracture models increase from well-connected fracture networks to poorly connected ones. Finally, the diagonal components of the equivalent permeability tensors tend to follow a normal or log-normal distribution for the well-connected fracture network model with infinite fracture size. By contrast, they exhibit a power-law distribution for the poorly connected fracture network with multiple-scale fractures. The study demonstrates the accuracy and flexibility of the multiple boundary upscaling concept, which makes it attractive for incorporation into existing flow-based upscaling procedures and helps reduce the uncertainty of groundwater models.

  6. A Learning Model Based on Stimulating Students' Multiple Intelligences

    Edy Legowo

    2017-01-01

    This paper discusses the application of multiple intelligences theory to learning in schools. The discussion begins by outlining the development of the concepts of intelligence and multiple intelligences, followed by an explanation of the impact of multiple intelligences theory on education and school learning. The next section describes the implementation of multiple intelligences theory in classroom teaching practice, namely how student learning experiences facilitated...

  7. HARMONIC ANALYSIS OF SVPWM INVERTER USING MULTIPLE-PULSES METHOD

    Mehmet YUMURTACI

    2009-01-01

    Full Text Available Space Vector Modulation (SVM) is a popular and important PWM technique for three-phase voltage source inverters in the control of induction motors. In this study, harmonic analysis of Space Vector PWM (SVPWM) is carried out using the multiple-pulses method. The multiple-pulses method calculates the Fourier coefficients of the individual positive and negative pulses of the output PWM waveform and adds them together, using the principle of superposition, to calculate the Fourier coefficients of the whole PWM output signal. Harmonic magnitudes can be calculated directly by this method without linearization, look-up tables or Bessel functions. In this study, the results obtained in the application of SVPWM for various values of the parameters are compared with the results obtained with the multiple-pulses method.
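
    The superposition step is straightforward to sketch. For a rectangular pulse of amplitude A on [t1, t2] within a period T, direct integration gives the Fourier coefficients below, and the coefficients of the full waveform are the sums over pulses. The pulse pattern in this sketch is invented for illustration and is not an actual SVPWM switching pattern.

```python
import numpy as np

T = 1.0
w0 = 2 * np.pi / T

def pulse_coeffs(n, A, t1, t2):
    """Fourier coefficients of a single rectangular pulse of amplitude A on [t1, t2]."""
    a_n = (A / (n * np.pi)) * (np.sin(n * w0 * t2) - np.sin(n * w0 * t1))
    b_n = (A / (n * np.pi)) * (np.cos(n * w0 * t1) - np.cos(n * w0 * t2))
    return a_n, b_n

# Example pattern: positive and negative pulses within one period (illustrative).
pulses = [(+1.0, 0.05, 0.20), (+1.0, 0.30, 0.45),
          (-1.0, 0.55, 0.70), (-1.0, 0.80, 0.95)]

# Superposition: coefficients of the whole waveform are sums over the pulses.
for n in range(1, 8):
    a = sum(pulse_coeffs(n, *p)[0] for p in pulses)
    b = sum(pulse_coeffs(n, *p)[1] for p in pulses)
    print(f"harmonic {n}: magnitude {np.hypot(a, b):.4f}")
```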

  8. Research on neutron source multiplication method in nuclear critical safety

    Zhu Qingfu; Shi Yongqian; Hu Dingsheng

    2005-01-01

    The paper concerns research on the neutron source multiplication method in nuclear criticality safety. Based on the neutron diffusion equation with an external neutron source, the effective sub-critical multiplication factor k_s is deduced; k_s differs from the effective neutron multiplication factor k_eff in a sub-critical system with an external neutron source. The verification experiment on the sub-critical system indicates that the parameter measured with the neutron source multiplication method is k_s, and that k_s depends on the external neutron source position in the sub-critical system and on the external neutron source spectrum. The relation between k_s and k_eff, and their effect on nuclear criticality safety, is discussed. (author)
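
    A minimal statement of the source-multiplication relation underlying the method (standard subcritical-multiplication algebra, not the paper's full diffusion-theory derivation): each source neutron initiates a fission chain whose expected total population is set by the multiplication factor,

```latex
N \;=\; S\left(1 + k_s + k_s^2 + \cdots\right) \;=\; \frac{S}{1 - k_s},
\qquad
M \;=\; \frac{1}{1 - k_s},
```

    so the detector count rate scales with the multiplication M, and k_s can be inferred from count-rate measurements in the subcritical configuration.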

  9. Multiple instance learning tracking method with local sparse representation

    Xie, Chengjun

    2013-10-01

    When objects undergo large pose changes, illumination variation or partial occlusion, most existing visual tracking algorithms tend to drift away from targets and even fail to track them. To address this issue, in this study the authors propose an online algorithm combining multiple instance learning (MIL) and local sparse representation for tracking an object in a video system. The key idea in our method is to model the appearance of an object by local sparse codes that can be formed as training data for the MIL framework. First, local image patches of a target object are represented as sparse codes with an overcomplete dictionary, where the adaptive representation can be helpful in overcoming partial occlusion in object tracking. Then MIL learns the sparse codes by a classifier to discriminate the target from the background. Finally, results from the trained classifier are input into a particle filter framework to sequentially estimate the target state over time in visual tracking. In addition, to decrease the visual drift caused by accumulated errors when updating the dictionary and classifier, a two-step object tracking method combining a static MIL classifier with a dynamic MIL classifier is proposed. Experiments on some publicly available benchmark video sequences show that our proposed tracker is more robust and effective than others. © The Institution of Engineering and Technology 2013.

  10. Structural model analysis of multiple quantitative traits.

    Renhua Li

    2006-07-01

    Full Text Available We introduce a method for the analysis of multilocus, multitrait genetic data that provides an intuitive and precise characterization of genetic architecture. We show that it is possible to infer the magnitude and direction of causal relationships among multiple correlated phenotypes and illustrate the technique using body composition and bone density data from mouse intercross populations. Using these techniques we are able to distinguish genetic loci that affect adiposity from those that affect overall body size and thus reveal a shortcoming of standardized measures such as body mass index that are widely used in obesity research. The identification of causal networks sheds light on the nature of genetic heterogeneity and pleiotropy in complex genetic systems.

  11. A Learning Model Based on Stimulating Students' Multiple Intelligences

    Edy Legowo

    2017-03-01

    Full Text Available This paper discusses the application of multiple intelligences theory to learning in schools. The discussion begins by outlining the development of the concepts of intelligence and multiple intelligences, followed by an explanation of the impact of multiple intelligences theory on education and school learning. The next section describes the implementation of multiple intelligences theory in classroom teaching practice, namely how student learning experiences facilitated by the teacher can stimulate students' multiple intelligences. From the perspective of multiple intelligences theory, the evaluation of student learning outcomes should be carried out using authentic assessment and portfolios, which better allow students to express or actualize their learning outcomes in various ways according to the strengths of their particular intelligences.

  12. Symbolic interactionism as a theoretical perspective for multiple method research.

    Benzies, K M; Allen, M N

    2001-02-01

    Qualitative and quantitative research rely on different epistemological assumptions about the nature of knowledge. However, the majority of nurse researchers who use multiple method designs do not address the problem of differing theoretical perspectives. Traditionally, symbolic interactionism has been viewed as one perspective underpinning qualitative research, but it is also the basis for quantitative studies. Rooted in social psychology, symbolic interactionism has a rich intellectual heritage that spans more than a century. Underlying symbolic interactionism is the major assumption that individuals act on the basis of the meaning that things have for them. The purpose of this paper is to present symbolic interactionism as a theoretical perspective for multiple method designs with the aim of expanding the dialogue about new methodologies. Symbolic interactionism can serve as a theoretical perspective for conceptually clear and soundly implemented multiple method research that will expand the understanding of human health behaviour.

  13. A General Method for QTL Mapping in Multiple Related Populations Derived from Multiple Parents

    Yan AO

    2009-03-01

    Full Text Available It is well known that incorporating existing populations derived from multiple parents may improve QTL mapping and QTL-based breeding programs. However, no general maximum likelihood method has been available for this strategy. Based on QTL mapping in multiple related populations derived from two parents, a maximum likelihood estimation method is proposed that can incorporate several populations derived from three or more parents and can also handle different mating designs. Taking a circle design as an example, we conducted simulation studies of the effect of QTL heritability and sample size on the proposed method. The results showed that, under the same heritability, enhanced power of QTL detection and more precise and accurate estimation of parameters could be obtained when three F2 populations were jointly analyzed, compared with the joint analysis of any two F2 populations. Higher heritability, especially with larger sample sizes, would increase the ability of QTL detection and improve the estimation of parameters. Potential advantages of the method are as follows: first, existing results of QTL mapping in single populations can be compared and integrated with each other, so the power of QTL detection and the precision of QTL mapping can be improved; second, owing to multiple alleles in multiple parents, the method can exploit gene resources more adequately, which will lay an important genetic groundwork for plant improvement.

  14. Multiple Temperature Model for Near Continuum Flows

    XU, Kun; Liu, Hongwei; Jiang, Jianzheng

    2007-01-01

    In the near continuum flow regime, the flow may have different translational temperatures in different directions. It is well known that for increasingly rarefied flow fields, the predictions from continuum formulation, such as the Navier-Stokes equations, lose accuracy. These inaccuracies may be partially due to the single temperature assumption in the Navier-Stokes equations. Here, based on the gas-kinetic Bhatnagar-Gross-Krook (BGK) equation, a multitranslational temperature model is proposed and used in the flow calculations. In order to fix all three translational temperatures, two constraints are additionally proposed to model the energy exchange in different directions. Based on the multiple temperature assumption, the Navier-Stokes relation between the stress and strain is replaced by the temperature relaxation term, and the Navier-Stokes assumption is recovered only in the limiting case when the flow is close to the equilibrium with the same temperature in different directions. In order to validate the current model, both the Couette and Poiseuille flows are studied in the transition flow regime

  15. Review of Monte Carlo methods for particle multiplicity evaluation

    Armesto-Pérez, Nestor

    2005-01-01

    I present a brief review of the existing models for particle multiplicity evaluation in heavy ion collisions which are at our disposal in the form of Monte Carlo simulators. Models are classified according to the physical mechanisms with which they try to describe the different stages of a high-energy collision between heavy nuclei. A comparison of predictions, as available at the beginning of year 2000, for multiplicities in central AuAu collisions at the BNL Relativistic Heavy Ion Collider (RHIC) and PbPb collisions at the CERN Large Hadron Collider (LHC) is provided.

  16. Review of Monte Carlo methods for particle multiplicity evaluation

    Armesto, Nestor

    2005-01-01

    I present a brief review of the existing models for particle multiplicity evaluation in heavy ion collisions which are at our disposal in the form of Monte Carlo simulators. Models are classified according to the physical mechanisms with which they try to describe the different stages of a high-energy collision between heavy nuclei. A comparison of predictions, as available at the beginning of year 2000, for multiplicities in central AuAu collisions at the BNL Relativistic Heavy Ion Collider (RHIC) and PbPb collisions at the CERN Large Hadron Collider (LHC) is provided

  17. Longitudinal comparative evaluation of the equivalence of an integrated peer-support and clinical staffing model for residential mental health rehabilitation: a mixed methods protocol incorporating multiple stakeholder perspectives.

    Parker, Stephen; Dark, Frances; Newman, Ellie; Korman, Nicole; Meurk, Carla; Siskind, Dan; Harris, Meredith

    2016-06-02

    A novel staffing model integrating peer support workers and clinical staff within a unified team is being trialled at community-based residential rehabilitation units in Australia. A mixed-methods protocol for the longitudinal evaluation of the outcomes, expectations and experiences of care by consumers and staff under this staffing model at two units will be compared to one unit operating a traditional clinical staffing model. The study is unique with regard to the context, the longitudinal approach and the consideration of multiple stakeholder perspectives. The longitudinal mixed-methods design integrates a quantitative evaluation of the outcomes of care for consumers at three residential rehabilitation units with an applied qualitative research methodology. The quantitative component utilizes a prospective cohort design to explore whether equivalent outcomes are achieved through engagement at residential rehabilitation units operating integrated and clinical staffing models. Comparative data will be available from the time of admission, discharge and the 12-month period post-discharge from the units. Additionally, retrospective data for the 12-month period prior to admission will be utilized to consider changes in functioning pre and post engagement with residential rehabilitation care. The primary outcome will be change in psychosocial functioning, assessed using the total score on the Health of the Nation Outcome Scales (HoNOS). Planned secondary outcomes will include changes in symptomatology, disability, recovery orientation, carer quality of life, emergency department presentations, psychiatric inpatient bed days, and psychological distress and wellbeing. Planned analyses will include: cohort description; hierarchical linear regression modelling of the predictors of change in HoNOS following CCU care; and descriptive comparisons of the costs associated with the two staffing models. The qualitative component utilizes a pragmatic approach to grounded theory, with

  18. Feedback structure based entropy approach for multiple-model estimation

    Shen-tu Han; Xue Anke; Guo Yunfei

    2013-01-01

    The variable-structure multiple-model (VSMM) approach, one of the multiple-model (MM) methods, is a popular and effective approach for handling problems with mode uncertainties. Model sequence set adaptation (MSA) is the key to designing a better VSMM. However, MSA methods in the literature leave substantial room for improvement, both theoretically and practically. To this end, we propose a feedback-structure-based entropy approach that can find the model sequence sets with the smallest size under certain conditions. The filtered data are fed back in real time and can be used by the minimum entropy (ME) based VSMM algorithms, i.e., MEVSMM. Firstly, the full Markov chains are used to achieve optimal solutions. Secondly, the myopic method together with a particle filter (PF) and the challenge match algorithm are used to achieve sub-optimal solutions, a trade-off between practicability and optimality. The numerical results show that the proposed algorithm provides not only refined model sets but also a good robustness margin and very high accuracy.

  19. Protein Structure Classification and Loop Modeling Using Multiple Ramachandran Distributions

    Najibi, Seyed Morteza

    2017-02-08

    Recently, the study of protein structures using angular representations has attracted much attention among structural biologists. The main challenge is how to efficiently model the continuous conformational space of the protein structures based on the differences and similarities between different Ramachandran plots. Despite the presence of statistical methods for modeling angular data of proteins, there is still a substantial need for more sophisticated and faster statistical tools to model the large-scale circular datasets. To address this need, we have developed a nonparametric method for collective estimation of multiple bivariate density functions for a collection of populations of protein backbone angles. The proposed method takes into account the circular nature of the angular data using trigonometric spline which is more efficient compared to existing methods. This collective density estimation approach is widely applicable when there is a need to estimate multiple density functions from different populations with common features. Moreover, the coefficients of adaptive basis expansion for the fitted densities provide a low-dimensional representation that is useful for visualization, clustering, and classification of the densities. The proposed method provides a novel and unique perspective to two important and challenging problems in protein structure research: structure-based protein classification and angular-sampling-based protein loop structure prediction.

  20. Protein Structure Classification and Loop Modeling Using Multiple Ramachandran Distributions

    Najibi, Seyed Morteza; Maadooliat, Mehdi; Zhou, Lan; Huang, Jianhua Z.; Gao, Xin

    2017-01-01

    Recently, the study of protein structures using angular representations has attracted much attention among structural biologists. The main challenge is how to efficiently model the continuous conformational space of the protein structures based on the differences and similarities between different Ramachandran plots. Despite the presence of statistical methods for modeling angular data of proteins, there is still a substantial need for more sophisticated and faster statistical tools to model the large-scale circular datasets. To address this need, we have developed a nonparametric method for collective estimation of multiple bivariate density functions for a collection of populations of protein backbone angles. The proposed method takes into account the circular nature of the angular data using trigonometric spline which is more efficient compared to existing methods. This collective density estimation approach is widely applicable when there is a need to estimate multiple density functions from different populations with common features. Moreover, the coefficients of adaptive basis expansion for the fitted densities provide a low-dimensional representation that is useful for visualization, clustering, and classification of the densities. The proposed method provides a novel and unique perspective to two important and challenging problems in protein structure research: structure-based protein classification and angular-sampling-based protein loop structure prediction.

  1. Multiple Contexts, Multiple Methods: A Study of Academic and Cultural Identity among Children of Immigrant Parents

    Urdan, Tim; Munoz, Chantico

    2012-01-01

    Multiple methods were used to examine the academic motivation and cultural identity of a sample of college undergraduates. The children of immigrant parents (CIPs, n = 52) and the children of non-immigrant parents (non-CIPs, n = 42) completed surveys assessing core cultural identity, valuing of cultural accomplishments, academic self-concept,…

  2. Correction of measured multiplicity distributions by the simulated annealing method

    Hafidouni, M.

    1993-01-01

    Simulated annealing is a method used to solve combinatorial optimization problems. It is used here for the correction of the observed multiplicity distribution from S-Pb collisions at 200 GeV/c per nucleon. (author) 11 refs., 2 figs
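
    Simulated annealing itself is compact enough to sketch. The toy below applies a standard Metropolis-style annealing loop to a synthetic unfolding problem: find a non-negative "true" multiplicity vector n minimizing the misfit between R @ n and observed counts, for an invented response matrix R. The authors' actual objective function and cooling schedule are not reproduced.

```python
import math
import random

import numpy as np

rng = np.random.default_rng(2)

# Synthetic response matrix (diagonal-dominant smearing) and observed counts.
m = 10
R = (np.diag(np.full(m, 0.7)) + np.diag(np.full(m - 1, 0.15), 1)
     + np.diag(np.full(m - 1, 0.15), -1))
true_n = rng.integers(0, 100, m)
observed = R @ true_n

def cost(n):
    return float(np.sum((R @ n - observed) ** 2))

n = np.zeros(m)
T, c = 1000.0, cost(n)
while T > 1e-3:
    trial = n.copy()
    i = random.randrange(m)
    trial[i] = max(0, trial[i] + random.choice([-1, 1]))  # local move
    c_trial = cost(trial)
    # Metropolis acceptance: always take improvements, sometimes take worse moves.
    if c_trial < c or random.random() < math.exp((c - c_trial) / T):
        n, c = trial, c_trial
    T *= 0.9995                                           # geometric cooling
print("final misfit:", c)
```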

  3. System and method for image registration of multiple video streams

    Dillavou, Marcus W.; Shum, Phillip Corey; Guthrie, Baron L.; Shenai, Mahesh B.; Deaton, Drew Steven; May, Matthew Benton

    2018-02-06

    Provided herein are methods and systems for image registration from multiple sources. A method for image registration includes rendering a common field of interest that reflects a presence of a plurality of elements, wherein at least one of the elements is a remote element located remotely from another of the elements and updating the common field of interest such that the presence of the at least one of the elements is registered relative to another of the elements.

  4. A tactical supply chain planning model with multiple flexibility options

    Esmaeilikia, Masoud; Fahimnia, Behnam; Sarkis, Joeseph

    2016-01-01

    Supply chain flexibility is widely recognized as an approach to manage uncertainty. Uncertainty in the supply chain may arise from a number of sources such as demand and supply interruptions and lead time variability. A tactical supply chain planning model with multiple flexibility options...... incorporated in sourcing, manufacturing and logistics functions can be used for the analysis of flexibility adjustment in an existing supply chain. This paper develops such a tactical supply chain planning model incorporating a realistic range of flexibility options. A novel solution method is designed...

  5. Statistics of electron multiplication in multiplier phototube: iterative method

    Grau Malonda, A.; Ortiz Sanchez, J.F.

    1985-01-01

    An iterative method is applied to study the variation of dynode response in the multiplier phototube. Three different situations are considered, corresponding to the following modes of electron incidence on the first dynode: incidence of exactly one electron, incidence of exactly r electrons, and incidence of an average of r̄ electrons. The responses are given for a number of steps between 1 and 5, and for values of the multiplication factor of 2.1, 2.5, 3 and 5. We also study the variance, the skewness and the excess kurtosis for different multiplication factors. (author)
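
    The cascade statistics in this record can be mimicked with a branching-process simulation. The sketch below is an illustrative analogue, assuming Poisson-distributed secondary emission with mean m at each dynode (so the per-stage variance equals m); it checks the simulated mean and variance after n stages against the standard Galton-Watson formulas. It is not the authors' exact iterative recursion.

```python
import numpy as np

rng = np.random.default_rng(3)

def simulate(m, n_stages, trials=200_000):
    """Multiply one initial electron through n_stages Poisson dynodes."""
    counts = np.ones(trials, dtype=np.int64)
    for _ in range(n_stages):
        # Sum of `counts` independent Poisson(m) draws is Poisson(m * counts).
        counts = rng.poisson(m * counts)
    return counts

m, n = 2.5, 4
z = simulate(m, n)
# Galton-Watson: mean m**n; variance sigma^2 * m**(n-1) * (m**n - 1)/(m - 1),
# with sigma^2 = m for Poisson offspring.
var_theory = m * m ** (n - 1) * (m ** n - 1) / (m - 1)
print("mean:", z.mean(), "theory:", m ** n)
print("variance:", z.var(), "theory:", var_theory)
```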

  6. Statistics of electron multiplication in a multiplier phototube; Iterative method

    Ortiz, J. F.; Grau, A.

    1985-01-01

    In the present paper an iterative method is applied to study the variation of dynode response in the multiplier phototube. Three different situations are considered, corresponding to the following modes of electron incidence on the first dynode: incidence of exactly one electron, incidence of exactly r electrons, and incidence of an average of r̄ electrons. The responses are given for a number of steps between 1 and 5, and for values of the multiplication factor of 2.1, 2.5, 3 and 5. We also study the variance, the skewness and the excess kurtosis for different multiplication factors. (Author) 11 refs

  7. New weighting methods for phylogenetic tree reconstruction using multiple loci.

    Misawa, Kazuharu; Tajima, Fumio

    2012-08-01

    Efficient determination of evolutionary distances is important for the correct reconstruction of phylogenetic trees. The performance of the pooled distance required for reconstructing a phylogenetic tree can be improved by applying large weights to distances that are appropriate for reconstructing phylogenetic trees and small weights to inappropriate distances. We developed two weighting methods, the modified Tajima-Takezaki method and the modified least-squares method, for reconstructing phylogenetic trees from multiple loci. By computer simulations, we found that both of the new methods were more efficient in reconstructing correct topologies than the no-weight method. Hence, we reconstructed hominoid phylogenetic trees from mitochondrial DNA using our new methods, and found that the levels of bootstrap support were significantly increased by the modified Tajima-Takezaki method and by the modified least-squares method.

  8. Unplanned Complex Suicide-A Consideration of Multiple Methods.

    Ateriya, Navneet; Kanchan, Tanuj; Shekhawat, Raghvendra Singh; Setia, Puneet; Saraf, Ashish

    2018-05-01

    Detailed death investigations are mandatory to find out the exact cause and manner in non-natural deaths. In this reference, use of multiple methods in suicide poses a challenge for the investigators especially when the choice of methods to cause death is unplanned. There is an increased likelihood that doubts of homicide are raised in cases of unplanned complex suicides. A case of complex suicide is reported where the victim resorted to multiple methods to end his life, and what appeared to be an unplanned variant based on the death scene investigations. A meticulous crime scene examination, interviews of the victim's relatives and other witnesses, and a thorough autopsy are warranted to conclude on the cause and manner of death in all such cases. © 2017 American Academy of Forensic Sciences.

  9. Characterizing lentic freshwater fish assemblages using multiple sampling methods

    Fischer, Jesse R.; Quist, Michael C.

    2014-01-01

    Characterizing fish assemblages in lentic ecosystems is difficult, and multiple sampling methods are almost always necessary to gain reliable estimates of indices such as species richness. However, most research focused on lentic fish sampling methodology has targeted recreationally important species, and little to no information is available regarding the influence of multiple methods and timing (i.e., temporal variation) on characterizing entire fish assemblages. Therefore, six lakes and impoundments (48–1,557 ha surface area) were sampled seasonally with seven gear types to evaluate the combined influence of sampling methods and timing on the number of species and individuals sampled. Probabilities of detection for species indicated strong selectivities and seasonal trends that provide guidance on optimal seasons to use gears when targeting multiple species. The evaluation of species richness and number of individuals sampled using multiple gear combinations demonstrated that appreciable benefits over relatively few gears (e.g., two to four) used in optimal seasons were not present. Specifically, over 90% of the species encountered with all gear types and season combinations (N = 19) from six lakes and reservoirs were sampled with nighttime boat electrofishing in the fall and benthic trawling, modified-fyke, and mini-fyke netting during the summer. Our results indicated that the characterization of lentic fish assemblages was highly influenced by the selection of sampling gears and seasons, but did not appear to be influenced by waterbody type (i.e., natural lake, impoundment). The standardization of data collected with multiple methods and seasons to account for bias is imperative to the monitoring of lentic ecosystems and will provide researchers with increased reliability in their interpretations and decisions made using information on lentic fish assemblages.

  10. Multiple Scattering Model for Optical Coherence Tomography with Rytov Approximation

    Li, Muxingzi

    2017-01-01

    of speckles due to multiple scatterers within the coherence length, and other random noise. Motivated by the above two challenges, a multiple scattering model based on Rytov approximation and Gaussian beam optics is proposed for the OCT setup. Some previous

  11. Geometric calibration method for multiple head cone beam SPECT systems

    Rizo, Ph.; Grangeat, P.; Guillemaud, R.; Sauze, R.

    1993-01-01

    A method is presented for performing geometric calibration of single photon emission tomography (SPECT) cone-beam systems with multiple cone-beam collimators, each having its own orientation parameters. This calibration method relies on the fact that, in tomography, for each head, the relative position of the rotation axis and of the collimator does not change during the acquisition. To ensure the stability of the method, the parameters to be estimated are separated into intrinsic parameters, which describe the acquisition geometry, and extrinsic parameters, which describe the position of the detection system with respect to the rotation axis. (authors) 3 refs

  12. Galerkin projection methods for solving multiple related linear systems

    Chan, T.F.; Ng, M.; Wan, W.L.

    1996-12-31

    We consider using Galerkin projection methods for solving multiple related linear systems A^(i) x^(i) = b^(i) for 1 ≤ i ≤ s, where the A^(i) and b^(i) are different in general. We start with the special case where A^(i) = A and A is symmetric positive definite. The method generates a Krylov subspace from a set of direction vectors obtained by solving one of the systems, called the seed system, by the CG method, and then projects the residuals of the other systems orthogonally onto the generated Krylov subspace to get the approximate solutions. The whole process is repeated with another unsolved system as a seed until all the systems are solved. We observe in practice a super-convergence behaviour of the CG process of the seed system when compared with the usual CG process. We also observe that only a small number of restarts is required to solve all the systems if the right-hand sides are close to each other. These two features together make the method particularly effective. In this talk, we give theoretical proofs to justify these observations. Furthermore, we combine the advantages of this method and the block CG method and propose a block extension of this single-seed method. The above procedure can also be modified for solving multiple linear systems A^(i) x^(i) = b^(i), where the A^(i) are now different. We can also extend the previous analytical results to this more general case. Applications of this method to multiple related linear systems arising from image restoration and recursive least squares computations are considered as examples.
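
    The seed-projection idea is small enough to sketch. Assuming a single symmetric positive definite A and nearby right-hand sides, the code below runs CG on the seed system while storing the A-conjugate search directions, then obtains a Galerkin approximation for a second right-hand side from inner products alone; the restart-on-worst-residual loop the abstract describes is omitted for brevity.

```python
import numpy as np

def cg_collect(A, b, tol=1e-10, max_iter=200):
    """Conjugate gradients that also returns the A-conjugate directions."""
    x, r = np.zeros_like(b), b.copy()
    p, dirs = r.copy(), []
    for _ in range(max_iter):
        Ap = A @ p
        pAp = p @ Ap
        dirs.append((p.copy(), pAp))
        alpha = (r @ r) / pAp
        x, r_new = x + alpha * p, r - alpha * Ap
        if np.linalg.norm(r_new) < tol:
            break
        p = r_new + ((r_new @ r_new) / (r @ r)) * p
        r = r_new
    return x, dirs

rng = np.random.default_rng(4)
M = rng.standard_normal((50, 50))
A = M @ M.T + 50 * np.eye(50)                 # SPD test matrix
b1 = rng.standard_normal(50)
b2 = b1 + 0.01 * rng.standard_normal(50)      # nearby right-hand side

x1, dirs = cg_collect(A, b1)
# Galerkin projection: A-conjugacy of the directions decouples the coefficients.
x2 = sum(((p @ b2) / pAp) * p for p, pAp in dirs)
print("seed residual:     ", np.linalg.norm(A @ x1 - b1))
print("projected residual:", np.linalg.norm(A @ x2 - b2))
```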

  13. A survey of real face modeling methods

    Liu, Xiaoyue; Dai, Yugang; He, Xiangzhen; Wan, Fucheng

    2017-09-01

    The face model has always been a research challenge in computer graphics, as it involves the coordination of multiple organs in the face. This article explains two kinds of face modeling methods, one data-driven and one based on parameter control; analyzes their content and background; summarizes their advantages and disadvantages; and concludes that the muscle model, which is based on anatomical principles, has higher fidelity and is easier to drive.

  14. A multiple relevance feedback strategy with positive and negative models.

    Yunlong Ma

    Full Text Available A commonly used strategy to improve search accuracy is feedback. Most existing work on feedback relies on positive information and has been studied extensively in information retrieval. However, when a query topic is difficult and the results from the first-pass retrieval are very poor, it is impossible to extract enough useful terms from the few positive documents, so the positive feedback strategy cannot improve retrieval in this situation. By contrast, there is a relatively large number of negative documents at the top of the result list, and several recent studies have confirmed that negative feedback is an important and useful way to adapt to this scenario. In this paper, we consider a scenario in which the search results are so poor that there are at most three relevant documents among the top twenty. We then conduct a novel study of multiple strategies for relevance feedback, using both positive and negative examples from the first-pass retrieval, to improve retrieval accuracy for such difficult queries. Experimental results on TREC collections show that the proposed language-model-based multiple-model feedback method is generally more effective than both the baseline method and methods using only a positive or a negative model.
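
    As a concrete analogue of combining positive and negative feedback, the classic vector-space Rocchio update is sketched below; the paper itself works with language models, so this illustrates the combined-feedback idea rather than the proposed method. The alpha/beta/gamma values are conventional, illustrative choices.

```python
import numpy as np

def rocchio(query_vec, pos_docs, neg_docs, alpha=1.0, beta=0.75, gamma=0.25):
    """Move the query toward relevant documents and away from non-relevant ones."""
    q = alpha * query_vec
    if len(pos_docs):
        q += beta * np.mean(pos_docs, axis=0)    # pull toward relevant docs
    if len(neg_docs):
        q -= gamma * np.mean(neg_docs, axis=0)   # push away from non-relevant docs
    return np.maximum(q, 0.0)                    # keep term weights non-negative

# Toy term-frequency vectors over an 8-term vocabulary (invented numbers).
query = np.array([1, 0, 0, 1, 0, 0, 0, 0], dtype=float)
pos = np.array([[2, 1, 0, 1, 0, 0, 0, 0]], dtype=float)   # one relevant document
neg = np.array([[0, 0, 3, 0, 2, 1, 0, 0],                 # several non-relevant documents
                [0, 1, 2, 0, 3, 0, 1, 0]], dtype=float)
print(rocchio(query, pos, neg))
```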

  15. Testing for Nonuniform Differential Item Functioning with Multiple Indicator Multiple Cause Models

    Woods, Carol M.; Grimm, Kevin J.

    2011-01-01

    In extant literature, multiple indicator multiple cause (MIMIC) models have been presented for identifying items that display uniform differential item functioning (DIF) only, not nonuniform DIF. This article addresses, for apparently the first time, the use of MIMIC models for testing both uniform and nonuniform DIF with categorical indicators. A…

  16. A novel method for producing multiple ionization of noble gas

    Wang Li; Li Haiyang; Dai Dongxu; Bai Jiling; Lu Richang

    1997-01-01

    We introduce a novel method for producing multiple ionization of He, Ne, Ar, Kr and Xe. A nanosecond pulsed electron beam with a large number density, whose energy could be controlled, was produced by directing a focused 308 nm laser beam onto a stainless steel grid. Using this electron beam on a time-of-flight mass spectrometer, we obtained multiple ionization of the noble gases He, Ne, Ar and Xe, and their time-of-flight mass spectra are presented. These ions are thought to be produced by stepwise ionization of the gas atoms under electron-beam impact. This method may serve as an ideal soft ionizing point ion source in time-of-flight mass spectrometry.

  17. Multiplicity distributions in the dual parton model

    Batunin, A.V.; Tolstenkov, A.N.

    1985-01-01

    Multiplicity distributions are calculated by means of a new mechanism of production of hadrons in a string, which was proposed previously by the authors and takes into account explicitly the valence character of the ends of the string. It is shown that allowance for this greatly improves the description of the low-energy multiplicity distributions. At superhigh energies, the contribution of the ends of the strings becomes negligibly small, but in this case multi-Pomeron contributions must be taken into account

  18. A level set method for multiple sclerosis lesion segmentation.

    Zhao, Yue; Guo, Shuxu; Luo, Min; Shi, Xue; Bilello, Michel; Zhang, Shaoxiang; Li, Chunming

    2018-06-01

    In this paper, we present a level set method for multiple sclerosis (MS) lesion segmentation from FLAIR images in the presence of intensity inhomogeneities. We use a three-phase level set formulation of segmentation and bias field estimation to segment MS lesions, the normal tissue region (including GM and WM), the CSF, and the background from FLAIR images. To save computational load, we derive a two-phase formulation from the original multi-phase level set formulation to segment the MS lesions and normal tissue regions. The derived method inherits the desirable ability of the original level set method to precisely locate object boundaries, while simultaneously performing segmentation and estimation of the bias field to deal with intensity inhomogeneity. Experimental results demonstrate the advantages of our method over other state-of-the-art methods in terms of segmentation accuracy. Copyright © 2017 Elsevier Inc. All rights reserved.

  19. Measuring multiple residual-stress components using the contour method and multiple cuts

    Prime, Michael B [Los Alamos National Laboratory; Swenson, Hunter [Los Alamos National Laboratory; Pagliaro, Pierluigi [U. PALERMO; Zuccarello, Bernardo [U. PALERMO

    2009-01-01

    The conventional contour method determines one component of stress over the cross section of a part. The part is cut into two, the contour of the exposed surface is measured, and Bueckner's superposition principle is analytically applied to calculate stresses. In this paper, the contour method is extended to the measurement of multiple stress components by making multiple cuts with subsequent applications of superposition. The theory and limitations are described. The theory is experimentally tested on a 316L stainless steel disk with residual stresses induced by plastically indenting the central portion of the disk. The stress results are validated against independent measurements using neutron diffraction. The theory has implications beyond just multiple cuts. The contour method measurements and calculations for the first cut reveal how the residual stresses have changed throughout the part. Subsequent measurements of partially relaxed stresses by other techniques, such as laboratory x-rays, hole drilling, or neutron or synchrotron diffraction, can be superimposed back to the original state of the body.

  20. Evolving Four Part Harmony Using a Multiple Worlds Model

    Scirea, Marco; Brown, Joseph Alexander

    2015-01-01

    This application of the Multiple Worlds Model examines a collaborative fitness model for generating four part harmonies. In this model we have multiple populations and the fitness of the individuals is based on the ability of a member from each population to work with the members of other...

  1. Rapidity correlations at fixed multiplicity in cluster emission models

    Berger, M C

    1975-01-01

    Rapidity correlations in the central region among hadrons produced in proton-proton collisions of fixed final-state multiplicity n at NAL and ISR energies are investigated in a two-step framework in which clusters of hadrons are emitted essentially independently, via a multiperipheral-like model, and decay isotropically. For n ≳ (1/2)⟨n⟩, these semi-inclusive distributions are controlled by the reaction mechanism which dominates production in the central region. Thus, data offer cleaner insight into the properties of this mechanism than can be obtained from fully inclusive spectra. A method of experimental analysis is suggested to facilitate the extraction of new dynamical information. It is shown that the n-independence of the magnitude of semi-inclusive correlation functions reflects directly the structure of the internal cluster multiplicity distribution. This conclusion is independent of certain assumptions concerning the form of the single-cluster density in rapidity space. (23 r...

  2. Explorative methods in linear models

    Høskuldsson, Agnar

    2004-01-01

    The author has developed the H-method of mathematical modeling, which builds up the model by parts, where each part is optimized with respect to prediction. Besides providing better predictions than traditional methods, these methods provide graphic procedures for analyzing different features in data. These graphic methods extend the well-known methods and results of Principal Component Analysis to any linear model. Here the graphic procedures are applied to linear regression and Ridge Regression.

  3. A Hybrid Multiple Criteria Decision Making Model for Supplier Selection

    Chung-Min Wu

    2013-01-01

    Full Text Available Sustainable supplier selection is a vital part of managing a sustainable supply chain. In this study, a hybrid multiple criteria decision making (MCDM) model is applied to select the optimal supplier. The fuzzy Delphi method, which can lead to better criteria selection, is used to modify the criteria. Considering the interdependence among the selection criteria, the analytic network process (ANP) is then used to obtain their weights. To avoid the calculations and additional pairwise comparisons of ANP, a technique for order preference by similarity to ideal solution (TOPSIS) is used to rank the alternatives. The use of a combination of the fuzzy Delphi method, ANP, and TOPSIS, proposing an MCDM model for supplier selection, and applying these to a real case are the unique features of this study.
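
    The TOPSIS ranking step of the hybrid model is easy to sketch. In the code below the decision matrix and weights are invented (in the paper the weights would come from ANP), and all criteria are treated as benefit-type.

```python
import numpy as np

# Rows are candidate suppliers, columns are benefit-type criteria (invented scores).
X = np.array([[7.0, 9.0, 9.0],
              [8.0, 7.0, 8.0],
              [9.0, 6.0, 7.0]])
w = np.array([0.5, 0.3, 0.2])                 # stand-in for ANP-derived weights

R = X / np.linalg.norm(X, axis=0)             # vector normalization per criterion
V = R * w                                     # weighted normalized matrix
ideal, anti = V.max(axis=0), V.min(axis=0)    # ideal / anti-ideal solutions
d_pos = np.linalg.norm(V - ideal, axis=1)     # distance to the ideal
d_neg = np.linalg.norm(V - anti, axis=1)      # distance to the anti-ideal
closeness = d_neg / (d_pos + d_neg)           # relative closeness to the ideal
print("supplier ranking (best first):", np.argsort(-closeness))
```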

  4. The multiple imputation method: a case study involving secondary data analysis.

    Walani, Salimah R; Cleland, Charles M

    2015-05-01

    To illustrate with the example of a secondary data analysis study the use of the multiple imputation method to replace missing data. Most large public datasets have missing data, which need to be handled by researchers conducting secondary data analysis studies. Multiple imputation is a technique widely used to replace missing values while preserving the sample size and sampling variability of the data. The 2004 National Sample Survey of Registered Nurses. The authors created a model to impute missing values using the chained equation method. They used imputation diagnostics procedures and conducted regression analysis of imputed data to determine the differences between the log hourly wages of internationally educated and US-educated registered nurses. The authors used multiple imputation procedures to replace missing values in a large dataset with 29,059 observations. Five multiple imputed datasets were created. Imputation diagnostics using time series and density plots showed that imputation was successful. The authors also present an example of the use of multiple imputed datasets to conduct regression analysis to answer a substantive research question. Multiple imputation is a powerful technique for imputing missing values in large datasets while preserving the sample size and variance of the data. Even though the chained equation method involves complex statistical computations, recent innovations in software and computation have made it possible for researchers to conduct this technique on large datasets. The authors recommend nurse researchers use multiple imputation methods for handling missing data to improve the statistical power and external validity of their studies.
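
    A chained-equation imputation of the kind described can be sketched with scikit-learn's IterativeImputer (assuming that library; the survey variables and the authors' actual imputation model are not reproduced). Five imputed datasets are drawn, a simple regression is fitted to each, and the point estimates are pooled.

```python
import numpy as np
from sklearn.experimental import enable_iterative_imputer  # noqa: F401
from sklearn.impute import IterativeImputer

rng = np.random.default_rng(5)

# Synthetic data with a known regression structure, then 15% missing at random.
n = 500
x1 = rng.normal(0, 1, n)
x2 = 0.5 * x1 + rng.normal(0, 1, n)
y = 1.0 + 2.0 * x1 - 1.0 * x2 + rng.normal(0, 1, n)
data = np.column_stack([y, x1, x2])
data[rng.random(data.shape) < 0.15] = np.nan

estimates = []
for m in range(5):                               # five imputed datasets
    imp = IterativeImputer(sample_posterior=True, random_state=m)
    comp = imp.fit_transform(data)               # chained-equation completion
    Y, X = comp[:, 0], np.column_stack([np.ones(n), comp[:, 1:]])
    beta = np.linalg.lstsq(X, Y, rcond=None)[0]  # analysis model per dataset
    estimates.append(beta)
# Pooled point estimate (the point-estimate part of Rubin's rules).
print("pooled coefficients:", np.mean(estimates, axis=0))
```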

  5. A global calibration method for multiple vision sensors based on multiple targets

    Liu, Zhen; Zhang, Guangjun; Wei, Zhenzhong; Sun, Junhua

    2011-01-01

    The global calibration of multiple vision sensors (MVS) has been widely studied in the last two decades. In this paper, we present a global calibration method for MVS with non-overlapping fields of view (FOVs) using multiple targets (MT). An MT is constructed by fixing several targets, called sub-targets, together; the mutual coordinate transformations between sub-targets need not be known. The main procedure of the proposed method is as follows: one vision sensor is selected from the MVS to establish the global coordinate frame (GCF), and the MT is placed in front of the vision sensors several (at least four) times. Using the constraint that the relative positions of all sub-targets are invariant, the transformation matrix from the coordinate frame of each vision sensor to the GCF can be solved. Both synthetic and real experiments were carried out and good results were obtained. The proposed method has been applied to several real measurement systems and shown to be both flexible and accurate. It can serve as an attractive alternative to existing global calibration methods.

  6. Integration of multiple, excess, backup, and expected covering models

    M S Daskin; K Hogan; C ReVelle

    1988-01-01

    The concepts of multiple, excess, backup, and expected coverage are defined. Model formulations using these constructs are reviewed and contrasted to illustrate the relationships between them. Several new formulations are presented as is a new derivation of the expected covering model which indicates more clearly the relationship of the model to other multi-state covering models. An expected covering model with multiple time standards is also presented.

  7. Field evaluation of personal sampling methods for multiple bioaerosols.

    Wang, Chi-Hsun; Chen, Bean T; Han, Bor-Cheng; Liu, Andrew Chi-Yeu; Hung, Po-Chen; Chen, Chih-Yong; Chao, Hsing Jasmine

    2015-01-01

    Ambient bioaerosols are ubiquitous in the daily environment and can affect health in various ways. However, few studies have been conducted to comprehensively evaluate personal bioaerosol exposure in occupational and indoor environments because of the complex composition of bioaerosols and the lack of standardized sampling/analysis methods. We conducted a study to determine the most efficient collection/analysis method for the personal exposure assessment of multiple bioaerosols. The sampling efficiencies of three filters and four samplers were compared. According to our results, polycarbonate (PC) filters had the highest relative efficiency, particularly for bacteria. Side-by-side sampling was conducted to evaluate the three filter samplers (with PC filters) and the NIOSH Personal Bioaerosol Cyclone Sampler. According to the results, the Button Aerosol Sampler and the IOM Inhalable Dust Sampler had the highest relative efficiencies for fungi and bacteria, followed by the NIOSH sampler. Personal sampling was performed in a pig farm to assess occupational bioaerosol exposure and to evaluate the sampling/analysis methods. The Button and IOM samplers yielded a similar performance for personal bioaerosol sampling at the pig farm. However, the Button sampler is more likely to be clogged at high airborne dust concentrations because of its higher flow rate (4 L/min). Therefore, the IOM sampler is a more appropriate choice for performing personal sampling in environments with high dust levels. In summary, the Button and IOM samplers with PC filters are efficient sampling/analysis methods for the personal exposure assessment of multiple bioaerosols.

  8. Field evaluation of personal sampling methods for multiple bioaerosols.

    Chi-Hsun Wang

    Ambient bioaerosols are ubiquitous in the daily environment and can affect health in various ways. However, few studies have been conducted to comprehensively evaluate personal bioaerosol exposure in occupational and indoor environments because of the complex composition of bioaerosols and the lack of standardized sampling/analysis methods. We conducted a study to determine the most efficient collection/analysis method for the personal exposure assessment of multiple bioaerosols. The sampling efficiencies of three filters and four samplers were compared. According to our results, polycarbonate (PC) filters had the highest relative efficiency, particularly for bacteria. Side-by-side sampling was conducted to evaluate the three filter samplers (with PC filters) and the NIOSH Personal Bioaerosol Cyclone Sampler. According to the results, the Button Aerosol Sampler and the IOM Inhalable Dust Sampler had the highest relative efficiencies for fungi and bacteria, followed by the NIOSH sampler. Personal sampling was performed in a pig farm to assess occupational bioaerosol exposure and to evaluate the sampling/analysis methods. The Button and IOM samplers yielded a similar performance for personal bioaerosol sampling at the pig farm. However, the Button sampler is more likely to be clogged at high airborne dust concentrations because of its higher flow rate (4 L/min). Therefore, the IOM sampler is a more appropriate choice for performing personal sampling in environments with high dust levels. In summary, the Button and IOM samplers with PC filters are efficient sampling/analysis methods for the personal exposure assessment of multiple bioaerosols.

  9. Models and methods in thermoluminescence

    Furetta, C.

    2005-01-01

    This work covers a conference on the principles of luminescence phenomena and the mathematical treatment of thermoluminescent light emission, including the Randall-Wilkins model, the Garlick-Gibson model, the Adirovitch model, the May-Partridge model, the Braunlich-Scharman model, mixed first- and second-order kinetics, and the methods for evaluating kinetic parameters, such as the initial rise method, the various heating rates method, the isothermal decay method, and methods based on the analysis of the glow curve shape. (Author)
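
    For concreteness, the Randall-Wilkins model named above yields the standard first-order glow curve. A minimal numerical sketch with assumed parameter values (a textbook expression, not figures from the conference) is:

```python
import numpy as np

# Randall-Wilkins first-order glow curve:
# I(T) = n0 * s * exp(-E/kT) * exp(-(s/beta) * Int_T0^T exp(-E/kT') dT')
k_B = 8.617e-5          # Boltzmann constant, eV/K
E, s = 1.0, 1e12        # trap depth (eV) and frequency factor (1/s); assumed
beta, n0 = 1.0, 1.0     # heating rate (K/s) and initial trapped charge; assumed

T = np.linspace(300.0, 600.0, 3001)
boltz = np.exp(-E / (k_B * T))
integral = np.cumsum(boltz) * (T[1] - T[0])   # crude quadrature of the integral
I = n0 * s * boltz * np.exp(-(s / beta) * integral)
print("peak temperature (K):", T[np.argmax(I)])
```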

  10. Models and methods in thermoluminescence

    Furetta, C. [ICN, UNAM, A.P. 70-543, Mexico D.F. (Mexico)

    2005-07-01

    This work covers a conference on the principles of luminescence phenomena and the mathematical treatment of thermoluminescent light emission, including the Randall-Wilkins model, the Garlick-Gibson model, the Adirovitch model, the May-Partridge model, the Braunlich-Scharman model, mixed first- and second-order kinetics, and the methods for evaluating kinetic parameters, such as the initial rise method, the various heating rates method, the isothermal decay method, and methods based on the analysis of the glow curve shape. (Author)

  11. Rank-based model selection for multiple ions quantum tomography

    Guţă, Mădălin; Kypraios, Theodore; Dryden, Ian

    2012-01-01

    The statistical analysis of measurement data has become a key component of many quantum engineering experiments. As standard full state tomography becomes unfeasible for large dimensional quantum systems, one needs to exploit prior information and the ‘sparsity’ properties of the experimental state in order to reduce the dimensionality of the estimation problem. In this paper we propose model selection as a general principle for finding the simplest, or most parsimonious, explanation of the data, by fitting different models and choosing the estimator with the best trade-off between likelihood fit and model complexity. We apply two well established model selection methods, the Akaike information criterion (AIC) and the Bayesian information criterion (BIC), to models consisting of states of fixed rank and to datasets such as those currently produced in multiple-ion experiments. We test the performance of AIC and BIC on randomly chosen low rank states of four ions, and study the dependence of the selected rank on the number of measurement repetitions for one-ion states. We then apply the methods to real data from a four-ion experiment aimed at creating a Smolin state of rank 4. By applying the two methods together with the Pearson χ² test we conclude that the data can be suitably described with a model whose rank is between 7 and 9. Additionally we find that the mean square error of the maximum likelihood estimator for pure states is close to that of the optimal over all possible measurements. (paper)
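
    The AIC/BIC trade-off described above is straightforward to compute once each candidate model has been fitted. A sketch with invented log-likelihoods and parameter counts (the paper's actual fits are not reproduced here):

```python
import numpy as np

def aic(log_likelihood, n_params):
    # Akaike information criterion; lower is better.
    return 2 * n_params - 2 * log_likelihood

def bic(log_likelihood, n_params, n_obs):
    # Bayesian information criterion; penalizes complexity more strongly.
    return n_params * np.log(n_obs) - 2 * log_likelihood

# Hypothetical log-likelihoods of rank-r state models fitted to tomography
# counts (all values invented for illustration).
n_obs = 5000
fits = {1: -12450.0, 2: -12310.0, 4: -12295.0, 8: -12290.0}
params = {r: r * 20 for r in fits}   # stand-in parameter counts per rank
best = min(fits, key=lambda r: bic(fits[r], params[r], n_obs))
print("rank selected by BIC:", best)
```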

  12. Correlation expansion: a powerful alternative multiple scattering calculation method

    Zhao Haifeng; Wu Ziyu; Sebilleau, Didier

    2008-01-01

    We introduce a powerful alternative expansion method to perform multiple scattering calculations. In contrast to standard MS series expansion, where the scattering contributions are grouped in terms of scattering order and may diverge in the low energy region, this expansion, called correlation expansion, partitions the scattering process into contributions from different small atom groups and converges at all energies. It converges faster than MS series expansion when the latter is convergent. Furthermore, it takes less memory than the full MS method, so it can be used in the near edge region without any divergence problem, even for large clusters. The correlation expansion framework we derive here is very general and can serve to calculate all the elements of the scattering path operator matrix. Photoelectron diffraction calculations in a cluster containing 23 atoms are presented to test the method and compare it to full MS and standard MS series expansion.

  13. Coherence method of identifying signal noise model

    Vavrin, J.

    1981-01-01

    A noise analysis method is discussed for identifying perturbance models and their parameters by stochastic analysis of the noise of variables measured on a reactor. The correlation analysis is performed in the frequency domain using coherence analysis methods. In identifying an actual specific perturbance, its model should be determined and recognized within a compound model of the perturbance system using the results of observation. The determination of the optimum estimate of the perturbance system model is based on estimates of the related spectral densities, which are determined from the spectral density matrix of the measured variables. Partial and multiple coherences, partial transfer functions, and the power spectral densities of the input and output variables of the noise model are determined from the related spectral densities. The possibilities of applying the coherence identification methods were tested on a simple case of a simulated stochastic system. Good agreement was found between the initial analytic frequency filters and the identified transfer functions. (B.S.)
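
    As an illustration of the central frequency-domain quantity, the magnitude-squared coherence between two signals can be estimated with SciPy. This generic sketch uses synthetic data rather than reactor measurements:

```python
import numpy as np
from scipy.signal import coherence

# Illustrative stand-in for noise data: y is a smoothed copy of x plus
# independent noise, so coherence should be high at low frequencies.
rng = np.random.default_rng(0)
fs = 100.0
x = rng.standard_normal(20000)
y = np.convolve(x, np.ones(5) / 5, mode="same") + 0.5 * rng.standard_normal(20000)

f, Cxy = coherence(x, y, fs=fs, nperseg=1024)   # magnitude-squared coherence
print("coherence at the lowest frequency bins:", Cxy[:3])
```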

  14. A test for the parameters of multiple linear regression models ...

    A test for the parameters of multiple linear regression models is developed for conducting tests simultaneously on all the parameters of multiple linear regression models. The test is robust to violations of the classical F-test's assumptions of homogeneity of variances and absence of serial correlation. Under certain null and ...

  15. [A factor analysis method for contingency table data with unlimited multiple choice questions].

    Toyoda, Hideki; Haiden, Reina; Kubo, Saori; Ikehara, Kazuya; Isobe, Yurie

    2016-02-01

    The purpose of this study is to propose a factor analysis method for analyzing contingency tables built from the data of unlimited multiple-choice questions. The method assumes that the element of each cell of the contingency table has a binomial distribution, and a factor analysis model is applied to the logit of the selection probability. A scree plot and WAIC are used to decide the number of factors, and the standardized residual, i.e., the standardized difference between the sample and model-implied proportions, is used to select items. The proposed method was applied to real product-impression research data on advertised chips and energy drinks. The results showed that this method can be used in conjunction with the conventional factor analysis model and that the extracted factors were fully interpretable, suggesting the usefulness of the proposed method in psychological studies using unlimited multiple-choice questions.

  16. Multiple Scattering Model for Optical Coherence Tomography with Rytov Approximation

    Li, Muxingzi

    2017-04-24

    Optical Coherence Tomography (OCT) is a coherence-gated, micrometer-resolution imaging technique that focuses a broadband near-infrared laser beam to penetrate into optical scattering media, e.g. biological tissues. The OCT resolution is split into two parts, with the axial resolution defined by half the coherence length, and the depth-dependent lateral resolution determined by the beam geometry, which is well described by a Gaussian beam model. The depth dependence of lateral resolution directly results in the defocusing effect outside the confocal region and restricts current OCT probes to small numerical aperture (NA) at the expense of lateral resolution near the focus. Another limitation on OCT development is the presence of a mixture of speckles due to multiple scatterers within the coherence length, and other random noise. Motivated by the above two challenges, a multiple scattering model based on the Rytov approximation and Gaussian beam optics is proposed for the OCT setup. Some previous papers have adopted the first Born approximation under the assumption of a small perturbation of the incident field in inhomogeneous media. The Rytov method of the same order, which instead assumes a smooth phase perturbation, benefits from a wider spatial range of validity. A deconvolution method for solving the inverse problem associated with the first Rytov approximation is developed, significantly reducing the defocusing effect through depth and therefore extending the feasible range of NA.

  17. Comparison of Methods to Trace Multiple Subskills: Is LR-DBN Best?

    Xu, Yanbo; Mostow, Jack

    2012-01-01

    A long-standing challenge for knowledge tracing is how to update estimates of multiple subskills that underlie a single observable step. We characterize approaches to this problem by how they model knowledge tracing, fit its parameters, predict performance, and update subskill estimates. Previous methods allocated blame or credit among subskills…

  18. Integrating Multiple Teaching Methods into a General Chemistry Classroom

    Francisco, Joseph S.; Nicoll, Gayle; Trautmann, Marcella

    1998-02-01

    In addition to the traditional lecture format, three other teaching strategies (class discussions, concept maps, and cooperative learning) were incorporated into a freshman level general chemistry course. Student perceptions of their involvement in each of the teaching methods, as well as their perceptions of the utility of each method were used to assess the effectiveness of the integration of the teaching strategies as received by the students. Results suggest that each strategy serves a unique purpose for the students and increased student involvement in the course. These results indicate that the multiple teaching strategies were well received by the students and that all teaching strategies are necessary for students to get the most out of the course.

  19. Multiple Scenario Generation of Subsurface Models

    Cordua, Knud Skou

    of information is obeyed such that no unknown assumptions and biases influence the solution to the inverse problem. This involves a definition of the probabilistically formulated inverse problem, a discussion about how prior models can be established based on statistical information from sample models ... of the probabilistic formulation of the inverse problem. This function is based on an uncertainty model that describes the uncertainties related to the observed data. In a similar way, a formulation of the prior probability distribution that takes into account uncertainties related to the sample model statistics ... similar to observation uncertainties. We refer to the effect of these approximations as modeling errors. Examples that show how the modeling error is estimated are provided. Moreover, it is shown how these effects can be taken into account in the formulation of the posterior probability distribution ...

  20. Multiple system modelling of waste management

    Eriksson, Ola; Bisaillon, Mattias

    2011-01-01

    Highlights: → Linking of models will provide a more complete, correct and credible picture of the systems. → The linking procedure is easy to perform and also leads to activation of project partners. → The simulation procedure is a bit more complicated and calls for the ability to run both models. - Abstract: Due to increased environmental awareness, the planning and performance of waste management have become more and more complex. Waste management has therefore long been subject to different types of modelling. Another field with long experience of modelling and a systems perspective is energy systems. The two modelling traditions have developed side by side, but so far there have been very few attempts to combine them. Waste management systems can be linked with energy systems through incineration plants. The waste management system can be modelled at a quite detailed level, whereas the surrounding systems are modelled in a more simplistic way. This is a problem, as previous studies have shown that assumptions about the surrounding system often tend to be important for the conclusions. In this paper it is shown how two models, one for the district heating system (MARTES) and another for the waste management system (ORWARE), can be linked together. The strengths and weaknesses of model linking are discussed in comparison with simplistic assumptions about effects in the energy and waste management systems. It is concluded that the linking of models provides a more complete, correct and credible picture of the consequences of different simultaneous changes in the systems. The linking procedure is easy to perform and also leads to activation of project partners. However, the simulation procedure is a bit more complicated and calls for the ability to run both models.

  1. Mean multiplicity in the Regge models with rising cross sections

    Chikovani, Z.E.; Kobylisky, N.A.; Martynov, E.S.

    1979-01-01

    The behaviour of the mean multiplicity and the total cross section σ_t of hadron-hadron interactions is considered in the framework of Regge models at high energies. The generating function is constructed for the dipole and froissaron models, and the mean multiplicity and multiplicity moments are calculated. It is shown that the mean multiplicity grows approximately as ln²s (s being the squared energy) in the dipole model, which is in good agreement with experiment. It is also found that in various Regge models the mean multiplicity behaves approximately as σ_t ln s.

  2. Discrete choice models with multiplicative error terms

    Fosgerau, Mogens; Bierlaire, Michel

    2009-01-01

    The conditional indirect utility of many random utility maximization (RUM) discrete choice models is specified as a sum of an index V depending on observables and an independent random term ε. In general, the universe of RUM consistent models is much larger, even fixing some specification of V due...

  3. SOME STATISTICAL ISSUES RELATED TO MULTIPLE LINEAR REGRESSION MODELING OF BEACH BACTERIA CONCENTRATIONS

    As a fast and effective technique, the multiple linear regression (MLR) method has been widely used in modeling and prediction of beach bacteria concentrations. Among previous works on this subject, however, several issues were insufficiently or inconsistently addressed. Those is...

  4. Methods of fast, multiple-point in vivo T1 determination

    Zhang, Y.; Spigarelli, M.; Fencil, L.E.; Yeung, H.N.

    1989-01-01

    Two methods of rapid, multiple-point determination of T1 in vivo have been evaluated with a phantom consisting of vials of gel at different Mn²⁺ concentrations. The first method was an inversion-recovery-on-the-fly technique, and the second used a variable-tip-angle (α) progressive saturation with two subsequences of different repetition times. In the first method, 1/T1 was evaluated by an exponential fit. In the second method, 1/T1 was obtained iteratively with a linear fit and then readjusted together with α against a model equation until self-consistency was reached.

  5. Multiple-lesion track-structure model

    Wilson, J.W.; Cucinotta, F.A.; Shinn, J.L.

    1992-03-01

    A multilesion cell kinetic model is derived, and radiation kinetic coefficients are related to the Katz track structure model. The repair-related coefficients are determined from the delayed plating experiments of Yang et al. for the C3H10T1/2 cell system. The model agrees well with the x-ray and heavy ion experiments of Yang et al. for the immediate plating, delayed plating, and fractionated exposure protocols employed by Yang. A study is made of the effects of target fragments in energetic proton exposures and of the repair-deficient target-fragment-induced lesions.

  6. System health monitoring using multiple-model adaptive estimation techniques

    Sifford, Stanley Ryan

    Monitoring system health for fault detection and diagnosis by tracking system parameters concurrently with state estimates is approached using a new multiple-model adaptive estimation (MMAE) method. This novel method is called GRid-based Adaptive Parameter Estimation (GRAPE). GRAPE extends existing MMAE methods with new techniques to sample the parameter space, based on the hypothesis that sample models can be applied and resampled without relying on a predefined set of models. GRAPE is initially implemented in a linear framework using Kalman filter models. A more generalized GRAPE formulation is presented using extended Kalman filter (EKF) models to represent nonlinear systems. GRAPE can handle both time invariant and time varying systems as it is designed to track parameter changes. Two techniques are presented to generate parameter samples for the parallel filter models. The first approach is called selected grid-based stratification (SGBS). SGBS divides the parameter space into equally spaced strata. The second approach uses Latin Hypercube Sampling (LHS) to determine the parameter locations and minimize the total number of required models. LHS is particularly useful when the parameter dimensions grow. Adding more parameters does not require the model count to increase for LHS. Each resample is independent of the prior sample set other than the location of the parameter estimate. SGBS and LHS can be used for both the initial sample and subsequent resamples. Furthermore, resamples are not required to use the same technique. Both techniques are demonstrated for both linear and nonlinear frameworks. The GRAPE framework further formalizes the parameter tracking process through a general approach for nonlinear systems. These additional methods allow GRAPE to either narrow the focus to converged values within a parameter range or expand the range in the appropriate direction to track the parameters outside the current parameter range boundary.
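
    The Latin Hypercube Sampling step described above can be sketched with SciPy's quasi-Monte Carlo module; the model count and parameter bounds below are assumptions for illustration, not GRAPE's actual configuration:

```python
import numpy as np
from scipy.stats import qmc

# Draw one space-filling parameter sample per parallel filter model.
n_models, dims = 16, 3
sampler = qmc.LatinHypercube(d=dims, seed=42)
unit = sampler.random(n=n_models)                 # samples in [0, 1)^dims
lo, hi = [0.1, 0.0, 1.0], [2.0, 5.0, 10.0]        # assumed parameter bounds
params = qmc.scale(unit, lo, hi)                  # one row per filter model
print(params.shape)                               # (16, 3)
```

    Note that, unlike a regular grid, the number of rows here is independent of the number of parameter dimensions, which is the property the abstract highlights.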

  7. The importance of neurophysiological-Bobath method in multiple sclerosis

    Adrian Miler

    2018-02-01

    Rehabilitation treatment in multiple sclerosis should be carried out continuously and can take place in hospital, ambulatory, as well as environmental conditions. In the traditional approach, it focuses on reducing the symptoms of the disease, such as paresis, spasticity, ataxia, pain, sensory disturbances, speech disorders, blurred vision, fatigue, neurogenic bladder dysfunction, and cognitive impairment. In kinesiotherapy for people with paresis, the Bobath method is among the most commonly used. Improvement can be achieved by developing the ability to maintain a correct posture in various positions (so-called postural alignment), using patterns based on corrective and equivalent responses. During the therapy, various techniques are used to inhibit pathological motor patterns and stimulate correct reactions. The creators of the method believe that each movement pattern has its own postural system, from which it can be initiated, carried out and effectively controlled. Correct movement cannot take place in the wrong position of the body. The physiotherapist discusses with the patient how to perform individual movement patterns, which protects the patient against spontaneous pathological compensation. The aim of this work is to determine the meaning and application of the Bobath method in the therapy of people with MS.

  8. Affine LIBOR Models with Multiple Curves

    Grbac, Zorana; Papapantoleon, Antonis; Schoenmakers, John

    2015-01-01

    are specified following the methodology of the affine LIBOR models and are driven by the wide and flexible class of affine processes. The affine property is preserved under forward measures, which allows us to derive Fourier pricing formulas for caps, swaptions, and basis swaptions. A model specification ... with dependent LIBOR rates is developed that allows for an efficient and accurate calibration to a system of caplet prices ...

  9. A linear multiple balance method for discrete ordinates neutron transport equations

    Park, Chang Je; Cho, Nam Zin

    2000-01-01

    A linear multiple balance method (LMB) is developed to provide more accurate and positive solutions for the discrete ordinates neutron transport equations. In this multiple balance approach, one mesh cell is divided into two subcells with a quadratic approximation of the angular flux distribution. Four multiple balance equations are used to relate the center angular flux with the average angular flux by Simpson's rule. From the analysis of the spatial truncation error, the accuracy of the linear multiple balance scheme is O(Δ⁴), whereas that of diamond differencing is O(Δ²). To accelerate the linear multiple balance method, we also describe a simplified additive angular dependent rebalance factor scheme which combines a modified boundary projection acceleration scheme and the angular dependent rebalance factor acceleration scheme. It is demonstrated, via Fourier analysis of a simple model problem as well as numerical calculations, that the additive angular dependent rebalance factor acceleration scheme is unconditionally stable with spectral radius < 0.2069c (c being the scattering ratio). The numerical results tested so far on slab-geometry discrete ordinates transport problems show that the linear multiple balance solution method is effective and sufficiently efficient.

  10. Modelling of rate effects at multiple scales

    Pedersen, R.R.; Simone, A.; Sluys, L. J.

    2008-01-01

    At the macro- and meso-scales a rate-dependent constitutive model is used in which visco-elasticity is coupled to visco-plasticity and damage. A viscous length scale effect is introduced to control the size of the fracture process zone. By comparison of the widths of the fracture process zone ... the length scale in the meso-model and the macro-model can be coupled. In this fashion, a bridging of length scales can be established. A computational analysis of a Split Hopkinson bar test at medium and high impact load is carried out at macro-scale and meso-scale, including information from the micro-scale.

  11. New experimental model of multiple myeloma.

    Telegin, G B; Kalinina, A R; Ponomarenko, N A; Ovsepyan, A A; Smirnov, S V; Tsybenko, V V; Homeriki, S G

    2001-06-01

    NSO/1 (P3x63Ay 8Ut) and SP20 myeloma cells were inoculated into BALB/c OlaHsd mice. NSO/1 cells allowed adequate stage-by-stage monitoring of tumor development. The adequacy of this model was confirmed in experiments with conventional cytostatics: prospidium and cytarabine caused necrosis of tumor cells and reduced animal mortality.

  12. Animal model of human disease. Multiple myeloma

    Radl, J.; Croese, J.W.; Zurcher, C.; Enden-Vieveen, M.H.M. van den; Leeuw, A.M. de

    1988-01-01

    Animal models of spontaneous and induced plasmacytomas in some inbred strains of mice have proven to be useful tools for different studies on tumorigenesis and immunoregulation. Their wide applicability and the fact that after their intravenous transplantation, the recipient mice developed bone

  13. Acoustic scattering by multiple elliptical cylinders using collocation multipole method

    Lee, Wei-Ming

    2012-01-01

    This paper presents the collocation multipole method for the acoustic scattering induced by multiple elliptical cylinders subjected to an incident plane sound wave. To satisfy the Helmholtz equation in the elliptical coordinate system, the scattered acoustic field is formulated in terms of angular and radial Mathieu functions which also satisfy the radiation condition at infinity. The sound-soft or sound-hard boundary condition is satisfied by uniformly collocating points on the boundaries. For the sound-hard or Neumann conditions, the normal derivative of the acoustic pressure is determined by using the appropriate directional derivative without requiring the addition theorem of Mathieu functions. By truncating the multipole expansion, a finite linear algebraic system is derived and the scattered field can then be determined according to the given incident acoustic wave. Once the total field is calculated as the sum of the incident field and the scattered field, the near field acoustic pressure along the scatterers and the far field scattering pattern can be determined. For the acoustic scattering of one elliptical cylinder, the proposed results match well with the analytical solutions. The proposed scattered fields induced by two and three elliptical–cylindrical scatterers are critically compared with those provided by the boundary element method to validate the present method. Finally, the effects of the convexity of an elliptical scatterer, the separation between scatterers and the incident wave number and angle on the acoustic scattering are investigated.

  14. Multiple Social Networks, Data Models and Measures for

    Magnani, Matteo; Rossi, Luca

    2017-01-01

    Multiple Social Network Analysis is a discipline defining models, measures, methodologies, and algorithms to study multiple social networks together as a single social system. It is particularly valuable when the networks are interconnected, e.g., the same actors are present in more than one...

  15. An implementation of multiple multipole method in the analyse of elliptical objects to enhance backscattering light

    Jalali, T.

    2015-07-01

    In this paper, we present the modelling of dielectric elliptical shapes with respect to a highly confined power distribution in the resulting nanojet, parameterized by the beam waist and beam divergence. The method uses spherical Bessel functions as basis functions, adapted to the standard multiple multipole method. It can handle elliptically shaped particles of varying size and refractive index, which have been studied under plane wave illumination with the two- and three-dimensional multiple multipole method. Because of its fast and good convergence, the results obtained from the simulation are highly accurate and reliable. The simulation time is less than a minute in both two and three dimensions. The proposed method is therefore found to be computationally efficient, fast and accurate.

  16. Hydrologic extremes - an intercomparison of multiple gridded statistical downscaling methods

    Werner, Arelia T.; Cannon, Alex J.

    2016-04-01

    Gridded statistical downscaling methods are the main means of preparing climate model data to drive distributed hydrological models. Past work on the validation of climate downscaling methods has focused on temperature and precipitation, with less attention paid to the ultimate outputs from hydrological models. Also, as attention shifts towards projections of extreme events, downscaling comparisons now commonly assess methods in terms of climate extremes, but hydrologic extremes are less well explored. Here, we test the ability of gridded downscaling models to replicate historical properties of climate and hydrologic extremes, as measured in terms of temporal sequencing (i.e. correlation tests) and distributional properties (i.e. tests for equality of probability distributions). Outputs from seven downscaling methods - bias correction constructed analogues (BCCA), double BCCA (DBCCA), BCCA with quantile mapping reordering (BCCAQ), bias correction spatial disaggregation (BCSD), BCSD using minimum/maximum temperature (BCSDX), the climate imprint delta method (CI), and bias corrected CI (BCCI) - are used to drive the Variable Infiltration Capacity (VIC) model over the snow-dominated Peace River basin, British Columbia. Outputs are tested using split-sample validation on 26 climate extremes indices (ClimDEX) and two hydrologic extremes indices (3-day peak flow and 7-day peak flow). To characterize observational uncertainty, four atmospheric reanalyses are used as climate model surrogates and two gridded observational data sets are used as downscaling target data. The skill of the downscaling methods generally depended on reanalysis and gridded observational data set. However, CI failed to reproduce the distribution and BCSD and BCSDX the timing of winter 7-day low-flow events, regardless of reanalysis or observational data set. Overall, DBCCA passed the greatest number of tests for the ClimDEX indices, while BCCAQ, which is designed to more accurately resolve event
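
    The methods compared above all belong to the bias-correction family built around quantile mapping. A generic empirical quantile-mapping sketch (not the exact BCCAQ/BCSD implementations, and with invented toy data) is:

```python
import numpy as np

def quantile_map(model_hist, obs_hist, model_fut):
    """Empirical quantile mapping: map each future model value to the observed
    value at the same quantile of the historical distributions."""
    quantiles = np.searchsorted(np.sort(model_hist), model_fut) / len(model_hist)
    quantiles = np.clip(quantiles, 0.0, 1.0)
    return np.quantile(obs_hist, quantiles)

# Toy data: the model is systematically too dry relative to observations.
rng = np.random.default_rng(1)
obs = rng.gamma(2.0, 3.0, 5000)
mod = rng.gamma(2.0, 2.0, 5000)
corrected = quantile_map(mod, obs, rng.gamma(2.0, 2.0, 100))
print("raw mean vs corrected mean:", mod.mean().round(2), corrected.mean().round(2))
```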

  17. Modeling Rabbit Responses to Single and Multiple Aerosol ...

    Survival models are developed here to predict response and time-to-response for mortality in rabbits following exposures to single or multiple aerosol doses of Bacillus anthracis spores. Hazard function models were developed for a multiple dose dataset to predict the probability of death through specifying dose-response functions and the time between exposure and the time-to-death (TTD). Among the models developed, the best-fitting survival model (baseline model) has an exponential dose-response model with a Weibull TTD distribution. Alternative models assessed employ different underlying dose-response functions and use the assumption that, in a multiple dose scenario, earlier doses affect the hazard functions of each subsequent dose. In addition, published mechanistic models are analyzed and compared with models developed in this paper. None of the alternative models that were assessed provided a statistically significant improvement in fit over the baseline model. The general approach utilizes simple empirical data analysis to develop parsimonious models with limited reliance on mechanistic assumptions. The baseline model predicts TTDs consistent with reported results from three independent high-dose rabbit datasets. More accurate survival models depend upon future development of dose-response datasets specifically designed to assess potential multiple dose effects on response and time-to-response. The process used in this paper to dev
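
    The baseline model's structure, an exponential dose-response paired with a Weibull time-to-death distribution, can be sketched as follows; every parameter value here is invented for illustration, not estimated from the rabbit data:

```python
import numpy as np

def p_death(dose, k=1e-6):
    # Exponential dose-response: probability of death given inhaled dose.
    return 1.0 - np.exp(-k * dose)

def sample_ttd(n, shape=1.8, scale=4.0, rng=None):
    # Time-to-death in days, drawn from a Weibull distribution.
    rng = rng or np.random.default_rng()
    return scale * rng.weibull(shape, n)

rng = np.random.default_rng(7)
doses = np.array([1e5, 1e6, 1e7])
print("P(death):", p_death(doses).round(3))
print("example TTDs (days):", sample_ttd(3, rng=rng).round(2))
```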

  18. Explaining clinical behaviors using multiple theoretical models

    Eccles, Martin P; Grimshaw, Jeremy M; MacLennan, Graeme; Bonetti, Debbie; Glidewell, Liz; Pitts, Nigel B; Steen, Nick; Thomas, Ruth; Walker, Anne; Johnston, Marie

    2012-01-01

    Background: In the field of implementation research, there is an increased interest in the use of theory when designing implementation research studies involving behavior change. In 2003, we initiated a series of five studies to establish a scientific rationale for interventions to translate research findings into clinical practice by exploring the performance of a number of different, commonly used, overlapping behavioral theories and models. We reflect on the strengths and weaknesses of...

  19. Airport choice model in multiple airport regions

    Claudia Muñoz

    2017-02-01

    Purpose: This study aims to analyze travel choices made by air transportation users in multi-airport regions because such choices are a crucial component when planning passenger redistribution policies. The purpose of this study is to find a utility function which makes it possible to know the variables that influence users' choice of airports on routes to the main cities of Colombia. Design/methodology/approach: This research generates a Multinomial Logit Model (MNL), which is based on the theory of utility maximization and on data obtained from revealed and stated preference surveys applied to users who reside in the metropolitan area of the Aburrá Valley (Colombia). This zone is the only one in Colombia with two neighboring airports for domestic flights. The airports included in the modeling process were Enrique Olaya Herrera (EOH) Airport and José María Córdova (JMC) Airport. Several model structures were tested, and the MNL proved the most significant, revealing that the common variables affecting passenger airport choice include the airfare, the cost of travel to the airport, and the time to get to the airport. Findings and Originality/value: The calibrated airport choice model is a valid and powerful tool for calculating the probability of each analyzed airport being chosen for domestic flights in Colombia, bearing in mind the specific characteristics of each of the attributes contained in the utility function. In addition, these probabilities will be used to calculate future market shares of the two airports considered in this study, generating a support tool for airport and airline marketing policies.
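
    Once calibrated, an MNL converts systematic utilities into choice probabilities via the logit formula. A minimal sketch with invented coefficients and attribute values (the paper's estimated utility function is not reproduced here):

```python
import numpy as np

beta = np.array([-0.02, -0.05, -0.8])   # airfare, access time, access cost (assumed)
X = np.array([                          # rows: the two alternatives, EOH and JMC
    [120.0, 20.0, 3.0],
    [100.0, 45.0, 6.0],
])
V = X @ beta                            # systematic utilities
P = np.exp(V) / np.exp(V).sum()         # logit choice probabilities
print(dict(zip(["EOH", "JMC"], P.round(3))))
```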

  20. Multiple simultaneous event model for radiation carcinogenesis

    Baum, J.W.

    1976-01-01

    A mathematical model is proposed which postulates that cancer induction is a multi-event process, that these events occur naturally, usually one at a time in any cell, and that radiation frequently causes two of these events to occur simultaneously. Microdosimetric considerations dictate that for high LET radiations the simultaneous events are associated with a single particle or track. The model predicts: (a) linear dose-effect relations for early times after irradiation with small doses, (b) approximate power functions of dose (i.e., D^x) with exponent less than one for populations of mixed age examined at short times after irradiation with small doses, (c) saturation of effect at either long times after irradiation with small doses or at all times after irradiation with large doses, and (d) a net increase in incidence which is dependent on age of observation but independent of age at irradiation. Data of Vogel, for neutron induced mammary tumors in rats, are used to illustrate the validity of the formulation. This model provides a quantitative framework to explain several unexpected results obtained by Vogel. It also provides a logical framework to explain the dose-effect relations observed in the Japanese survivors of the atomic bombs. (author)

  1. Logistic Regression with Multiple Random Effects: A Simulation Study of Estimation Methods and Statistical Packages.

    Kim, Yoonsang; Choi, Young-Ku; Emery, Sherry

    2013-08-01

    Several statistical packages are capable of estimating generalized linear mixed models and these packages provide one or more of three estimation methods: penalized quasi-likelihood, Laplace, and Gauss-Hermite. Many studies have investigated these methods' performance for the mixed-effects logistic regression model. However, the authors focused on models with one or two random effects and assumed a simple covariance structure between them, which may not be realistic. When there are multiple correlated random effects in a model, the computation becomes intensive, and often an algorithm fails to converge. Moreover, in our analysis of smoking status and exposure to anti-tobacco advertisements, we have observed that when a model included multiple random effects, parameter estimates varied considerably from one statistical package to another even when using the same estimation method. This article presents a comprehensive review of the advantages and disadvantages of each estimation method. In addition, we compare the performances of the three methods across statistical packages via simulation, which involves two- and three-level logistic regression models with at least three correlated random effects. We apply our findings to a real dataset. Our results suggest that two packages, SAS GLIMMIX Laplace and SuperMix Gaussian quadrature, perform well in terms of accuracy, precision, convergence rates, and computing speed. We also discuss the strengths and weaknesses of the two packages in regard to sample sizes.
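
    The model class under study can be made concrete with a small simulation. This sketch generates two-level logistic data with correlated random intercepts and slopes; all values are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(3)
n_groups, n_per = 100, 50
cov = np.array([[1.0, 0.3], [0.3, 0.5]])           # random-effect covariance
u = rng.multivariate_normal([0.0, 0.0], cov, size=n_groups)

x = rng.standard_normal((n_groups, n_per))
# Fixed effects (-0.5 intercept, 1.0 slope) plus correlated random effects.
logit = -0.5 + u[:, [0]] + (1.0 + u[:, [1]]) * x
y = rng.binomial(1, 1.0 / (1.0 + np.exp(-logit)))  # binary outcomes
print("overall event rate:", y.mean().round(3))
```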

  2. Featuring Multiple Local Optima to Assist the User in the Interpretation of Induced Bayesian Network Models

    Dalgaard, Jens; Pena, Jose; Kocka, Tomas

    2004-01-01

    We propose a method to assist the user in the interpretation of the best Bayesian network model induced from data. The method consists in extracting relevant features from the model (e.g. edges, directed paths and Markov blankets) and, then, assessing the confidence in them by studying multiple...

  3. Stepwise Analysis of Differential Item Functioning Based on Multiple-Group Partial Credit Model.

    Muraki, Eiji

    1999-01-01

    Extended an Item Response Theory (IRT) method for detection of differential item functioning to the partial credit model and applied the method to simulated data using a stepwise procedure. Then applied the stepwise DIF analysis based on the multiple-group partial credit model to writing trend data from the National Assessment of Educational…

  4. Simple and effective method of determining multiplicity distribution law of neutrons emitted by fissionable material with significant self -multiplication effect

    Yanjushkin, V.A.

    1991-01-01

    In developing new methods for the non-destructive determination of the full plutonium mass in nuclear materials and products of the uranium-plutonium fuel cycle from their intrinsic neutron radiation, it may be useful to know not only separate moments but the full multiplicity distribution law of neutrons leaving the material surface. The parameters are: first, the unconditional multiplicity distribution laws of neutrons produced in spontaneous and induced fission of the given fissionable material's nuclei, and the unconditional multiplicity distribution law of neutrons from (α,n) reactions on light nuclei of the elements composing the material's chemical structure; second, the probability of induced fission of the material's nuclei by an incident neutron of any origin produced in previous fissions or (α,n) reactions. An attempt to develop such a theory has been undertaken. Here the author proposes his approach to this problem. The main advantage of this approach, in our view, is its mathematical simplicity and ease of computer implementation. In principle, the given model guarantees good accuracy at any real value of the induced fission probability, without limitations on the physico-chemical composition of the nuclear material.

  5. A Technique of Fuzzy C-Mean in Multiple Linear Regression Model toward Paddy Yield

    Syazwan Wahab, Nur; Saifullah Rusiman, Mohd; Mohamad, Mahathir; Amira Azmi, Nur; Che Him, Norziha; Ghazali Kamardan, M.; Ali, Maselan

    2018-04-01

    In this paper, we propose a hybrid model which is a combination of a multiple linear regression model and the fuzzy c-means method. This research involves the relationship between 20 variates of the topsoil that are analyzed prior to planting of paddy yields at standard fertilizer rates. The data used were from the multi-location trials for rice carried out by MARDI at major paddy granaries in Peninsular Malaysia during the period from 2009 to 2012. Missing observations were estimated using mean estimation techniques. The data were analyzed using a multiple linear regression model and a combination of the multiple linear regression model and the fuzzy c-means method. Analysis of normality and multicollinearity indicates that the data are normally scattered without multicollinearity among the independent variables. Fuzzy c-means analysis clusters the paddy yield into two clusters before the multiple linear regression model is applied. The comparison between the two methods indicates that the hybrid of the multiple linear regression model and the fuzzy c-means method outperforms the multiple linear regression model alone, with a lower mean square error.
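
    The hybrid can be sketched as plain fuzzy c-means clustering followed by a separate least-squares regression per cluster. This is a generic illustration on toy data, not the paper's code:

```python
import numpy as np

def fuzzy_cmeans(X, c=2, m=2.0, iters=100, rng=None):
    """Plain fuzzy c-means: alternate center and membership updates."""
    rng = rng or np.random.default_rng(0)
    U = rng.dirichlet(np.ones(c), size=len(X))       # membership matrix (n, c)
    for _ in range(iters):
        Um = U ** m
        centers = (Um.T @ X) / Um.sum(axis=0)[:, None]
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2) + 1e-12
        w = 1.0 / d ** (2 / (m - 1))
        U = w / w.sum(axis=1, keepdims=True)
    return U, centers

# Cluster first, then fit ordinary least squares within each cluster.
rng = np.random.default_rng(0)
X = rng.standard_normal((200, 3))
y = X @ [1.0, -2.0, 0.5] + rng.standard_normal(200)
U, _ = fuzzy_cmeans(X, c=2, rng=rng)
for k in range(2):
    idx = U.argmax(axis=1) == k
    coef, *_ = np.linalg.lstsq(np.c_[np.ones(idx.sum()), X[idx]], y[idx], rcond=None)
    print(f"cluster {k} coefficients:", coef.round(2))
```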

  6. Analysis and application of opinion model with multiple topic interactions.

    Xiong, Fei; Liu, Yun; Wang, Liang; Wang, Ximeng

    2017-08-01

    To reveal heterogeneous behaviors of opinion evolution in different scenarios, we propose an opinion model with topic interactions. Individual opinions and topic features are represented by a multidimensional vector. We measure an agent's action towards a specific topic by the product of opinion and topic feature. When pairs of agents interact for a topic, their actions are introduced to opinion updates with bounded confidence. Simulation results show that a transition from a disordered state to a consensus state occurs at a critical point of the tolerance threshold, which depends on the opinion dimension. The critical point increases as the dimension of opinions increases. Multiple topics promote opinion interactions and lead to the formation of macroscopic opinion clusters. In addition, more topics accelerate the evolutionary process and weaken the effect of network topology. We use two sets of large-scale real data to evaluate the model, and the results prove its effectiveness in characterizing a real evolutionary process. Our model achieves high performance in individual action prediction and even outperforms state-of-the-art methods. Meanwhile, our model has much smaller computational complexity. This paper provides a demonstration for possible practical applications of theoretical opinion dynamics.
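
    A minimal sketch of a bounded-confidence update driven by topic-specific actions, in the spirit of the model described above (the exact update rule and parameter values are assumed, not taken from the paper):

```python
import numpy as np

rng = np.random.default_rng(5)
n_agents, dim, eps, mu = 100, 3, 0.4, 0.3
opinions = rng.uniform(-1, 1, (n_agents, dim))   # multidimensional opinions
topic = rng.uniform(-1, 1, dim)                  # one topic feature vector

for _ in range(10000):
    i, j = rng.choice(n_agents, 2, replace=False)
    # Actions toward the topic: product of opinion and topic feature.
    ai, aj = opinions[i] @ topic, opinions[j] @ topic
    if abs(ai - aj) < eps:                       # bounded confidence
        oi, oj = opinions[i].copy(), opinions[j].copy()
        opinions[i] += mu * (oj - oi)
        opinions[j] += mu * (oi - oj)

print("opinion spread per dimension:", opinions.std(axis=0).round(3))
```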

  7. A Fuzzy Logic Framework for Integrating Multiple Learned Models

    Hartog, Bobi Kai Den [Univ. of Nebraska, Lincoln, NE (United States)

    1999-03-01

    The Artificial Intelligence field of Integrating Multiple Learned Models (IMLM) explores ways to combine results from sets of trained programs. Aroclor Interpretation is an ill-conditioned problem in which trained programs must operate in scenarios outside their training ranges because it is intractable to train them completely. Consequently, they fail in ways related to the scenarios. We developed a general-purpose IMLM solution, the Combiner, and applied it to Aroclor Interpretation. The Combiner's first step, Scenario Identification (SI), learns rules from very sparse, synthetic training data consisting of results from a suite of trained programs called Methods. SI produces fuzzy belief weights for each scenario by approximately matching the rules. The Combiner's second step, Aroclor Presence Detection (AP), classifies each of three Aroclors as present or absent in a sample. The third step, Aroclor Quantification (AQ), produces quantitative values for the concentration of each Aroclor in a sample. AP and AQ use automatically learned empirical biases for each of the Methods in each scenario. Through fuzzy logic, AP and AQ combine the scenario weights, the automatically learned biases for each of the Methods in each scenario, and the Methods' results to determine results for a sample.

  8. Model for CO2 leakage including multiple geological layers and multiple leaky wells.

    Nordbotten, Jan M; Kavetski, Dmitri; Celia, Michael A; Bachu, Stefan

    2009-02-01

    Geological storage of carbon dioxide (CO2) is likely to be an integral component of any realistic plan to reduce anthropogenic greenhouse gas emissions. In conjunction with large-scale deployment of carbon storage as a technology, there is an urgent need for tools which provide reliable and quick assessments of aquifer storage performance. Previously, abandoned wells from over a century of oil and gas exploration and production have been identified as critical potential leakage paths. The practical importance of abandoned wells is emphasized by the correlation of heavy CO2 emitters (typically associated with industrialized areas) to oil and gas producing regions in North America. Herein, we describe a novel framework for predicting the leakage from large numbers of abandoned wells, forming leakage paths connecting multiple subsurface permeable formations. The framework is designed to exploit analytical solutions to various components of the problem and, ultimately, leads to a grid-free approximation to CO2 and brine leakage rates, as well as fluid distributions. We apply our model in a comparison to an established numerical solver for the underlying governing equations. Thereafter, we demonstrate the capabilities of the model on typical field data taken from the vicinity of Edmonton, Alberta. This data set consists of over 500 wells and 7 permeable formations. Results show the flexibility and utility of the solution methods, and highlight the role that analytical and semianalytical solutions can play in this important problem.

  9. Entrepreneurial intention modeling using hierarchical multiple regression

    Marina Jeger

    2014-12-01

    The goal of this study is to identify the contribution of effectuation dimensions to the predictive power of the entrepreneurial intention model over and above that which can be accounted for by other predictors selected and confirmed in previous studies. As is often the case in social and behavioral studies, some variables are likely to be highly correlated with each other. Therefore, the relative amount of variance in the criterion variable explained by each of the predictors depends on several factors such as the order of variable entry and sample specifics. The results show the modest predictive power of two dimensions of effectuation prior to the introduction of the theory of planned behavior elements. The article highlights the main advantages of applying hierarchical regression in social sciences as well as in the specific context of entrepreneurial intention formation, and addresses some of the potential pitfalls that this type of analysis entails.
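
    Hierarchical entry itself reduces to comparing R² across nested regression steps. A toy sketch with invented data (the study's actual predictors and effect sizes are not reproduced):

```python
import numpy as np

def r_squared(X, y):
    """R^2 of an OLS fit with intercept (helper for the hierarchical steps)."""
    X1 = np.c_[np.ones(len(y)), X]
    beta, *_ = np.linalg.lstsq(X1, y, rcond=None)
    resid = y - X1 @ beta
    return 1.0 - resid.var() / y.var()

# Step 1 enters the effectuation dimensions; step 2 adds the theory-of-
# planned-behavior predictors; the R^2 increment is the quantity of interest.
rng = np.random.default_rng(11)
n = 300
effectuation = rng.standard_normal((n, 2))
tpb = rng.standard_normal((n, 3))
y = 0.3 * effectuation[:, 0] + tpb @ [0.5, 0.4, 0.2] + rng.standard_normal(n)

r2_1 = r_squared(effectuation, y)
r2_2 = r_squared(np.c_[effectuation, tpb], y)
print(f"step 1 R^2 = {r2_1:.3f}, step 2 R^2 = {r2_2:.3f}, delta = {r2_2 - r2_1:.3f}")
```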

  10. Multiple Time Series Ising Model for Financial Market Simulations

    Takaishi, Tetsuya

    2015-01-01

    In this paper we propose an Ising model which simulates multiple financial time series. Our model introduces an interaction which couples the spins of one system to those of the other systems. Simulations from our model show that the time series exhibit the volatility clustering that is often observed in real financial markets. We also find non-zero cross correlations between the volatilities from our model. Thus our model can simulate stock markets where the volatilities of stocks are mutually correlated.

  11. A multiple-scale power series method for solving nonlinear ordinary differential equations

    Chein-Shan Liu

    2016-02-01

    The power series solution is a cheap and effective method to solve nonlinear problems, like the Duffing-van der Pol oscillator, the Volterra population model and nonlinear boundary value problems. A novel power series method is developed by considering multiple scales $R_k$ in the power term $(t/R_k)^k$; the scales are derived explicitly to reduce the ill-conditioned behavior in the data interpolation. In the method a huge value times a tiny value is avoided, so that we can decrease the numerical instability, which is the main reason for the failure of the conventional power series method. The multiple scales derived from an integral can be used in the power series expansion, which provides very accurate numerical solutions of the problems considered in this paper.

  12. Testing effect of a drug using multiple nested models for the dose–response

    Baayen, C.; Hougaard, P.; Pipper, C. B.

    2015-01-01

    of the assumed dose–response model. Bretz et al. (2005, Biometrics 61, 738–748) suggested a combined approach, which selects one or more suitable models from a set of candidate models using a multiple comparison procedure. The method initially requires a priori estimates of any non-linear parameters ...

  13. Direction of Effects in Multiple Linear Regression Models.

    Wiedermann, Wolfgang; von Eye, Alexander

    2015-01-01

    Previous studies analyzed asymmetric properties of the Pearson correlation coefficient using higher than second order moments. These asymmetric properties can be used to determine the direction of dependence in a linear regression setting (i.e., establish which of two variables is more likely to be on the outcome side) within the framework of cross-sectional observational data. Extant approaches are restricted to the bivariate regression case. The present contribution extends the direction of dependence methodology to a multiple linear regression setting by analyzing distributional properties of residuals of competing multiple regression models. It is shown that, under certain conditions, the third central moments of estimated regression residuals can be used to decide upon direction of effects. In addition, three different approaches for statistical inference are discussed: a combined D'Agostino normality test, a skewness difference test, and a bootstrap difference test. Type I error and power of the procedures are assessed using Monte Carlo simulations, and an empirical example is provided for illustrative purposes. In the discussion, issues concerning the quality of psychological data, possible extensions of the proposed methods to the fourth central moment of regression residuals, and potential applications are addressed.
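
    The core intuition, that the skewness of OLS residuals differs between the two candidate directions when the true predictor is skewed, can be checked numerically. A toy sketch (not the paper's formal tests):

```python
import numpy as np
from scipy.stats import skew

rng = np.random.default_rng(2)
n = 2000
x = rng.exponential(1.0, n)              # skewed "cause"
y = 0.8 * x + rng.standard_normal(n)     # true direction: x -> y

def ols_residuals(a, b):
    A = np.c_[np.ones(n), a]
    coef, *_ = np.linalg.lstsq(A, b, rcond=None)
    return b - A @ coef

# Residuals of the correctly specified model stay near-symmetric; residuals
# of the mis-directed model inherit skewness from the predictor.
print("skew of residuals, y ~ x:", round(skew(ols_residuals(x, y)), 3))
print("skew of residuals, x ~ y:", round(skew(ols_residuals(y, x)), 3))
```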

  14. Investigating multiple solutions in the constrained minimal supersymmetric standard model

    Allanach, B.C. [DAMTP, CMS, University of Cambridge,Wilberforce Road, Cambridge, CB3 0HA (United Kingdom); George, Damien P. [DAMTP, CMS, University of Cambridge,Wilberforce Road, Cambridge, CB3 0HA (United Kingdom); Cavendish Laboratory, University of Cambridge,JJ Thomson Avenue, Cambridge, CB3 0HE (United Kingdom); Nachman, Benjamin [SLAC, Stanford University,2575 Sand Hill Rd, Menlo Park, CA 94025 (United States)

    2014-02-07

    Recent work has shown that the Constrained Minimal Supersymmetric Standard Model (CMSSM) can possess several distinct solutions for certain values of its parameters. The extra solutions were not previously found by public supersymmetric spectrum generators because fixed point iteration (the algorithm used by the generators) is unstable in the neighbourhood of these solutions. The existence of the additional solutions calls into question the robustness of exclusion limits derived from collider experiments and cosmological observations upon the CMSSM, because limits were only placed on one of the solutions. Here, we map the CMSSM by exploring its multi-dimensional parameter space using the shooting method, which is not subject to the stability issues which can plague fixed point iteration. We are able to find multiple solutions where in all previous literature only one was found. The multiple solutions are of two distinct classes. One class, close to the border of bad electroweak symmetry breaking, is disfavoured by LEP2 searches for neutralinos and charginos. The other class has sparticles that are heavy enough to evade the LEP2 bounds. Chargino masses may differ by up to around 10% between the different solutions, whereas other sparticle masses differ at the sub-percent level. The prediction for the dark matter relic density can vary by a hundred percent or more between the different solutions, so analyses employing the dark matter constraint are incomplete without their inclusion.

  15. Logistic Regression with Multiple Random Effects: A Simulation Study of Estimation Methods and Statistical Packages

    Kim, Yoonsang; Emery, Sherry

    2013-01-01

    Several statistical packages are capable of estimating generalized linear mixed models and these packages provide one or more of three estimation methods: penalized quasi-likelihood, Laplace, and Gauss-Hermite. Many studies have investigated these methods’ performance for the mixed-effects logistic regression model. However, the authors focused on models with one or two random effects and assumed a simple covariance structure between them, which may not be realistic. When there are multiple correlated random effects in a model, the computation becomes intensive, and often an algorithm fails to converge. Moreover, in our analysis of smoking status and exposure to anti-tobacco advertisements, we have observed that when a model included multiple random effects, parameter estimates varied considerably from one statistical package to another even when using the same estimation method. This article presents a comprehensive review of the advantages and disadvantages of each estimation method. In addition, we compare the performances of the three methods across statistical packages via simulation, which involves two- and three-level logistic regression models with at least three correlated random effects. We apply our findings to a real dataset. Our results suggest that two packages—SAS GLIMMIX Laplace and SuperMix Gaussian quadrature—perform well in terms of accuracy, precision, convergence rates, and computing speed. We also discuss the strengths and weaknesses of the two packages in regard to sample sizes. PMID:24288415

  16. Modeling a Single SEP Event from Multiple Vantage Points Using the iPATH Model

    Hu, Junxiang; Li, Gang; Fu, Shuai; Zank, Gary; Ao, Xianzhi

    2018-02-01

    Using the recently extended 2D improved Particle Acceleration and Transport in the Heliosphere (iPATH) model, we model an example gradual solar energetic particle event as observed at multiple locations. Protons and ions that are energized via the diffusive shock acceleration mechanism are followed at a 2D coronal mass ejection-driven shock where the shock geometry varies across the shock front. The subsequent transport of energetic particles, including cross-field diffusion, is modeled by a Monte Carlo code that is based on a stochastic differential equation method. Time intensity profiles and particle spectra at multiple locations and different radial distances, separated in longitudes, are presented. The results shown here are relevant to the upcoming Parker Solar Probe mission.

  17. Flexible Modeling of Survival Data with Covariates Subject to Detection Limits via Multiple Imputation.

    Bernhardt, Paul W; Wang, Huixia Judy; Zhang, Daowen

    2014-01-01

    Models for survival data generally assume that covariates are fully observed. However, in medical studies it is not uncommon for biomarkers to be censored at known detection limits. A computationally efficient multiple imputation procedure for modeling survival data with covariates subject to detection limits is proposed. This procedure is developed in the context of an accelerated failure time model with a flexible seminonparametric error distribution. The consistency and asymptotic normality of the multiple imputation estimator are established and a consistent variance estimator is provided. An iterative version of the proposed multiple imputation algorithm that approximates the EM algorithm for maximum likelihood is also suggested. Simulation studies demonstrate that the proposed multiple imputation methods work well while alternative methods lead to estimates that are either biased or more variable. The proposed methods are applied to analyze the dataset from a recently conducted GenIMS study.

  18. Graph modeling systems and methods

    Neergaard, Mike

    2015-10-13

    An apparatus and a method for vulnerability and reliability modeling are provided. The method generally includes constructing a graph model of a physical network using a computer, the graph model including a plurality of terminating vertices to represent nodes in the physical network, a plurality of edges to represent transmission paths in the physical network, and a non-terminating vertex to represent a non-nodal vulnerability along a transmission path in the physical network. The method additionally includes evaluating the vulnerability and reliability of the physical network using the constructed graph model, wherein the vulnerability and reliability evaluation includes a determination of whether each terminating and non-terminating vertex represents a critical point of failure. The method can be utilized to evaluate wide variety of networks, including power grid infrastructures, communication network topologies, and fluid distribution systems.
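
    As a generic illustration of using a graph model for vulnerability screening (a networkx sketch under our own assumptions, not the patented method's API), articulation points identify single vertices whose failure disconnects the network:

```python
import networkx as nx

# Hypothetical distribution network: one generating plant feeding a city
# through three substations, with one redundant path.
G = nx.Graph()
G.add_edges_from([
    ("plant", "sub1"), ("sub1", "sub2"), ("sub2", "city"),
    ("sub1", "sub3"), ("sub3", "city"),
])

# A vertex is a critical point of failure if removing it disconnects the graph.
critical = list(nx.articulation_points(G))
print("critical points of failure:", critical)   # here: ['sub1']
```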

  19. Correlations in multiple production on nuclei and Glauber model of multiple scattering

    Zoller, V.R.; Nikolaev, N.N.

    1982-01-01

    A critical analysis is made of the possibility of describing correlation phenomena in multiple production on nuclei within the framework of the Glauber multiple-scattering model, as generalized to particle production processes by Capella, Krzywicki, and Shabelski. The main conclusion is that the suggested generalization of the Glauber model yields dependences on N_g (N_p) (where N_g is the number of "grey" tracks and N_p the number of protons emitted from the nucleus) and, ultimately, on ν (the number of intranuclear interactions) that contradict experiment. Independently of the choice of the relation between ν and N_g (N_p) in the model, the rapidity correlator R_η is overestimated in the central region and underestimated in the region of nucleus fragmentation. In mean multiplicities these two contradictions with experiment are masked by accidental compensation, so agreement with experiment in N_s as a function of N_g cannot be taken as an argument in favour of the model. It is concluded that the eikonal model does not permit a quantitative description of correlation phenomena in multiple production on nuclei.

  20. ADOxx Modelling Method Conceptualization Environment

    Nesat Efendioglu

    2017-04-01

    Full Text Available The importance of Modelling Methods Engineering is rising along with the importance of domain-specific languages (DSLs) and individual modelling approaches. In order to capture the relevant semantic primitives for a particular domain, it is necessary to involve both (a) domain experts, who identify relevant concepts, and (b) method engineers, who compose a valid and applicable modelling approach. This process consists of the conceptual design of a formal or semi-formal modelling method as well as reliable, migratable, maintainable and user-friendly software development of the resulting modelling tool. The Modelling Method Engineering cycle is often under-estimated, as the conceptual architecture requires formal verification and the tool implementation requires practical usability; hence we propose a guideline and corresponding tools to support actors with different backgrounds along this complex engineering process. Based on practical experience in business, more than twenty research projects within the EU framework programmes and a number of bilateral research initiatives, this paper introduces the phases, a corresponding toolbox and lessons learned, with the aim of supporting the engineering of a modelling method. The proposed approach is illustrated and validated with use cases from three different EU-funded research projects in the fields of (1) Industry 4.0, (2) e-learning and (3) cloud computing. The paper discusses the approach, the evaluation results and derived outlooks.

  1. Negative binomial models for abundance estimation of multiple closed populations

    Boyce, Mark S.; MacKenzie, Darry I.; Manly, Bryan F.J.; Haroldson, Mark A.; Moody, David W.

    2001-01-01

    Counts of uniquely identified individuals in a population offer opportunities to estimate abundance. However, for various reasons such counts may be burdened by heterogeneity in the probability of being detected. Theoretical arguments and empirical evidence demonstrate that the negative binomial distribution (NBD) is a useful characterization for counts from biological populations with heterogeneity. We propose a method that focuses on estimating multiple populations by simultaneously using a suite of models derived from the NBD. We used this approach to estimate the number of female grizzly bears (Ursus arctos) with cubs-of-the-year in the Yellowstone ecosystem for each year, 1986-1998. Akaike's Information Criterion (AIC) indicated that a negative binomial model with a constant level of heterogeneity across all years was best for characterizing the sighting frequencies of female grizzly bears. A lack-of-fit test indicated the model adequately described the collected data. Bootstrap techniques were used to estimate standard errors and 95% confidence intervals. We provide a Monte Carlo technique, which confirms that the Yellowstone ecosystem grizzly bear population increased during the period 1986-1998.
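
    A minimal sketch of fitting an NBD to sighting frequencies by maximum likelihood with scipy; the counts are simulated, not the grizzly bear data.

```python
import numpy as np
from scipy.stats import nbinom
from scipy.optimize import minimize

rng = np.random.default_rng(1)

# Hypothetical sighting frequencies (counts per individual per season).
counts = rng.negative_binomial(n=2.0, p=0.4, size=200)

# Negative binomial log-likelihood in (r, p); r captures heterogeneity
# in detection (smaller r = more overdispersion).
def nll(theta):
    r, p = np.exp(theta[0]), 1.0 / (1.0 + np.exp(-theta[1]))  # keep r>0, 0<p<1
    return -np.sum(nbinom.logpmf(counts, r, p))

fit = minimize(nll, x0=[0.0, 0.0], method="Nelder-Mead")
r_hat = np.exp(fit.x[0])
p_hat = 1.0 / (1.0 + np.exp(-fit.x[1]))
print(f"r = {r_hat:.2f}, p = {p_hat:.2f}, mean = {r_hat * (1 - p_hat) / p_hat:.2f}")
```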

  2. Characterising and modelling regolith stratigraphy using multiple geophysical techniques

    Thomas, M.; Cremasco, D.; Fotheringham, T.; Hatch, M. A.; Triantifillis, J.; Wilford, J.

    2013-12-01

    Regolith is the weathered, typically mineral-rich layer from fresh bedrock to the land surface. It encompasses soil (A, E and B horizons) that has undergone pedogenesis. Below is the weathered C horizon, which retains at least some of the original rocky fabric and structure. At the base of this is the lower regolith boundary of continuous hard bedrock (the R horizon). Regolith may be absent, e.g. at rocky outcrops, or may be many tens of metres deep. Comparatively little is known about regolith, and critical questions remain regarding its composition and characteristics - especially deeper down, where the challenge of collecting reliable data increases with depth. In Australia research is underway to characterise and map regolith using consistent methods at scales ranging from local (e.g. hillslope) to continental. These efforts are driven by many research needs, including Critical Zone modelling and simulation. Pilot research in South Australia using digitally based environmental correlation techniques modelled the depth to bedrock to 9 m for an upland area of 128 000 ha. One finding was the inability to reliably model local-scale depth variations over horizontal distances of 2 - 3 m and vertical distances of 1 - 2 m. The need to better characterise variations in regolith to strengthen models at these fine scales was discussed. Addressing this need, we describe high-intensity, ground-based multi-sensor geophysical profiling of three hillslope transects in different regolith-landscape settings to characterise fine-resolution regolith variation, using multiple-frequency, multiple-coil electromagnetic induction and high-resolution resistivity. These were accompanied by georeferenced, closely spaced deep cores to 9 m - or to core refusal. The intact cores were sub-sampled to standard depths and analysed for regolith properties to compile core datasets consisting of: water content; texture; electrical conductivity; and weathered state. After preprocessing (filtering, geo

  3. Diverse methods for integrable models

    Fehér, G.

    2017-01-01

    This thesis is centered around three topics that share integrability as a common theme, and explores different methods in the field of integrable models. The first two chapters treat integrable lattice models in statistical physics; the last chapter describes an integrable quantum chain.

  4. Iterative method for Amado's model

    Tomio, L.

    1980-01-01

    A recently proposed iterative method for solving scattering integral equations is applied to spin-doublet and spin-quartet neutron-deuteron scattering in the Amado model. The method is tested numerically in the calculation of scattering lengths and phase shifts, and the results are found to be better than those obtained with the conventional Padé technique. (Author) [pt

  5. Multiple attribute decision making model and application to food safety risk evaluation.

    Ma, Lihua; Chen, Hong; Yan, Huizhe; Yang, Lifeng; Wu, Lifeng

    2017-01-01

    Decision making for supermarket food purchases is characterized by network relationships. This paper analyzes the factors that influence supermarket food selection and proposes a supplier evaluation index system based on the whole process of food production. The authors establish an interval-valued intuitionistic fuzzy set evaluation model based on the characteristics of the network relationships among decision makers, and validate it in a multiple attribute decision making case study. The proposed model provides a reliable, accurate method for multiple attribute decision making.

  6. Efficient Adoption and Assessment of Multiple Process Improvement Reference Models

    Simona Jeners

    2013-06-01

    Full Text Available A variety of reference models such as CMMI, COBIT or ITIL support IT organizations in improving their processes. These process improvement reference models (IRMs) cover different domains such as IT development, IT services or IT governance, but also share some similarities. As there are organizations that address multiple domains and need to coordinate their processes in their improvement, we present MoSaIC, an approach to support organizations in efficiently adopting and conforming to multiple IRMs. Our solution realizes a semantic integration of IRMs based on common meta-models. The resulting IRM integration model enables organizations to efficiently implement and assess multiple IRMs and to benefit from synergy effects.

  7. A frequency domain global parameter estimation method for multiple reference frequency response measurements

    Shih, C. Y.; Tsuei, Y. G.; Allemang, R. J.; Brown, D. L.

    1988-10-01

    A method of using the matrix Auto-Regressive Moving Average (ARMA) model in the Laplace domain for multiple-reference global parameter identification is presented. This method is particularly applicable to the area of modal analysis where high modal density exists. The method is also applicable when multiple reference frequency response functions are used to characterise linear systems. In order to facilitate the mathematical solution, the Forsythe orthogonal polynomial is used to reduce the ill-conditioning of the formulated equations and to decouple the normal matrix into two reduced matrix blocks. A Complex Mode Indicator Function (CMIF) is introduced, which can be used to determine the proper order of the rational polynomials.
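
    The CMIF itself has a compact definition: the singular values of the multi-reference FRF matrix at each spectral line. Below is a small numpy sketch on a synthetic two-mode FRF; the pole and mode-shape values are invented for illustration.

```python
import numpy as np

# Complex Mode Indicator Function: at each frequency line, take the singular
# values of the multi-reference FRF matrix H(w) (n_outputs x n_references);
# peaks in the largest singular value indicate modes, and the number of
# significant singular values at a peak reveals repeated/close modes.
def cmif(frf):
    """frf: complex array (n_freq, n_out, n_ref) -> (n_freq, min(n_out, n_ref))"""
    return np.array([np.linalg.svd(frf[k], compute_uv=False) for k in range(frf.shape[0])])

# Tiny synthetic 2-mode, 3-output, 2-reference FRF for illustration.
rng = np.random.default_rng(3)
w = np.linspace(1, 100, 500)
poles = [(-0.5 + 30j), (-0.8 + 70j)]
frf = np.zeros((w.size, 3, 2), dtype=complex)
for p in poles:
    shape = np.outer(rng.normal(size=3), rng.normal(size=2))   # residue matrix
    for k, wk in enumerate(w):
        frf[k] += shape / (1j * wk - p) + shape / (1j * wk - p.conjugate())

sv = cmif(frf)
# The largest CMIF values cluster at the two resonances (~30 and ~70 rad/s).
print("frequencies of largest CMIF values:", np.sort(w[np.argsort(sv[:, 0])[-5:]]))
```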

  8. Model Seleksi Premi Asuransi Jiwa Dwiguna untuk Kasus Multiple Decrement

    Cita, Devi Ramana; Pane, Rolan; ', Harison

    2015-01-01

    This article discusses a select survival model for the case of multiple decrements in evaluating the endowment life insurance premium for a person currently aged ([x] + t) years, who was selected at age [x] with an h-year selection period. The multiple-decrement case considered here is limited to two causes of decrement. The annual premium is calculated by first evaluating the single premium, and the present value of the annuity depends on the constant-force assumption.

  9. AgMIP Training in Multiple Crop Models and Tools

    Boote, Kenneth J.; Porter, Cheryl H.; Hargreaves, John; Hoogenboom, Gerrit; Thornburn, Peter; Mutter, Carolyn

    2015-01-01

    The Agricultural Model Intercomparison and Improvement Project (AgMIP) has the goal of using multiple crop models to evaluate climate impacts on agricultural production and food security in developed and developing countries. There are several major limitations that must be overcome to achieve this goal, including the need to train AgMIP regional research team (RRT) crop modelers to use models other than the ones they are currently familiar with, plus the need to harmonize and interconvert the disparate input file formats used for the various models. Two activities were followed to address these shortcomings among AgMIP RRTs to enable them to use multiple models to evaluate climate impacts on crop production and food security. We designed and conducted courses in which participants trained on two different sets of crop models, with emphasis on the model of least experience. In a second activity, the AgMIP IT group created templates for inputting data on soils, management, weather, and crops into AgMIP harmonized databases, and developed translation tools for converting the harmonized data into files that are ready for multiple crop model simulations. The strategies for creating and conducting the multi-model course and developing entry and translation tools are reviewed in this chapter.

  10. A basket two-part model to analyze medical expenditure on interdependent multiple sectors.

    Sugawara, Shinya; Wu, Tianyi; Yamanishi, Kenji

    2018-05-01

    This study proposes a novel statistical methodology to analyze expenditure on multiple medical sectors using consumer data. Conventionally, medical expenditure has been analyzed with two-part models, which separately consider the purchase decision and the amount of expenditure. We extend the traditional two-part models by adding a basket-analysis step for dimension reduction. This new step enables us to analyze the complicated interdependence between multiple sectors without an identification problem. As an empirical application of the proposed method, we analyze data on 13 medical sectors from the Medical Expenditure Panel Survey. In comparison with the results of previous studies that analyzed the multiple sectors independently, our method provides more detailed implications of the impacts of individual socioeconomic status on the composition of joint purchases from multiple medical sectors, and it has better prediction performance.
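
    A minimal two-part-model sketch for a single sector (the paper's basket-analysis dimension-reduction step is omitted): a logistic model for the purchase decision and a log-linear model for the amount among purchasers, fitted to simulated data.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression, LinearRegression

rng = np.random.default_rng(7)

# Toy two-part model for one medical sector: part 1 models whether any
# purchase occurs, part 2 models log-expenditure among purchasers.
n = 2000
X = rng.normal(size=(n, 3))                       # e.g., age, income, health status
buy = rng.random(n) < 1 / (1 + np.exp(-(0.3 + X @ [0.8, 0.5, -0.4])))
spend = np.where(buy, np.exp(3.0 + X @ [0.2, 0.6, 0.1] + rng.normal(0, 0.3, n)), 0.0)

part1 = LogisticRegression().fit(X, buy)                     # purchase decision
part2 = LinearRegression().fit(X[buy], np.log(spend[buy]))   # amount, given purchase

# Expected expenditure = P(buy | x) * E[spend | buy, x] (smearing factor omitted)
p = part1.predict_proba(X)[:, 1]
expected = p * np.exp(part2.predict(X))
print(f"mean observed {spend.mean():.1f} vs mean predicted {expected.mean():.1f}")
```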

  11. Variational methods in molecular modeling

    2017-01-01

    This book presents tutorial overviews for many applications of variational methods to molecular modeling. Topics discussed include the Gibbs-Bogoliubov-Feynman variational principle, square-gradient models, classical density functional theories, self-consistent-field theories, phase-field methods, Ginzburg-Landau and Helfrich-type phenomenological models, dynamical density functional theory, and variational Monte Carlo methods. Illustrative examples are given to facilitate understanding of the basic concepts and quantitative prediction of the properties and rich behavior of diverse many-body systems ranging from inhomogeneous fluids, electrolytes and ionic liquids in micropores, colloidal dispersions, liquid crystals, polymer blends, lipid membranes, microemulsions, magnetic materials and high-temperature superconductors. All chapters are written by leading experts in the field and illustrated with tutorial examples for their practical applications to specific subjects. With emphasis placed on physical unders...

  12. A consensus successive projections algorithm--multiple linear regression method for analyzing near infrared spectra.

    Liu, Ke; Chen, Xiaojing; Li, Limin; Chen, Huiling; Ruan, Xiukai; Liu, Wenbin

    2015-02-09

    The successive projections algorithm (SPA) is widely used to select variables for multiple linear regression (MLR) modeling. However, SPA used only once may not capture all the useful information of the full spectra, because the number of selected variables cannot exceed the number of calibration samples in the SPA algorithm. The SPA-MLR method therefore risks the loss of useful information. To make full use of the information in the spectra, a new method named "consensus SPA-MLR" (C-SPA-MLR) is proposed herein. This method is the combination of a consensus strategy and the SPA-MLR method. In the C-SPA-MLR method, SPA-MLR is used to construct member models with different subsets of variables, which are selected from the remaining variables iteratively. A consensus prediction is obtained by combining the predictions of the member models. The proposed method is evaluated by analyzing the near infrared (NIR) spectra of corn and diesel. The C-SPA-MLR method showed better prediction performance compared with the SPA-MLR and full-spectra PLS methods. Moreover, these results could serve as a reference for combining the consensus strategy with other variable selection methods when analyzing NIR spectra and other spectroscopic data. Copyright © 2014 Elsevier B.V. All rights reserved.
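
    The consensus step can be sketched as follows, with random variable subsets standing in for the successive SPA selections and simulated spectra in place of the corn and diesel data.

```python
import numpy as np

rng = np.random.default_rng(5)

# Sketch of the consensus idea: build several MLR member models on different
# variable subsets and average their predictions.
n_cal, n_test, n_vars = 40, 20, 200
X = rng.normal(size=(n_cal + n_test, n_vars))               # mock NIR spectra
beta = np.zeros(n_vars); beta[[10, 50, 120]] = [1.0, -0.7, 0.5]
y = X @ beta + rng.normal(0, 0.05, n_cal + n_test)
Xc, yc, Xt, yt = X[:n_cal], y[:n_cal], X[n_cal:], y[n_cal:]

preds = []
for m in range(10):                                          # 10 member models
    sel = rng.choice(n_vars, size=15, replace=False)         # <= n_cal variables
    A = np.column_stack([np.ones(n_cal), Xc[:, sel]])
    coef, *_ = np.linalg.lstsq(A, yc, rcond=None)
    preds.append(np.column_stack([np.ones(n_test), Xt[:, sel]]) @ coef)

consensus = np.mean(preds, axis=0)
rmse = np.sqrt(np.mean((consensus - yt) ** 2))
print(f"consensus RMSEP: {rmse:.3f}")
```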

  13. A general framework for the evaluation of genetic association studies using multiple marginal models

    Kitsche, Andreas; Ritz, Christian; Hothorn, Ludwig A.

    2016-01-01

    OBJECTIVE: In this study, we present a simultaneous inference procedure as a unified analysis framework for genetic association studies. METHODS: The method is based on the formulation of multiple marginal models that reflect different modes of inheritance. The basic advantage of this methodology...

  14. On an efficient multiple time step Monte Carlo simulation of the SABR model

    Leitao Rodriguez, A.; Grzelak, L.A.; Oosterlee, C.W.

    2017-01-01

    In this paper, we present a multiple time step Monte Carlo simulation technique for pricing options under the Stochastic Alpha Beta Rho (SABR) model. The proposed method is an extension of the one time step Monte Carlo method that we proposed in an accompanying paper, Leitao et al. [Appl. Math.
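
    For orientation, a plain one-step-per-interval Euler scheme for SABR (not the authors' multiple time step technique, which instead targets the one-step distribution directly) looks like this; all parameter values are illustrative.

```python
import numpy as np

rng = np.random.default_rng(11)

# SABR dynamics: dF = sigma * F^beta dW1, dsigma = alpha * sigma dW2,
# with corr(dW1, dW2) = rho.
F0, sigma0, alpha, beta, rho = 100.0, 0.3, 0.4, 0.7, -0.5
T, n_steps, n_paths, K = 1.0, 252, 100_000, 105.0
dt = T / n_steps

F = np.full(n_paths, F0)
sig = np.full(n_paths, sigma0)
for _ in range(n_steps):
    z1 = rng.standard_normal(n_paths)
    z2 = rho * z1 + np.sqrt(1 - rho**2) * rng.standard_normal(n_paths)
    F = np.maximum(F + sig * F**beta * np.sqrt(dt) * z1, 1e-8)          # absorb near 0
    sig = sig * np.exp(alpha * np.sqrt(dt) * z2 - 0.5 * alpha**2 * dt)  # exact log-normal step

call = np.mean(np.maximum(F - K, 0.0))
print(f"MC call price (undiscounted, forward measure): {call:.3f}")
```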

  15. Integrated Markov-neural reliability computation method: A case for multiple automated guided vehicle system

    Fazlollahtabar, Hamed; Saidi-Mehrabad, Mohammad; Balakrishnan, Jaydeep

    2015-01-01

    This paper proposes an integrated Markovian and back-propagation neural network approach to compute the reliability of a system. Since the states in which failures occur are significant elements for accurate reliability computation, a Markovian-based reliability assessment method is designed. Due to the drawbacks of the Markovian model for steady-state reliability computation and of the neural network for the initial training pattern, an integration called Markov-neural is developed and evaluated. To show the efficiency of the proposed approach, comparative analyses are performed. For managerial implications, an application case for multiple automated guided vehicles (AGVs) in manufacturing networks is also conducted. - Highlights: • Integrated Markovian and back-propagation neural network approach to compute reliability. • Markovian-based reliability assessment method. • Managerial implications are shown in an application case for multiple automated guided vehicles (AGVs) in manufacturing networks

  16. Experimental design and multiple response optimization. Using the desirability function in analytical methods development.

    Candioti, Luciana Vera; De Zan, María M; Cámara, María S; Goicoechea, Héctor C

    2014-06-01

    A review of the application of response surface methodology (RSM) when several responses have to be simultaneously optimized in the field of analytical methods development is presented. Several critical issues like response transformation, multiple response optimization and modeling with least squares and artificial neural networks are discussed. The most recent analytical applications are presented in the context of analytical methods development, especially multiple response optimization procedures using the desirability function. Copyright © 2014 Elsevier B.V. All rights reserved.
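
    A minimal sketch of the desirability approach: Derringer-style individual desirabilities for two responses predicted by already-fitted response surfaces (coefficients invented here), combined by a geometric mean and maximized over the coded factor space.

```python
import numpy as np

def d_max(y, lo, hi):          # larger-is-better desirability
    return np.clip((y - lo) / (hi - lo), 0.0, 1.0)

def d_min(y, lo, hi):          # smaller-is-better desirability
    return np.clip((hi - y) / (hi - lo), 0.0, 1.0)

# Two responses as quadratic response surfaces in two coded factors in [-1, 1].
x1, x2 = np.meshgrid(np.linspace(-1, 1, 201), np.linspace(-1, 1, 201))
recovery = 80 + 5 * x1 + 3 * x2 - 4 * x1**2 - 2 * x1 * x2       # maximize
runtime = 12 - 2 * x1 + 4 * x2 + 3 * x2**2                      # minimize

D = np.sqrt(d_max(recovery, 75, 90) * d_min(runtime, 8, 18))    # overall desirability
i, j = np.unravel_index(np.argmax(D), D.shape)
print(f"best D = {D[i, j]:.3f} at x1 = {x1[i, j]:+.2f}, x2 = {x2[i, j]:+.2f}")
```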

  17. An efficient method for generalized linear multiplicative programming problem with multiplicative constraints.

    Zhao, Yingfeng; Liu, Sanyang

    2016-01-01

    We present a practical branch and bound algorithm for globally solving the generalized linear multiplicative programming problem with multiplicative constraints. To solve the problem, a relaxation programming problem, which is equivalent to a linear program, is proposed by utilizing a new two-phase relaxation technique. In the algorithm, lower and upper bounds are simultaneously obtained by solving a series of linear relaxation programming problems. Global convergence has been proved, and the results of some sample examples and a small random experiment show that the proposed algorithm is feasible and efficient.

  18. Methods for testing transport models

    Singer, C.; Cox, D.

    1993-01-01

    This report documents progress to date under a three-year contract for developing "Methods for Testing Transport Models." The work described includes (1) choice of the best methods for producing "code emulators" for analysis of very large global energy confinement databases, (2) recent applications of stratified regressions for treating individual measurement errors as well as calibration/modeling errors randomly distributed across various tokamaks, (3) Bayesian methods for utilizing prior information due to previous empirical and/or theoretical analyses, (4) extension of the code emulator methodology to profile data, (5) application of nonlinear least squares estimators to simulation of profile data, (6) development of more sophisticated statistical methods for handling profile data, (7) acquisition of a much larger experimental database, and (8) extensive exploratory simulation work on a large variety of discharges using recently improved models for transport theories and boundary conditions. From all of this work, it has been possible to define a complete methodology for testing new sets of reference transport models against much larger multi-institutional databases

  19. A collaborative scheduling model for the supply-hub with multiple suppliers and multiple manufacturers.

    Li, Guo; Lv, Fei; Guan, Xu

    2014-01-01

    This paper investigates a collaborative scheduling model in the assembly system, wherein multiple suppliers have to deliver their components to multiple manufacturers under the operation of a Supply-Hub. We first develop two different scenarios to examine the impact of the Supply-Hub. One is that suppliers and manufacturers make their decisions separately, and the other is that the Supply-Hub makes joint decisions with collaborative scheduling. The results show that our scheduling model with the Supply-Hub is an NP-complete problem; therefore, we propose an auto-adapted differential evolution algorithm to solve it. Moreover, we illustrate that the performance of collaborative scheduling by the Supply-Hub is superior to the separate decisions made by each manufacturer and supplier. Furthermore, we also show that the proposed algorithm has good convergence and reliability, and can be applied to more complicated supply chain environments.

  20. A Collaborative Scheduling Model for the Supply-Hub with Multiple Suppliers and Multiple Manufacturers

    Guo Li

    2014-01-01

    Full Text Available This paper investigates a collaborative scheduling model in the assembly system, wherein multiple suppliers have to deliver their components to multiple manufacturers under the operation of a Supply-Hub. We first develop two different scenarios to examine the impact of the Supply-Hub. One is that suppliers and manufacturers make their decisions separately, and the other is that the Supply-Hub makes joint decisions with collaborative scheduling. The results show that our scheduling model with the Supply-Hub is an NP-complete problem; therefore, we propose an auto-adapted differential evolution algorithm to solve it. Moreover, we illustrate that the performance of collaborative scheduling by the Supply-Hub is superior to the separate decisions made by each manufacturer and supplier. Furthermore, we also show that the proposed algorithm has good convergence and reliability, and can be applied to more complicated supply chain environments.

  1. A Collaborative Scheduling Model for the Supply-Hub with Multiple Suppliers and Multiple Manufacturers

    Lv, Fei; Guan, Xu

    2014-01-01

    This paper investigates a collaborative scheduling model in the assembly system, wherein multiple suppliers have to deliver their components to multiple manufacturers under the operation of a Supply-Hub. We first develop two different scenarios to examine the impact of the Supply-Hub. One is that suppliers and manufacturers make their decisions separately, and the other is that the Supply-Hub makes joint decisions with collaborative scheduling. The results show that our scheduling model with the Supply-Hub is an NP-complete problem; therefore, we propose an auto-adapted differential evolution algorithm to solve it. Moreover, we illustrate that the performance of collaborative scheduling by the Supply-Hub is superior to the separate decisions made by each manufacturer and supplier. Furthermore, we also show that the proposed algorithm has good convergence and reliability, and can be applied to more complicated supply chain environments. PMID:24892104

  2. A comparison of confirmatory factor analysis methods : Oblique multiple group method versus confirmatory common factor method

    Stuive, Ilse

    2007-01-01

    Confirmatory Factor Analysis (CFA) is a frequently used method when researchers have a specific hypothesis about the assignment of items to one or more subtests and want to examine whether this assignment is also supported by the collected research data. The most commonly used

  3. Shell model Monte Carlo methods

    Koonin, S.E.; Dean, D.J.; Langanke, K.

    1997-01-01

    We review quantum Monte Carlo methods for dealing with large shell model problems. These methods reduce the imaginary-time many-body evolution operator to a coherent superposition of one-body evolutions in fluctuating one-body fields; the resultant path integral is evaluated stochastically. We first discuss the motivation, formalism, and implementation of such Shell Model Monte Carlo (SMMC) methods. There then follows a sampler of results and insights obtained from a number of applications. These include the ground state and thermal properties of pf-shell nuclei, the thermal and rotational behavior of rare-earth and γ-soft nuclei, and the calculation of double beta-decay matrix elements. Finally, prospects for further progress in such calculations are discussed. (orig.)

  4. Shell model Monte Carlo methods

    Koonin, S.E.

    1996-01-01

    We review quantum Monte Carlo methods for dealing with large shell model problems. These methods reduce the imaginary-time many-body evolution operator to a coherent superposition of one-body evolutions in fluctuating one-body fields; the resultant path integral is evaluated stochastically. We first discuss the motivation, formalism, and implementation of such Shell Model Monte Carlo methods. There then follows a sampler of results and insights obtained from a number of applications. These include the ground state and thermal properties of pf-shell nuclei, the thermal behavior of γ-soft nuclei, and the calculation of double beta-decay matrix elements. Finally, prospects for further progress in such calculations are discussed. 87 refs

  5. An Exact Method for the Double TSP with Multiple Stacks

    Lusby, Richard Martin; Larsen, Jesper; Ehrgott, Matthias

    2010-01-01

    The double travelling salesman problem with multiple stacks (DTSPMS) is a pickup and delivery problem in which all pickups must be completed before any deliveries can be made. The problem originates from a real-life application where a 40 foot container (configured as 3 columns of 11 rows) is used...

  6. An Exact Method for the Double TSP with Multiple Stacks

    Larsen, Jesper; Lusby, Richard Martin; Ehrgott, Matthias

    The double travelling salesman problem with multiple stacks (DTSPMS) is a pickup and delivery problem in which all pickups must be completed before any deliveries can be made. The problem originates from a real-life application where a 40 foot container (configured as 3 columns of 11 rows) is used...

  7. Comparison of multiple-criteria decision-making methods - results of simulation study

    Michał Adamczak

    2016-12-01

    Full Text Available Background: Today, both researchers and practitioners have many methods for supporting the decision-making process. Due to the conditions in which supply chains function, the most interesting are multi-criteria methods. The use of sophisticated methods for supporting decisions requires parameterization and the execution of calculations that are often complex. So is it efficient to use sophisticated methods? Methods: The authors of the publication compared two popular multiple-criteria decision-making methods: the Weighted Sum Model (WSM) and the Analytic Hierarchy Process (AHP). A simulation study reflects these two decision-making methods. Input data for this study was a set of criteria weights and the value of each alternative in terms of each criterion. Results: The iGrafx Process for Six Sigma simulation software recreated how both multiple-criteria decision-making methods (WSM and AHP) function. The result of the simulation was a numerical value defining the preference of each of the alternatives according to the WSM and AHP methods. The alternative producing a result of higher numerical value was considered preferred, according to the selected method. In the analysis of the results, the relationship between the values of the parameters and the difference in the results presented by both methods was investigated. Statistical methods, including hypothesis testing, were used for this purpose. Conclusions: The simulation study findings prove that the results obtained with the use of the two multiple-criteria decision-making methods are very similar. Differences occurred more frequently with lower-value parameters from the "value of each alternative" group and higher-value parameters from the "weight of criteria" group.
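
    The two methods compared can each be reduced to a few lines: WSM scores alternatives by a weighted sum of (normalized) criterion values, while AHP derives criterion weights from the principal eigenvector of a pairwise comparison matrix. In this sketch the alternative-by-criterion values are shared by both methods for comparability; all numbers are invented.

```python
import numpy as np

# Weighted Sum Model: alternatives scored as the weighted sum of criterion
# values (values assumed already normalized to a common scale).
values = np.array([[0.7, 0.5, 0.9],    # alternative A
                   [0.6, 0.8, 0.6]])   # alternative B
weights = np.array([0.5, 0.3, 0.2])
wsm = values @ weights

# AHP: criterion weights from the principal eigenvector of a pairwise
# comparison matrix (Saaty scale), then the same aggregation as above.
pairwise = np.array([[1.0, 2.0, 3.0],
                     [1/2, 1.0, 2.0],
                     [1/3, 1/2, 1.0]])
eigvals, eigvecs = np.linalg.eig(pairwise)
w_ahp = np.real(eigvecs[:, np.argmax(np.real(eigvals))])
w_ahp /= w_ahp.sum()                   # normalize (also fixes the sign)
ahp = values @ w_ahp

for name, a, b in [("WSM", *wsm), ("AHP", *ahp)]:
    print(f"{name}: A = {a:.3f}, B = {b:.3f} -> prefer {'A' if a > b else 'B'}")
```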

  8. Multiple commodities in statistical microeconomics: Model and market

    Baaquie, Belal E.; Yu, Miao; Du, Xin

    2016-11-01

    A statistical generalization of microeconomics was made in Baaquie (2013). In Baaquie et al. (2015), the market behavior of single commodities was analyzed, and it was shown that market data provide strong support for the statistical microeconomic description of commodity prices. Here the case of multiple commodities is studied, and a parsimonious generalization of the single-commodity model is made. Market data show that the generalization can accurately model the simultaneous correlation functions of up to four commodities. To accurately model five or more commodities, further terms have to be included in the model. This study shows that the statistical microeconomics approach is a comprehensive and complete formulation of microeconomics, one that is independent of the mainstream formulation of microeconomics.

  9. Risk Prediction Models for Other Cancers or Multiple Sites

    Developing statistical models that estimate the probability of developing multiple cancers or cancers at other sites over a defined period of time will help clinicians identify individuals at higher risk of specific cancers, allowing for earlier or more frequent screening and counseling on behavioral changes to decrease risk.

  10. An extension of the multiple-trapping model

    Shkilev, V. P.

    2012-01-01

    The hopping charge transport in disordered semiconductors is considered. Using the concept of the transport energy level, macroscopic equations are derived that extend the multiple-trapping model to the case of semiconductors with both energy and spatial disorder. It is shown that, although both types of disorder can cause dispersive transport, the frequency dependence of the conductivity is determined exclusively by the spatial disorder.

  11. Selecting Tools to Model Integer and Binomial Multiplication

    Pratt, Sarah Smitherman; Eddy, Colleen M.

    2017-01-01

    Mathematics teachers frequently provide concrete manipulatives to students during instruction; however, the rationale for using certain manipulatives in conjunction with concepts may not be explored. This article focuses on area models that are currently used in classrooms to provide concrete examples of integer and binomial multiplication. The…

  12. Modeling single versus multiple systems in implicit and explicit memory.

    Starns, Jeffrey J; Ratcliff, Roger; McKoon, Gail

    2012-04-01

    It is currently controversial whether priming on implicit tasks and discrimination on explicit recognition tests are supported by a single memory system or by multiple, independent systems. In a Psychological Review article, Berry and colleagues used mathematical modeling to address this question and provide compelling evidence against the independent-systems approach. Copyright © 2012 Elsevier Ltd. All rights reserved.

  13. Green communication: The enabler to multiple business models

    Lindgren, Peter; Clemmensen, Suberia; Taran, Yariv

    2010-01-01

    Companies stand at the forefront of a new business model reality with new potentials that will change their basic understanding and practice of running their business models radically. One of the drivers of this change is green communication: its strong relation to green business models and its possibility to enable lower energy consumption. This paper shows how green communication enables innovation of green business models, and of multiple business models running simultaneously in different markets to different customers.

  14. Infinite Multiple Membership Relational Modeling for Complex Networks

    Mørup, Morten; Schmidt, Mikkel Nørgaard; Hansen, Lars Kai

    Learning latent structure in complex networks has become an important problem, fueled by many types of networked data originating from practically all fields of science. In this paper, we propose a new non-parametric Bayesian multiple-membership latent feature model for networks. Contrary to existing multiple-membership models, which scale quadratically in the number of vertices, the proposed model scales linearly in the number of links, admitting multiple-membership analysis in large-scale networks. We demonstrate a connection between the single-membership relational model and multiple-membership models and show...

  15. Multiplicative multifractal modeling and discrimination of human neuronal activity

    Zheng Yi; Gao Jianbo; Sanchez, Justin C.; Principe, Jose C.; Okun, Michael S.

    2005-01-01

    Understanding neuronal firing patterns is one of the most important problems in theoretical neuroscience. It is also very important for clinical neurosurgery. In this Letter, we introduce a computational procedure to examine whether neuronal firing recordings can be characterized by cascade multiplicative multifractals. By analyzing raw recording data as well as generated spike train data from 3 patients, collected in two brain areas, the globus pallidus externa (GPe) and the globus pallidus interna (GPi), we show that the neural firings are consistent with a multifractal process over a certain time scale range (t1, t2), where t1 is argued to be no smaller than the mean inter-spike interval of neuronal firings, while t2 may be related to the time that neuronal signals propagate in the major neural branching structures pertinent to GPi and GPe. The generalized dimension spectrum Dq effectively differentiates the two brain areas, both intra- and inter-patient. For distinguishing between GPe and GPi, it is further shown that the cascade model is more effective than the methods recently examined by Schiff et al. as well as the Fano factor analysis. Therefore, the methodology may be useful in developing computer-aided tools to help clinicians perform precision neurosurgery in the operating room.

  16. A neutron multiplicity analysis method for uranium samples with liquid scintillators

    Zhou, Hao, E-mail: zhouhao_ciae@126.com [China Institute of Atomic Energy, P.O.BOX 275-8, Beijing 102413 (China); Lin, Hongtao [Xi' an Reasearch Institute of High-tech, Xi' an, Shaanxi 710025 (China); Liu, Guorong; Li, Jinghuai; Liang, Qinglei; Zhao, Yonggang [China Institute of Atomic Energy, P.O.BOX 275-8, Beijing 102413 (China)

    2015-10-11

    A new neutron multiplicity analysis method for uranium samples with liquid scintillators is introduced. An active well-type fast neutron multiplicity counter has been built, which consists of four BC501A liquid scintillators, an n/γ discrimination module (MPD-4), a multi-stop time-to-digital converter (MCS6A), and two Am–Li sources. A mathematical model is built to represent the detection processes of fission neutrons. Based on this model, equations of the form R = F·P·Q·T can be obtained, where F denotes the fission rate induced by the interrogation sources, P the transfer matrix determined by the multiplication process, Q the transfer matrix determined by detection efficiency, and T the transfer matrix determined by the signal recording process and crosstalk in the counter. Unknown parameters of the item are determined by solving the equations. A ²⁵²Cf source and some low-enriched uranium items have been measured. The feasibility of the method is proven by its application to the data analysis of the experiments.

  17. Search Strategy of Detector Position For Neutron Source Multiplication Method by Using Detected-Neutron Multiplication Factor

    Endo, Tomohiro

    2011-01-01

    In this paper, an alternative definition of a neutron multiplication factor, the detected-neutron multiplication factor kdet, is introduced for the neutron source multiplication (NSM) method. Using kdet, a search strategy for an appropriate detector position for NSM is also proposed. The NSM is one of the practical subcriticality measurement techniques: it does not require any special equipment other than a stationary external neutron source and an ordinary neutron detector. Additionally, the NSM method is based on steady-state analysis, so the technique is very suitable for quasi-real-time measurement. It is noted that correction factors play an important role in accurately estimating subcriticality from the measured neutron count rates. The present paper aims to clarify how to correct the subcriticality measured by the NSM method, the physical meaning of the correction factors, and how to reduce the impact of the correction factors by setting the neutron detector at an appropriate position.

  18. Vehicle coordinated transportation dispatching model base on multiple crisis locations

    Tian, Ran; Li, Shanwei; Yang, Guoying

    2018-05-01

    Unconventional emergencies are often followed by many disastrous events, and the requirements of different disaster sites often differ. It is difficult for a single emergency resource center to satisfy such requirements at the same time; therefore, coordinating the emergency resources stored by multiple emergency resource centers and delivering them to the various disaster sites requires the coordinated transportation of emergency vehicles. In this paper, addressing the emergency logistics coordination scheduling problem and based on the relevant constraints of emergency logistics transportation, an emergency resource scheduling model for multiple disaster sites is established.

  19. Multiple Signal Classification Algorithm Based Electric Dipole Source Localization Method in an Underwater Environment

    Yidong Xu

    2017-10-01

    Full Text Available A novel localization method based on the multiple signal classification (MUSIC) algorithm is proposed for positioning an electric dipole source in a confined underwater environment using an electric dipole-receiving antenna array. In this method, the boundary element method (BEM) is introduced to analyze the boundary of the confined region by use of a matrix equation. The voltage of each dipole pair is used as spatial-temporal localization data, and there is no need to obtain the field component in each direction, in contrast to conventional field-based localization methods, so the approach can be easily implemented in practical engineering applications. Then, a global-multiple region-conjugate gradient (CG) hybrid search method is used to reduce the computational burden and to improve the operation speed. Two localization simulation models and a physical experiment are conducted. Both the simulation results and the physical experiment provide accurate positioning performance, helping to verify the effectiveness of the proposed localization method in underwater environments.
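
    The MUSIC step itself is standard and can be sketched for a conventional uniform linear array (a stand-in for the paper's dipole-pair voltages and BEM-corrected steering model): eigendecompose the sample covariance, keep the noise subspace, and scan a pseudospectrum. All values below are simulated.

```python
import numpy as np

rng = np.random.default_rng(2)

# Narrowband MUSIC on a uniform linear array.
M, d, wavelength = 8, 0.5, 1.0          # sensors; spacing in wavelength units
true_doas = np.deg2rad([-20.0, 35.0])
snapshots, snr = 400, 10.0

def steering(theta):
    return np.exp(-2j * np.pi * d / wavelength * np.arange(M) * np.sin(theta))

A = np.column_stack([steering(t) for t in true_doas])
S = rng.standard_normal((2, snapshots)) + 1j * rng.standard_normal((2, snapshots))
N = rng.standard_normal((M, snapshots)) + 1j * rng.standard_normal((M, snapshots))
X = A @ S + N / np.sqrt(snr)

R = X @ X.conj().T / snapshots                     # sample covariance
eigvals, eigvecs = np.linalg.eigh(R)               # ascending eigenvalues
En = eigvecs[:, :M - 2]                            # noise subspace (2 sources assumed known)

grid = np.deg2rad(np.linspace(-90, 90, 721))
p = np.array([1.0 / np.linalg.norm(En.conj().T @ steering(t)) ** 2 for t in grid])

# Pick the two highest local maxima of the pseudospectrum.
peaks = np.where((p[1:-1] > p[:-2]) & (p[1:-1] > p[2:]))[0] + 1
top2 = peaks[np.argsort(p[peaks])[-2:]]
print("estimated DOAs (deg):", np.sort(np.rad2deg(grid[top2])))
```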

  20. Double-multiple streamtube model for studying vertical-axis wind turbines

    Paraschivoiu, Ion

    1988-08-01

    This work describes the present state of the art of the double-multiple streamtube method for modeling the Darrieus-type vertical-axis wind turbine (VAWT). Comparisons of the analytical results with other predictions and available experimental data show good agreement. The method, which incorporates dynamic stall and secondary effects, can be used to generate a suitable aerodynamic-load model for structural design analysis of the Darrieus rotor.

  1. Multiple Surrogate Modeling for Wire-Wrapped Fuel Assembly Optimization

    Raza, Wasim; Kim, Kwang-Yong

    2007-01-01

    In this work, shape optimization of a seven-pin wire-wrapped fuel assembly has been carried out in conjunction with RANS analysis in order to evaluate the performance of surrogate models. Previously, Ahmad and Kim performed flow and heat transfer analysis based on three-dimensional RANS analysis, but numerical optimization had not yet been applied to the design of wire-wrapped fuel assemblies. Surrogate models are widely used in multidisciplinary optimization. Queipo et al. reviewed various surrogate-based models used in aerospace applications. Goel et al. developed a weighted-average surrogate model based on response surface approximation (RSA), radial basis neural network (RBNN) and Kriging (KRG) models. In addition to the three basic models (RSA, RBNN and KRG), the multiple-surrogate model PBA has also been employed. Two geometric design variables and a multi-objective function with a weighting factor have been considered for this problem.
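
    The weighted-average surrogate idea can be sketched with toy surrogates on a one-dimensional test function, weighting each member by the inverse of an error estimate; training-set residuals stand in here for the cross-validation/PRESS errors used in practice, and none of this reproduces the paper's RANS-based setup.

```python
import numpy as np

rng = np.random.default_rng(9)

def true_f(x):                       # stand-in for an expensive objective
    return np.sin(3 * x) + 0.5 * x

x_train = np.linspace(0, 2, 12)
y_train = true_f(x_train) + rng.normal(0, 0.02, x_train.size)

def poly_pred(deg):                  # polynomial fit (RSA-like member)
    c = np.polyfit(x_train, y_train, deg)
    return lambda x: np.polyval(c, x)

def kernel_pred(h=0.3):              # Nadaraya-Watson kernel fit (RBF-like member)
    def f(x):
        w = np.exp(-0.5 * ((x[:, None] - x_train[None, :]) / h) ** 2)
        return (w @ y_train) / w.sum(axis=1)
    return f

models = [poly_pred(2), poly_pred(3), kernel_pred()]

# Error proxy per member; weights inversely proportional to error.
errs = [np.sqrt(np.mean((m(x_train) - y_train) ** 2)) + 1e-9 for m in models]
wts = 1 / np.array(errs); wts /= wts.sum()

x_test = np.linspace(0, 2, 200)
ensemble = sum(w * m(x_test) for w, m in zip(wts, models))
print("weights:", np.round(wts, 3),
      " ensemble RMSE:", np.round(np.sqrt(np.mean((ensemble - true_f(x_test))**2)), 4))
```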

  2. Calcium Intervention Ameliorates Experimental Model of Multiple Sclerosis

    Dariush Haghmorad

    2014-05-01

    Full Text Available Objective: Multiple sclerosis (MS) is the most common inflammatory disease of the CNS. Experimental autoimmune encephalomyelitis (EAE) is a widely used model for MS. In the present research, our aim was to test the therapeutic efficacy of calcium (Ca) in an experimental model of MS. Methods: The experiment was performed on C57BL/6 mice. EAE was induced using 200 μg of the MOG35-55 peptide emulsified in CFA and injected subcutaneously on day 0 over two flank areas. In addition, 250 ng of pertussis toxin was injected on days 0 and 2. In the treatment group, 30 mg/kg Ca was administered intraperitoneally four times at regular 48-hour intervals. The mice were sacrificed 21 days after EAE induction and blood samples were taken from their hearts. The brains of the mice were removed for histological analysis and their isolated splenocytes were cultured. Results: Treatment with Ca caused a significant reduction in the severity of EAE. Histological analysis indicated that there were no plaques in brain sections of the Ca-treated group of mice, whereas 4 ± 1 plaques were detected in brain sections of controls. The density of mononuclear infiltration in the CNS of Ca-treated mice was lower than in controls. The serum level of nitric oxide in the treatment group was lower than in the control group, but not significantly so. Moreover, the levels of IFN-γ in the cell culture supernatant of splenocytes of treated mice were significantly lower than in the control group. Conclusion: The data indicate that Ca intervention can effectively attenuate EAE progression.

  3. A model for diagnosing and explaining multiple disorders.

    Jamieson, P W

    1991-08-01

    The ability to diagnose multiple interacting disorders and explain them in a coherent causal framework has only partially been achieved in medical expert systems. This paper proposes a causal model for diagnosing and explaining multiple disorders whose key elements are: physician-directed hypothesis generation, object-oriented knowledge representation, and novel explanation heuristics. The heuristics modify and link the explanations to make the physician aware of diagnostic complexities. A computer program incorporating the model is currently in use for diagnosing peripheral nerve and muscle disorders. The program successfully diagnoses and explains interactions between diseases in terms of underlying pathophysiologic concepts. The model offers a new architecture for medical domains where reasoning from first principles is difficult but explanation of disease interactions is crucial for the system's operation.

  4. Interstitial integrals in the multiple-scattering model

    Swanson, J.R.; Dill, D.

    1982-01-01

    We present an efficient method for the evaluation of integrals involving multiple-scattering wave functions over the interstitial region. Transformation of the multicenter interstitial wave functions to a single center representation followed by a geometric projection reduces the integrals to products of analytic angular integrals and numerical radial integrals. The projection function, which has the value 1 in the interstitial region and 0 elsewhere, has a closed-form partial-wave expansion. The method is tested by comparing its results with exact normalization and dipole integrals; the differences are 2% at worst and typically less than 1%. By providing an efficient means of calculating Coulomb integrals, the method allows treatment of electron correlations using a multiple scattering basis set

  5. Seismic PSA method for multiple nuclear power plants in a site

    Hakata, Tadakuni [Nuclear Safety Commission, Tokyo (Japan)

    2007-07-15

    The maximum number of nuclear power plants at a single site is eight, and about 50% of power plants worldwide are built at sites with three or more plants. Such sites carry a potential risk of simultaneous damage to multiple plants, especially during external events. A seismic probabilistic safety assessment method (Level-1 PSA) for multi-unit sites with up to 9 units has been developed. The models include fault-tree-linked Monte Carlo computation, taking into consideration multivariate correlations of components and systems, from partial to complete, both within and across units. The models were implemented in a computer program, CORAL reef. Sample analyses and sensitivity studies were performed to verify the models and algorithms and to understand some of the risk insights and risk metrics, such as site core damage frequency (CDF per site-year) for multiple reactor plants. This study will contribute to realistic, state-of-the-art seismic PSA that takes multiple reactor power plants into consideration, and to the enhancement of seismic safety. (author)

  6. Methods for testing transport models

    Singer, C.; Cox, D.

    1991-01-01

    Substantial progress has been made over the past year on six aspects of the work supported by this grant. As a result, we have in hand for the first time a fairly complete set of transport models and improved statistical methods for testing them against large databases. We also have initial results of such tests. These results indicate that careful application of presently available transport theories can reasonably well produce a remarkably wide variety of tokamak data

  7. A P-value model for theoretical power analysis and its applications in multiple testing procedures

    Fengqing Zhang

    2016-10-01

    Full Text Available Background: Power analysis is a critical aspect of the design of experiments to detect an effect of a given size. When multiple hypotheses are tested simultaneously, multiplicity adjustments to p-values should be taken into account in power analysis. There are a limited number of studies on power analysis in multiple testing procedures. For some methods, the theoretical analysis is difficult and extensive numerical simulations are often needed, while other methods oversimplify the information under the alternative hypothesis. To this end, this paper aims to develop a new statistical model for power analysis in multiple testing procedures. Methods: We propose a step-function-based p-value model under the alternative hypothesis, which is simple enough to perform power analysis without simulations, but not too simple to lose the information from the alternative hypothesis. The first step is to transform the distributions of different test statistics (e.g., t, chi-square or F) to the distributions of the corresponding p-values. We then use a step function to approximate each p-value distribution by matching the mean and variance. Lastly, the step-function-based p-value model can be used for theoretical power analysis. Results: The proposed model is applied to problems in multiple testing procedures. We first show how the most powerful critical constants can be chosen using the step-function-based p-value model. Our model is then applied to the field of multiple testing procedures to explain the assumption of monotonicity of the critical constants. Lastly, we apply our model to a behavioral weight loss and maintenance study to select the optimal critical constants. Conclusions: The proposed model is easy to implement and preserves the information from the alternative hypothesis.
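
    The first step the abstract describes, transforming a test statistic's distribution under the alternative into a p-value distribution, has a closed form for a one-sided z-test, from which power under a multiplicity-adjusted threshold follows directly; the step-function approximation itself is not reproduced here.

```python
import numpy as np
from scipy.stats import norm

# Under H1 a one-sided z statistic is N(delta, 1); its p-value P = 1 - Phi(Z)
# then has CDF G(u) = 1 - Phi(Phi^{-1}(1 - u) - delta). Power at a threshold t
# is simply G(t), so multiplicity adjustments just move t.
delta, alpha, m = 2.5, 0.05, 20          # effect size, level, number of tests

def pvalue_cdf(u):
    return 1.0 - norm.cdf(norm.ppf(1.0 - u) - delta)

print(f"power, unadjusted        : {pvalue_cdf(alpha):.3f}")
print(f"power, Bonferroni (m={m}) : {pvalue_cdf(alpha / m):.3f}")

# Monte Carlo check of the transformed p-value distribution
rng = np.random.default_rng(8)
p = 1.0 - norm.cdf(rng.normal(delta, 1.0, 100_000))
print(f"MC power, Bonferroni      : {np.mean(p < alpha / m):.3f}")
```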

  8. A PDP model of the simultaneous perception of multiple objects

    Henderson, Cynthia M.; McClelland, James L.

    2011-06-01

    Illusory conjunctions in normal and simultanagnosic subjects are two instances where the visual features of multiple objects are incorrectly 'bound' together. A connectionist model explores how multiple objects could be perceived correctly in normal subjects given sufficient time, but could give rise to illusory conjunctions with damage or time pressure. In this model, perception of two objects benefits from lateral connections between hidden layers modelling aspects of the ventral and dorsal visual pathways. As with simultanagnosia, simulations of dorsal lesions impair multi-object recognition. In contrast, a large ventral lesion has minimal effect on dorsal functioning, akin to dissociations between simple object manipulation (retained in visual form agnosia and semantic dementia) and object discrimination (impaired in these disorders) [Hodges, J.R., Bozeat, S., Lambon Ralph, M.A., Patterson, K., and Spatt, J. (2000), 'The Role of Conceptual Knowledge: Evidence from Semantic Dementia', Brain, 123, 1913-1925; Milner, A.D., and Goodale, M.A. (2006), The Visual Brain in Action (2nd ed.), New York: Oxford]. It is hoped that the functioning of this model might suggest potential processes underlying dorsal and ventral contributions to the correct perception of multiple objects.

  9. Supersymmetric U(1)' model with multiple dark matters

    Hur, Taeil; Lee, Hye-Sung; Nasri, Salah

    2008-01-01

    We consider a scenario where a supersymmetric model has multiple dark matter particles. Adding a U(1)' gauge symmetry is a well-motivated extension of the minimal supersymmetric standard model (MSSM). It can cure problems of the MSSM such as the μ problem, or the proton decay problem caused by the high-dimensional lepton-number- and baryon-number-violating operators which R parity allows. An extra parity (U parity) may arise as a residual discrete symmetry after the U(1)' gauge symmetry is spontaneously broken. The lightest U-parity particle (LUP) is stable under the new parity, becoming a new dark matter candidate. Up to three massive particles can be stable in the presence of the R parity and the U parity. We numerically illustrate that multiple stable particles in our model can satisfy both the relic density and the direct detection constraints, thus providing a specific scenario in which a supersymmetric model has well-motivated multiple dark matter candidates consistent with experimental constraints. The scenario provides new possibilities in the present and upcoming dark matter searches in direct detection and collider experiments

  10. Estimation of subcriticality by neutron source multiplication method

    Sakurai, Kiyoshi; Suzaki, Takenori; Arakawa, Takuya; Naito, Yoshitaka

    1995-03-01

    Subcritical cores were constructed in the core tank of the TCA by arraying 2.6% enriched UO₂ fuel rods into n×n square lattices of 1.956 cm pitch. Vertical distributions of the neutron count rates for fifteen subcritical cores (n = 17, 16, 14, 11, 8) with different water levels were measured at 5 cm intervals with ²³⁵U micro-fission counters at in-core and out-of-core positions, with a ²⁵²Cf neutron source arranged near the core center. The continuous-energy Monte Carlo code MCNP-4A was used for the calculation of neutron multiplication factors and neutron count rates. In this study, the important conclusions are as follows: (1) Differences between the neutron multiplication factors resulting from the exponential experiment and from MCNP-4A are below 1% in most cases. (2) Standard deviations of neutron count rates calculated from MCNP-4A with 500000 histories are 5-8%. The calculated neutron count rates are consistent with the measured ones. (author)

  11. Network modelling methods for FMRI.

    Smith, Stephen M; Miller, Karla L; Salimi-Khorshidi, Gholamreza; Webster, Matthew; Beckmann, Christian F; Nichols, Thomas E; Ramsey, Joseph D; Woolrich, Mark W

    2011-01-15

    There is great interest in estimating brain "networks" from FMRI data. This is often attempted by identifying a set of functional "nodes" (e.g., spatial ROIs or ICA maps) and then conducting a connectivity analysis between the nodes, based on the FMRI timeseries associated with the nodes. Analysis methods range from very simple measures that consider just two nodes at a time (e.g., correlation between two nodes' timeseries) to sophisticated approaches that consider all nodes simultaneously and estimate one global network model (e.g., Bayes net models). Many different methods are being used in the literature, but almost none has been carefully validated or compared for use on FMRI timeseries data. In this work we generate rich, realistic simulated FMRI data for a wide range of underlying networks, experimental protocols and problematic confounds in the data, in order to compare different connectivity estimation approaches. Our results show that in general correlation-based approaches can be quite successful, methods based on higher-order statistics are less sensitive, and lag-based approaches perform very poorly. More specifically: there are several methods that can give high sensitivity to network connection detection on good quality FMRI data, in particular, partial correlation, regularised inverse covariance estimation and several Bayes net methods; however, accurate estimation of connection directionality is more difficult to achieve, though Patel's τ can be reasonably successful. With respect to the various confounds added to the data, the most striking result was that the use of functionally inaccurate ROIs (when defining the network nodes and extracting their associated timeseries) is extremely damaging to network estimation; hence, results derived from inappropriate ROI definition (such as via structural atlases) should be regarded with great caution. Copyright © 2010 Elsevier Inc. All rights reserved.
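
    One of the better-performing approaches in the comparison, partial correlation, is compact enough to sketch: invert the covariance of the node timeseries and normalize, which suppresses the indirect connections that plain correlation picks up. The toy network below is invented.

```python
import numpy as np

rng = np.random.default_rng(6)

# Simulated node timeseries with a chain of direct connections 0 -> 1 -> 2,
# so nodes 0 and 2 are only indirectly connected.
n_t, n_nodes = 500, 5
ts = rng.standard_normal((n_t, n_nodes))
ts[:, 1] += 0.8 * ts[:, 0]            # direct connection 0 - 1
ts[:, 2] += 0.8 * ts[:, 1]            # direct connection 1 - 2

# Partial correlation: rho_ij = -P_ij / sqrt(P_ii * P_jj), P = inverse covariance.
prec = np.linalg.inv(np.cov(ts, rowvar=False))
d = np.sqrt(np.diag(prec))
partial = -prec / np.outer(d, d)
np.fill_diagonal(partial, 1.0)

# Full correlation shows a 0-2 link; partial correlation suppresses it.
print("corr(0,2)    =", np.round(np.corrcoef(ts, rowvar=False)[0, 2], 2))
print("partial(0,2) =", np.round(partial[0, 2], 2))
print("partial(0,1) =", np.round(partial[0, 1], 2))
```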

  12. A method of risk assessment for a multi-plant site

    White, R.F.

    1983-06-01

    A model is presented which can be used in conjunction with probabilistic risk assessment to estimate whether a site on which there are several plants (reactors or chemical plants containing radioactive materials) meets whatever risk acceptance criteria or numerical risk guidelines are applied at the time of the assessment in relation to various groups of people and for various sources of risk. The application of the multi-plant site model to the direct and inverse methods of risk assessment is described. A method is proposed by which the potential hazard rating associated with a given plant can be quantified so that an appropriate allocation can be made when assessing the risks associated with each of the plants on a site. (author)

  13. Statistical tests for equal predictive ability across multiple forecasting methods

    Borup, Daniel; Thyrsgaard, Martin

    We develop a multivariate generalization of the Giacomini-White tests for equal conditional predictive ability. The tests are applicable to a mixture of nested and non-nested models, incorporate estimation uncertainty explicitly, and allow for misspecification of the forecasting model as well as ...

  14. Dynamical properties of the growing continuum using multiple-scale method

    Hynčík L.

    2008-12-01

    Full Text Available The theory of growth and remodeling is applied to a 1D continuum, which can serve, e.g., as a model of a muscle fibre or a piezoelectric stack. A hyperelastic material described by the free-energy potential suggested by Fung is used, and the change of stiffness is taken into account. The corresponding equations define a dynamical system with two degrees of freedom. Its stability and the properties of its bifurcations are studied using the multiple-scale method. Conditions are given under which a degenerate Hopf bifurcation occurs.

  15. A feature point identification method for positron emission particle tracking with multiple tracers

    Wiggins, Cody, E-mail: cwiggin2@vols.utk.edu [University of Tennessee-Knoxville, Department of Physics and Astronomy, 1408 Circle Drive, Knoxville, TN 37996 (United States); Santos, Roque [University of Tennessee-Knoxville, Department of Nuclear Engineering (United States); Escuela Politécnica Nacional, Departamento de Ciencias Nucleares (Ecuador); Ruggles, Arthur [University of Tennessee-Knoxville, Department of Nuclear Engineering (United States)

    2017-01-21

    A novel detection algorithm for Positron Emission Particle Tracking (PEPT) with multiple tracers based on optical feature point identification (FPI) methods is presented. This new method, the FPI method, is compared to a previous multiple PEPT method via analyses of experimental and simulated data. The FPI method outperforms the older method in cases of large particle numbers and fine time resolution. Simulated data show the FPI method to be capable of identifying 100 particles at 0.5 mm average spatial error. Detection error is seen to vary with the inverse square root of the number of lines of response (LORs) used for detection and increases as particle separation decreases. - Highlights: • A new approach to positron emission particle tracking is presented. • Using optical feature point identification analogs, multiple particle tracking is achieved. • Method is compared to previous multiple particle method. • Accuracy and applicability of method is explored.
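
    A quick way to see the reported inverse-square-root error scaling is to fit a power law to error-versus-LOR-count data; the sketch below uses invented numbers, not the authors' measurements.

        import numpy as np

        n_lor = np.array([100, 200, 400, 800, 1600], dtype=float)
        err_mm = np.array([1.6, 1.15, 0.82, 0.57, 0.40])   # hypothetical errors (mm)

        # Fit err = a * n_lor**b in log space; b near -0.5 supports the scaling.
        b, log_a = np.polyfit(np.log(n_lor), np.log(err_mm), 1)
        print(f"fitted exponent b = {b:.2f} (inverse-sqrt scaling gives -0.5)")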

  16. Multiple-time-stepping generalized hybrid Monte Carlo methods

    Escribano, Bruno, E-mail: bescribano@bcamath.org [BCAM—Basque Center for Applied Mathematics, E-48009 Bilbao (Spain); Akhmatskaya, Elena [BCAM—Basque Center for Applied Mathematics, E-48009 Bilbao (Spain); IKERBASQUE, Basque Foundation for Science, E-48013 Bilbao (Spain); Reich, Sebastian [Universität Potsdam, Institut für Mathematik, D-14469 Potsdam (Germany); Azpiroz, Jon M. [Kimika Fakultatea, Euskal Herriko Unibertsitatea (UPV/EHU) and Donostia International Physics Center (DIPC), P.K. 1072, Donostia (Spain)

    2015-01-01

    Performance of the generalized shadow hybrid Monte Carlo (GSHMC) method [1], which proved to be superior in sampling efficiency over its predecessors [2–4], molecular dynamics and hybrid Monte Carlo, can be further improved by combining it with multiple time stepping (MTS) and mollification of slow forces. We demonstrate that these comparatively simple modifications not only lead to better performance of GSHMC itself but also allow it to beat the best-performing methods that use similar force-splitting schemes. In addition we show that the same ideas can be successfully applied to the conventional generalized hybrid Monte Carlo method (GHMC). The resulting methods, MTS-GHMC and MTS-GSHMC, provide accurate reproduction of thermodynamic and dynamical properties, exact temperature control during simulation, and computational robustness and efficiency. MTS-GHMC uses a generalized momentum update to achieve weak stochastic stabilization of the molecular dynamics (MD) integrator. MTS-GSHMC adds the use of a shadow (modified) Hamiltonian to filter the MD trajectories in the HMC scheme. We introduce a new shadow Hamiltonian formulation adapted to force-splitting methods. The use of such Hamiltonians improves the acceptance rate of trajectories and has a strong impact on the sampling efficiency of the method. Both methods were implemented in the open-source MD package ProtoMol and were tested on a water system and a protein system. Results were compared to those obtained using the Langevin Molly (LM) method [5] on the same systems. The test results demonstrate the superiority of the new methods over LM in terms of stability, accuracy, and sampling efficiency. This suggests that placing the MTS approach within the hybrid Monte Carlo framework, and using the natural stochasticity offered by the generalized hybrid Monte Carlo method, improves the stability of MTS and allows larger step sizes in the simulation of complex systems.

  17. Hybrid MCDA Methods to Integrate Multiple Ecosystem Services in Forest Management Planning: A Critical Review.

    Uhde, Britta; Hahn, W Andreas; Griess, Verena C; Knoke, Thomas

    2015-08-01

    Multi-criteria decision analysis (MCDA) is a decision aid frequently used in the field of forest management planning. It includes the evaluation of multiple criteria such as the production of timber and non-timber forest products and tangible as well as intangible values of ecosystem services (ES). Hence, it is beneficial compared to methods that take a purely financial perspective. Accordingly, MCDA methods are increasingly popular in the wide field of sustainability assessment. Hybrid approaches allow MCDA to be aggregated with, potentially, other decision-making techniques to make use of their individual benefits, leading to a more holistic view of the actual consequences that come with certain decisions. This review provides a comprehensive overview of hybrid approaches used in forest management planning. Today, the scientific world faces increasing challenges regarding the evaluation of ES and the trade-offs between them, for example between provisioning and regulating services. As the preferences of multiple stakeholders are essential to improve the decision process in multi-purpose forestry, participatory and hybrid approaches turn out to be of particular importance. Accordingly, hybrid methods show great potential for becoming highly relevant in future decision making. Based on the review presented here, the development of models for use in planning processes should focus on participatory modeling and the consideration of uncertainty regarding available information.

  18. Hybrid MCDA Methods to Integrate Multiple Ecosystem Services in Forest Management Planning: A Critical Review

    Uhde, Britta; Andreas Hahn, W.; Griess, Verena C.; Knoke, Thomas

    2015-08-01

    Multi-criteria decision analysis (MCDA) is a decision aid frequently used in the field of forest management planning. It includes the evaluation of multiple criteria such as the production of timber and non-timber forest products and tangible as well as intangible values of ecosystem services (ES). Hence, it is beneficial compared to methods that take a purely financial perspective. Accordingly, MCDA methods are increasingly popular in the wide field of sustainability assessment. Hybrid approaches allow MCDA to be aggregated with, potentially, other decision-making techniques to make use of their individual benefits, leading to a more holistic view of the actual consequences that come with certain decisions. This review provides a comprehensive overview of hybrid approaches used in forest management planning. Today, the scientific world faces increasing challenges regarding the evaluation of ES and the trade-offs between them, for example between provisioning and regulating services. As the preferences of multiple stakeholders are essential to improve the decision process in multi-purpose forestry, participatory and hybrid approaches turn out to be of particular importance. Accordingly, hybrid methods show great potential for becoming highly relevant in future decision making. Based on the review presented here, the development of models for use in planning processes should focus on participatory modeling and the consideration of uncertainty regarding available information.

  19. Challenges in LCA modelling of multiple loops for aluminium cans

    Niero, Monia; Olsen, Stig Irving

    considered the case of closed-loop recycling for aluminium cans, where body and lid are different alloys, and discussed the abovementioned challenge. The Life Cycle Inventory (LCI) modelling of aluminium processes is traditionally based on a pure aluminium flow, therefore neglecting the presence of alloying...... elements. We included the effect of alloying elements on the LCA modelling of aluminium can recycling. First, we performed a mass balance of the main alloying elements (Mn, Fe, Si, Cu) in aluminium can recycling at increasing levels of recycling rate. The analysis distinguished between different aluminium...... packaging scrap sources (i.e. used beverage can and mixed aluminium packaging) to understand the limiting factors for multiple loop aluminium can recycling. Secondly, we performed a comparative LCA of aluminium can production and recycling in multiple loops considering the two aluminium packaging scrap...

  20. A mixed methods study of multiple health behaviors among individuals with stroke

    Matthew Plow

    2017-05-01

    Full Text Available Background Individuals with stroke often have multiple cardiovascular risk factors that necessitate promoting engagement in multiple health behaviors. However, observational studies of individuals with stroke have typically focused on promoting a single health behavior. Thus, there is a poor understanding of linkages between healthy behaviors and the circumstances in which factors, such as stroke impairments, may influence a single or multiple health behaviors. Methods We conducted a mixed methods convergent parallel study of 25 individuals with stroke to examine the relationships between stroke impairments and physical activity, sleep, and nutrition. Our goal was to gain further insight into possible strategies to promote multiple health behaviors among individuals with stroke. This study focused on physical activity, sleep, and nutrition because of their importance in achieving energy balance, maintaining a healthy weight, and reducing cardiovascular risks. Qualitative and quantitative data were collected concurrently, with the former being prioritized over the latter. Qualitative data were prioritized in order to develop a conceptual model of engagement in multiple health behaviors among individuals with stroke. Qualitative and quantitative data were analyzed independently and then were integrated during the inference stage to develop meta-inferences. The 25 individuals with stroke completed closed-ended questionnaires on healthy behaviors and physical function. They also participated in face-to-face focus groups and one-to-one phone interviews. Results We found statistically significant and moderate correlations between hand function and healthy eating habits (r = 0.45), sleep disturbances and limitations in activities of daily living (r = −0.55), BMI and limitations in activities of daily living (r = −0.49), physical activity and limitations in activities of daily living (r = 0.41), and mobility impairments and BMI (r = −0.41)

  1. Application of multiple timestep integration method in SSC

    Guppy, J.G.

    1979-01-01

    The thermohydraulic transient simulation of an entire LMFBR system is, by its very nature, complex. Physically, the plant consists of many subsystems which are coupled by various processes and/or components, and the characteristic integration timesteps for these processes/components can vary over a wide range. To improve computing efficiency, a multiple timestep scheme (MTS) approach has been used in the development of the Super System Code (SSC). In this paper: (1) the partitioning of the system and the timestep control are described, and (2) results are presented showing that the MTS reduces computer running time by as much as a factor of five compared with a single timestep scheme
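
    The following toy sketch conveys the subcycling idea behind a multiple timestep scheme (all dynamics and names are illustrative, not taken from SSC): a fast subsystem is advanced with a small step several times per step of a slow subsystem, and the two exchange coupling data only at the slow step.

        # Minimal multiple-timestep (MTS) sketch with two coupled toy subsystems.
        def advance_fast(x, u, dt):
            return x + dt * (-10.0 * x + u)       # stiff/fast subsystem

        def advance_slow(y, x, dt):
            return y + dt * (-0.1 * y + 0.5 * x)  # slow subsystem driven by x

        x, y = 1.0, 0.0
        dt_slow, n_sub = 0.1, 10                  # fast step = dt_slow / n_sub
        for _ in range(100):                      # 10 s of simulated time
            for _ in range(n_sub):                # subcycle the fast physics
                x = advance_fast(x, y, dt_slow / n_sub)
            y = advance_slow(y, x, dt_slow)       # one slow step per outer loop
        print(x, y)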

  2. Multiple Beta Spectrum Analysis Method Based on Spectrum Fitting

    Lee, Uk Jae; Jung, Yun Song; Kim, Hee Reyoung [UNIST, Ulsan (Korea, Republic of)

    2016-05-15

    When a sample of several mixed radioactive nuclides is measured, it is difficult to separate the nuclides because their spectra overlap. For this reason, a simple mathematical method for spectrum analysis of a mixed beta-ray source has been studied, but existing work needed a more accurate spectral analysis method because of accuracy problems. This study describes methods for separating the components of a mixed beta-ray source through analysis of the beta spectrum slope based on curve fitting, to resolve the existing problem. Among the fitting methods considered (Fourier, polynomial, Gaussian, and sum of sines), the sum-of-sines fit was found to be the best for obtaining an equation for the distribution of the mixed beta spectrum, and it was shown to be the most appropriate for analyzing spectra with various ratios of mixed nuclides. This method could be applied to rapid spectrum analysis of mixed beta-ray sources.
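
    A minimal sketch of the sum-of-sines fitting idea (the two-term model and the synthetic spectrum below are illustrative, not the authors' data):

        import numpy as np
        from scipy.optimize import curve_fit

        def sum_of_sines(E, a1, b1, c1, a2, b2, c2):
            return a1 * np.sin(b1 * E + c1) + a2 * np.sin(b2 * E + c2)

        E = np.linspace(0.01, 1.5, 200)                       # energy grid (MeV)
        spectrum = sum_of_sines(E, 1.0, 2.0, 0.3, 0.5, 5.0, 1.0)
        spectrum += 0.02 * np.random.default_rng(1).standard_normal(E.size)

        p0 = [1, 2, 0, 0.5, 5, 1]                             # rough initial guess
        params, _ = curve_fit(sum_of_sines, E, spectrum, p0=p0)
        print(np.round(params, 2))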

  3. Statistical Genetics Methods for Localizing Multiple Breast Cancer Genes

    Ott, Jurg

    1998-01-01

    .... For a number of variables measured on a trait, a method, principal components of heritability, was developed that combines these variables in such a way that the resulting linear combination has highest heritability...

  4. Dealing with Multiple Solutions in Structural Vector Autoregressive Models.

    Beltz, Adriene M; Molenaar, Peter C M

    2016-01-01

    Structural vector autoregressive models (VARs) hold great potential for psychological science, particularly for time series data analysis. They capture the magnitude, direction of influence, and temporal (lagged and contemporaneous) nature of relations among variables. Unified structural equation modeling (uSEM) is an optimal structural VAR instantiation, according to large-scale simulation studies, and it is implemented within an SEM framework. However, little is known about the uniqueness of uSEM results. Thus, the goal of this study was to investigate whether multiple solutions result from uSEM analysis and, if so, to demonstrate ways to select an optimal solution. This was accomplished with two simulated data sets, an empirical data set concerning children's dyadic play, and modifications to the group iterative multiple model estimation (GIMME) program, which implements uSEMs with group- and individual-level relations in a data-driven manner. Results revealed multiple solutions when there were large contemporaneous relations among variables. Results also verified several ways to select the correct solution when the complete solution set was generated, such as the use of cross-validation, maximum standardized residuals, and information criteria. This work has immediate and direct implications for the analysis of time series data and for the inferences drawn from those data concerning human behavior.

  5. Validation and calibration of structural models that combine information from multiple sources.

    Dahabreh, Issa J; Wong, John B; Trikalinos, Thomas A

    2017-02-01

    Mathematical models that attempt to capture structural relationships between their components and combine information from multiple sources are increasingly used in medicine. Areas covered: We provide an overview of methods for model validation and calibration and survey studies comparing alternative approaches. Expert commentary: Model validation entails a confrontation of models with data, background knowledge, and other models, and can inform judgments about model credibility. Calibration involves selecting parameter values to improve the agreement of model outputs with data. When the goal of modeling is quantitative inference on the effects of interventions or forecasting, calibration can be viewed as estimation. This view clarifies issues related to parameter identifiability and facilitates formal model validation and the examination of consistency among different sources of information. In contrast, when the goal of modeling is the generation of qualitative insights about the modeled phenomenon, calibration is a rather informal process for selecting inputs that result in model behavior that roughly reproduces select aspects of the modeled phenomenon and cannot be equated to an estimation procedure. Current empirical research on validation and calibration methods consists primarily of methodological appraisals or case-studies of alternative techniques and cannot address the numerous complex and multifaceted methodological decisions that modelers must make. Further research is needed on different approaches for developing and validating complex models that combine evidence from multiple sources.

  6. Method of estimating the leakage of multiple barriers in a radioactive materials shipping package

    Towell, R.H.; Kapoor, A.; Oras, J.J.

    1997-01-01

    This paper presents the results of a theoretical study of the performance of multiple leaky barriers in containing radioactive materials in a shipping package. The methods used are reasoned analysis and finite element modeling of the barriers. The finite element model is developed and evaluated with parameters set to bracket 6M configurations with three to six nested plastic jars, food-pack cans, and plastic bags inside Department of Transportation (DOT) Specification 2R inner containers with pipe thread closures. The results show that nested barriers reach the regulatory limit of 1×10−6 A2/hr in 11 to 52 days, even though individually the barriers would exceed the regulatory limit by a factor of as much as 370 instantaneously. These times are within normal shipping times. The finite element model is conservative because it does not consider the deposition and sticking of the leaking radioactive material on the surfaces inside each boundary
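
    The delay produced by nesting leaky barriers can be illustrated with a toy well-mixed compartment model (invented rates, not the paper's finite element model): each barrier passes a fraction of its inventory per day, so escape to the outside is substantially delayed relative to any single barrier.

        import numpy as np

        leak = np.array([0.05, 0.04, 0.03, 0.02])  # per-day fractional leak rates
        q = np.zeros(leak.size + 1)
        q[0] = 1.0                                 # all activity starts innermost
        dt = 0.01                                  # timestep (days)

        for step in range(1, 40_001):              # integrate 400 days
            flow = leak * q[:-1]                   # barrier i passes leak[i]*q[i]/day
            q[:-1] -= flow * dt
            q[1:] += flow * dt
            if step % 10_000 == 0:
                print(f"day {step * dt:5.0f}: escaped fraction = {q[-1]:.4f}")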

  7. TODIM Method for Single-Valued Neutrosophic Multiple Attribute Decision Making

    Dong-Sheng Xu

    2017-10-01

    Full Text Available Recently, the TODIM method has been used to solve multiple attribute decision making (MADM) problems. Single-valued neutrosophic sets (SVNSs) are useful tools to depict the uncertainty of MADM. In this paper, we extend the TODIM method to MADM with single-valued neutrosophic numbers (SVNNs). Firstly, the definition, comparison, and distance of SVNNs are briefly presented, and the steps of the classical TODIM method for MADM problems are introduced. Then, the extended classical TODIM method is proposed to deal with MADM problems with SVNNs; its significant characteristic is that it can fully consider the decision makers' bounded rationality, a real feature of decision making. Furthermore, we extend the proposed model to interval neutrosophic sets (INSs). Finally, a numerical example is provided.

  8. Micro-macro multilevel latent class models with multiple discrete individual-level variables

    Bennink, M.; Croon, M.A.; Kroon, B.; Vermunt, J.K.

    2016-01-01

    An existing micro-macro method for a single individual-level variable is extended to the multivariate situation by presenting two multilevel latent class models in which multiple discrete individual-level variables are used to explain a group-level outcome. As in the univariate case, the

  9. Compacton solutions and multiple compacton solutions for a continuum Toda lattice model

    Fan Xinghua; Tian Lixin

    2006-01-01

    Some special solutions of the Toda lattice model with a transversal degree of freedom are obtained. With the aid of Mathematica and the Wu elimination method, explicit solitary wave solutions, including compacton solutions, multiple compacton solutions, peakon solutions, as well as periodic solutions, are found in this paper

  10. Estimating Ambiguity Preferences and Perceptions in Multiple Prior Models: Evidence from the Field

    S.G. Dimmock (Stephen); R.R.P. Kouwenberg (Roy); O.S. Mitchell (Olivia); K. Peijnenburg (Kim)

    2015-01-01

    We develop a tractable method to estimate multiple prior models of decision-making under ambiguity. In a representative sample of the U.S. population, we measure ambiguity attitudes in the gain and loss domains. We find that ambiguity aversion is common for uncertain events of

  11. Inference regarding multiple structural changes in linear models with endogenous regressors

    Boldea, O.; Hall, A.R.; Han, S.

    2012-01-01

    This paper considers the linear model with endogenous regressors and multiple changes in the parameters at unknown times. It is shown that minimization of a Generalized Method of Moments criterion yields inconsistent estimators of the break fractions, but minimization of the Two Stage Least Squares

  12. Testing Mediation Using Multiple Regression and Structural Equation Modeling Analyses in Secondary Data

    Li, Spencer D.

    2011-01-01

    Mediation analysis in child and adolescent development research is possible using large secondary data sets. This article provides an overview of two statistical methods commonly used to test mediated effects in secondary analysis: multiple regression and structural equation modeling (SEM). Two empirical studies are presented to illustrate the…

  13. Automatic Generation of 3D Building Models with Multiple Roofs

    Kenichi Sugihara; Yoshitugu Hayashi

    2008-01-01

    Based on building footprints (building polygons) on digital maps, we are proposing the GIS and CG integrated system that automatically generates 3D building models with multiple roofs. Most building polygons' edges meet at right angles (orthogonal polygon). The integrated system partitions orthogonal building polygons into a set of rectangles and places rectangular roofs and box-shaped building bodies on these rectangles. In order to partition an orthogonal polygon, we proposed a useful polygon expression in deciding from which vertex a dividing line is drawn. In this paper, we propose a new scheme for partitioning building polygons and show the process of creating 3D roof models.

  14. Hierarchical Multiple Markov Chain Model for Unsupervised Texture Segmentation

    Scarpa, G.; Gaetano, R.; Haindl, Michal; Zerubia, J.

    2009-01-01

    Roč. 18, č. 8 (2009), s. 1830-1843 ISSN 1057-7149 R&D Projects: GA ČR GA102/08/0593 EU Projects: European Commission(XE) 507752 - MUSCLE Institutional research plan: CEZ:AV0Z10750506 Keywords : Classification * texture analysis * segmentation * hierarchical image models * Markov process Subject RIV: BD - Theory of Information Impact factor: 2.848, year: 2009 http://library.utia.cas.cz/separaty/2009/RO/haindl-hierarchical multiple markov chain model for unsupervised texture segmentation.pdf

  15. Direct integration multiple collision integral transport analysis method for high energy fusion neutronics

    Koch, K.R.

    1985-01-01

    A new analysis method specially suited for the inherent difficulties of fusion neutronics was developed to provide detailed studies of the fusion neutron transport physics. These studies should provide a better understanding of the limitations and accuracies of typical fusion neutronics calculations. The new analysis method is based on the direct integration of the integral form of the neutron transport equation and employs a continuous energy formulation with the exact treatment of the energy angle kinematics of the scattering process. In addition, the overall solution is analyzed in terms of uncollided, once-collided, and multi-collided solution components based on a multiple collision treatment. Furthermore, the numerical evaluations of integrals use quadrature schemes that are based on the actual dependencies exhibited in the integrands. The new DITRAN computer code was developed on the Cyber 205 vector supercomputer to implement this direct integration multiple-collision fusion neutronics analysis. Three representative fusion reactor models were devised and the solutions to these problems were studied to provide suitable choices for the numerical quadrature orders as well as the discretized solution grid and to understand the limitations of the new analysis method. As further verification and as a first step in assessing the accuracy of existing fusion-neutronics calculations, solutions obtained using the new analysis method were compared to typical multigroup discrete ordinates calculations

  16. Identification of Multiple-Mode Linear Models Based on Particle Swarm Optimizer with Cyclic Network Mechanism

    Tae-Hyoung Kim

    2017-01-01

    Full Text Available This paper studies the metaheuristic optimizer-based direct identification of a multiple-mode system consisting of a finite set of linear regression representations of subsystems. To this end, the concept of a multiple-mode linear regression model is first introduced, and its identification issues are established. A method for reducing the identification problem for multiple-mode models to an optimization problem is also described in detail. Then, to overcome the difficulties that arise because the formulated optimization problem is inherently ill-conditioned and nonconvex, the cyclic-network-topology-based constrained particle swarm optimizer (CNT-CPSO is introduced, and a concrete procedure for the CNT-CPSO-based identification methodology is developed. This scheme requires no prior knowledge of the mode transitions between subsystems and, unlike some conventional methods, can handle a large amount of data without difficulty during the identification process. This is one of the distinguishing features of the proposed method. The paper also considers an extension of the CNT-CPSO-based identification scheme that makes it possible to simultaneously obtain both the optimal parameters of the multiple submodels and a certain decision parameter involved in the mode transition criteria. Finally, an experimental setup using a DC motor system is established to demonstrate the practical usability of the proposed metaheuristic optimizer-based identification scheme for developing a multiple-mode linear regression model.
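
    The sketch below conveys the optimizer-based identification idea with a basic global-best PSO fitting a two-mode linear regression (the data, the mode structure, and the star topology are illustrative; the paper's CNT-CPSO uses a cyclic network topology and constraint handling).

        import numpy as np

        rng = np.random.default_rng(2)
        x = rng.uniform(0, 10, 300)
        y = np.where(x < 4.0, 2.0 * x, -1.0 * x + 12.0)   # true modes, switch at 4
        y += 0.1 * rng.standard_normal(x.size)

        def cost(p):                                      # p = [a1, b1, a2, b2, s]
            a1, b1, a2, b2, s = p
            pred = np.where(x < s, a1 * x + b1, a2 * x + b2)
            return np.mean((y - pred) ** 2)

        n, dim = 40, 5
        pos = rng.uniform([-5, -5, -5, -5, 1], [5, 5, 5, 15, 9], (n, dim))
        vel = np.zeros((n, dim))
        pbest, pbest_c = pos.copy(), np.array([cost(p) for p in pos])
        for _ in range(200):
            gbest = pbest[np.argmin(pbest_c)]
            r1, r2 = rng.random((n, dim)), rng.random((n, dim))
            vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
            pos += vel
            c = np.array([cost(p) for p in pos])
            better = c < pbest_c
            pbest[better], pbest_c[better] = pos[better], c[better]
        # Best particle should approach [2, 0, -1, 12, 4].
        print(np.round(pbest[np.argmin(pbest_c)], 2))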

  17. On the thermoluminescent interactive multiple-trap system (IMTS) model: is it a simple model?

    Gil T, M. I.; Perez C, L.; Cruz Z, E.; Furetta, C.; Roman L, J.

    2016-10-01

    In the thermally stimulated luminescence phenomenon, known as thermoluminescence (Tl), electrons and holes generated by the radiation-matter interaction can be trapped at metastable levels in the band gap of the solid. Subsequently, an electron can be thermally released into the conduction band and recombine radiatively with a hole at a recombination center, emitting the glow curve. The underlying trapping and thermal release mechanisms in the band gap are, however, complex. Some models, such as first-, second- and general-order kinetics, are well established to explain the behaviour of glow curves and the underlying defect recombination mechanisms. In this work, expressions for an Interactive Multiple-Trap System (IMTS) model were obtained assuming a set of discrete electron traps (active traps, At), another set of thermally disconnected traps (TDT), and a recombination center (Rc). A numerical analysis based on the Levenberg-Marquardt method in conjunction with an implicit Rosenbrock method was used to simulate the glow curve. The numerical method was tested on synthetic Tl glow curves for a wide range of trap parameters. The activation energy and kinetic order were determined using values from the General Order Kinetics (GOK) model as input data to the IMTS model. The model was tested using experimental glow curves obtained from Ce- or Eu-doped MgF2(LiF) polycrystal samples. The results showed that the IMTS model predicts the behavior of the Tl glow curves more accurately than the GOK model modified by Rasheedy and the Mixed Order Kinetics model. (Author)

  18. On the thermoluminescent interactive multiple-trap system (IMTS) model: is it a simple model?

    Gil T, M. I.; Perez C, L. [UNAM, Facultad de Quimica, Ciudad Universitaria, 04510 Ciudad de Mexico (Mexico); Cruz Z, E.; Furetta, C.; Roman L, J., E-mail: ecruz@nucleares.unam.mx [UNAM, Instituto de Ciencias Nucleares, Ciudad Universitaria, 04510 Ciudad de Mexico (Mexico)

    2016-10-15

    In the thermally stimulated luminescence phenomenon, known as thermoluminescence (Tl), electrons and holes generated by the radiation-matter interaction can be trapped at metastable levels in the band gap of the solid. Subsequently, an electron can be thermally released into the conduction band and recombine radiatively with a hole at a recombination center, emitting the glow curve. The underlying trapping and thermal release mechanisms in the band gap are, however, complex. Some models, such as first-, second- and general-order kinetics, are well established to explain the behaviour of glow curves and the underlying defect recombination mechanisms. In this work, expressions for an Interactive Multiple-Trap System (IMTS) model were obtained assuming a set of discrete electron traps (active traps, At), another set of thermally disconnected traps (TDT), and a recombination center (Rc). A numerical analysis based on the Levenberg-Marquardt method in conjunction with an implicit Rosenbrock method was used to simulate the glow curve. The numerical method was tested on synthetic Tl glow curves for a wide range of trap parameters. The activation energy and kinetic order were determined using values from the General Order Kinetics (GOK) model as input data to the IMTS model. The model was tested using experimental glow curves obtained from Ce- or Eu-doped MgF2(LiF) polycrystal samples. The results showed that the IMTS model predicts the behavior of the Tl glow curves more accurately than the GOK model modified by Rasheedy and the Mixed Order Kinetics model. (Author)

  19. THE METHOD OF MULTIPLE SPATIAL PLANNING BASIC MAP

    C. Zhang

    2018-04-01

    Full Text Available The “Provincial Space Plan Pilot Program” issued in December 2016 called for integrating the existing space management and control information platforms of the various departments into a spatial planning information management platform that unifies basic data, target indicators, spatial coordinates, and technical specifications. Such a platform provides decision support for plan preparation; digital monitoring and evaluation of plan implementation; parallel review and approval of investment projects, including military construction projects, by the departments involved in space management and control; and improved efficiency of administrative approval. The spatial planning system should delimit control boundaries for the development of production, living, and ecological space, and enforce use controls. On the one hand, the functional orientation of each kind of planning space must be clarified; on the other, “multi-compliance” among the various spatial plans must be achieved. Integrating multiple spatial plans requires a unified, standardized basic map (geographic database and technical specification) to delimit the three space types (urban, agricultural, ecological) and to provide technical support for refining the space control zoning of the relevant plans. The article analyzes the main technical problems of the spatial datum, the land-use classification standards, base map preparation, and the basic planning platform. The preparation of the spatial planning base map from the results of the geographic conditions census is illustrated with pilot applications in Heilongjiang and Hainan combined with “multiple plans integration”.

  20. The Method of Multiple Spatial Planning Basic Map

    Zhang, C.; Fang, C.

    2018-04-01

    The "Provincial Space Plan Pilot Program" issued in December 2016 pointed out that the existing space management and control information management platforms of various departments were integrated, and a spatial planning information management platform was established to integrate basic data, target indicators, space coordinates, and technical specifications. The planning and preparation will provide supportive decision support, digital monitoring and evaluation of the implementation of the plan, implementation of various types of investment projects and space management and control departments involved in military construction projects in parallel to approve and approve, and improve the efficiency of administrative approval. The space planning system should be set up to delimit the control limits for the development of production, life and ecological space, and the control of use is implemented. On the one hand, it is necessary to clarify the functional orientation between various kinds of planning space. On the other hand, it is necessary to achieve "multi-compliance" of various space planning. Multiple spatial planning intergration need unified and standard basic map(geographic database and technical specificaton) to division of urban, agricultural, ecological three types of space and provide technical support for the refinement of the space control zoning for the relevant planning. The article analysis the main space datum, the land use classification standards, base map planning, planning basic platform main technical problems. Based on the geographic conditions, the results of the census preparation of spatial planning map, and Heilongjiang, Hainan many rules combined with a pilot application.

  1. A mixed methods study of multiple health behaviors among individuals with stroke.

    Plow, Matthew; Moore, Shirley M; Sajatovic, Martha; Katzan, Irene

    2017-01-01

    Individuals with stroke often have multiple cardiovascular risk factors that necessitate promoting engagement in multiple health behaviors. However, observational studies of individuals with stroke have typically focused on promoting a single health behavior. Thus, there is a poor understanding of linkages between healthy behaviors and the circumstances in which factors, such as stroke impairments, may influence a single or multiple health behaviors. We conducted a mixed methods convergent parallel study of 25 individuals with stroke to examine the relationships between stroke impairments and physical activity, sleep, and nutrition. Our goal was to gain further insight into possible strategies to promote multiple health behaviors among individuals with stroke. This study focused on physical activity, sleep, and nutrition because of their importance in achieving energy balance, maintaining a healthy weight, and reducing cardiovascular risks. Qualitative and quantitative data were collected concurrently, with the former being prioritized over the latter. Qualitative data were prioritized in order to develop a conceptual model of engagement in multiple health behaviors among individuals with stroke. Qualitative and quantitative data were analyzed independently and then were integrated during the inference stage to develop meta-inferences. The 25 individuals with stroke completed closed-ended questionnaires on healthy behaviors and physical function. They also participated in face-to-face focus groups and one-to-one phone interviews. We found statistically significant and moderate correlations between hand function and healthy eating habits (r = 0.45), sleep disturbances and limitations in activities of daily living (r = −0.55), BMI and limitations in activities of daily living (r = −0.49), physical activity and limitations in activities of daily living (r = 0.41), mobility impairments and BMI (r = −0.41), sleep disturbances and physical

  2. Comparison of multiple gene assembly methods for metabolic engineering

    Chenfeng Lu; Karen Mansoorabadi; Thomas Jeffries

    2007-01-01

    A universal, rapid DNA assembly method for efficient multigene plasmid construction is important for biological research and for optimizing gene expression in industrial microbes. Three different approaches to achieve this goal were evaluated. These included creating long complementary extensions using a uracil-DNA glycosylase technique, overlap extension polymerase...

  3. Multiple attribute decision making model and application to food safety risk evaluation.

    Lihua Ma

    Full Text Available Decision making for supermarket food purchases is characterized by network relationships. This paper analyzes factors that influence supermarket food selection and proposes a supplier evaluation index system based on the whole process of food production. The authors establish an intuitionistic interval-valued fuzzy set evaluation model based on the characteristics of the network relationships among decision makers, and validate it in a multiple attribute decision making case study. The proposed model thus provides a reliable, accurate method for multiple attribute decision making.

  4. Simulation of Cavity Flow by the Lattice Boltzmann Method using Multiple-Relaxation-Time scheme

    Ryu, Seung Yeob; Kang, Ha Nok; Seo, Jae Kwang; Yun, Ju Hyeon; Zee, Sung Quun

    2006-01-01

    Recently, the lattice Boltzmann method (LBM) has gained much attention for its ability to simulate fluid flows, and for its potential advantages over conventional CFD methods. The key advantages of LBM are (1) suitability for parallel computations, (2) absence of the need to solve the time-consuming Poisson equation for pressure, and (3) the ease with which multiphase flows, complex geometries and interfacial dynamics may be treated. The LBM using a relaxation technique was introduced by Higuera and Jimenez to overcome some drawbacks of lattice gas automata (LGA) such as large statistical noise, a limited range of physical parameters, non-Galilean invariance, and implementation difficulty in three-dimensional problems. The simplest LBM is the lattice Bhatnagar-Gross-Krook (LBGK) equation, which is based on a single-relaxation-time (SRT) approximation. Due to its extreme simplicity, the LBGK equation has become the most popular lattice Boltzmann model in spite of its well-known deficiencies, for example in simulating high-Reynolds-number flows. The multiple-relaxation-time (MRT) LBM was originally developed by d'Humieres. Lallemand and Luo suggest that MRT models are much more stable than LBGK, because the different relaxation times can be individually tuned to achieve 'optimal' stability. A lid-driven cavity flow is selected as the test problem because it is geometrically simple yet contains geometrically singular points in the flow. Results are compared with those using the SRT and MRT models in the LBGK method and with previous simulation data using the Navier-Stokes equations for the same flow conditions. In summary, the LBM using the MRT model introduces much weaker spatial oscillations near geometrical singular points than the SRT model, which is important for the successful simulation of higher-Reynolds-number flows
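
    A minimal sketch of one MRT collision step on a D2Q9 lattice follows. The transform matrix M is the standard Gram-Schmidt one attributed to d'Humieres et al.; the relaxation rates are illustrative, and the equilibrium moments are obtained here by transforming the usual BGK equilibrium, a common and self-consistent choice rather than the paper's exact setup.

        import numpy as np

        c = np.array([[0,0],[1,0],[0,1],[-1,0],[0,-1],[1,1],[-1,1],[-1,-1],[1,-1]])
        w = np.array([4/9] + [1/9]*4 + [1/36]*4)

        M = np.array([
            [ 1, 1, 1, 1, 1, 1, 1, 1, 1],   # density
            [-4,-1,-1,-1,-1, 2, 2, 2, 2],   # energy
            [ 4,-2,-2,-2,-2, 1, 1, 1, 1],   # energy squared
            [ 0, 1, 0,-1, 0, 1,-1,-1, 1],   # x momentum
            [ 0,-2, 0, 2, 0, 1,-1,-1, 1],   # x energy flux
            [ 0, 0, 1, 0,-1, 1, 1,-1,-1],   # y momentum
            [ 0, 0,-2, 0, 2, 1, 1,-1,-1],   # y energy flux
            [ 0, 1,-1, 1,-1, 0, 0, 0, 0],   # diagonal stress
            [ 0, 0, 0, 0, 0, 1,-1, 1,-1]])  # off-diagonal stress
        Minv = np.linalg.inv(M.astype(float))
        # Per-moment relaxation rates; conserved moments (rho, jx, jy) get 0.
        S = np.diag([0, 1.4, 1.4, 0, 1.2, 0, 1.2, 1.9, 1.9])

        def f_eq(rho, u):
            cu = c @ u
            return w * rho * (1 + 3*cu + 4.5*cu**2 - 1.5*(u @ u))

        def mrt_collide(f):
            rho = f.sum()
            u = (f @ c) / rho
            m, m_eq = M @ f, M @ f_eq(rho, u)   # map to moment space
            return f - Minv @ (S @ (m - m_eq))  # relax moments, map back

        f = f_eq(1.0, np.array([0.05, 0.02]))   # start at equilibrium, perturb
        f[1] += 0.01
        print(np.round(mrt_collide(f), 4))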

  5. Comparison of two methods of surface profile extraction from multiple ultrasonic range measurements

    Barshan, B; Baskent, D

    Two novel methods for surface profile extraction based on multiple ultrasonic range measurements are described and compared. One of the methods employs morphological processing techniques, whereas the other employs a spatial voting scheme followed by simple thresholding. Morphological processing

  6. Modelling of diffuse solar fraction with multiple predictors

    Ridley, Barbara; Boland, John [Centre for Industrial and Applied Mathematics, University of South Australia, Mawson Lakes Boulevard, Mawson Lakes, SA 5095 (Australia); Lauret, Philippe [Laboratoire de Physique du Batiment et des Systemes, University of La Reunion, Reunion (France)

    2010-02-15

    For some locations both global and diffuse solar radiation are measured. However, for many locations, only global radiation is measured, or inferred from satellite data. For modelling solar energy applications, the amount of radiation on a tilted surface is needed. Since only the direct component on a tilted surface can be calculated from direct on some other plane using trigonometry, we need to have diffuse radiation on the horizontal plane available. There are regression relationships for estimating the diffuse on a tilted surface from diffuse on the horizontal. Models for estimating the diffuse on the horizontal from horizontal global that have been developed in Europe or North America have proved to be inadequate for Australia. Boland et al. developed a validated model for Australian conditions. Boland et al. detailed our recent advances in developing the theoretical framework for the use of the logistic function instead of piecewise linear or simple nonlinear functions and was the first step in identifying the means for developing a generic model for estimating diffuse from global and other predictors. We have developed a multiple predictor model, which is much simpler than previous models, and uses hourly clearness index, daily clearness index, solar altitude, apparent solar time and a measure of persistence of global radiation level as predictors. This model performs marginally better than currently used models for locations in the Northern Hemisphere and substantially better for Southern Hemisphere locations. We suggest it can be used as a universal model. (author)
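
    A sketch of the multiple-predictor logistic form described above (the coefficient values below are placeholders, not the fitted ones from the paper):

        import numpy as np

        # d = 1 / (1 + exp(b0 + b1*kt + b2*ast + b3*alpha + b4*Kt + b5*psi))
        # kt: hourly clearness index, ast: apparent solar time (h), alpha: solar
        # altitude (deg), Kt: daily clearness index, psi: persistence of kt.
        def diffuse_fraction(kt, ast, alpha, Kt, psi,
                             b=(-5.0, 8.6, 0.05, -0.007, 1.75, 1.3)):
            z = b[0] + b[1]*kt + b[2]*ast + b[3]*alpha + b[4]*Kt + b[5]*psi
            return 1.0 / (1.0 + np.exp(z))

        print(diffuse_fraction(kt=0.7, ast=12.0, alpha=55.0, Kt=0.65, psi=0.68))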

  7. Pilot points method for conditioning multiple-point statistical facies simulation on flow data

    Ma, Wei; Jafarpour, Behnam

    2018-05-01

    We propose a new pilot points method for conditioning discrete multiple-point statistical (MPS) facies simulation on dynamic flow data. While conditioning MPS simulation on static hard data is straightforward, their calibration against nonlinear flow data is nontrivial. The proposed method generates conditional models from a conceptual model of geologic connectivity, known as a training image (TI), by strategically placing and estimating pilot points. To place pilot points, a score map is generated based on three sources of information: (i) the uncertainty in facies distribution, (ii) the model response sensitivity information, and (iii) the observed flow data. Once the pilot points are placed, the facies values at these points are inferred from production data and then are used, along with available hard data at well locations, to simulate a new set of conditional facies realizations. While facies estimation at the pilot points can be performed using different inversion algorithms, in this study the ensemble smoother (ES) is adopted to update permeability maps from production data, which are then used to statistically infer facies types at the pilot point locations. The developed method combines the information in the flow data and the TI by using the former to infer facies values at selected locations away from the wells and the latter to ensure consistent facies structure and connectivity where away from measurement locations. Several numerical experiments are used to evaluate the performance of the developed method and to discuss its important properties.

  8. Support Operators Method for the Diffusion Equation in Multiple Materials

    Winters, Andrew R. [Los Alamos National Laboratory; Shashkov, Mikhail J. [Los Alamos National Laboratory

    2012-08-14

    A second-order finite difference scheme for the solution of the diffusion equation on non-uniform meshes is implemented. The method allows the heat conductivity to be discontinuous. The algorithm is formulated on a one dimensional mesh and is derived using the support operators method. A key component of the derivation is that the discrete analog of the flux operator is constructed to be the negative adjoint of the discrete divergence, in an inner product that is a discrete analog of the continuum inner product. The resultant discrete operators in the fully discretized diffusion equation are symmetric and positive definite. The algorithm is generalized to operate on meshes with cells which have mixed material properties. A mechanism to recover intermediate temperature values in mixed cells using a limited linear reconstruction is introduced. The implementation of the algorithm is verified and the linear reconstruction mechanism is compared to previous results for obtaining new material temperatures.
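
    A minimal 1D sketch in the spirit of the method (boundary treatment and mixed-cell handling are simplified relative to the paper): the assembled operator is symmetric positive definite even with discontinuous conductivity, with interface conductances formed by harmonic-type averaging over adjacent cells.

        import numpy as np

        x = np.array([0.0, 0.1, 0.25, 0.45, 0.7, 1.0])  # cell edges, non-uniform
        h = np.diff(x)                                  # cell widths
        k = np.array([1.0, 1.0, 10.0, 10.0, 10.0])      # discontinuous conductivity

        n = len(h)
        A = np.zeros((n, n))
        for i in range(n - 1):
            g = 2.0 / (h[i] / k[i] + h[i + 1] / k[i + 1])  # interface conductance
            A[i, i] += g; A[i + 1, i + 1] += g
            A[i, i + 1] -= g; A[i + 1, i] -= g
        # Dirichlet T=1 at the left edge, T=0 at the right (simple ghost treatment).
        A[0, 0] += 2 * k[0] / h[0]
        A[-1, -1] += 2 * k[-1] / h[-1]
        b = np.zeros(n); b[0] = 2 * k[0] / h[0] * 1.0

        T = np.linalg.solve(A, b)                        # steady-state temperatures
        print(np.allclose(A, A.T), np.round(T, 3))       # symmetric; monotone profile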

  9. Computing multiple zeros using a class of quartically convergent methods

    F. Soleymani

    2013-09-01

    Relatively little literature exists on finding all the real roots of a function in an interval, yet in applications users often wish to find all the real zeros at once. Hence, a second aim of this paper is to design a fourth-order algorithm, based on the developed methods, that finds all the real solutions of a nonlinear equation in an interval using the programming package Mathematica 8.
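
    For orientation only (this is the classical modified Newton step, not the authors' fourth-order scheme): at a root of known multiplicity m, the step x − m f(x)/f′(x) restores fast convergence where plain Newton degrades to linear convergence.

        def f(x):  return (x - 2.0) ** 3          # triple root at x = 2
        def fp(x): return 3.0 * (x - 2.0) ** 2

        x_plain, x_mod, m = 3.0, 3.0, 3
        for _ in range(6):
            if fp(x_plain) != 0.0:
                x_plain -= f(x_plain) / fp(x_plain)    # linear convergence
            if fp(x_mod) != 0.0:
                x_mod -= m * f(x_mod) / fp(x_mod)      # exact in one step here
        print(x_plain, x_mod)   # x_plain is still ~0.09 away; x_mod is 2.0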

  10. Some problems of neutron source multiplication method for site measurement technology in nuclear critical safety

    Shi Yongqian; Zhu Qingfu; Hu Dingsheng; He Tao; Yao Shigui; Lin Shenghuo

    2004-01-01

    The paper gives the experimental theory and method of the neutron source multiplication technique for in-situ measurement in nuclear criticality safety. The parameter actually measured by the source multiplication method is the subcritical effective multiplication factor with source, ks, not the neutron effective multiplication factor keff. The experimental research was done on a uranium solution nuclear criticality safety experiment assembly. The ks at different subcriticalities was measured by the neutron source multiplication method. The keff at different subcriticalities was obtained as follows: the reactivity coefficient per unit solution level was first measured by the period method, then multiplied by the difference between the critical and subcritical solution levels to obtain the reactivity of the subcritical solution level; keff was finally extracted from the reactivity formula. The implications for nuclear criticality safety and the difference between keff and ks are discussed
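
    The core of the source multiplication idea can be sketched in a few lines (numbers invented): with source strength S and detector efficiency ε, the count rate scales like C ≈ εS/(1 − ks), so a reference measurement at a known ks calibrates an unknown state and εS cancels in the ratio.

        C_ref, ks_ref = 1000.0, 0.80    # count rate at a known subcritical state
        C_new = 2500.0                  # count rate at the unknown state

        # C_new / C_ref = (1 - ks_ref) / (1 - ks_new)  =>  solve for ks_new
        ks_new = 1.0 - (1.0 - ks_ref) * C_ref / C_new
        print(f"inferred ks = {ks_new:.3f}")   # 0.920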

  11. Multiple Model Particle Filtering For Multi-Target Tracking

    Hero, Alfred; Kreucher, Chris; Kastella, Keith

    2004-01-01

    .... The details of this method have been presented elsewhere [1]. One feature of real targets is that they are poorly described by a single kinematic model. Target behavior may change dramatically, i.e...

  12. A Point Kinetics Model for Estimating Neutron Multiplication of Bare Uranium Metal in Tagged Neutron Measurements

    Tweardy, Matthew C.; McConchie, Seth; Hayward, Jason P.

    2017-01-01

    An extension of the point kinetics model is developed in this paper to describe the neutron multiplicity response of a bare uranium object under interrogation by an associated particle imaging deuterium-tritium (D-T) measurement system. This extended model is used to estimate the total neutron multiplication of the uranium. Both MCNPX-PoliMi simulations and data from active interrogation measurements of highly enriched and depleted uranium geometries are used to evaluate the potential of this method and to identify the sources of systematic error. The detection efficiency correction for measured coincidence response is identified as a large source of systematic error. If the detection process is not considered, results suggest that the method can estimate total multiplication to within 13% of the simulated value. Values for multiplicity constants in the point kinetics equations are sensitive to enrichment due to (n, xn) interactions by D-T neutrons and can introduce another significant source of systematic bias. This can theoretically be corrected if isotopic composition is known a priori. Finally, the spatial dependence of multiplication is also suspected of introducing further systematic bias for high multiplication uranium objects.

  13. A Point Kinetics Model for Estimating Neutron Multiplication of Bare Uranium Metal in Tagged Neutron Measurements

    Tweardy, Matthew C.; McConchie, Seth; Hayward, Jason P.

    2017-07-01

    An extension of the point kinetics model is developed to describe the neutron multiplicity response of a bare uranium object under interrogation by an associated particle imaging deuterium-tritium (D-T) measurement system. This extended model is used to estimate the total neutron multiplication of the uranium. Both MCNPX-PoliMi simulations and data from active interrogation measurements of highly enriched and depleted uranium geometries are used to evaluate the potential of this method and to identify the sources of systematic error. The detection efficiency correction for measured coincidence response is identified as a large source of systematic error. If the detection process is not considered, results suggest that the method can estimate total multiplication to within 13% of the simulated value. Values for multiplicity constants in the point kinetics equations are sensitive to enrichment due to (n, xn) interactions by D-T neutrons and can introduce another significant source of systematic bias. This can theoretically be corrected if isotopic composition is known a priori. The spatial dependence of multiplication is also suspected of introducing further systematic bias for high multiplication uranium objects.

  14. Optimized production planning model for a multi-plant cultivation system under uncertainty

    Ke, Shunkui; Guo, Doudou; Niu, Qingliang; Huang, Danfeng

    2015-02-01

    An inexact multi-constraint programming model under uncertainty was developed by incorporating a production plan algorithm into the crop production optimization framework under the multi-plant collaborative cultivation system. In the production plan, orders from the customers are assigned to a suitable plant under the constraints of plant capabilities and uncertainty parameters to maximize profit and achieve customer satisfaction. The developed model and solution method were applied to a case study of a multi-plant collaborative cultivation system to verify its applicability. As determined in the case analysis involving different orders from customers, the period of plant production planning and the interval between orders can significantly affect system benefits. Through the analysis of uncertain parameters, reliable and practical decisions can be generated using the suggested model of a multi-plant collaborative cultivation system.

  15. Multiplicative Attribute Graph Model of Real-World Networks

    Kim, Myunghwan [Stanford Univ., CA (United States); Leskovec, Jure [Stanford Univ., CA (United States)

    2010-10-20

    Large scale real-world network data, such as social networks, Internet and Web graphs, is ubiquitous in a variety of scientific domains. The study of such social and information networks commonly finds patterns and explains their emergence through tractable models. In most networks, especially in social networks, nodes also have a rich set of attributes (e.g., age, gender) associated with them. However, most existing network models focus only on modeling the network structure while ignoring the features of nodes in the network. Here we present a class of network models that we refer to as Multiplicative Attribute Graphs (MAG), which naturally captures the interactions between the network structure and node attributes. We consider a model where each node has a vector of categorical features associated with it. The probability of an edge between a pair of nodes then depends on the product of individual attribute-attribute similarities. The model lends itself to mathematical analysis as well as to fitting to real data. We derive thresholds for connectivity and for the emergence of the giant connected component, and show that the model gives rise to graphs with a constant diameter. Moreover, we analyze the degree distribution to show that the model can produce networks with either lognormal or power-law degree distributions depending on certain conditions.
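
    A tiny sketch of the MAG edge rule (the attribute count, affinity values, and graph size are invented): each node carries a vector of categorical attributes, and the edge probability is the product of per-attribute affinities.

        import numpy as np

        rng = np.random.default_rng(3)
        n_nodes, n_attr = 100, 4
        attrs = rng.integers(0, 2, size=(n_nodes, n_attr))    # binary attributes
        # One illustrative 2x2 affinity matrix per attribute (homophily-flavoured).
        Theta = np.array([[[0.8, 0.3], [0.3, 0.6]]] * n_attr)

        def edge_prob(u, v):
            p = 1.0
            for k in range(n_attr):
                p *= Theta[k, attrs[u, k], attrs[v, k]]
            return p

        adj = np.array([[rng.random() < edge_prob(u, v) for v in range(n_nodes)]
                        for u in range(n_nodes)])
        print(edge_prob(0, 1), adj.sum())   # one pair's probability; edge count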

  16. Improved exact method for the double TSP with multiple stacks

    Lusby, Richard Martin; Larsen, Jesper

    2011-01-01

    and delivery problems. The results suggest an impressive improvement, and we report, for the first time, optimal solutions to several unsolved instances from the literature containing 18 customers. Instances with 28 customers are also shown to be solvable within a few percent of optimality. © 2011 Wiley...... the first delivery, and the container cannot be repacked once packed. In this paper we improve the previously proposed exact method of Lusby et al. (Int Trans Oper Res 17 (2010), 637–652) through an additional preprocessing technique that uses the longest common subsequence between the respective pickup...

  17. A modeling and numerical algorithm for thermoporomechanics in multiple porosity media for naturally fractured reservoirs

    Kim, J.; Sonnenthal, E. L.; Rutqvist, J.

    2011-12-01

    Rigorous modeling of the coupling between fluid, heat, and geomechanics (thermo-poro-mechanics) in fractured porous media is one of the important and difficult topics in geothermal reservoir simulation, because the physics are highly nonlinear and strongly coupled. Coupled fluid/heat flow and geomechanics are investigated using the multiple interacting continua (MINC) method as applied to naturally fractured media. In this study, we generalize constitutive relations for the isothermal elastic dual porosity model proposed by Berryman (2002) to those for the non-isothermal elastic/elastoplastic multiple porosity model, and derive the coupling coefficients of coupled fluid/heat flow and geomechanics and the constraints on the coefficients. When the off-diagonal terms of the total compressibility matrix for the flow problem are zero, the upscaled drained bulk modulus for geomechanics becomes the harmonic average of the drained bulk moduli of the multiple continua. In this case, the drained elastic/elastoplastic moduli for mechanics are determined by a combination of the drained moduli and volume fractions of the multiple porosity materials. We also determine a relation between the local strains of all multiple porosity materials in a gridblock and the global strain of the gridblock, from which we can track local and global elastic/plastic variables. For elastoplasticity, the return mapping is performed for all multiple porosity materials in the gridblock. For numerical implementation, we employ and extend the fixed-stress sequential method of the single porosity model to coupled fluid/heat flow and geomechanics in multiple porosity systems, because it provides numerical stability and high accuracy. This sequential scheme can be easily implemented by using a porosity function and its corresponding porosity correction, making use of existing robust flow and geomechanics simulators. We implemented the proposed modeling and numerical algorithm in the reaction transport simulator
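
    The harmonic-average upscaling stated above can be written in one line; the volume-fraction weighting below is a common convention and an assumption here, since the paper derives its exact coefficients from the compressibility matrix.

        import numpy as np

        K = np.array([10.0, 2.0, 30.0])   # GPa, drained moduli of the continua
        phi = np.array([0.5, 0.2, 0.3])   # volume fractions, summing to 1
        K_up = 1.0 / np.sum(phi / K)      # harmonic (volume-weighted) average
        print(f"upscaled K = {K_up:.2f} GPa")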

  18. Combining morphometric evidence from multiple registration methods using dempster-shafer theory

    Rajagopalan, Vidya; Wyatt, Christopher

    2010-03-01

    In tensor-based morphometry (TBM), group-wise differences in brain structure are measured using high degree-of-freedom registration and some form of statistical test. However, it is known that TBM results are sensitive to both the registration method and the statistical test used. Given the lack of an objective model of group variation, it is difficult to determine a best registration method for TBM. The use of statistical tests is also problematic given the corrections required for multiple testing and the notorious difficulty of selecting and interpreting significance values. This paper presents an approach that addresses both of these issues by combining multiple registration methods using Dempster-Shafer evidence theory to produce belief maps of categorical changes between groups. This approach is applied to the comparison of brain morphometry in aging, a typical application of TBM, using the determinant of the Jacobian as a measure of volume change. We show that the Dempster-Shafer combination produces a unique and easy-to-interpret belief map of regional changes between and within groups without the complications associated with hypothesis testing.
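
    A minimal implementation of Dempster's rule of combination for two mass functions, standing in for evidence from two registration methods (the masses and the two-element frame are invented):

        def combine(m1, m2):
            combined, conflict = {}, 0.0
            for a, pa in m1.items():
                for b, pb in m2.items():
                    inter = a & b                  # intersection of focal sets
                    if inter:
                        combined[inter] = combined.get(inter, 0.0) + pa * pb
                    else:
                        conflict += pa * pb        # mass assigned to conflict
            # Normalize by the non-conflicting mass (Dempster's rule).
            return {k: v / (1.0 - conflict) for k, v in combined.items()}

        GROW, SHRINK = frozenset({"grow"}), frozenset({"shrink"})
        ALL = GROW | SHRINK                        # full frame = ignorance
        m_reg1 = {GROW: 0.6, SHRINK: 0.1, ALL: 0.3}
        m_reg2 = {GROW: 0.5, SHRINK: 0.2, ALL: 0.3}
        print(combine(m_reg1, m_reg2))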

  19. Effective multiplication factor measurement by feynman-α method. 3

    Mouri, Tomoaki; Ohtani, Nobuo

    1998-06-01

    The sub-criticality monitoring system has been developed for criticality safety control in nuclear fuel handling plants. In past experiments performed with the Deuterium Critical Assembly (DCA), it was confirmed that detection of sub-criticality was possible down to k_eff = 0.3. To investigate the applicability of the method to a more generalized system, experiments were performed in the light-water-moderated system of the modified DCA core. From these experiments, it was confirmed that the prompt decay constant (α), which is an index of the sub-criticality, was detectable between k_eff = 0.623 and k_eff = 0.870, and that differences of 0.05-0.1 Δk could be distinguished. The α values were numerically calculated with the 2D transport code TWODANT and the Monte Carlo code KENO V.a, and the results were compared with the measured values. The differences between calculated and measured values proved to be less than 13%, which is sufficient accuracy for the sub-criticality monitoring system. It was confirmed that the Feynman-α method is applicable to sub-criticality measurement of the light-water-moderated system. (author)
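
    The Feynman-α statistic behind this measurement is the excess variance-to-mean ratio of gated neutron counts. A minimal sketch (the gate-width sweep and the fit in the closing comment follow the standard textbook procedure, not details taken from the paper):

        import numpy as np

        def feynman_Y(counts):
            """Excess variance-to-mean ratio Y = var/mean - 1 of gated counts.

            counts : neutron counts in equal, consecutive time gates.
            A Poisson (uncorrelated) source gives Y = 0; correlated fission
            chains give Y > 0, saturating at a rate set by the prompt decay
            constant alpha.
            """
            c = np.asarray(counts, float)
            return c.var(ddof=1) / c.mean() - 1.0

        # Y is evaluated for a range of gate widths T and fitted to
        # Y(T) = Y_inf * (1 - (1 - exp(-alpha*T)) / (alpha*T)) to extract alpha.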

  20. A Multiple Items EPQ/EOQ Model for a Vendor and Multiple Buyers System with Considering Continuous and Discrete Demand Simultaneously

    Jonrinaldi; Rahman, T.; Henmaidi; Wirdianto, E.; Zhang, D. Z.

    2018-03-01

    This paper proposes a mathematical model for multiple-item Economic Production and Order Quantity (EPQ/EOQ) that considers continuous and discrete demand simultaneously in a system consisting of one vendor and multiple buyers. The model is used to investigate the optimal production lot size of the vendor and the number-of-shipments policy for orders to multiple buyers. It considers the multiple buyers' holding costs as well as transportation costs, and minimizes the total production and inventory costs of the system. Continuous demand from other customers can be fulfilled at any time by the vendor, while discrete demand from the multiple buyers is fulfilled using a multiple-delivery policy with a number of shipments of items within the production cycle time. A mathematical model is developed to describe the system based on the EPQ and EOQ models. Solution procedures are proposed to solve the model using Mixed Integer Non-Linear Programming (MINLP) and algorithmic methods. A numerical example is then provided to illustrate the system, and the results are discussed.
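
    As background, the single-item EOQ building block that such models extend balances ordering and holding costs; a minimal sketch using the textbook symbols (not the paper's notation):

        from math import sqrt

        def eoq(demand_rate, order_cost, holding_cost):
            """Classic economic order quantity Q* = sqrt(2*D*S / H)."""
            return sqrt(2.0 * demand_rate * order_cost / holding_cost)

        # D = 1200 units/year, S = $50 per order, H = $2 per unit per year
        print(eoq(1200, 50, 2))  # ~244.9 units per order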

  1. Dynamic coordinated control laws in multiple agent models

    Morgan, David S.; Schwartz, Ira B.

    2005-01-01

    We present an active control scheme for a kinetic model of swarming. It has been shown previously that the global control scheme for the model, presented in [Systems Control Lett. 52 (2004) 25], gives rise to spontaneous collective organization of agents into a unified coherent swarm via steering controls, utilizing long-range attractive and short-range repulsive interactions. We extend these results by presenting control laws whereby a single swarm is broken into independently functioning subswarm clusters. The transition between one coordinated swarm and multiple clustered subswarms is managed simply with a homotopy parameter. Additionally, we present, as an alternate formulation, a local control law for the same model, which implements dynamic barrier avoidance behavior and in which swarm coherence emerges spontaneously.

  2. Laplace transform analysis of a multiplicative asset transfer model

    Sokolov, Andrey; Melatos, Andrew; Kieu, Tien

    2010-07-01

    We analyze a simple asset transfer model in which the transfer amount is a fixed fraction f of the giver’s wealth. The model is analyzed in a new way by Laplace transforming the master equation, solving it analytically and numerically for the steady-state distribution, and exploring the solutions for various values of f∈(0,1). The Laplace transform analysis is superior to agent-based simulations as it does not depend on the number of agents, enabling us to study entropy and inequality in regimes that are costly to address with simulations. We demonstrate that Boltzmann entropy is not a suitable (e.g. non-monotonic) measure of disorder in a multiplicative asset transfer system and suggest an asymmetric stochastic process that is equivalent to the asset transfer model.
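
    The transfer rule itself is simple to state. A schematic agent-based version, which the Laplace-transform analysis is designed to replace but which makes the dynamics concrete (all names and parameter values are illustrative):

        import numpy as np

        rng = np.random.default_rng(0)

        def transfer_step(wealth, f):
            """One multiplicative exchange: giver i passes the fixed
            fraction f of its wealth to a distinct receiver j."""
            i, j = rng.choice(len(wealth), size=2, replace=False)
            amount = f * wealth[i]
            wealth[i] -= amount
            wealth[j] += amount

        w = np.ones(1000)                 # equal initial wealth
        for _ in range(200_000):
            transfer_step(w, f=0.25)      # steady state depends on f in (0, 1)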

  3. On the nonlinear dynamics of trolling-mode AFM: Analytical solution using multiple time scales method

    Sajjadi, Mohammadreza; Pishkenari, Hossein Nejat; Vossoughi, Gholamreza

    2018-06-01

    Trolling mode atomic force microscopy (TR-AFM) has resolved many imaging problems by considerably reducing the liquid-resonator interaction forces in liquid environments. The present study develops a nonlinear model of the meniscus force exerted on the nanoneedle of TR-AFM and presents an analytical solution to the distributed-parameter model of the TR-AFM resonator using the multiple time scales (MTS) method. Based on the developed analytical solution, the frequency-response curves of the resonator operating in air and in liquid (for different penetration lengths of the nanoneedle) are obtained. The closed-form analytical solution and the frequency-response curves are validated by comparison with both the finite element solution of the governing partial differential equations and experimental observations. The effects of the excitation angle of the resonator on the horizontal oscillation of the probe tip, and of different parameters on the frequency response of the system, are investigated.

  4. Accuracy and Numerical Stability Analysis of Lattice Boltzmann Method with Multiple Relaxation Time for Incompressible Flows

    Pradipto; Purqon, Acep

    2017-07-01

    The lattice Boltzmann method (LBM) is a relatively new method for simulating fluid dynamics. Nowadays, applications of LBM range from incompressible flow and flow in porous media to microflows. The common collision model of LBM is BGK with a constant single relaxation time τ; however, BGK suffers from numerical instabilities. These instabilities can be eliminated by implementing LBM with multiple relaxation times (MRT). Both schemes have been implemented for the incompressible two-dimensional lid-driven cavity. The stability analysis was done by finding the maximum Reynolds number and velocity for converged simulations. The accuracy analysis was done by comparing the velocity profiles with the benchmark results of Ghia et al. and calculating the net velocity flux. The tests concluded that LBM with MRT is more stable than BGK and has similar accuracy. The maximum Reynolds number for which the simulations converge is 3200 for BGK and 7500 for MRT.
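
    The difference between the two collision models discussed above is where the relaxation happens; a minimal per-node sketch (the D2Q9 equilibrium construction and streaming step are standard and omitted; names are illustrative):

        import numpy as np

        def bgk_collide(f, f_eq, tau):
            """Single-relaxation-time (BGK) collision: relax all
            populations toward equilibrium at the same rate 1/tau."""
            return f - (f - f_eq) / tau

        def mrt_collide(f, f_eq, M, S):
            """Multiple-relaxation-time collision: transform to moment
            space with M, relax each moment at its own rate (diagonal S),
            and map back. Tuning the per-moment rates is what improves
            stability over BGK."""
            M_inv = np.linalg.inv(M)
            return f - M_inv @ (S @ (M @ (f - f_eq)))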

  5. Modeling Spatial Dependence of Rainfall Extremes Across Multiple Durations

    Le, Phuong Dong; Leonard, Michael; Westra, Seth

    2018-03-01

    Determining the probability of a flood event in a catchment given that another flood has occurred in a nearby catchment is useful in the design of infrastructure such as road networks that have multiple river crossings. These conditional flood probabilities can be estimated by calculating conditional probabilities of extreme rainfall and then transforming rainfall to runoff through a hydrologic model. Each catchment's hydrological response times are unlikely to be the same, so in order to estimate these conditional probabilities one must consider the dependence of extreme rainfall both across space and across critical storm durations. To represent these types of dependence, this study proposes a new approach for combining extreme rainfall across different durations within a spatial extreme value model using max-stable process theory. This is achieved in a stepwise manner. The first step defines a set of common parameters for the marginal distributions across multiple durations. The parameters are then spatially interpolated to develop a spatial field. Storm-level dependence is represented through the max-stable process for rainfall extremes across different durations. The dependence model shows a reasonable fit between the observed pairwise extremal coefficients and the theoretical pairwise extremal coefficient function across all durations. The study demonstrates how the approach can be applied to develop conditional maps of the return period and return level across different durations.
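
    A sketch of the first, marginal step (fitting duration-specific GEV margins before any spatial interpolation or max-stable dependence modeling) might look as follows; the data are synthetic placeholders and scipy's shape convention is noted in the comment:

        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(1)
        # synthetic annual-maximum rainfall series per storm duration (hours)
        annual_maxima = {d: stats.genextreme.rvs(-0.1, loc=20 * d ** 0.4,
                                                 scale=5, size=50,
                                                 random_state=rng)
                         for d in (1, 6, 24)}

        margins = {}
        for duration, series in annual_maxima.items():
            # scipy's shape c corresponds to -xi in the usual GEV form
            margins[duration] = stats.genextreme.fit(series)  # (shape, loc, scale)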

  6. Many-electron model for multiple ionization in atomic collisions

    Archubi, C D; Montanari, C C; Miraglia, J E

    2007-01-01

    We have developed a many-electron model for multiple ionization of heavy atoms bombarded by bare ions. It is based on the transport equation for an ion in an inhomogeneous electronic density. Ionization probabilities are obtained by employing the shell-to-shell local plasma approximation with the Levine and Louie dielectric function to take into account the binding energy of each shell. Post-collisional contributions due to Auger-like processes are taken into account by employing recent photoemission data. Results for single-to-quadruple ionization of Ne, Ar, Kr and Xe by protons are presented, showing very good agreement with experimental data.

  7. Many-electron model for multiple ionization in atomic collisions

    Archubi, C D [Instituto de Astronomía y Física del Espacio, Casilla de Correo 67, Sucursal 28 (C1428EGA) Buenos Aires (Argentina); Montanari, C C [Instituto de Astronomía y Física del Espacio, Casilla de Correo 67, Sucursal 28 (C1428EGA) Buenos Aires (Argentina); Miraglia, J E [Instituto de Astronomía y Física del Espacio, Casilla de Correo 67, Sucursal 28 (C1428EGA) Buenos Aires (Argentina)

    2007-03-14

    We have developed a many-electron model for multiple ionization of heavy atoms bombarded by bare ions. It is based on the transport equation for an ion in an inhomogeneous electronic density. Ionization probabilities are obtained by employing the shell-to-shell local plasma approximation with the Levine and Louie dielectric function to take into account the binding energy of each shell. Post-collisional contributions due to Auger-like processes are taken into account by employing recent photoemission data. Results for single-to-quadruple ionization of Ne, Ar, Kr and Xe by protons are presented, showing very good agreement with experimental data.

  8. Model selection in Bayesian segmentation of multiple DNA alignments.

    Oldmeadow, Christopher; Keith, Jonathan M

    2011-03-01

    The analysis of multiple sequence alignments is allowing researchers to glean valuable insights into evolution, as well as identify genomic regions that may be functional or discover novel classes of functional elements. Understanding the distribution of conservation levels that constitutes the evolutionary landscape is crucial to distinguishing functional regions from non-functional ones. Recent evidence suggests that a binary classification of evolutionary rates is inappropriate for this purpose and finds only highly conserved functional elements. Given that the distribution of evolutionary rates is multi-modal, determining the number of modes is of paramount concern. Through simulation, we evaluate the performance of a number of information criterion approaches derived from MCMC simulations in determining the dimension of a model. We utilize a deviance information criterion (DIC) approximation that is more robust than the approximations from other information criteria, and show that our information criterion approximations do not produce superfluous modes when estimating conservation distributions under a variety of circumstances. We analyse the distribution of conservation for a multiple alignment comprising four primate species and mouse, and repeat this on two additional multiple alignments of similar species. We find evidence of six distinct classes of evolutionary rates that appear to be robust to the species used. Source code and data are available at http://dl.dropbox.com/u/477240/changept.zip.
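
    The deviance information criterion referred to above can be approximated directly from MCMC output; a generic sketch of the standard estimate (the paper uses a more robust approximation than this one):

        import numpy as np

        def dic(log_liks, log_lik_at_posterior_mean):
            """Deviance information criterion from MCMC samples.

            log_liks : log-likelihood at each posterior draw
            log_lik_at_posterior_mean : log-likelihood at the posterior
                mean of the parameters
            """
            mean_deviance = -2.0 * np.mean(log_liks)
            deviance_at_mean = -2.0 * log_lik_at_posterior_mean
            p_d = mean_deviance - deviance_at_mean   # effective n. of parameters
            return deviance_at_mean + 2.0 * p_d      # = mean_deviance + p_d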

  9. Multiple HEPA filter test methods, January--December 1976

    Schuster, B.; Kyle, T.; Osetek, D.

    1977-06-01

    The testing of tandem high-efficiency particulate air (HEPA) filter systems is of prime importance for the measurement of accurate overall system protection factors. A procedure, based on the use of an intra-cavity laser particle spectrometer, has been developed for measuring protection factors in the 10^8 range. A laboratory-scale model of a filter system was constructed and initially tested to determine individual HEPA filter characteristics with regard to the size and state (liquid or solid) of several test aerosols. Based on these laboratory measurements, in-situ testing has been successfully conducted on a number of single and tandem filter installations within the Los Alamos Scientific Laboratory as well as on extraordinarily large single systems at Rocky Flats. For the purposes of recovery, simplified solid waste disposal, or prefiltering, two versions of an inhomogeneous electric field air cleaner have been devised and are undergoing testing. Initial experience with one of the systems, which relies on an electrostatic spraying phenomenon, indicates a performance efficiency of greater than 99.9% for flow velocities commonly used in air cleaning systems. Among the effluents associated with nuclear fuel reprocessing is ^129I. An intra-cavity laser detection system is under development which shows promise of being able to detect mixing ratios of one part in 10^7 of I_2 in air

  10. A multiple-location model for natural gas forward curves

    Buffington, J.C.

    1999-06-01

    This thesis presents an approach for financial modelling of natural gas in which connections between locations are incorporated and the complexities of forward curves in natural gas are considered. Apart from electricity, natural gas is the most volatile commodity traded. Its price is often dependent on the weather, and price shocks can be felt across several geographic locations. This modelling approach incorporates multiple risk factors that correspond to various locations. One of the objectives was to determine if the model could be used for closed-form option prices. It was suggested that an adequate model for natural gas must consider three statistical properties: volatility term structure, backwardation and contango, and stochastic basis. Data from gas forward prices at Chicago, NYMEX and AECO were empirically tested to better understand these three statistical properties at each location and to verify if the proposed model truly incorporates these properties. In addition, this study examined the time-series properties of the difference between two locations (the basis) and determined that these empirical properties are consistent with the model properties. Closed-form option solutions were also developed for call options on forward contracts and call options on the forward basis. The options were calibrated and compared to other models. The proposed model is capable of pricing options, but the prices derived did not pass the test of economic reasonableness. However, the model was able to capture the effect of transportation as well as aspects of seasonality, which is a benefit over other existing models. It was determined that modifications will be needed regarding the estimation of the convenience yields. 57 refs., 2 tabs., 7 figs., 1 append

  11. Multiplicative point process as a model of trading activity

    Gontis, V.; Kaulakys, B.

    2004-11-01

    Signals consisting of a sequence of pulses show that an inherent origin of 1/f noise is Brownian fluctuation of the average interevent time between subsequent pulses of the pulse sequence. In this paper, we generalize the model of interevent time to reproduce a variety of self-affine time series exhibiting power spectral density S(f) scaling as a power of the frequency f. Furthermore, we analyze the relation between the power-law correlations and the origin of the power-law probability distribution of the signal intensity. We introduce a stochastic multiplicative model for the time intervals between point events and analyze the statistical properties of the signal analytically and numerically. Such a model system exhibits power-law spectral density S(f) ∼ 1/f^β for various values of β, including β = 1/2, 1 and 3/2. Explicit expressions for the power spectra in the low-frequency limit and for the distribution density of the interevent time are obtained. The counting statistics of the events are analyzed analytically and numerically as well. The specific interest of our analysis relates to financial markets, where long-range correlations of price fluctuations largely depend on the number of transactions. We analyze the spectral density and counting statistics of the number of transactions. The model reproduces the spectral properties of real markets and explains the mechanism of the power-law distribution of trading activity. The study provides evidence that the statistical properties of financial markets are enclosed in the statistics of the time interval between trades. A multiplicative point process serves as a consistent model generating these statistics.
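
    A schematic stochastic multiplicative model for interevent times can be written as a multiplicative random walk reflected into a bounded interval. This is an illustrative caricature of the model class analyzed, not the paper's exact equations:

        import numpy as np

        rng = np.random.default_rng(0)

        def interevent_series(n, gamma=-5e-4, sigma=0.02,
                              tau_min=1e-3, tau_max=1.0):
            """Multiplicative process for interevent times tau_k:
            tau_{k+1} = tau_k * (1 + gamma + sigma*eps_k),
            reflected into [tau_min, tau_max]."""
            tau = np.empty(n)
            tau[0] = 0.1
            for k in range(n - 1):
                step = tau[k] * (1.0 + gamma + sigma * rng.standard_normal())
                tau[k + 1] = np.clip(step, tau_min, tau_max)
            return tau

        events = np.cumsum(interevent_series(100_000))   # event times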

  12. Guideline validation in multiple trauma care through business process modeling.

    Stausberg, Jürgen; Bilir, Hüseyin; Waydhas, Christian; Ruchholtz, Steffen

    2003-07-01

    Clinical guidelines can improve the quality of care in multiple trauma. In our Department of Trauma Surgery a specific guideline is available in paper form as a set of flowcharts. This format is appropriate for use by experienced physicians but insufficient for electronic support of learning, workflow and process optimization. A formal and logically consistent version, represented with a standardized meta-model, is necessary for automatic processing. In our project we transferred the paper-based guideline into an electronic format and analyzed its structure with respect to formal errors. Several errors were detected, in seven error categories. The errors were corrected to reach a formally and logically consistent process model. In a second step, the clinical content of the guideline was revised interactively using a process-modeling tool. Our study reveals that guideline development should be assisted by process-modeling tools, which check the content against a meta-model. The meta-model itself could support the domain experts in formulating their knowledge systematically. To assure the sustainability of guideline development, a representation independent of specific applications or specific providers is necessary. Clinical guidelines could then additionally be used for eLearning, process optimization and workflow management.

  13. Curvelet-domain multiple matching method combined with cubic B-spline function

    Wang, Tong; Wang, Deli; Tian, Mi; Hu, Bin; Liu, Chengming

    2018-05-01

    Since the large amount of surface-related multiples in marine data seriously influences the results of data processing and interpretation, many researchers have attempted to develop effective methods to remove them. The most successful surface-related multiple elimination methods are based on data-driven theory. However, their elimination effect is often unsatisfactory due to amplitude and phase errors. Although the subsequent curvelet-domain multiple-primary separation method achieved better results, poor computational efficiency prevented its application. In this paper, we adopt the cubic B-spline function to improve the traditional curvelet multiple matching method. First, a small number of unknowns are selected as the basis points of the matching coefficient; second, the cubic B-spline function is applied on these basis points to reconstruct the matching array; third, a constrained solving equation is built based on the relationships among the predicted multiples, the matching coefficients, and the actual data; finally, the BFGS algorithm is used to iterate and realize the fast solution of the sparse-constrained multiple matching algorithm. Moreover, the soft-threshold method is used to make the method perform better. With the cubic B-spline function, the differences between the predicted multiples and the original data diminish, which results in less processing time to obtain optimal solutions and fewer iterative loops in the solving procedure based on the L1-norm constraint. Applications to synthetic and field data both validate the practicability and validity of the method.
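
    The core dimension-reduction idea, representing a long matching-coefficient array by a cubic B-spline through a few basis points, can be sketched with scipy (illustrative only; in the actual method the basis-point values are the unknowns solved for under the sparse constraint):

        import numpy as np
        from scipy.interpolate import make_interp_spline

        n_samples = 1000                              # length of matching array
        basis_x = np.linspace(0, n_samples - 1, 12)   # a few basis points
        basis_values = np.random.default_rng(0).normal(1.0, 0.1, basis_x.size)

        # a cubic (k=3) B-spline through the basis points reconstructs the
        # full matching array from only 12 unknowns
        spline = make_interp_spline(basis_x, basis_values, k=3)
        matching_array = spline(np.arange(n_samples))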

  14. Multiple sequential failure model: A probabilistic approach to quantifying human error dependency

    Samanta

    1985-01-01

    This paper presents a probabilistic approach to quantifying human error dependency when multiple tasks are performed. Dependent human failures are dominant contributors to risks from nuclear power plants. An overview of the Multiple Sequential Failure (MSF) model and its use in probabilistic risk assessments (PRAs), depending on the available data, is given. A small-scale psychological experiment was conducted on the nature of human dependency, and the interpretation of the experimental data by the MSF model shows that it accommodates the dependent-failure data remarkably well. The model, which provides a unique method for quantification of dependent failures in human reliability analysis, can be used in conjunction with any of the general methods currently used for performing the human reliability aspect of PRAs

  15. Numerical Computation of Underground Inundation in Multiple Layers Using the Adaptive Transfer Method

    Hyung-Jun Kim

    2018-01-01

    Full Text Available Extreme rainfall causes surface runoff to flow towards lowlands and subterranean facilities, such as subway stations and buildings with underground spaces, in densely packed urban areas. These facilities and areas are therefore vulnerable to catastrophic submergence. However, flood modeling of underground space has not yet been adequately studied because of the difficulties in reproducing the associated multiple horizontal layers connected by staircases or elevators. This study proposes a convenient approach to simulate underground inundation when two layers are connected. The main facet of this approach is to compute the flow flux passing through staircases in an upper layer and to transfer the equivalent quantity to a lower layer. This is defined as the 'adaptive transfer method'. The method overcomes the limitations of 2D modeling by introducing layer-connecting concepts to prevent large variations in mesh sizes caused by complicated underlying obstacles or local details. Consequently, this study aims to contribute to the numerical analysis of flow in inundated underground spaces with multiple floors.
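
    The coupling step of the adaptive transfer method can be caricatured as follows; the weir-type discharge formula and its coefficient are generic assumptions for illustration, not the paper's calibrated relations:

        def staircase_flux(depth, width, c_d=1.7):
            """Weir-type discharge through a staircase opening [m^3/s]:
            Q = c_d * B * h^(3/2), with h the water depth over the opening."""
            return c_d * width * depth ** 1.5 if depth > 0 else 0.0

        def transfer(upper_cell_depth, width, dt, area_lower):
            """Remove volume from the upper-layer cell and add it to the
            lower-layer cell as an equivalent depth increment."""
            volume = staircase_flux(upper_cell_depth, width) * dt
            return volume / area_lower    # depth added to the lower layer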

  16. Resveratrol Neuroprotection in a Chronic Mouse Model of Multiple Sclerosis

    Zoe Fonseca-Kelly

    2012-05-01

    Full Text Available Resveratrol is a naturally occurring polyphenol that activates SIRT1, an NAD-dependent deacetylase. SRT501, a pharmaceutical formulation of resveratrol with enhanced systemic absorption, prevents neuronal loss without suppressing inflammation in mice with relapsing experimental autoimmune encephalomyelitis (EAE), a model of multiple sclerosis. In contrast, resveratrol has been reported to suppress inflammation in chronic EAE, although neuroprotective effects were not evaluated. The current studies examine potential neuroprotective and immunomodulatory effects of resveratrol in chronic EAE induced by immunization with myelin oligodendrocyte glycoprotein peptide in C57BL/6 mice. The effects of two distinct formulations of resveratrol administered orally each day were compared. Resveratrol delayed the onset of EAE compared to vehicle-treated EAE mice, but did not prevent or alter the phenotype of inflammation in spinal cords or optic nerves. Significant neuroprotective effects were observed, with higher numbers of retinal ganglion cells found in eyes of resveratrol-treated EAE mice with optic nerve inflammation. Results demonstrate that resveratrol prevents neuronal loss in this chronic demyelinating disease model, similar to its effects in relapsing EAE. Differences in immunosuppression compared with prior studies suggest that immunomodulatory effects may be limited and may depend on specific immunization parameters or the timing of treatment. Importantly, neuroprotective effects can occur without immunosuppression, suggesting a potential additive benefit of resveratrol in combination with anti-inflammatory therapies for multiple sclerosis.

  17. A Multiple Indicators Multiple Causes (MIMIC) model of internal barriers to drug treatment in China.

    Qi, Chang; Kelly, Brian C; Liao, Yanhui; He, Haoyu; Luo, Tao; Deng, Huiqiong; Liu, Tieqiao; Hao, Wei; Wang, Jichuan

    2015-03-01

    Although evidence exists for distinct barriers to drug abuse treatment (BDATs), investigations of their inter-relationships and of the effect of individual characteristics on the barrier factors have been sparse, especially in China. A Multiple Indicators Multiple Causes (MIMIC) model is applied to this end. A sample of 262 drug users was recruited from three drug rehabilitation centers in Hunan Province, China. We applied a MIMIC approach to investigate the effect of gender, age, marital status, education, primary substance used, duration of primary drug use, and drug treatment experience on the internal barrier factors: absence of problem (AP), negative social support (NSS), fear of treatment (FT), and privacy concerns (PC). Drug users with different characteristics were found to report different internal barrier factors. Younger participants were more likely to report NSS (-0.19, p=0.038) and PC (-0.31, p<0.001). Compared to other drug users, ice users were more likely to report AP (0.44, p<0.001) and NSS (0.25, p=0.010). Drug treatment experience was related to AP (0.20, p=0.012). In addition, differential item functioning (DIF) occurred in three items across groups differing in duration of drug use, ice use, or marital status. Individual characteristics had significant effects on internal barriers to drug treatment. On this basis, the BDATs perceived by different individuals can be assessed before tactics are deployed to successfully remove perceived barriers to drug treatment. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.

  18. Multiple-relaxation-time lattice Boltzmann model for compressible fluids

    Chen Feng; Xu Aiguo; Zhang Guangcai; Li Yingjun

    2011-01-01

    We present an energy-conserving multiple-relaxation-time (MRT) finite-difference lattice Boltzmann model for compressible flows. The collision step is first calculated in the moment space and then mapped back to the velocity space. The moment space and the corresponding transformation matrix are constructed according to group representation theory. Equilibria of the non-conserved moments are chosen so as to recover the compressible Navier-Stokes equations through the Chapman-Enskog expansion. Numerical experiments showed that compressible flows with strong shocks can be well simulated by the present model. The new model works for both low- and high-speed compressible flows. It contains more physical information and has better numerical stability and accuracy than its single-relaxation-time version. - Highlights: → We present an energy-conserving MRT finite-difference LB model. → The moment space is constructed according to group representation theory. → The new model works for both low- and high-speed compressible flows. → It has better numerical stability and a wider applicable range than its SRT version.

  19. Optimal Retail Price Model for Partial Consignment to Multiple Retailers

    Po-Yu Chen

    2017-01-01

    Full Text Available This paper investigates the product pricing decision-making problem under a consignment stock policy in a two-level supply chain composed of one supplier and multiple retailers. The effects of the supplier's wholesale prices and of its partial absorption of inventory costs on the retail prices of retailers with different market shares are investigated. In the partial consignment model proposed in this paper, the seller and the retailers each absorb part of the inventory costs. The model also provides general solutions for complete product consignment and for the traditional policy with no product consignment. In other words, both the complete-consignment and no-consignment models are special cases of the proposed model. Research results indicate that the optimal retail price must be between 1/2 (50%) and 2/3 (66.67%) times the upper limit of the gross profit. This study also explored the influence of parameter variations on the optimal retail price in the model.

  20. Using Module Analysis for Multiple Choice Responses: A New Method Applied to Force Concept Inventory Data

    Brewe, Eric; Bruun, Jesper; Bearden, Ian G.

    2016-01-01

    We describe "Module Analysis for Multiple Choice Responses" (MAMCR), a new methodology for carrying out network analysis on responses to multiple choice assessments. This method is used to identify modules of non-normative responses which can then be interpreted as an alternative to factor analysis. MAMCR allows us to identify conceptual…

  1. 29 CFR 4010.12 - Alternative method of compliance for certain sponsors of multiple employer plans.

    2010-07-01

    ... BENEFIT GUARANTY CORPORATION CERTAIN REPORTING AND DISCLOSURE REQUIREMENTS ANNUAL FINANCIAL AND ACTUARIAL INFORMATION REPORTING § 4010.12 Alternative method of compliance for certain sponsors of multiple employer... part for an information year if any contributing sponsor of the multiple employer plan provides a...

  2. Analytical methods used at model facility

    Wing, N.S.

    1984-01-01

    A description of analytical methods used at the model LEU Fuel Fabrication Facility is presented. The methods include gravimetric uranium analysis, isotopic analysis, fluorimetric analysis, and emission spectroscopy

  3. A Bayesian method and its variational approximation for prediction of genomic breeding values in multiple traits

    Hayashi Takeshi

    2013-01-01

    Full Text Available Background: Genomic selection is an effective tool for animal and plant breeding, allowing effective individual selection without phenotypic records through the prediction of genomic breeding values (GBVs). To date, genomic selection has focused on a single trait. However, actual breeding often targets multiple correlated traits, and therefore joint analysis taking into consideration the correlation between traits, which might result in more accurate GBV prediction than analyzing each trait separately, is suitable for multi-trait genomic selection. This requires an extension of the single-trait GBV prediction model to the multi-trait case. As the computational burden of multi-trait analysis is even higher than that of single-trait analysis, an effective computational method for constructing a multi-trait prediction model is also needed. Results: We described a Bayesian regression model incorporating variable selection for jointly predicting GBVs of multiple traits, and devised both an MCMC iteration and a variational approximation for Bayesian estimation of the parameters of this multi-trait model. The proposed Bayesian procedures with MCMC iteration and variational approximation are referred to as MCBayes and varBayes, respectively. Using simulated datasets of SNP genotypes and phenotypes for three traits with high and low heritabilities, we compared the accuracy in predicting GBVs between multi-trait and single-trait analyses as well as between MCBayes and varBayes. The results showed that, compared to single-trait analysis, multi-trait analysis enabled much more accurate GBV prediction for low-heritability traits correlated with high-heritability traits, by utilizing the correlation structure between traits, while the prediction accuracy for uncorrelated low-heritability traits was comparable or less with multi-trait analysis in comparison with single-trait analysis depending on the setting for prior probability that a SNP has zero
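
    As a simplified single-trait stand-in for the GBV prediction task (the paper's model adds variable selection, a multi-trait covariance structure, and full Bayesian estimation), ridge-type genomic prediction can be sketched as:

        import numpy as np

        rng = np.random.default_rng(0)
        n_ind, n_snp = 200, 1000
        X = rng.integers(0, 3, size=(n_ind, n_snp)).astype(float)  # genotypes 0/1/2
        beta = rng.normal(0, 0.05, n_snp)                          # true SNP effects
        y = X @ beta + rng.normal(0, 1.0, n_ind)                   # phenotypes

        lam = 50.0   # shrinkage; plays the role of the prior on SNP effects
        beta_hat = np.linalg.solve(X.T @ X + lam * np.eye(n_snp), X.T @ y)
        gbv = X @ beta_hat   # predicted genomic breeding values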

  4. Discrete modeling of multiple discontinuities in rock mass using XFEM

    Das, Kamal C.; Ausas, Roberto Federico; Carol, Ignacio; Rodrigues, Eduardo; Sandeep, Sandra; Vargas, P. E.; Gonzalez, Nubia Aurora; Segura, Josep María; Lakshmikantha, Ramasesha Mookanahallipatna; Mello, U.

    2017-01-01

    Modeling of discontinuities (fractures and fault surfaces) is of major importance in assessing the geomechanical behavior of oil and gas reservoirs, especially tight and unconventional reservoirs. Numerical analysis of discrete discontinuities has traditionally used interface element concepts; more recently there have been attempts to use the extended finite element method (XFEM). The development of an XFEM tool for geomechanical fracture/fault modeling has significant industr...

  5. Protein structure modeling for CASP10 by multiple layers of global optimization.

    Joo, Keehyoung; Lee, Juyong; Sim, Sangjin; Lee, Sun Young; Lee, Kiho; Heo, Seungryong; Lee, In-Ho; Lee, Sung Jong; Lee, Jooyoung

    2014-02-01

    In the template-based modeling (TBM) category of CASP10 experiment, we introduced a new protocol called protein modeling system (PMS) to generate accurate protein structures in terms of side-chains as well as backbone trace. In the new protocol, a global optimization algorithm, called conformational space annealing (CSA), is applied to the three layers of TBM procedure: multiple sequence-structure alignment, 3D chain building, and side-chain re-modeling. For 3D chain building, we developed a new energy function which includes new distance restraint terms of Lorentzian type (derived from multiple templates), and new energy terms that combine (physical) energy terms such as dynamic fragment assembly (DFA) energy, DFIRE statistical potential energy, hydrogen bonding term, etc. These physical energy terms are expected to guide the structure modeling especially for loop regions where no template structures are available. In addition, we developed a new quality assessment method based on random forest machine learning algorithm to screen templates, multiple alignments, and final models. For TBM targets of CASP10, we find that, due to the combination of three stages of CSA global optimizations and quality assessment, the modeling accuracy of PMS improves at each additional stage of the protocol. It is especially noteworthy that the side-chains of the final PMS models are far more accurate than the models in the intermediate steps. Copyright © 2013 Wiley Periodicals, Inc.

  6. Hybrid models for the simulation of microstructural evolution influenced by coupled, multiple physical processes

    Tikare, Veena [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Hernandez-Rivera, Efrain [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Madison, Jonathan D. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Holm, Elizabeth Ann [Carnegie Mellon Univ., Pittsburgh, PA (United States); Patterson, Burton R. [Univ. of Florida, Gainesville, FL (United States). Dept. of Materials Science and Engineering; Homer, Eric R. [Brigham Young Univ., Provo, UT (United States). Dept. of Mechanical Engineering

    2013-09-01

    Most microstructural evolution in materials progresses with multiple processes occurring simultaneously. In this work, we have concentrated on the processes that are active in nuclear materials, in particular, nuclear fuels. These processes are coarsening, nucleation, differential diffusion, phase transformation, radiation-induced defect formation and swelling, often with temperature gradients present. All of these couple and contribute to evolution that is unique to nuclear fuels and materials. Hybrid models that combine elements of the Potts Monte Carlo method, phase-field models and others have been developed to address these multiple physical processes. These models are described and applied to several processes in this report. An important feature of the models developed is that they are coded as applications within SPPARKS, a Sandia-developed framework for mesoscale simulation of microstructural evolution processes by kinetic Monte Carlo methods. This makes these codes readily accessible and adaptable for future applications.
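
    The Potts Monte Carlo ingredient of such hybrid models is compact; a minimal grain-growth-style sketch on a square lattice with Metropolis acceptance (purely illustrative, not SPPARKS code):

        import numpy as np

        rng = np.random.default_rng(0)
        L, Q, kT = 64, 20, 0.3
        spins = rng.integers(0, Q, size=(L, L))   # grain labels

        def site_energy(s, i, j):
            """Number of unlike nearest neighbors (periodic boundaries)."""
            nbrs = [s[(i + 1) % L, j], s[(i - 1) % L, j],
                    s[i, (j + 1) % L], s[i, (j - 1) % L]]
            return sum(n != s[i, j] for n in nbrs)

        def mc_step(s):
            i, j = rng.integers(L), rng.integers(L)
            old, new = s[i, j], rng.integers(Q)
            e_old = site_energy(s, i, j)
            s[i, j] = new
            dE = site_energy(s, i, j) - e_old
            # Metropolis: accept if energy drops, else with Boltzmann probability
            if dE > 0 and rng.random() >= np.exp(-dE / kT):
                s[i, j] = old    # reject the flip

        for _ in range(100_000):
            mc_step(spins)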

  7. Error Analysis and Calibration Method of a Multiple Field-of-View Navigation System.

    Shi, Shuai; Zhao, Kaichun; You, Zheng; Ouyang, Chenguang; Cao, Yongkui; Wang, Zhenzhou

    2017-03-22

    The Multiple Field-of-view Navigation System (MFNS) is a spacecraft subsystem built to realize the autonomous navigation of the Spacecraft Inside Tiangong Space Station. This paper introduces the basics of the MFNS, including its architecture, mathematical model and analysis, and numerical simulation of system errors. According to the performance requirement of the MFNS, the calibration of both intrinsic and extrinsic parameters of the system is assumed to be essential and pivotal. Hence, a novel method based on the geometrical constraints in object space, called checkerboard-fixed post-processing calibration (CPC), is proposed to solve the problem of simultaneously obtaining the intrinsic parameters of the cameras integrated in the MFNS and the transformation between the MFNS coordinate and the cameras' coordinates. This method utilizes a two-axis turntable and a prior alignment of the coordinates is needed. Theoretical derivation and practical operation of the CPC method are introduced. The calibration experiment results of the MFNS indicate that the extrinsic parameter accuracy of the CPC reaches 0.1° for each Euler angle and 0.6 mm for each position vector component (1σ). A navigation experiment verifies the calibration result and the performance of the MFNS. The MFNS is found to work properly, and the accuracy of the position vector components and Euler angle reaches 1.82 mm and 0.17° (1σ) respectively. The basic mechanism of the MFNS may be utilized as a reference for the design and analysis of multiple-camera systems. Moreover, the calibration method proposed has practical value for its convenience for use and potential for integration into a toolkit.

  8. The System of Inventory Forecasting in PT. XYZ by Using the Holt-Winters Multiplicative Method

    Shaleh, W.; Rasim; Wahyudin

    2018-01-01

    PT. XYZ currently relies only on manual bookkeeping to predict sales and inventory. If the inventory prediction is too large, production costs swell and the invested capital becomes less efficient. Vice versa, if the inventory prediction is too small, consumers are affected and forced to wait for the desired product. In this era of globalization, the development of computer technology has become a very important part of every business plan, and almost all companies, both large and small, use it. By utilizing computer technology, people can save time in solving complex business problems. For companies, computer technology has become indispensable for enhancing the business services they manage, but systems and technologies are not limited to the distribution model and data processing: the existing system must also be able to analyze the possibilities of future company capabilities. Therefore, the company must be able to forecast conditions and circumstances, whether of inventory, workforce, or profits to be obtained. To produce such forecasts, total sales data from December 2014 to December 2016 are processed with the Holt-Winters method, a seasonal time-series prediction technique (the multiplicative seasonal method) for data that rises and falls, comprising four equations: level smoothing, trend smoothing, seasonal smoothing, and forecasting. From the research conducted, the error value in the form of MAPE is below 1%, so it can be concluded that forecasting with the multiplicative Holt-Winters method is accurate.
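
    The four multiplicative Holt-Winters recursions mentioned above can be written down directly; a minimal sketch with simplified initialization (the smoothing constants and seasonal period are illustrative choices, not the values fitted in the study):

        import numpy as np

        def holt_winters_multiplicative(y, m=12, alpha=0.3, beta=0.1, gamma=0.2):
            """One-step-ahead forecasts with multiplicative seasonality:
            level    L_t = alpha*y_t/S_{t-m} + (1-alpha)*(L_{t-1} + T_{t-1})
            trend    T_t = beta*(L_t - L_{t-1}) + (1-beta)*T_{t-1}
            season   S_t = gamma*y_t/L_t + (1-gamma)*S_{t-m}
            forecast F_{t+1} = (L_t + T_t) * S_{t+1-m}
            """
            y = np.asarray(y, dtype=float)
            level = y[:m].mean()
            trend = (y[m:2 * m].mean() - y[:m].mean()) / m
            season = list(y[:m] / level)            # initial seasonal indices
            forecasts = []
            for t in range(m, len(y)):
                s = season[t - m]
                new_level = alpha * y[t] / s + (1 - alpha) * (level + trend)
                trend = beta * (new_level - level) + (1 - beta) * trend
                level = new_level
                season.append(gamma * y[t] / level + (1 - gamma) * s)
                forecasts.append((level + trend) * season[t + 1 - m])
            return forecasts                         # forecasts for t = m+1 .. len(y)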

  9. A latent class multiple constraint multiple discrete-continuous extreme value model of time use and goods consumption.

    2016-06-01

    This paper develops a microeconomic theory-based multiple discrete continuous choice model that considers: (a) that both goods consumption and time allocations (to work and non-work activities) enter separately as decision variables in the utility fu...

  10. Trace element analysis of environmental samples by multiple prompt gamma-ray analysis method

    Oshima, Masumi; Matsuo, Motoyuki; Shozugawa, Katsumi

    2011-01-01

    The multiple γ-ray detection method has proved to be a high-resolution and high-sensitivity method when applied to nuclide quantification. The neutron prompt γ-ray analysis method is successfully extended by combining it with the multiple γ-ray detection method; the combination is called multiple prompt γ-ray analysis (MPGA). In this review we show the principle of this method and its characteristics. Several examples of its application to environmental samples, especially river sediments in urban areas and sea sediment samples, are also described. (author)

  11. PARTIAL CREDIT SCORING MODELS FOR MULTIPLE TRUE-FALSE ITEMS IN PHYSICS

    Wasis Wasis

    2013-01-01

    in 2007 of the Faculty of Mathematics and Science of Surabaya State University. The testees' responses were scored using the partial credit model (PCM I, II, and III) and were also dichotomously scored. The results of the four scoring models were analyzed using the Quest program to obtain estimates of the item difficulty level (δ) and of the testees' abilities (θ). The simulation data were generated using the SAS statistical program, and the estimation accuracy was analyzed using the root mean squared error (RMSE) method. The results of the study show the following: (i) scoring with the partial credit model with weighting is capable of estimating abilities more accurately than scoring without weighting and dichotomous scoring; (ii) the more categories there are in the partial credit scoring, the more accurate the result of the ability estimation. Keywords: partial credit model scoring, multiple true-false items
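
    For reference, the partial credit model category probabilities underlying this scoring can be sketched as follows (the standard PCM formulation; the threshold values are illustrative):

        import numpy as np

        def pcm_probs(theta, deltas):
            """Partial credit model: P(X = x | theta) for categories 0..m,
            with step difficulties deltas[0..m-1]:

            P(X = x) = exp(sum_{j <= x} (theta - delta_j)) / normalizer
            """
            steps = np.concatenate(([0.0], theta - np.asarray(deltas, float)))
            logits = np.cumsum(steps)            # cumulative sums per category
            p = np.exp(logits - logits.max())    # numerically stable softmax
            return p / p.sum()

        print(pcm_probs(theta=0.5, deltas=[-1.0, 0.0, 1.2]))  # 4 categories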

  12. Analysis on trust influencing factors and trust model from multiple perspectives of online Auction

    Yu, Wang

    2017-10-01

    Current reputation models lack research on online auction trading, so they cannot fully reflect the reputation status of users and may cause operability problems. To evaluate user trust in online auctions correctly, a trust computing model based on multiple influencing factors is established. It aims at overcoming the inefficiency of current trust computing methods and the limitations of traditional theoretical trust models. The improved model comprehensively considers the trust evaluation factors of three types of participants, according to the different participation modes of online auctioneers, to improve the accuracy, effectiveness and robustness of the trust degree. Experiments test the efficiency and performance of our model under different scales of malicious users, in environments like eBay and the Sporas model. The experimental results show that the model proposed in this paper makes up for the deficiencies of existing models and also has better feasibility.

  13. Bayesian models based on test statistics for multiple hypothesis testing problems.

    Ji, Yuan; Lu, Yiling; Mills, Gordon B

    2008-04-01

    We propose a Bayesian method for the problem of multiple hypothesis testing that is routinely encountered in bioinformatics research, such as differential gene expression analysis. Our algorithm is based on modeling the distributions of test statistics under both null and alternative hypotheses. We substantially reduce the complexity of the process of defining posterior model probabilities by modeling the test statistics directly instead of modeling the full data. Computationally, we apply a Bayesian FDR approach to control the number of rejections of null hypotheses. To check whether our model assumptions for the test statistics are valid for various bioinformatics experiments, we also propose a simple graphical model-assessment tool. Using extensive simulations, we demonstrate the performance of our models and the utility of the model-assessment tool. Finally, we apply the proposed methodology to an siRNA screening and a gene expression experiment.
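
    The Bayesian FDR control step mentioned above has a simple generic form: sort hypotheses by posterior probability of the null and reject while the running average stays below the target rate (a standard recipe, not necessarily the paper's exact estimator):

        import numpy as np

        def bayesian_fdr_reject(p_null, alpha=0.05):
            """Reject the hypotheses with the smallest posterior null
            probabilities while the average posterior null probability
            among rejections (the estimated Bayesian FDR) stays <= alpha.
            Returns the indices of the rejected hypotheses."""
            p_null = np.asarray(p_null, float)
            order = np.argsort(p_null)
            running_avg = np.cumsum(p_null[order]) / np.arange(1, p_null.size + 1)
            k = int(np.searchsorted(running_avg, alpha, side='right'))
            return order[:k]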

  14. Using hidden Markov models to align multiple sequences.

    Mount, David W

    2009-07-01

    A hidden Markov model (HMM) is a probabilistic model of a multiple sequence alignment (msa) of proteins. In the model, each column of symbols in the alignment is represented by a frequency distribution of the symbols (called a "state"), and insertions and deletions are represented by other states. One moves through the model along a particular path from state to state in a Markov chain (i.e., random choice of next move), trying to match a given sequence. The next matching symbol is chosen from each state, recording its probability (frequency) and also the probability of going to that state from a previous one (the transition probability). State and transition probabilities are multiplied to obtain a probability of the given sequence. The hidden nature of the HMM is due to the lack of information about the value of a specific state, which is instead represented by a probability distribution over all possible values. This article discusses the advantages and disadvantages of HMMs in msa and presents algorithms for calculating an HMM and the conditions for producing the best HMM.
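
    The path-probability computation described above (multiplying emission and transition probabilities along a path through the states) looks like this for a toy two-state profile HMM (all probabilities are illustrative):

        # toy 2-state profile HMM over the alphabet {A, C}
        emit = {'M1': {'A': 0.9, 'C': 0.1},
                'M2': {'A': 0.2, 'C': 0.8}}
        trans = {('begin', 'M1'): 1.0, ('M1', 'M2'): 1.0, ('M2', 'end'): 1.0}

        def path_probability(seq, path):
            """Multiply transition and emission probabilities along a path."""
            p, prev = 1.0, 'begin'
            for symbol, state in zip(seq, path):
                p *= trans[(prev, state)] * emit[state][symbol]
                prev = state
            return p * trans[(prev, 'end')]

        print(path_probability('AC', ['M1', 'M2']))  # 0.9 * 0.8 = 0.72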

  15. Interconnection blocks: a method for providing reusable, rapid, multiple, aligned and planar microfluidic interconnections

    Sabourin, David; Snakenborg, Detlef; Dufva, Hans Martin

    2009-01-01

    In this paper a method is presented for creating 'interconnection blocks' that are re-usable and provide multiple, aligned and planar microfluidic interconnections. Interconnection blocks made from polydimethylsiloxane allow rapid testing of microfluidic chips and unobstructed microfluidic observ...

  16. An engineering method to estimate the junction temperatures of light-emitting diodes in multiple LED application

    Fu, Xing; Hu, Run; Luo, Xiaobing

    2014-01-01

    Acquiring the junction temperature of a light-emitting diode (LED) is essential for performance evaluation, but it is difficult to obtain in multiple-LED applications. In this paper, an engineering method is presented to estimate the junction temperatures of LEDs in multiple-LED applications. The method is mainly based on an analytical model, and it can be easily applied with some simple measurements. Simulations and experiments were conducted to prove the feasibility of the method; the deviations of the results obtained by the present method from those obtained by simulation and by experiment are less than 2% and 3%, respectively. In the final part of this study, the engineering method was used to analyze the thermal resistances of a street lamp. The material of the lead frame was found to affect the system thermal resistance the most, and the choice of solder material strongly depended on the material of the lead frame.
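
    The kind of analytical model on which such estimates rest is a series thermal-resistance network from junction to ambient; a minimal single-LED sketch (the resistance values are illustrative placeholders, not measured data):

        def junction_temperature(t_ambient, power_heat, resistances):
            """Series thermal network: Tj = Ta + P_heat * sum(R_theta)."""
            return t_ambient + power_heat * sum(resistances)

        # junction-to-solder, solder-to-board, board-to-ambient [K/W]
        r_chain = [8.0, 4.0, 20.0]
        print(junction_temperature(t_ambient=25.0, power_heat=0.7,
                                   resistances=r_chain))   # ~47.4 degC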

  17. A Multiple Case Study Research on Building a Business Model Structure: A Grounded Theory Method

    郑称德; 许爱林; 赵佳英

    2011-01-01

    The purpose of this paper is to develop a business model structure by coding 75 business model innovation cases with the grounded theory approach proposed by Strauss. The work proceeds through three stages. In the open-coding stage, an iterative procedure is used to name, conceptualize and categorize the variables that describe the business models in the cases, in order to formulate the categories, properties and dimensions related to business models. In the axial-coding stage, the categories are distributed, according to their properties, across the six causal levels of the paradigm model. In the third stage, the properties in each level are clustered into different components of the business model. Eventually, a two-dimensional business model structure consisting of six levels and fifteen components is established, and its application is illustrated by using the structure to analyze the business model of a real company in detail. The results show that the business model structure not only effectively improves on the weaknesses of existing model research in component detailing, construct comprehensiveness and model openness, but also constitutes an explanatory model and a formal theory. Thus, the structure can be applied to business model description and assessment and to the summarization of model innovation strategies across contexts. In particular, the structure can serve as a solid foundation for probing complicated issues relevant to business models, as well as a guideline for entrepreneurial innovation and business model improvement.

  18. Analysis and performance estimation of the Conjugate Gradient method on multiple GPUs

    Verschoor, M.; Jalba, A.C.

    2012-01-01

    The Conjugate Gradient (CG) method is a widely used iterative method for solving linear systems described by a (sparse) matrix. The method requires a large number of sparse matrix-vector (SpMV) multiplications, vector reductions and other vector operations to be performed. We present a number of
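
    The per-iteration structure the abstract refers to (one SpMV plus a handful of reductions and AXPY updates) is visible in a bare-bones CG implementation; on GPUs each of these kernels is what gets parallelized (a dense matvec is used here for brevity):

        import numpy as np

        def conjugate_gradient(A, b, tol=1e-8, max_iter=1000):
            """Solve A x = b for a symmetric positive definite A."""
            b = np.asarray(b, dtype=float)
            x = np.zeros_like(b)
            r = b - A @ x                      # residual
            p = r.copy()                       # search direction
            rs = r @ r
            for _ in range(max_iter):
                Ap = A @ p                     # the (Sp)MV: dominant cost
                alpha = rs / (p @ Ap)          # dot product + scalar divide
                x += alpha * p                 # AXPY
                r -= alpha * Ap                # AXPY
                rs_new = r @ r                 # reduction
                if rs_new ** 0.5 < tol:
                    break
                p = r + (rs_new / rs) * p
                rs = rs_new
            return x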

  19. Statistical Analysis of a Class: Monte Carlo and Multiple Imputation Spreadsheet Methods for Estimation and Extrapolation

    Fish, Laurel J.; Halcoussis, Dennis; Phillips, G. Michael

    2017-01-01

    The Monte Carlo method and related multiple imputation methods are traditionally used in math, physics and science to estimate and analyze data and are now becoming standard tools in analyzing business and financial problems. However, few sources explain the application of the Monte Carlo method for individuals and business professionals who are…
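
    A spreadsheet-style Monte Carlo estimate of a class statistic can be reproduced in a few lines; the grade data below are synthetic and the resampling scheme is one common variant of the approach:

        import numpy as np

        rng = np.random.default_rng(42)
        grades = np.array([72, 85, 90, 61, 78, 88, 95, 70])  # observed sample

        # Monte Carlo resampling: estimate the mean and a 90% interval
        boot_means = np.array([rng.choice(grades, size=grades.size,
                                          replace=True).mean()
                               for _ in range(10_000)])
        print(boot_means.mean(), np.percentile(boot_means, [5, 95]))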

  20. Framework for Modelling Multiple Input Complex Aggregations for Interactive Installations

    Padfield, Nicolas; Andreasen, Troels

    2012-01-01

    on fuzzy logic and provides a method for variably balancing interaction and user input with the intention of the artist or director. An experimental design is presented, demonstrating an intuitive interface for parametric modelling of a complex aggregation function. The aggregation function unifies...

  1. Acoustic 3D modeling by the method of integral equations

    Malovichko, M.; Khokhlov, N.; Yavich, N.; Zhdanov, M.

    2018-02-01

    This paper presents a parallel algorithm for frequency-domain acoustic modeling by the method of integral equations (IE). The algorithm is applied to seismic simulation. The IE method reduces the size of the problem but leads to a dense system matrix. A tolerable memory consumption and numerical complexity were achieved by applying an iterative solver, accompanied by an effective matrix-vector multiplication operation based on the fast Fourier transform (FFT). We demonstrate that the IE system matrix is better conditioned than that of the finite-difference (FD) method, and discuss its relation to a specially preconditioned FD matrix. We considered several methods of matrix-vector multiplication for the free-space and layered host models. The developed algorithm and computer code were benchmarked against the FD time-domain solution. It was demonstrated that the method can accurately calculate the seismic field for models with sharp material boundaries and a point source and receiver located close to the free surface. We used OpenMP to speed up the matrix-vector multiplication, while MPI was used to speed up the solution of the system of equations, and also for parallelizing across multiple sources. Practical examples and efficiency tests are presented as well.
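
    The FFT-accelerated matrix-vector product exploited here relies on the convolutional structure of the IE kernel. The one-dimensional version of the idea, embedding a Toeplitz matrix in a circulant one, can be sketched as follows (illustrative; the actual solver works with 3-D grids):

        import numpy as np

        def toeplitz_matvec_fft(first_col, first_row, x):
            """Multiply a Toeplitz matrix (given by its first column and
            first row) by x in O(n log n) via circulant embedding + FFT."""
            n = len(x)
            # circulant first column of size 2n: [col, 0, reversed row tail]
            c = np.concatenate([first_col, [0.0], first_row[:0:-1]])
            y = np.fft.ifft(np.fft.fft(c) * np.fft.fft(np.r_[x, np.zeros(n)]))
            return y[:n].real

        # quick check against dense multiplication
        from scipy.linalg import toeplitz
        col, row = np.random.rand(4), np.random.rand(4)
        row[0] = col[0]
        T, x = toeplitz(col, row), np.random.rand(4)
        print(np.allclose(T @ x, toeplitz_matvec_fft(col, row, x)))  # True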

  2. Solution of Constrained Optimal Control Problems Using Multiple Shooting and ESDIRK Methods

    Capolei, Andrea; Jørgensen, John Bagterp

    2012-01-01

    As we consider stiff systems, implicit solvers with sensitivity computation capabilities for initial value problems must be used in the multiple shooting algorithm. Traditionally, multi-step methods based on the BDF algorithm have been used for such problems. The main novel contribution of this paper is the use of ESDIRK integration methods for solution of the initial value problems and the corresponding sensitivity equations arising in the multiple shooting algorithm. Compared to BDF-methods, ESDIRK-methods are advantageous in multiple shooting algorithms in which restarts and frequent discontinuities on each shooting interval are present. The ESDIRK methods are implemented using an inexact Newton method that reuses the factorization of the iteration matrix for the integration as well as the sensitivity computation. Numerical experiments are provided to demonstrate the algorithm.

  3. A study of single multiplicative neuron model with nonlinear filters for hourly wind speed prediction

    Wu, Xuedong; Zhu, Zhiyu; Su, Xunliang; Fan, Shaosheng; Du, Zhaoping; Chang, Yanchao; Zeng, Qingjun

    2015-01-01

    Wind speed prediction is one important means of guaranteeing that wind energy is integrated smoothly into the whole power system. However, wind power has a non-schedulable nature due to the strongly stochastic nature and dynamic uncertainty of wind speed. Therefore, wind speed prediction is an indispensable requirement for power system operators. Two new approaches for hourly wind speed prediction are developed in this study by integrating the single multiplicative neuron model and iterated nonlinear filters for updating the wind speed sequence accurately. In the presented methods, a nonlinear state-space model is first formed based on the single multiplicative neuron model, and then the iterated nonlinear filters are employed to perform dynamic state estimation on the wind speed sequence with stochastic uncertainty. The suggested approaches are demonstrated using wind speed data from three cases and are compared with autoregressive moving average, artificial neural network, kernel ridge regression based residual active learning, and single multiplicative neuron model methods. Three types of prediction errors, the mean absolute error improvement ratio, and the running time are employed for comparing the performance of the different models. Comparison results from Tables 1-3 indicate that the presented strategies have much better performance for hourly wind speed prediction than the other technologies. - Highlights: • Developed two novel hybrid modeling methods for hourly wind speed prediction. • Uncertainty and fluctuations of wind speed can be better explained by the novel methods. • Proposed strategies have online adaptive learning ability. • Proposed approaches have shown better performance compared with existing approaches. • Comparison and analysis of the two proposed novel models for three cases are provided
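
    The single multiplicative neuron model at the core of both approaches replaces a layer of additive neurons with one product unit; a minimal sketch (a logistic activation is assumed here, as is common for this model, and the weights are illustrative):

        import numpy as np

        def smn_predict(x, w, b):
            """Single multiplicative neuron: the weighted, biased inputs
            are multiplied (not summed) and passed through a logistic
            activation. x, w, b: arrays of length n."""
            net = np.prod(w * x + b)
            return 1.0 / (1.0 + np.exp(-net))

        # one-step wind speed prediction from the last 4 normalized samples
        x = np.array([0.42, 0.45, 0.47, 0.50])
        w = np.array([0.8, 0.6, 1.1, 0.9])      # illustrative trained weights
        b = np.array([0.1, -0.2, 0.05, 0.0])
        print(smn_predict(x, w, b))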

  4. Interaction of multiple biomimetic antimicrobial polymers with model bacterial membranes

    Baul, Upayan, E-mail: upayanb@imsc.res.in; Vemparala, Satyavani, E-mail: vani@imsc.res.in [The Institute of Mathematical Sciences, C.I.T. Campus, Taramani, Chennai 600113 (India); Kuroda, Kenichi, E-mail: kkuroda@umich.edu [Department of Biologic and Materials Sciences, University of Michigan School of Dentistry, Ann Arbor, Michigan 48109 (United States)

    2014-08-28

    Using atomistic molecular dynamics simulations, the interaction of multiple methacrylate-based synthetic random copolymers with prototypical bacterial membranes is investigated. The simulations show that the cationic polymers form a micellar aggregate in the water phase and that the aggregate, when interacting with the bacterial membrane, induces clustering of oppositely charged anionic lipid molecules and enhances ordering of the lipid chains. The model bacterial membrane consequently develops lateral inhomogeneity in its thickness profile compared to the polymer-free system. The individual polymers in the aggregate are released into the bacterial membrane in a phased manner, and the simulations suggest that the most probable location of the partitioned polymers is near the 1-palmitoyl-2-oleoyl-phosphatidylglycerol (POPG) clusters. The partitioned polymers preferentially adopt facially amphiphilic conformations at the lipid-water interface, despite lacking intrinsic secondary structures such as the α-helices or β-sheets found in naturally occurring antimicrobial peptides.

  5. Energy models: methods and trends

    Reuter, A [Division of Energy Management and Planning, Verbundplan, Klagenfurt (Austria)]; Kuehner, R [IER Institute for Energy Economics and the Rational Use of Energy, University of Stuttgart, Stuttgart (Germany)]; Wohlgemuth, N [Department of Economy, University of Klagenfurt, Klagenfurt (Austria)]

    1997-12-31

    Energy, environmental and economic systems do not allow for experimentation, since this would be dangerous, too expensive or even impossible. Instead, mathematical models are applied for energy planning. Experimenting is replaced by varying the structure and some parameters of 'energy models', computing the values of dependent parameters, comparing variations, and interpreting their outcomes. Energy models are as old as computers. In this article the major new developments in energy modeling are pointed out. We distinguish between three drivers of new developments: progress in computer technology, methodological progress, and novel tasks of energy system analysis and planning. 2 figs., 19 refs.

  6. Energy models: methods and trends

    Reuter, A.; Kuehner, R.; Wohlgemuth, N.

    1996-01-01

    Energy, environmental and economic systems do not allow for experimentation, since this would be dangerous, too expensive or even impossible. Instead, mathematical models are applied for energy planning. Experimenting is replaced by varying the structure and some parameters of 'energy models', computing the values of dependent parameters, comparing variations, and interpreting their outcomes. Energy models are as old as computers. In this article the major new developments in energy modeling are pointed out. We distinguish between three drivers of new developments: progress in computer technology, methodological progress, and novel tasks of energy system analysis and planning

  7. A Fractional Supervision Game Model of Multiple Stakeholders and Numerical Simulation

    Rongwu Lu

    2017-01-01

    Considering the widespread occurrence of a certain kind of supervision management problem in many fields, we first build an ordinary supervision game model of multiple stakeholders. Second, a fractional supervision game model is set up and solved based on the theory of fractional calculus and a predictor-corrector numerical approach. Third, phase diagrams and time series graphs are applied to simulate and analyse the dynamic process of the fractional-order game model. Numerical results are given to illustrate our conclusions and relate them to practice.
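
    The predictor-corrector scheme referenced here for fractional-order dynamics is commonly the Adams-Bashforth-Moulton method of Diethelm, Ford and Freed; the sketch below implements that scheme for a Caputo derivative of order 0 < α ≤ 1. The two-player payoff dynamics in the usage lines are hypothetical, not the paper's supervision game.

    ```python
    import numpy as np
    from math import gamma

    def fde_pece(f, y0, alpha, T, N):
        # Adams-Bashforth-Moulton predictor-corrector (Diethelm-Ford-Freed) for
        # the Caputo fractional ODE  D^alpha y(t) = f(t, y),  0 < alpha <= 1.
        h = T / N
        t = np.arange(N + 1) * h
        y = np.zeros((N + 1,) + np.shape(y0)); y[0] = y0
        fk = np.zeros_like(y); fk[0] = f(t[0], y[0])
        for n in range(N):
            j = np.arange(n + 1)
            # predictor weights (fractional Adams-Bashforth)
            b = (h**alpha / alpha) * ((n + 1 - j)**alpha - (n - j)**alpha)
            y_pred = y0 + np.tensordot(b, fk[:n + 1], axes=1) / gamma(alpha)
            # corrector weights (fractional Adams-Moulton)
            a = ((n - j + 2.0)**(alpha + 1) + (n - j)**(alpha + 1)
                 - 2 * (n - j + 1.0)**(alpha + 1))
            a[0] = n**(alpha + 1) - (n - alpha) * (n + 1)**alpha
            y[n + 1] = y0 + (h**alpha / gamma(alpha + 2)) * (
                f(t[n + 1], y_pred) + np.tensordot(a, fk[:n + 1], axes=1))
            fk[n + 1] = f(t[n + 1], y[n + 1])
        return t, y

    # usage with hypothetical two-player replicator-style payoff dynamics
    t, y = fde_pece(lambda t, y: np.array([y[0] * (1 - y[0]) * (2 * y[1] - 1),
                                           y[1] * (1 - y[1]) * (1 - 2 * y[0])]),
                    np.array([0.3, 0.6]), alpha=0.9, T=50.0, N=2000)
    print(y[-1])   # long-run strategy frequencies for this toy game
    ```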

  8. Multi-Frame Rate Based Multiple-Model Training for Robust Speaker Identification of Disguised Voice

    Prasad, Swati; Tan, Zheng-Hua; Prasad, Ramjee

    2013-01-01

    Speaker identification systems are prone to attack when voice disguise is adopted by the user. To address this issue, our paper studies the effect of using different frame rates on the accuracy of a speaker identification system for disguised voice. In addition, a multi-frame rate based multiple-model training method is proposed. The experimental results show the superior performance of the proposed method compared to the commonly used single frame rate method for three types of disguised voice taken from the CHAINS corpus.

  9. The intergenerational multiple deficit model and the case of dyslexia

    Elsje van Bergen

    2014-06-01

    Which children go on to develop dyslexia? Since dyslexia has a multifactorial aetiology, this question can be restated as: what are the factors that put children at high risk of developing dyslexia? It is argued that a useful theoretical framework to address this question is Pennington's (2006) multiple deficit model (MDM). This model replaces models that attribute dyslexia to a single underlying cause. Subsequently, the generalist genes hypothesis for learning (dis)abilities (Plomin & Kovas, 2005) is described and integrated with the MDM. Finally, findings are presented from a longitudinal study of children at family risk of dyslexia. Such studies can contribute to testing and specifying the MDM. In this study, risk factors at both the child and family level were investigated. This led to the proposed intergenerational MDM, in which both parents confer liability via intertwined genetic and environmental pathways. Future scientific directions are discussed to investigate parent-offspring resemblance and transmission patterns, which will shed new light on disorder aetiology.

  10. An Advanced N -body Model for Interacting Multiple Stellar Systems

    Brož, Miroslav [Astronomical Institute of the Charles University, Faculty of Mathematics and Physics, V Holešovičkách 2, CZ-18000 Praha 8 (Czech Republic)

    2017-06-01

    We construct an advanced model for interacting multiple stellar systems in which we compute all trajectories with a numerical N-body integrator, namely the Bulirsch–Stoer integrator from the SWIFT package. We can then derive various observables: astrometric positions, radial velocities, minima timings (TTVs), eclipse durations, interferometric visibilities, closure phases, synthetic spectra, spectral energy distributions, and even complete light curves. We use a modified version of the Wilson–Devinney code for the latter, in which the instantaneous true phase and inclination of the eclipsing binary are governed by the N-body integration. If all of these types of observations are at one's disposal, a joint χ² metric and an optimization algorithm (a simplex or simulated annealing) allow one to search for a global minimum and construct very robust models of stellar systems. At the same time, our N-body model is free from artifacts that may arise if mutual gravitational interactions among all components are not self-consistently accounted for. Finally, we present a number of examples showing dynamical effects that can be studied with our code and we discuss how systematic errors may affect the results (and how to prevent this from happening).

  11. A diagnostic tree model for polytomous responses with multiple strategies.

    Ma, Wenchao

    2018-04-23

    Constructed-response items have been shown to be appropriate for cognitively diagnostic assessments because students' problem-solving procedures can be observed, providing direct evidence for making inferences about their proficiency. However, multiple strategies used by students make item scoring and psychometric analyses challenging. This study introduces the so-called two-digit scoring scheme into diagnostic assessments to record both students' partial credits and their strategies. This study also proposes a diagnostic tree model (DTM) by integrating the cognitive diagnosis models with the tree model to analyse the items scored using the two-digit rubrics. Both convergent and divergent tree structures are considered to accommodate various scoring rules. The MMLE/EM algorithm is used for item parameter estimation of the DTM, and has been shown to provide good parameter recovery under varied conditions in a simulation study. A set of data from TIMSS 2007 mathematics assessment is analysed to illustrate the use of the two-digit scoring scheme and the DTM. © 2018 The British Psychological Society.

  12. A minimal model for multiple epidemics and immunity spreading.

    Kim Sneppen

    Pathogens and parasites are ubiquitous in the living world, being limited only by the availability of suitable hosts. The ability to transmit a particular disease depends on competing infections as well as on the status of host immunity. Multiple diseases compete for the same resource and their fates are coupled to each other. Such couplings have many facets, for example cross-immunization between related influenza strains, mutual inhibition by killing the host, or possibly even a mutual catalytic effect if host immunity is impaired. We here introduce a minimal model for an unlimited number of unrelated pathogens whose interaction is simplified to mutual exclusion. The model incorporates an ongoing development of host immunity to past diseases, while leaving the system open for the emergence of new diseases. The model exhibits rich dynamical behavior with interacting infection waves, leaving broad trails of immunization in the host population. The obtained immunization pattern depends only on the system size and on the mutation rate that initiates new diseases.

  13. A minimal model for multiple epidemics and immunity spreading.

    Sneppen, Kim; Trusina, Ala; Jensen, Mogens H; Bornholdt, Stefan

    2010-10-18

    Pathogens and parasites are ubiquitous in the living world, being limited only by the availability of suitable hosts. The ability to transmit a particular disease depends on competing infections as well as on the status of host immunity. Multiple diseases compete for the same resource and their fates are coupled to each other. Such couplings have many facets, for example cross-immunization between related influenza strains, mutual inhibition by killing the host, or possibly even a mutual catalytic effect if host immunity is impaired. We here introduce a minimal model for an unlimited number of unrelated pathogens whose interaction is simplified to mutual exclusion. The model incorporates an ongoing development of host immunity to past diseases, while leaving the system open for the emergence of new diseases. The model exhibits rich dynamical behavior with interacting infection waves, leaving broad trails of immunization in the host population. The obtained immunization pattern depends only on the system size and on the mutation rate that initiates new diseases.
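
    A toy implementation can make the model's three ingredients concrete: mutual exclusion (an infected host cannot catch a second disease), lasting immunity to recovered-from diseases, and new diseases injected at a mutation rate. The one-dimensional ring topology and all parameter values below are assumptions of this sketch; the published model's update rules may differ.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    N_HOSTS, STEPS, MUT_RATE, T_INF = 400, 2000, 0.001, 5

    state = np.full(N_HOSTS, -1)            # -1: uninfected, k >= 0: has disease k
    clock = np.zeros(N_HOSTS, dtype=int)    # remaining infectious time
    immune = [set() for _ in range(N_HOSTS)]
    n_diseases = 0

    for step in range(STEPS):
        infected = np.flatnonzero(state >= 0)
        # transmission to ring neighbours; mutual exclusion: only uninfected,
        # non-immune hosts can be colonized
        for i in infected:
            d = state[i]
            for j in ((i - 1) % N_HOSTS, (i + 1) % N_HOSTS):
                if state[j] < 0 and d not in immune[j]:
                    state[j], clock[j] = d, T_INF
        # recovery confers lasting immunity to that particular disease
        for i in infected:
            clock[i] -= 1
            if clock[i] <= 0:
                immune[i].add(state[i])
                state[i] = -1
        # mutation occasionally seeds a brand-new disease in a random host
        if rng.random() < MUT_RATE * N_HOSTS:
            i = rng.integers(N_HOSTS)
            if state[i] < 0:
                state[i], clock[i] = n_diseases, T_INF
                n_diseases += 1

    print(n_diseases, "diseases emerged; mean immunities per host:",
          round(float(np.mean([len(s) for s in immune])), 2))
    ```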

  14. Modeling Pan Evaporation for Kuwait by Multiple Linear Regression

    Almedeij, Jaber

    2012-01-01

    Evaporation is an important parameter for many projects related to hydrology and water resources systems. This paper constitutes the first study conducted in Kuwait to obtain empirical relations for estimating daily and monthly pan evaporation as functions of the available meteorological data of temperature, relative humidity, and wind speed. The data used for the modeling are daily measurements of substantial continuity and coverage within a period of 17 years, between January 1993 and December 2009, which can be considered representative of the desert climate of the urban zone of the country. The multiple linear regression technique is used with a variable selection procedure for fitting the best model forms. The correlations of evaporation with temperature and relative humidity are also transformed, in order to linearize the existing curvilinear patterns of the data, by using power and exponential functions, respectively. The evaporation models suggested with the best variable combinations were shown to produce results in reasonable agreement with observed values. PMID:23226984
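
    A sketch of the fitting step: the curvilinear temperature and humidity dependencies are linearized with assumed power and exponential transforms, after which ordinary least squares gives the regression coefficients. The data and the transform constants here are synthetic placeholders, not the Kuwait measurements.

    ```python
    import numpy as np

    # synthetic stand-ins for the daily meteorological record
    rng = np.random.default_rng(2)
    T = rng.uniform(15, 48, 300)            # air temperature, deg C
    RH = rng.uniform(5, 80, 300)            # relative humidity, %
    U = rng.uniform(0.5, 9.0, 300)          # wind speed, m/s
    E = (0.002 * T**1.8 + 6.0 * np.exp(-0.02 * RH) + 0.35 * U
         + 0.3 * rng.standard_normal(300))  # pan evaporation, mm/day

    # linearize the curvilinear dependencies (assumed power law in T,
    # exponential in RH), then fit by ordinary least squares
    X = np.column_stack([np.ones_like(T), T**1.8, np.exp(-0.02 * RH), U])
    coef, *_ = np.linalg.lstsq(X, E, rcond=None)
    pred = X @ coef
    r2 = 1 - np.sum((E - pred)**2) / np.sum((E - E.mean())**2)
    print("coefficients:", coef.round(3), " R^2:", round(r2, 3))
    ```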

  15. Multiple co-clustering based on nonparametric mixture models with heterogeneous marginal distributions.

    Tokuda, Tomoki; Yoshimoto, Junichiro; Shimizu, Yu; Okada, Go; Takamura, Masahiro; Okamoto, Yasumasa; Yamawaki, Shigeto; Doya, Kenji

    2017-01-01

    We propose a novel method for multiple clustering, which is useful for analysis of high-dimensional data containing heterogeneous types of features. Our method is based on nonparametric Bayesian mixture models in which features are automatically partitioned (into views) for each clustering solution. This feature partition works as feature selection for a particular clustering solution, which screens out irrelevant features. To make our method applicable to high-dimensional data, a co-clustering structure is newly introduced for each view. Further, the outstanding novelty of our method is that we simultaneously model different distribution families, such as Gaussian, Poisson, and multinomial distributions in each cluster block, which widens areas of application to real data. We apply the proposed method to synthetic and real data, and show that our method outperforms other multiple clustering methods both in recovering true cluster structures and in computation time. Finally, we apply our method to a depression dataset with no true cluster structure available, from which useful inferences are drawn about possible clustering structures of the data.

  16. Multiple co-clustering based on nonparametric mixture models with heterogeneous marginal distributions.

    Tomoki Tokuda

    We propose a novel method for multiple clustering, which is useful for analysis of high-dimensional data containing heterogeneous types of features. Our method is based on nonparametric Bayesian mixture models in which features are automatically partitioned (into views) for each clustering solution. This feature partition works as feature selection for a particular clustering solution, which screens out irrelevant features. To make our method applicable to high-dimensional data, a co-clustering structure is newly introduced for each view. Further, the outstanding novelty of our method is that we simultaneously model different distribution families, such as Gaussian, Poisson, and multinomial distributions in each cluster block, which widens areas of application to real data. We apply the proposed method to synthetic and real data, and show that our method outperforms other multiple clustering methods both in recovering true cluster structures and in computation time. Finally, we apply our method to a depression dataset with no true cluster structure available, from which useful inferences are drawn about possible clustering structures of the data.

  17. Multiple co-clustering based on nonparametric mixture models with heterogeneous marginal distributions

    Yoshimoto, Junichiro; Shimizu, Yu; Okada, Go; Takamura, Masahiro; Okamoto, Yasumasa; Yamawaki, Shigeto; Doya, Kenji

    2017-01-01

    We propose a novel method for multiple clustering, which is useful for analysis of high-dimensional data containing heterogeneous types of features. Our method is based on nonparametric Bayesian mixture models in which features are automatically partitioned (into views) for each clustering solution. This feature partition works as feature selection for a particular clustering solution, which screens out irrelevant features. To make our method applicable to high-dimensional data, a co-clustering structure is newly introduced for each view. Further, the outstanding novelty of our method is that we simultaneously model different distribution families, such as Gaussian, Poisson, and multinomial distributions in each cluster block, which widens areas of application to real data. We apply the proposed method to synthetic and real data, and show that our method outperforms other multiple clustering methods both in recovering true cluster structures and in computation time. Finally, we apply our method to a depression dataset with no true cluster structure available, from which useful inferences are drawn about possible clustering structures of the data. PMID:29049392

  18. Multiple imputation to account for measurement error in marginal structural models

    Edwards, Jessie K.; Cole, Stephen R.; Westreich, Daniel; Crane, Heidi; Eron, Joseph J.; Mathews, W. Christopher; Moore, Richard; Boswell, Stephen L.; Lesko, Catherine R.; Mugavero, Michael J.

    2015-01-01

    Background: Marginal structural models are an important tool for observational studies. These models typically assume that variables are measured without error. We describe a method to account for differential and non-differential measurement error in a marginal structural model. Methods: We illustrate the method by estimating the joint effects of antiretroviral therapy initiation and current smoking on all-cause mortality in a United States cohort of 12,290 patients with HIV followed for up to 5 years between 1998 and 2011. Smoking status was likely measured with error, but a subset of 3686 patients who reported smoking status on separate questionnaires composed an internal validation subgroup. We compared a standard joint marginal structural model fit using inverse probability weights to a model that also accounted for misclassification of smoking status using multiple imputation. Results: In the standard analysis, current smoking was not associated with increased risk of mortality. After accounting for misclassification, current smoking without therapy was associated with increased mortality [hazard ratio (HR): 1.2 (95% CI: 0.6, 2.3)]. The HR for current smoking and therapy [0.4 (95% CI: 0.2, 0.7)] was similar to the HR for no smoking and therapy [0.4 (95% CI: 0.2, 0.6)]. Conclusions: Multiple imputation can be used to account for measurement error in concert with methods for causal inference to strengthen results from observational studies. PMID:26214338
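
    The imputation-and-pooling logic can be sketched as follows: a measurement model for the true exposure is fitted on the validation subgroup, the mismeasured exposure is imputed several times in the main data, an effect estimate is computed per imputation, and the estimates are combined with Rubin's rules. To keep the sketch short it substitutes a simple risk difference for the paper's inverse-probability-weighted hazard model; all data are synthetic.

    ```python
    import numpy as np

    rng = np.random.default_rng(3)
    n, m_imp = 5000, 20

    # synthetic cohort: true smoking S, misclassified report W, binary outcome Y
    S = rng.binomial(1, 0.4, n)
    W = np.where(rng.random(n) < 0.85, S, 1 - S)    # ~15% misclassification
    Y = rng.binomial(1, 0.10 + 0.10 * S, n)
    valid = rng.random(n) < 0.3                     # internal validation subgroup

    # measurement model estimated on the validation data: P(S = 1 | W, Y)
    p_s = {(w, y): S[valid & (W == w) & (Y == y)].mean()
           for w in (0, 1) for y in (0, 1)}

    ests, wvars = [], []
    for _ in range(m_imp):
        S_imp = S.astype(float)
        miss = ~valid                               # impute where S is unobserved
        S_imp[miss] = (rng.random(miss.sum())
                       < np.array([p_s[w, y] for w, y in zip(W[miss], Y[miss])]))
        # per-imputation effect estimate (simple risk difference as a stand-in
        # for the paper's weighted hazard model)
        rd = Y[S_imp == 1].mean() - Y[S_imp == 0].mean()
        v = (Y[S_imp == 1].var() / (S_imp == 1).sum()
             + Y[S_imp == 0].var() / (S_imp == 0).sum())
        ests.append(rd); wvars.append(v)

    # Rubin's rules: total variance = within + (1 + 1/m) * between
    q = float(np.mean(ests))
    t_var = np.mean(wvars) + (1 + 1 / m_imp) * np.var(ests, ddof=1)
    print(f"risk difference {q:.3f} +/- {1.96 * np.sqrt(t_var):.3f}")
    ```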

  19. Novel method to load multiple genes onto a mammalian artificial chromosome.

    Tóth, Anna; Fodor, Katalin; Praznovszky, Tünde; Tubak, Vilmos; Udvardy, Andor; Hadlaczky, Gyula; Katona, Robert L

    2014-01-01

    Mammalian artificial chromosomes are natural chromosome-based vectors that may carry a vast amount of genetic material in terms of both size and number. They are reasonably stable and segregate well in both mitosis and meiosis. A platform artificial chromosome expression system (ACEs) was earlier described with multiple loading sites for a modified lambda-integrase enzyme. It has been shown that this ACEs is suitable for high-level industrial protein production and the treatment of a mouse model for a devastating human disorder, Krabbe's disease. ACEs-treated mutant mice carrying a therapeutic gene lived more than four times longer than untreated counterparts. This novel gene therapy method is called combined mammalian artificial chromosome-stem cell therapy. At present, this method suffers from the limitation that a new selection marker gene should be present for each therapeutic gene loaded onto the ACEs. Complex diseases require the cooperative action of several genes for treatment, but only a limited number of selection marker genes are available and there is also a risk of serious side-effects caused by the unwanted expression of these marker genes in mammalian cells, organs and organisms. We describe here a novel method to load multiple genes onto the ACEs by using only two selectable marker genes. These markers may be removed from the ACEs before therapeutic application. This novel technology could revolutionize gene therapeutic applications targeting the treatment of complex disorders and cancers. It could also speed up cell therapy by allowing researchers to engineer a chromosome with a predetermined set of genetic factors to differentiate adult stem cells, embryonic stem cells and induced pluripotent stem (iPS) cells into cell types of therapeutic value. It is also a suitable tool for the investigation of complex biochemical pathways in basic science by producing an ACEs with several genes from a signal transduction pathway of interest.

  20. Novel method to load multiple genes onto a mammalian artificial chromosome.

    Anna Tóth

    Mammalian artificial chromosomes are natural chromosome-based vectors that may carry a vast amount of genetic material in terms of both size and number. They are reasonably stable and segregate well in both mitosis and meiosis. A platform artificial chromosome expression system (ACEs) was earlier described with multiple loading sites for a modified lambda-integrase enzyme. It has been shown that this ACEs is suitable for high-level industrial protein production and the treatment of a mouse model for a devastating human disorder, Krabbe's disease. ACEs-treated mutant mice carrying a therapeutic gene lived more than four times longer than untreated counterparts. This novel gene therapy method is called combined mammalian artificial chromosome-stem cell therapy. At present, this method suffers from the limitation that a new selection marker gene should be present for each therapeutic gene loaded onto the ACEs. Complex diseases require the cooperative action of several genes for treatment, but only a limited number of selection marker genes are available and there is also a risk of serious side-effects caused by the unwanted expression of these marker genes in mammalian cells, organs and organisms. We describe here a novel method to load multiple genes onto the ACEs by using only two selectable marker genes. These markers may be removed from the ACEs before therapeutic application. This novel technology could revolutionize gene therapeutic applications targeting the treatment of complex disorders and cancers. It could also speed up cell therapy by allowing researchers to engineer a chromosome with a predetermined set of genetic factors to differentiate adult stem cells, embryonic stem cells and induced pluripotent stem (iPS) cells into cell types of therapeutic value. It is also a suitable tool for the investigation of complex biochemical pathways in basic science by producing an ACEs with several genes from a signal transduction pathway of interest.

  1. Candidate Prediction Models and Methods

    Nielsen, Henrik Aalborg; Nielsen, Torben Skov; Madsen, Henrik

    2005-01-01

    This document lists candidate prediction models for Work Package 3 (WP3) of the PSO-project called "Intelligent wind power prediction systems" (FU4101). The main focus is on the models transforming numerical weather predictions into predictions of power production. The document also outlines the possibilities w.r.t. the different numerical weather predictions actually available to the project.

  2. Localization of a small change in a multiple scattering environment without modeling of the actual medium

    Rakotonarivo, Sandrine; Walker, S. C.; Kuperman, W. A.; Roux, Philippe

    2011-01-01

    A method to actively localize a small perturbation in a multiple scattering medium using a collection of remote acoustic sensors is presented. The approach requires only minimal modeling and no knowledge of the scatterer distribution or of the properties of the scattering medium and the perturbation. The medium is ensonified before and after a perturbation is introduced. The coherent difference between the measured signals then reveals all field components that have interacted with the perturbation.

  3. QSAR Study of Insecticides of Phthalamide Derivatives Using Multiple Linear Regression and Artificial Neural Network Methods

    Adi Syahputra

    2014-03-01

    Quantitative structure-activity relationship (QSAR) modeling for 21 insecticides of phthalamides containing hydrazone (PCH) was studied using multiple linear regression (MLR), principal component regression (PCR) and artificial neural network (ANN) methods. Five descriptors were included in the model for MLR and ANN analysis, and five latent variables obtained from principal component analysis (PCA) were used in PCR analysis. Calculation of descriptors was performed using the semi-empirical PM6 method. ANN analysis was found to be a superior statistical technique compared to the other methods and gave a good correlation between descriptors and activity (r² = 0.84). Based on the obtained model, we have successfully designed some new insecticides with higher predicted activity than those of previously synthesized compounds, e.g. 2-(decalinecarbamoyl)-5-chloro-N'-((5-methylthiophen-2-yl)methylene)benzohydrazide, 2-(decalinecarbamoyl)-5-chloro-N'-((thiophen-2-yl)methylene)benzohydrazide and 2-(decalinecarbamoyl)-N'-(4-fluorobenzylidene)-5-chlorobenzohydrazide, with predicted log LC50 values of 1.640, 1.672, and 1.769, respectively.
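
    The principal component regression step mentioned in the abstract can be sketched in a few lines: the descriptor matrix is projected onto its latent variables via SVD and the activity is regressed on those scores. Descriptors and activities below are random placeholders, not the PM6-derived values; retaining all five components makes PCR coincide with MLR, while truncating gives PCR proper.

    ```python
    import numpy as np

    rng = np.random.default_rng(4)
    X = rng.standard_normal((21, 5))     # 5 hypothetical PM6 descriptors, 21 compounds
    y = (X @ np.array([0.4, -0.3, 0.2, 0.1, -0.2]) + 1.5
         + 0.1 * rng.standard_normal(21))            # stand-in log LC50 values

    # principal component regression: regress activity on PCA latent variables
    Xc = X - X.mean(axis=0)
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    n_comp = 5                            # all five -> identical to plain MLR
    Z = Xc @ Vt.T[:, :n_comp]
    A = np.column_stack([np.ones(len(Z)), Z])
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    pred = A @ coef
    r2 = 1 - np.sum((y - pred)**2) / np.sum((y - y.mean())**2)
    print("r^2 =", round(r2, 3))
    ```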

  4. Asymptotic and resampling strategies for assessing and comparing indirect effects in multiple mediator models.

    Preacher, Kristopher J; Hayes, Andrew F

    2008-08-01

    Hypotheses involving mediation are common in the behavioral sciences. Mediation exists when a predictor affects a dependent variable indirectly through at least one intervening variable, or mediator. Methods to assess mediation involving multiple simultaneous mediators have received little attention in the methodological literature despite a clear need. We provide an overview of simple and multiple mediation and explore three approaches that can be used to investigate indirect processes, as well as methods for contrasting two or more mediators within a single model. We present an illustrative example, assessing and contrasting potential mediators of the relationship between the helpfulness of socialization agents and job satisfaction. We also provide SAS and SPSS macros, as well as Mplus and LISREL syntax, to facilitate the use of these methods in applications.
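
    A resampling strategy of the kind discussed here can be sketched with a percentile bootstrap: each indirect effect is the product of the X-to-mediator path and the mediator-to-Y path (with all mediators entered simultaneously), and resampling cases yields confidence intervals for each effect and for pairwise contrasts. Variable names and effect sizes below are illustrative.

    ```python
    import numpy as np

    def indirect_effects(X, M, Y):
        # a_j * b_j for each mediator j in a parallel multiple-mediator model
        n, k = M.shape
        a = np.array([np.linalg.lstsq(np.column_stack([np.ones(n), X]),
                                      M[:, j], rcond=None)[0][1] for j in range(k)])
        D = np.column_stack([np.ones(n), X, M])      # Y ~ X + all mediators
        b = np.linalg.lstsq(D, Y, rcond=None)[0][2:]
        return a * b

    rng = np.random.default_rng(5)
    n = 300
    X = rng.standard_normal(n)                       # e.g. helpfulness of agents
    M = np.column_stack([0.5 * X + rng.standard_normal(n),
                         0.3 * X + rng.standard_normal(n)])   # two mediators
    Y = 0.4 * M[:, 0] + 0.1 * M[:, 1] + 0.2 * X + rng.standard_normal(n)

    # percentile bootstrap for each indirect effect and for their contrast
    boot = np.array([indirect_effects(X[idx], M[idx], Y[idx])
                     for idx in (rng.integers(0, n, n) for _ in range(2000))])
    for j in range(M.shape[1]):
        lo, hi = np.percentile(boot[:, j], [2.5, 97.5])
        print(f"indirect effect via M{j + 1}: 95% CI [{lo:.3f}, {hi:.3f}]")
    lo, hi = np.percentile(boot[:, 0] - boot[:, 1], [2.5, 97.5])
    print(f"contrast M1 - M2: 95% CI [{lo:.3f}, {hi:.3f}]")
    ```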

  5. Knowledge representation to support reasoning based on multiple models

    Gillam, April; Seidel, Jorge P.; Parker, Alice C.

    1990-01-01

    Model Based Reasoning is a powerful tool used to design and analyze systems, which are often composed of numerous interactive, interrelated subsystems. Models of the subsystems are written independently and may be used together while they are still under development. Thus the models are not static: they evolve as information becomes obsolete, as improved artifact descriptions are developed, and as system capabilities change. Researchers are using three methods to support knowledge/data base growth, to track the model evolution, and to handle knowledge from diverse domains. First, the representation methodology is based on having pools, or types, of knowledge from which each model is constructed. Second, information is made explicit; this includes the interactions between components, the description of the artifact structure, and the constraints and limitations of the models. The third principle we have followed is the separation of the data and knowledge from the inferencing and equation-solving mechanisms. This methodology is used in two distinct knowledge-based systems: one for the design of space systems and another for the synthesis of VLSI circuits. It has facilitated the growth and evolution of our models, made the accountability of results explicit, and provided credibility for the user community. These capabilities have been implemented and are being used in actual design projects.

  6. Estimating HIV incidence among adults in Kenya and Uganda: a systematic comparison of multiple methods.

    Andrea A Kim

    2011-03-01

    Several approaches have been used for measuring HIV incidence in large areas, yet each presents specific challenges in incidence estimation. We present a comparison of incidence estimates for Kenya and Uganda using multiple methods: (1) Epidemic Projections Package (EPP) and Spectrum models fitted to HIV prevalence from antenatal clinics (ANC) and national population-based surveys (NPS) in Kenya (2003, 2007) and Uganda (2004/2005); (2) a survey-derived model to infer age-specific incidence between two sequential NPS; (3) an assay-derived measurement in NPS using the BED IgG capture enzyme immunoassay, adjusted for misclassification using a locally derived false-recent rate (FRR) for the assay; (4) community cohorts in Uganda; (5) prevalence trends in young ANC attendees. EPP/Spectrum-derived and survey-derived modeled estimates were similar: 0.67 [uncertainty range: 0.60, 0.74] and 0.6 [confidence interval (CI): 0.4, 0.9], respectively, for Uganda (2005), and 0.72 [uncertainty range: 0.70, 0.74] and 0.7 [CI: 0.3, 1.1], respectively, for Kenya (2007). Using a local FRR, assay-derived incidence estimates were 0.3 [CI: 0.0, 0.9] for Uganda (2004/2005) and 0.6 [CI: 0, 1.3] for Kenya (2007). Incidence trends were similar across all methods for both Uganda and Kenya. Triangulation of methods is recommended to determine the best-supported estimates of incidence to guide programs. Assay-derived incidence estimates are sensitive to the level of the assay's FRR, and uncertainty around high FRRs can significantly impact the validity of the estimate. Systematic evaluations of new and existing incidence assays are needed to study the level, distribution, and determinants of the FRR and to guide whether incidence assays can produce reliable estimates of national HIV incidence.

  7. Innovative supply chain optimization models with multiple uncertainty factors

    Choi, Tsan Ming; Govindan, Kannan; Li, Xiang

    2017-01-01

    Uncertainty is an inherent factor that affects all dimensions of supply chain activities. In today's business environment, initiatives to deal with one specific type of uncertainty might not be effective, since other types of uncertainty factors and disruptions may be present. These factors relate to supply chain competition and coordination. Thus, achieving a more efficient and effective supply chain requires the deployment of innovative optimization models and novel methods. This preface provides a concise review of critical research issues regarding innovative supply chain optimization models...

  8. Shared mental models of integrated care: aligning multiple stakeholder perspectives.

    Evans, Jenna M; Baker, G Ross

    2012-01-01

    Health service organizations and professionals are under increasing pressure to work together to deliver integrated patient care. A common understanding of integration strategies may facilitate the delivery of integrated care across inter-organizational and inter-professional boundaries. This paper aims to build a framework for exploring and potentially aligning multiple stakeholder perspectives of systems integration. The authors draw from the literature on shared mental models, strategic management and change, framing, stakeholder management, and systems theory to develop a new construct, Mental Models of Integrated Care (MMIC), which consists of three types of mental models, i.e. integration-task, system-role, and integration-belief. The MMIC construct encompasses many of the known barriers and enablers to integrating care while also providing a comprehensive, theory-based framework of psychological factors that may influence inter-organizational and inter-professional relations. While the existing literature on integration focuses on optimizing structures and processes, the MMIC construct emphasizes the convergence and divergence of stakeholders' knowledge and beliefs, and how these underlying cognitions influence interactions (or lack thereof) across the continuum of care. MMIC may help to: explain what differentiates effective from ineffective integration initiatives; determine system readiness to integrate; diagnose integration problems; and develop interventions for enhancing integrative processes and ultimately the delivery of integrated care. Global interest and ongoing challenges in integrating care underline the need for research on the mental models that characterize the behaviors of actors within health systems; the proposed framework offers a starting point for applying a cognitive perspective to health systems integration.

  9. Estimating Model Probabilities using Thermodynamic Markov Chain Monte Carlo Methods

    Ye, M.; Liu, P.; Beerli, P.; Lu, D.; Hill, M. C.

    2014-12-01

    Markov chain Monte Carlo (MCMC) methods are widely used to evaluate model probability for quantifying model uncertainty. In a general procedure, MCMC simulations are first conducted for each individual model, and the MCMC parameter samples are then used to approximate the marginal likelihood of the model by calculating the geometric mean of the joint likelihood of the model and its parameters. It has been found that this geometric-mean estimator suffers from a numerically low convergence rate. A simple test case shows that even millions of MCMC samples are insufficient to yield an accurate estimate of the marginal likelihood. To resolve this problem, a thermodynamic method is used, in which multiple MCMC runs are conducted with different values of a heating coefficient between zero and one. When the heating coefficient is zero, the MCMC run is equivalent to a random walk MC in the prior parameter space; when the heating coefficient is one, the MCMC run is the conventional one. For a simple case with an analytical form of the marginal likelihood, the thermodynamic method yields a more accurate estimate than the geometric-mean method. This is also demonstrated for a case of groundwater modeling with consideration of four alternative models postulated based on different conceptualizations of a confining layer. This groundwater example shows that model probabilities estimated using the thermodynamic method are more reasonable than those obtained using the geometric method. The thermodynamic method is general, and can be used for a wide range of environmental problems for model uncertainty quantification.
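
    A compact sketch of the thermodynamic (power posterior / path sampling) idea: MCMC chains are run at several heating coefficients β, the expected log-likelihood is recorded at each β, and the log marginal likelihood is the integral of that expectation over β from 0 to 1. The conjugate normal toy model allows comparison with the exact evidence; the cubic β schedule and chain lengths are arbitrary choices of this sketch.

    ```python
    import numpy as np
    from scipy.stats import norm, multivariate_normal

    # toy model: y_i ~ N(theta, 1) with prior theta ~ N(0, 10^2)
    rng = np.random.default_rng(6)
    y = rng.normal(1.0, 1.0, 25)
    log_like = lambda th: norm.logpdf(y, th, 1.0).sum()
    log_prior = lambda th: norm.logpdf(th, 0.0, 10.0)

    def power_posterior_mean_ll(beta, n=10000, step=1.5):
        # Metropolis sampling of prior(theta) * likelihood(theta)^beta,
        # returning the chain average of the log-likelihood (after burn-in)
        th, ll = 0.0, log_like(0.0)
        out = np.empty(n)
        for i in range(n):
            prop = th + step * rng.standard_normal()
            ll_p = log_like(prop)
            if np.log(rng.random()) < (beta * (ll_p - ll)
                                       + log_prior(prop) - log_prior(th)):
                th, ll = prop, ll_p
            out[i] = ll
        return out[n // 4:].mean()

    # thermodynamic integration: log Z = integral_0^1 E_beta[log L] d(beta)
    betas = np.linspace(0.0, 1.0, 11)**3        # denser near beta = 0
    e_ll = [power_posterior_mean_ll(b) for b in betas]
    log_z = np.trapz(e_ll, betas)

    # exact evidence of the conjugate normal model, for comparison
    exact = multivariate_normal.logpdf(y, mean=np.zeros(25),
                                       cov=np.eye(25) + 100.0)
    print("TI estimate:", round(log_z, 3), " exact:", round(float(exact), 3))
    ```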

  10. An Advanced Method to Apply Multiple Rainfall Thresholds for Urban Flood Warnings

    Jiun-Huei Jang

    2015-11-01

    Issuing warnings to the public when rainfall exceeds given thresholds is a simple and widely used method to minimize flood risk; however, this method lacks sophistication when compared with hydrodynamic simulation. In this study, an advanced methodology is proposed to improve the warning effectiveness of the rainfall threshold method for urban areas through deterministic-stochastic modeling, without sacrificing simplicity and efficiency. With regard to flooding mechanisms, rainfall thresholds of different durations are divided into two groups accounting for flooding caused by drainage overload and by disastrous runoff, which helps in grading the warning level in terms of urgency and severity when the two are observed together. A flood warning is then classified into one of four levels distinguished by green, yellow, orange, and red lights in ascending order of priority, indicating the required measures, from standby, flood defense, and evacuation to rescue, respectively. The proposed methodology is tested on 22 historical events from the last 10 years for 252 urbanized townships in Taiwan. The results show satisfactory accuracy in predicting the occurrence and timing of flooding, with a logical warning time series for taking progressive measures. For systems with multiple rainfall thresholds already in place, the methodology can be used to ensure better application of rainfall thresholds in urban flood warnings.
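
    The grading logic can be sketched as a small decision rule: one threshold group flags drainage overload (short accumulations), the other flags disastrous runoff (long accumulations), and observing both together escalates the warning. The threshold values, durations, and the exact mapping of combinations to the four lights below are illustrative assumptions, not the calibrated Taiwan thresholds.

    ```python
    # illustrative thresholds: mm of rain per accumulation duration (hours)
    SHORT = {1: 40.0, 3: 70.0}        # short durations -> drainage overload
    LONG = {12: 200.0, 24: 350.0}     # long durations -> disastrous runoff

    def warning_level(rain):
        """rain: dict mapping accumulation duration (h) to rainfall depth (mm)."""
        overload = any(rain.get(d, 0.0) >= v for d, v in SHORT.items())
        runoff = any(rain.get(d, 0.0) >= v for d, v in LONG.items())
        if overload and runoff:
            return "red (rescue)"          # both mechanisms active
        if runoff:
            return "orange (evacuation)"
        if overload:
            return "yellow (flood defense)"
        return "green (standby)"

    print(warning_level({1: 45.0, 12: 120.0}))   # -> yellow (flood defense)
    ```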

  11. Eye Movement Abnormalities in Multiple Sclerosis: Pathogenesis, Modeling, and Treatment

    Alessandro Serra

    2018-02-01

    Multiple sclerosis (MS) commonly causes eye movement abnormalities that may have a significant impact on patients' disability. Inflammatory demyelinating lesions, especially those occurring in the posterior fossa, result in a wide range of disorders, spanning from acquired pendular nystagmus (APN) to internuclear ophthalmoplegia (INO), among the most common. As the control of eye movements is well understood in terms of its anatomical substrate and underlying physiological network, studying ocular motor abnormalities in MS provides a unique opportunity to gain insights into mechanisms of disease. Quantitative measurement and modeling of eye movement disorders, such as INO, may lead to a better understanding of common symptoms encountered in MS, such as Uhthoff's phenomenon and fatigue. In turn, the pathophysiology of a range of eye movement abnormalities, such as APN, has been clarified based on correlation of experimental models with lesion localization by neuroimaging in MS. Eye movement disorders have the potential to be utilized as structural and functional biomarkers of early cognitive deficit, possibly helping to assess disease status and progression, and to serve as a platform and functional outcome to test novel therapeutic agents for MS. Knowledge of neuropharmacology applied to eye movement dysfunction has guided the testing and use of a number of pharmacological agents to treat some eye movement disorders found in MS, such as APN and other forms of central nystagmus.

  12. 3D Face modeling using the multi-deformable method.

    Hwang, Jinkyu; Yu, Sunjin; Kim, Joongrock; Lee, Sangyoun

    2012-09-25

    In this paper, we focus on the problem of the accuracy of 3D face modeling techniques using corresponding features in multiple views, which is quite sensitive to feature extraction errors. To solve the problem, we adopt a statistical model-based 3D face modeling approach in a mirror system consisting of two mirrors and a camera. The overall procedure of our 3D facial modeling method has two primary steps: 3D facial shape estimation using a multiple 3D face deformable model, and texture mapping using seamless cloning, which is a type of gradient-domain blending. To evaluate our method's performance, we generate 3D faces of 30 individuals and then carry out two tests: an accuracy test and a robustness test. Our method not only shows highly accurate 3D face shape results when compared with the ground truth, but is also robust to feature extraction errors. Moreover, 3D face rendering results intuitively show that our method is more robust to feature extraction errors than other 3D face modeling methods. An additional contribution of our method is that a wide range of face textures can be acquired by the mirror system. Using this texture map, we generate realistic 3D faces for individuals at the end of the paper.

  13. A Bayesian joint probability modeling approach for seasonal forecasting of streamflows at multiple sites

    Wang, Q. J.; Robertson, D. E.; Chiew, F. H. S.

    2009-05-01

    Seasonal forecasting of streamflows can be highly valuable for water resources management. In this paper, a Bayesian joint probability (BJP) modeling approach for seasonal forecasting of streamflows at multiple sites is presented. A Box-Cox transformed multivariate normal distribution is proposed to model the joint distribution of future streamflows and their predictors such as antecedent streamflows and El Niño-Southern Oscillation indices and other climate indicators. Bayesian inference of model parameters and uncertainties is implemented using Markov chain Monte Carlo sampling, leading to joint probabilistic forecasts of streamflows at multiple sites. The model provides a parametric structure for quantifying relationships between variables, including intersite correlations. The Box-Cox transformed multivariate normal distribution has considerable flexibility for modeling a wide range of predictors and predictands. The Bayesian inference formulated allows the use of data that contain nonconcurrent and missing records. The model flexibility and data-handling ability means that the BJP modeling approach is potentially of wide practical application. The paper also presents a number of statistical measures and graphical methods for verification of probabilistic forecasts of continuous variables. Results for streamflows at three river gauges in the Murrumbidgee River catchment in southeast Australia show that the BJP modeling approach has good forecast quality and that the fitted model is consistent with observed data.
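
    The backbone of the BJP approach can be sketched for a single site and one predictor: both variables are Box-Cox transformed, a multivariate normal is fitted in transformed space, and a probabilistic forecast follows from the conditional normal, back-transformed to flow units. The sketch fixes the transformation parameter and fits by moments, whereas the paper infers all parameters and their uncertainty with Markov chain Monte Carlo sampling.

    ```python
    import numpy as np

    def boxcox(x, lam):
        # Box-Cox transform used to bring skewed flows close to normal
        return np.log(x) if lam == 0 else (x**lam - 1.0) / lam

    rng = np.random.default_rng(7)
    n = 60                                     # synthetic seasons of record
    antecedent = rng.gamma(3.0, 50.0, n)       # predictor: antecedent streamflow
    future = 0.6 * antecedent + rng.gamma(2.0, 30.0, n)   # predictand

    # fit a bivariate normal in transformed space (lambda fixed here; the BJP
    # approach infers the transformation parameters as well)
    lam = 0.2
    Z = np.column_stack([boxcox(antecedent, lam), boxcox(future, lam)])
    mu, C = Z.mean(axis=0), np.cov(Z.T)

    # probabilistic forecast given a new antecedent-flow observation
    z1 = boxcox(180.0, lam)
    m = mu[1] + C[1, 0] / C[0, 0] * (z1 - mu[0])      # conditional mean
    s = np.sqrt(C[1, 1] - C[1, 0]**2 / C[0, 0])       # conditional std
    flows = (lam * rng.normal(m, s, 5000) + 1.0)**(1.0 / lam)   # back-transform
    print("median and 90% interval:", np.percentile(flows, [50, 5, 95]).round(1))
    ```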

  14. Statistical Methods for Magnetic Resonance Image Analysis with Applications to Multiple Sclerosis

    Pomann, Gina-Maria

    …image regression techniques have been shown to have modest performance for assessing the integrity of the blood-brain barrier based on imaging without contrast agents. These models have centered on the problem of cross-sectional classification in which patients are imaged at a single study visit and pre-contrast images are used to predict post-contrast imaging. In this paper, we extend these methods to incorporate historical imaging information, and we find the proposed model to exhibit improved performance. We further develop scan-stratified case-control sampling techniques that reduce the computational burden of local image regression models while respecting the low proportion of the brain that exhibits abnormal vascular permeability. In the third part of this thesis, we present methods to evaluate tissue damage in patients with MS. We propose a lag functional linear model to predict a functional response using multiple functional predictors observed at discrete grids with noise. Two procedures are proposed to estimate the regression parameter functions: 1) a semi-local smoothing approach using generalized cross-validation; and 2) a global smoothing approach using a restricted maximum likelihood framework. Numerical studies are presented to analyze predictive accuracy in many realistic scenarios. We find that the global smoothing approach results in higher predictive accuracy than the semi-local approach. The methods are employed to estimate a measure of tissue damage in patients with MS. In patients with MS, the myelin sheaths around the axons of the neurons in the brain and spinal cord are damaged. The model facilitates the use of commonly acquired imaging modalities to estimate a measure of tissue damage within lesions. The proposed model outperforms the cross-sectional models that do not account for temporal patterns of lesional development and repair.

  15. Model-based monitoring of rotors with multiple coexisting faults

    Rossner, Markus

    2015-01-01

    Monitoring systems are applied to many rotors, but only a few monitoring systems can separate coexisting errors and identify their quantity. This research project solves this problem using a combination of signal-based and model-based monitoring. The signal-based part performs a pre-selection of possible errors; these errors are further separated with model-based methods. This approach is demonstrated for the errors unbalance, bow, stator-fixed misalignment, rotor-fixed misalignment and roundness errors. For the model-based part, unambiguous error definitions and models are set up. The Ritz approach reduces the model order and therefore speeds up the diagnosis. Identification algorithms are developed for the different rotor faults. To this end, reliable damage indicators and proper sub-steps of the diagnosis have to be defined. For several monitoring problems, measuring both deflection and bearing force is very useful. The monitoring system is verified by experiments on an academic rotor test rig. The interpretation of the measurements requires considerable knowledge of the dynamics of the rotor. Due to the model-based approach, the system can separate errors with similar signal patterns and identify bow and roundness errors online at operating speed.

  16. Multiple Imputation to Account for Measurement Error in Marginal Structural Models.

    Edwards, Jessie K; Cole, Stephen R; Westreich, Daniel; Crane, Heidi; Eron, Joseph J; Mathews, W Christopher; Moore, Richard; Boswell, Stephen L; Lesko, Catherine R; Mugavero, Michael J

    2015-09-01

    Marginal structural models are an important tool for observational studies. These models typically assume that variables are measured without error. We describe a method to account for differential and nondifferential measurement error in a marginal structural model. We illustrate the method by estimating the joint effects of antiretroviral therapy initiation and current smoking on all-cause mortality in a United States cohort of 12,290 patients with HIV followed for up to 5 years between 1998 and 2011. Smoking status was likely measured with error, but a subset of 3,686 patients who reported smoking status on separate questionnaires composed an internal validation subgroup. We compared a standard joint marginal structural model fit using inverse probability weights to a model that also accounted for misclassification of smoking status using multiple imputation. In the standard analysis, current smoking was not associated with increased risk of mortality. After accounting for misclassification, current smoking without therapy was associated with increased mortality (hazard ratio [HR] 1.2; 95% confidence interval [CI]: 0.6, 2.3). The HR for current smoking and therapy (0.4; 95% CI: 0.2, 0.7) was similar to the HR for no smoking and therapy (0.4; 95% CI: 0.2, 0.6). Multiple imputation can be used to account for measurement error in concert with methods for causal inference to strengthen results from observational studies.

  17. Merging for Particle-Mesh Complex Particle Kinetic Modeling of the Multiple Plasma Beams

    Lipatov, Alexander S.

    2011-01-01

    We suggest a merging procedure for the Particle-Mesh Complex Particle Kinetic (PMCPK) method in the case of inter-penetrating flows (multiple plasma beams). We examine the standard particle-in-cell (PIC) and the PMCPK methods in the case of particle acceleration by shock surfing for a wide range of the controlling numerical parameters. The plasma dynamics is described by a hybrid (particle-ion, fluid-electron) model. Note that a mesh may be needed if the modeling includes computation of an electromagnetic field. Our calculations use specified, time-independent electromagnetic fields for the shock, rather than self-consistently generated fields. While the particle-mesh method is a well-verified approach, the CPK method seems to be a good approach for multiscale modeling that includes multiple regions with various particle/fluid plasma behavior. However, the CPK method is still in need of verification for studying basic plasma phenomena: particle heating and acceleration by collisionless shocks, magnetic field reconnection, beam dynamics, etc.

  18. Fuzzy linear model for production optimization of mining systems with multiple entities

    Vujic, Slobodan; Benovic, Tomo; Miljanovic, Igor; Hudej, Marjan; Milutinovic, Aleksandar; Pavlovic, Petar

    2011-12-01

    Planning and production optimization within mining systems comprising multiple mines or several work sites (entities) by using fuzzy linear programming (LP) was studied. LP is the most commonly used operations research method in mining engineering. After an introductory review of the properties and limitations of applying LP, short reviews of the general settings of deterministic and fuzzy LP models are presented. For the purpose of comparative analysis, the application of both LP models is presented using the example of the Bauxite Basin Niksic with five mines. The assessment confirms LP as an efficient mathematical modeling tool for production planning and for solving many other single-criteria optimization problems of mining engineering. The comparison of the advantages and deficiencies of the deterministic and fuzzy LP models leads to a conclusion highlighting the benefits of the fuzzy LP model, while also stating that finding the optimal production plan requires an overall analysis that encompasses both LP modeling approaches.

  19. A multiple-compartment model for biokinetics studies in plants

    Garcia, Fermin; Pietrobron, Flavio; Fonseca, Agnes M.F.; Mol, Anderson W.; Rodriguez, Oscar; Guzman, Fernando

    2001-01-01

    In the present work, the system of linear equations based on Assimakopoulos's general GMCM model is used to develop a new method for determining the flow parameters and transfer coefficients in plants. The need for mathematical models to quantify the penetration of a trace substance in animals and plants has often been stressed in the literature. Usually, in radiological environment studies, the mean value of contaminant concentrations in the whole plant body or its edible part is used, without taking into account the regularities of vegetable physiology. In this work, the concepts and mathematical formulation of a Vegetable Multi-compartment Model (VMCM), which takes the plant's physiological regularities into account, are presented. The model, based on the general ideas of the GMCM and the statistical least-squares method STATFLUX, is proposed for use in the inverse sense: the experimental time dependence of the concentration in each compartment is the input, and the parameters are determined from these data in a statistical approach. The case of uranium metabolism is discussed. (author)
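
    A minimal sketch of the inverse use of a linear compartment model: first-order transfer coefficients define a linear ODE system, synthetic concentration-time curves for each compartment play the role of measurements, and least squares recovers the coefficients. The three-compartment chain and all rate values are hypothetical stand-ins, not the GMCM/STATFLUX formulation itself.

    ```python
    import numpy as np
    from scipy.integrate import solve_ivp
    from scipy.optimize import least_squares

    # Hypothetical 3-compartment chain (root -> stem -> leaf) with first-order
    # transfer coefficients and a constant uptake into the root compartment.
    def rhs(t, q, k12, k23, uptake):
        q1, q2, q3 = q
        return [uptake - k12 * q1, k12 * q1 - k23 * q2, k23 * q2]

    def simulate(k, t_obs):
        sol = solve_ivp(rhs, (0.0, t_obs[-1]), [0.0, 0.0, 0.0],
                        args=(*k, 1.0), t_eval=t_obs, rtol=1e-8)
        return sol.y.T                       # concentration-time curves

    # synthetic "measured" compartment concentrations with noise
    t_obs = np.linspace(0.5, 30.0, 15)
    true_k = (0.30, 0.10)
    rng = np.random.default_rng(8)
    data = simulate(true_k, t_obs) * (1 + 0.05 * rng.standard_normal((15, 3)))

    # inverse problem: recover the transfer coefficients from the time series
    fit = least_squares(lambda k: (simulate(k, t_obs) - data).ravel(),
                        x0=[0.1, 0.05], bounds=(0.0, 2.0))
    print("estimated k12, k23:", fit.x.round(3))   # should be near (0.30, 0.10)
    ```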

  20. Sequential optimization of a terrestrial biosphere model constrained by multiple satellite based products

    Ichii, K.; Kondo, M.; Wang, W.; Hashimoto, H.; Nemani, R. R.

    2012-12-01

    Various satellite-based spatial products, such as evapotranspiration (ET) and gross primary productivity (GPP), are now produced by integration of ground and satellite observations. Effective use of these multiple satellite-based products in terrestrial biosphere models is an important step toward better understanding of terrestrial carbon and water cycles. However, due to the complexity of terrestrial biosphere models with large numbers of model parameters, the application of these spatial data sets in terrestrial biosphere models is difficult. In this study, we established an effective but simple framework to refine a terrestrial biosphere model, Biome-BGC, using multiple satellite-based products as constraints. We tested the framework in the monsoon Asia region covered by AsiaFlux observations. The framework is based on hierarchical analysis (Wang et al. 2009) with model parameter optimization constrained by satellite-based spatial data. The Biome-BGC model is separated into several tiers to minimize the freedom of model parameter selection and maximize independence from the whole model. For example, the snow sub-model is first optimized using the MODIS snow cover product, followed by the soil water sub-model optimized by satellite-based ET (estimated by an empirical upscaling method, the Support Vector Regression (SVR) method; Yang et al. 2007), the photosynthesis model optimized by satellite-based GPP (based on the SVR method), and the respiration and residual carbon cycle models optimized by biomass data. As a result of an initial assessment, we found that most of the default sub-models (e.g. snow, water cycle and carbon cycle) showed large deviations from remote sensing observations. However, these biases were removed by applying the proposed framework. For example, gross primary productivities were initially underestimated in boreal and temperate forests and overestimated in tropical forests. However, the parameter optimization scheme successfully reduced these biases. Our analysis…

  1. Multiple and mixed methods in formative evaluation: Is more better? Reflections from a South African study

    Willem Odendaal

    2016-12-01

    Background: Formative programme evaluations assess intervention implementation processes, and are widely seen as a way of unlocking the 'black box' of any programme in order to explore and understand why a programme functions as it does. However, few critical assessments of the methods used in such evaluations are available, and there are especially few that reflect on how well the evaluation achieved its objectives. This paper describes a formative evaluation of a community-based lay health worker programme for TB and HIV/AIDS clients across three low-income communities in South Africa. It assesses each of the methods used in relation to the evaluation objectives, and offers suggestions on ways of optimising the use of multiple, mixed methods within formative evaluations of complex health system interventions. Methods: The evaluation's qualitative methods comprised interviews, focus groups, observations and diary keeping. Quantitative methods included a time-and-motion study of the lay health workers' scope of practice and a client survey. The authors conceptualised and conducted the evaluation, and through iterative discussions assessed the methods used and their results. Results: Overall, the evaluation highlighted programme issues and insights beyond the reach of traditional single-method evaluations. The strengths of the multiple, mixed methods in this evaluation included a detailed description and nuanced understanding of the programme and its implementation, and triangulation of the perspectives and experiences of clients, lay health workers, and programme managers. However, the use of multiple methods needs to be carefully planned and implemented, as this approach can overstretch the logistic and analytic resources of an evaluation. Conclusions: For complex interventions, formative evaluation designs including multiple qualitative and quantitative methods hold distinct advantages over single-method evaluations. However…

  2. Multiple Site-Directed and Saturation Mutagenesis by the Patch Cloning Method.

    Taniguchi, Naohiro; Murakami, Hiroshi

    2017-01-01

    Constructing protein-coding genes with desired mutations is a basic step for protein engineering. Herein, we describe a multiple site-directed and saturation mutagenesis method, termed MUPAC. This method has been used to introduce multiple site-directed mutations into the green fluorescent protein gene and into the Moloney murine leukemia virus reverse transcriptase gene. Moreover, this method was also successfully used to introduce randomized codons at five desired positions in the green fluorescent protein gene, and for simple DNA assembly for cloning.

  3. The Initial Rise Method in the case of multiple trapping levels

    Furetta, C.; Guzman, S.; Cruz Z, E.

    2009-10-01

    The aim of this paper is to extend the well-known Initial Rise (IR) method to the case of multiple trapping levels. The IR method is applied to minerals extracted from Nopal herb and Oregano spice, because the shape of their thermoluminescent glow curves suggests a trap distribution instead of a single trapping level. (Author)
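
    The computational core of the IR method is a straight-line fit: in the initial-rise region the intensity follows I(T) ∝ exp(-E/kT), so fitting ln I against 1/T yields the trap activation energy E. The sketch below uses a synthetic single-peak tail; for multiple trapping levels, the same fit would be repeated on the initial-rise segment attributable to each level.

    ```python
    import numpy as np

    # Initial Rise method: in the low-temperature tail of a glow peak,
    # I(T) ~ exp(-E / kT), so ln I vs 1/T is linear with slope -E/k.
    K_B = 8.617e-5                       # Boltzmann constant, eV/K

    rng = np.random.default_rng(9)
    T = np.linspace(300, 360, 40)        # initial-rise region only (K), synthetic
    E_true = 0.95                        # trap activation energy, eV
    I = 1e9 * np.exp(-E_true / (K_B * T)) * (1 + 0.03 * rng.standard_normal(40))

    slope, intercept = np.polyfit(1.0 / T, np.log(I), 1)
    print(f"estimated E = {-slope * K_B:.3f} eV")   # ~0.95 eV
    ```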

  4. Calculation of U, Ra, Th and K contents in uranium ore by multiple linear regression method

    Lin Chao; Chen Yingqiang; Zhang Qingwen; Tan Fuwen; Peng Guanghui

    1991-01-01

    A multiple linear regression method was used to process γ spectra of uranium ore samples and to calculate the contents of U, Ra, Th, and K. In comparison with the inverse matrix method, its advantage is that no standard samples of pure U, Ra, Th and K are needed for obtaining the response coefficients
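
    The regression step can be sketched as solving an overdetermined linear system: window count rates are modeled as a response matrix times the nuclide contents, and least squares inverts the measurement. The response matrix and contents below are illustrative numbers only; the paper's point is that such response coefficients can themselves be obtained by regression without pure-element standards.

    ```python
    import numpy as np

    # Hypothetical response matrix: counts in 4 spectral windows per unit
    # content of U, Ra, Th, K (columns). Values are illustrative only.
    R = np.array([[120.,  15.,   8.,   2.],
                  [ 10., 140.,  20.,   5.],
                  [  4.,  12., 160.,   7.],
                  [  1.,   6.,   9.,  90.]])

    contents_true = np.array([0.05, 0.04, 0.08, 1.2])
    rng = np.random.default_rng(10)
    counts = R @ contents_true
    counts += rng.normal(0, np.sqrt(counts))        # Poisson-like counting noise

    # regression estimate of the contents from the measured window counts;
    # with more windows than nuclides this becomes an overdetermined fit
    est, *_ = np.linalg.lstsq(R, counts, rcond=None)
    print("estimated U, Ra, Th, K contents:", est.round(3))
    ```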

  5. The Initial Rise Method in the case of multiple trapping levels

    Furetta, C. [Centro de Investigacion en Ciencia Aplicada y Tecnologia Avanzada, IPN, Av. Legaria 694, Col. Irrigacion, 11500 Mexico D. F. (Mexico)]; Guzman, S.; Cruz Z, E. [Instituto de Ciencias Nucleares, UNAM, A. P. 70-543, 04510 Mexico D. F. (Mexico)]

    2009-10-15

    The aim of this paper is to extend the well-known Initial Rise (IR) method to the case of multiple trapping levels. The IR method is applied to minerals extracted from Nopal herb and Oregano spice, because the shape of their thermoluminescent glow curves suggests a trap distribution instead of a single trapping level. (Author)

  6. A method for the generation of random multiple Coulomb scattering angles

    Campbell, J.R.

    1995-06-01

    A method for the random generation of spatial angles drawn from non-Gaussian multiple Coulomb scattering distributions is presented. The method employs direct numerical inversion of cumulative probability distributions computed from the universal non-Gaussian angular distributions of Marion and Zimmerman. (author). 12 refs., 3 figs
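
    Direct numerical inversion of a cumulative distribution is straightforward to sketch: tabulate the (possibly non-Gaussian) angular density, accumulate and normalize it into a CDF, and map uniform random numbers through the inverse by interpolation. The density below is a stand-in with a Gaussian core and a power-law tail, not the Marion and Zimmerman tabulation.

    ```python
    import numpy as np

    rng = np.random.default_rng(11)

    # stand-in non-Gaussian angular density (illustrative shape only)
    theta = np.linspace(0.0, 10.0, 2001)            # reduced scattering angle
    pdf = np.exp(-theta**2 / 2) + 0.01 / (1 + theta)**3

    cdf = np.cumsum(pdf)
    cdf = (cdf - cdf[0]) / (cdf[-1] - cdf[0])       # normalize to [0, 1]

    # direct numerical inversion: map uniform deviates through the inverse CDF
    u = rng.random(100000)
    samples = np.interp(u, cdf, theta)

    print("sample mean, 99th percentile:",
          round(float(samples.mean()), 3),
          round(float(np.percentile(samples, 99)), 3))
    ```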

  7. Investigating lithological and geophysical relationships with applications to geological uncertainty analysis using Multiple-Point Statistical methods

    Barfod, Adrian

    The PhD thesis presents a new method for analyzing the relationship between resistivity and lithology, as well as a method for quantifying the hydrostratigraphic modeling uncertainty related to Multiple-Point Statistical (MPS) methods. Three-dimensional (3D) geological models are im… The aim is to improve analysis and research of the resistivity-lithology relationship and of ensemble geological/hydrostratigraphic modeling. The groundwater mapping campaign in Denmark, beginning in the 1990s, has resulted in the collection of large amounts of borehole and geophysical data. The data have been compiled in two publicly available databases, the JUPITER and GERDA databases, which contain borehole and geophysical data, respectively. The large amounts of available data provide a unique opportunity for studying the resistivity-lithology relationship. The method for analyzing the resistivity…

  8. Multivariate Multiple Regression Models for a Big Data-Empowered SON Framework in Mobile Wireless Networks

    Yoonsu Shin

    2016-01-01

    In the 5G era, the operational cost of mobile wireless networks will significantly increase. Further, massive network capacity and near-zero latency will be needed because everything will be connected to mobile networks. Thus, self-organizing networks (SON) are needed to expedite the automatic operation of mobile wireless networks, but they face challenges in satisfying the 5G requirements. Therefore, researchers have proposed a framework to empower SON using big data. A recent framework for a big data-empowered SON analyzes the relationship between key performance indicators (KPIs) and related network parameters (NPs) using machine-learning tools, and it develops regression models using a Gaussian process with those parameters. The problem, however, is that the methods of finding the NPs related to the KPIs differ individually. Moreover, the Gaussian process regression model cannot determine the relationship between a KPI and its various related NPs. In this paper, to solve these problems, we propose multivariate multiple regression models to determine the relationship between various KPIs and NPs. If we treat one KPI and multiple NPs as one set, the proposed models allow us to process multiple sets at one time. We can also find out whether some KPIs are conflicting or not. We implement the proposed models using MapReduce.
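
    Multivariate multiple regression fits all KPIs against all NPs in a single model, Y = XB, so one coefficient matrix captures every KPI-NP relationship at once; columns with opposite-sign coefficients on the same NP hint at conflicting KPIs. The sketch below uses synthetic data and ordinary least squares; the MapReduce distribution of this computation is not shown.

    ```python
    import numpy as np

    rng = np.random.default_rng(12)
    n, p, q = 500, 4, 3           # samples, network parameters (NPs), KPIs

    # synthetic NPs (e.g. power, tilt, handover margin - hypothetical names)
    X = rng.standard_normal((n, p))
    B_true = rng.standard_normal((p, q))
    Y = X @ B_true + 0.1 * rng.standard_normal((n, q))   # KPIs, e.g. throughput

    # multivariate multiple regression: all KPIs against all NPs in one fit,
    # solving min_B ||Y - XB||_F^2 column-wise via least squares
    Xd = np.column_stack([np.ones(n), X])
    B, *_ = np.linalg.lstsq(Xd, Y, rcond=None)

    # conflicting KPIs show up as opposite-sign coefficients on the same NP
    print("coefficient matrix (rows: intercept + NPs, cols: KPIs):")
    print(B.round(2))
    ```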

  9. Mathematical model quantifies multiple daylight exposure and burial events for rock surfaces using luminescence dating

    Freiesleben, Trine; Sohbati, Reza; Murray, Andrew; Jain, Mayank; Al Khasawneh, Sahar; Hvidt, Søren; Jakobsen, Bo

    2015-01-01

    Interest in the optically stimulated luminescence (OSL) dating of rock surfaces has increased significantly over the last few years, as the potential of the method has been explored. It has been realized that luminescence-depth profiles show qualitative evidence for multiple daylight exposure and burial events. To quantify both burial and exposure events a new mathematical model is developed by expanding the existing models of evolution of luminescence–depth profiles, to include repeated sequential events of burial and exposure to daylight. This new model is applied to an infrared stimulated luminescence-depth profile from a feldspar-rich granite cobble from an archaeological site near Aarhus, Denmark. This profile shows qualitative evidence for multiple daylight exposure and burial events; these are quantified using the model developed here. By determining the burial ages from the surface layer of the cobble and by fitting the new model to the luminescence profile, it is concluded that the cobble was well bleached before burial. This indicates that the OSL burial age is likely to be reliable. In addition, a recent known exposure event provides an approximate calibration for older daylight exposure events. This study confirms the suggestion that rock surfaces contain a record of exposure and burial history, and that these events can be quantified. The burial age of rock surfaces can thus be dated with confidence, based on a knowledge of their pre-burial light exposure; it may also be possible to determine the length of a fossil exposure, using a known natural light exposure as calibration. - Highlights: • Evidence for multiple exposure and burial events in the history of a single cobble. • OSL rock surface dating model improved to include multiple burial/exposure cycles. • Application of the new model quantifies burial and exposure events.
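
    The following toy sketch illustrates the kind of model involved: daylight is attenuated with depth into the rock, so each exposure bleaches the near-surface signal most strongly, while each burial lets the signal regrow toward saturation. The functional forms and parameter values here are assumptions for illustration, not the authors' calibrated model:

    import numpy as np

    def profile_after_events(x, events, mu=0.5, sat=1.0):
        """Toy luminescence-depth profile after alternating events.

        x      : depths into the rock surface (mm)
        events : list of ('exposure', bleach_dose) or ('burial', dose_frac)
        """
        L = np.full_like(x, sat)                # start saturated
        for kind, val in events:
            if kind == 'exposure':
                # light attenuates with depth, so bleaching decays as exp(-mu x)
                L = L * np.exp(-val * np.exp(-mu * x))
            else:
                # burial: signal regrows toward saturation (linear-dose toy rule)
                L = L + (sat - L) * val
        return L

    x = np.linspace(0.0, 30.0, 300)
    L = profile_after_events(x, [('exposure', 50.0), ('burial', 0.3),
                                 ('exposure', 5.0)])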

  10. Regularization methods for ill-posed problems in multiple Hilbert scales

    Mazzieri, Gisela L; Spies, Ruben D

    2012-01-01

    Several convergence results in Hilbert scales under different source conditions are proved and orders of convergence and optimal orders of convergence are derived. Also, relations between those source conditions are proved. The concept of a multiple Hilbert scale on a product space is introduced, and regularization methods on these scales are defined, both for the case of a single observation and for the case of multiple observations. In the latter case, it is shown how vector-valued regularization functions in these multiple Hilbert scales can be used. In all cases, convergence is proved and orders and optimal orders of convergence are shown. Finally, some potential applications and open problems are discussed. (paper)

  11. Study of the multiple scattering effect in TEBENE using the Monte Carlo method

    Singkarat, Somsorn.

    1990-01-01

    The neutron time-of-flight and energy spectra, from the TEBENE set-up, have been calculated by a computer program using the Monte Carlo method. The neutron multiple scattering within the polyethylene scatterer ring is closely investigated. The results show that multiple scattering has a significant effect on the detected neutron yield. They also indicate that the thickness of the scatterer ring has to be carefully chosen. (author)

  12. Mathematical Modeling of Loop Heat Pipes with Multiple Capillary Pumps and Multiple Condensers. Part 1: Steady State Simulations

    Hoang, Triem T.; OConnell, Tamara; Ku, Jentung

    2004-01-01

    Loop Heat Pipes (LHPs) have proven themselves as reliable and robust heat transport devices for spacecraft thermal control systems. So far, the LHPs in earth-orbit satellites perform very well as expected. Conventional LHPs usually consist of a single capillary pump for heat acquisition and a single condenser for heat rejection. Multiple pump/multiple condenser LHPs have been shown to function very well in ground testing. Nevertheless, the test results of a dual pump/condenser LHP also revealed that the dual LHP behaved in a complicated manner due to the interaction between the pumps and condensers. Thus it goes without saying that more research is needed before they are ready for 0-g deployment. One research area that perhaps compels immediate attention is the analytical modeling of LHPs, particularly the transient phenomena. Modeling a single pump/single condenser LHP is difficult enough. Only a handful of computer codes are available for both steady state and transient simulations of conventional LHPs. No previous effort was made to develop an analytical model (or even a complete theory) to predict the operational behavior of the multiple pump/multiple condenser LHP systems. The current research project offered a basic theory of the multiple pump/multiple condenser LHP operation. From it, a computer code was developed to predict the LHP saturation temperature in accordance with the system operating and environmental conditions.

  13. Modeling spatial variability of sand-lenses in clay till settings using transition probability and multiple-point geostatistics

    Kessler, Timo Christian; Nilsson, Bertel; Klint, Knud Erik

    2010-01-01

    (TPROGS) of alternating geological facies. The second method, multiple-point statistics, uses training images to estimate the conditional probability of sand-lenses at a certain location. Both methods respect field observations such as local stratigraphy, however, only the multiple-point statistics can...... of sand-lenses in clay till. Sand-lenses mainly account for horizontal transport and are prioritised in this study. Based on field observations, the distribution has been modeled using two different geostatistical approaches. One method uses a Markov chain model calculating the transition probabilities...

  14. Covariance approximation for large multivariate spatial data sets with an application to multiple climate model errors

    Sang, Huiyan; Jun, Mikyoung; Huang, Jianhua Z.

    2011-01-01

    This paper investigates the cross-correlations across multiple climate model errors. We build a Bayesian hierarchical model that accounts for the spatial dependence of individual models as well as cross-covariances across different climate models

  15. Simple model for multiple-choice collective decision making.

    Lee, Ching Hua; Lucas, Andrew

    2014-11-01

    We describe a simple model of heterogeneous, interacting agents making decisions between n≥2 discrete choices. For a special class of interactions, our model is the mean field description of random field Potts-like models and is effectively solved by finding the extrema of the average energy E per agent. In these cases, by studying the propagation of decision changes via avalanches, we argue that macroscopic dynamics is well captured by a gradient flow along E. We focus on the permutation symmetric case, where all n choices are (on average) the same, and spontaneous symmetry breaking (SSB) arises purely from cooperative social interactions. As examples, we show that bimodal heterogeneity naturally provides a mechanism for the spontaneous formation of hierarchies between decisions and that SSB is a preferred instability to discontinuous phase transitions between two symmetric points. Beyond the mean field limit, exponentially many stable equilibria emerge when we place this model on a graph of finite mean degree. We conclude with speculation on decision making with persistent collective oscillations. Throughout the paper, we emphasize analogies between methods of solution to our model and common intuition from diverse areas of physics, including statistical physics and electromagnetism.

  16. Assessment of hydrogen fuel cell applications using fuzzy multiple-criteria decision making method

    Chang, Pao-Long; Hsu, Chiung-Wen; Lin, Chiu-Yue

    2012-01-01

    Highlights: ► This study uses the fuzzy MCDM method to assess hydrogen fuel cell applications. ► We evaluate seven different hydrogen fuel cell applications based on 14 criteria. ► Results show that fuel cell backup power systems should be chosen for development in Taiwan. -- Abstract: Assessment is an essential process in framing government policy. It is critical to select the appropriate targets to meet the needs of national development. This study aimed to develop an assessment model for evaluating hydrogen fuel cell applications and thus provide a screening tool for decision makers. This model operates by selecting evaluation criteria, determining criteria weights, and assessing the performance of hydrogen fuel cell applications for each criterion. The fuzzy multiple-criteria decision making method was used to select the criteria and the preferred hydrogen fuel cell products based on information collected from a group of experts. Survey questionnaires were distributed to collect opinions from experts in different fields. After the survey, the criteria weights and a ranking of alternatives were obtained. The study first defined the evaluation criteria in terms of the stakeholders, so that comprehensive influence criteria could be identified. These criteria were then classified as environmental, technological, economic, or social to indicate the purpose of each criterion in the assessment process. The selected criteria included 14 indicators, such as energy efficiency and CO2 emissions, as well as seven hydrogen fuel cell applications, such as forklifts and backup power systems. The results show that fuel cell backup power systems rank the highest, followed by household fuel cell electric-heat composite systems. The model provides a screening tool for decision makers to select hydrogen-related applications.

  17. Covariance approximation for large multivariate spatial data sets with an application to multiple climate model errors

    Sang, Huiyan

    2011-12-01

    This paper investigates the cross-correlations across multiple climate model errors. We build a Bayesian hierarchical model that accounts for the spatial dependence of individual models as well as cross-covariances across different climate models. Our method allows for a nonseparable and nonstationary cross-covariance structure. We also present a covariance approximation approach to facilitate the computation in the modeling and analysis of very large multivariate spatial data sets. The covariance approximation consists of two parts: a reduced-rank part to capture the large-scale spatial dependence, and a sparse covariance matrix to correct the small-scale dependence error induced by the reduced rank approximation. We pay special attention to the case that the second part of the approximation has a block-diagonal structure. Simulation results of model fitting and prediction show substantial improvement of the proposed approximation over the predictive process approximation and the independent blocks analysis. We then apply our computational approach to the joint statistical modeling of multiple climate model errors. © 2012 Institute of Mathematical Statistics.
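
    A schematic version of the two-part approximation, assuming for illustration that the full covariance can be formed (the entire point of the paper is that it cannot; the low-rank basis would instead come from a reduced-rank construction such as a predictive process):

    import numpy as np

    def approximate_covariance(C, rank, dists, taper_range):
        """Reduced-rank part for large-scale dependence plus a tapered
        (sparse) correction for the small-scale residual."""
        # Reduced-rank part from the leading eigenpairs.
        w, V = np.linalg.eigh(C)
        idx = np.argsort(w)[::-1][:rank]
        C_lowrank = (V[:, idx] * w[idx]) @ V[:, idx].T
        # Residual small-scale covariance, localized by a compactly
        # supported taper so the correction matrix is sparse.
        taper = np.clip(1.0 - dists / taper_range, 0.0, 1.0) ** 2
        C_sparse = (C - C_lowrank) * taper
        return C_lowrank + C_sparse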

  18. Determination of 226Ra contamination depth in soil using the multiple photopeaks method

    Haddad, Kh.; Al-Masri, M.S.; Doubal, A.W.

    2014-01-01

    Radioactive contamination presents a diverse range of challenges in many industries. Determination of radioactive contamination depth plays a vital role in the assessment of contaminated sites, because it can be used to estimate the activity content. It is determined traditionally by measuring the activity distributions along the depth. This approach gives accurate results, but it is time consuming, lengthy and costly. The multiple photopeaks method was developed in this work for 226Ra contamination depth determination in a NORM contaminated soil using in-situ gamma spectrometry. The developed method is based on a linear correlation between the attenuation ratio of different gamma lines emitted by 214Bi and the 226Ra contamination depth. Although approximate, this method is much simpler, faster and cheaper than the traditional one. It can be applied to any case of multiple gamma emitter contaminants. -- Highlights: • The multiple photopeaks method was developed for 226Ra contamination depth determination using in-situ gamma spectrometry. • The method is based on a linear correlation between the attenuation ratio of 214Bi gamma lines and 226Ra contamination depth. • This method is simpler, faster and cheaper than the traditional one; it can be applied to any multiple gamma contaminant
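
    The underlying arithmetic is a Beer–Lambert argument: if two photopeaks of the same nuclide are attenuated as I_i(d) = I_i(0) exp(-mu_i d), their measured ratio depends on the depth d alone, so d follows from the log of the ratio. The coefficients and intensities below are placeholders, not the paper's calibration:

    import numpy as np

    # Hypothetical values for two 214Bi lines in soil (illustration only).
    mu1, mu2 = 0.125, 0.085   # linear attenuation coefficients (1/cm)
    R0 = 1.80                 # unattenuated emission-intensity ratio
    R_meas = 1.32             # measured photopeak count-rate ratio

    # Beer-Lambert for each line gives R(d) = R0 * exp(-(mu1 - mu2) * d).
    depth = np.log(R0 / R_meas) / (mu1 - mu2)
    print(f"estimated contamination depth: {depth:.1f} cm")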

  19. MULTIPLE HUMAN TRACKING IN COMPLEX SITUATION BY DATA ASSIMILATION WITH PEDESTRIAN BEHAVIOR MODEL

    W. Nakanishi

    2012-07-01

    Full Text Available A new method of multiple human tracking is proposed. The key concept is to treat the tracking process as a data assimilation process. Despite the importance of understanding pedestrian behavior in public space with regard to achieving more sophisticated space design and flow control, automatic human tracking in complex situations is still challenging when people move close to each other or are occluded by others. To address this difficulty, we stochastically combine an existing image-processing tracking method with simulation models of walking behavior. We describe the system in the form of a general state space model and define the components of the model according to a review of related work. Then we apply the proposed method to data acquired at the ticket gate of a railway station. We show the high performance of the method and compare the result with another model to demonstrate the advantage of integrating the behavior model into the tracking method. We also show the method's ability to acquire passenger flow information, such as ticket gate choice and OD data, automatically from the tracking result.
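
    Data assimilation of this kind is often realized as a particle filter: the walking-behavior model predicts each hypothesis forward and the image-processing observation reweights them. A minimal single-target sketch, with a random-walk stand-in for the behavior model:

    import numpy as np

    def particle_filter_step(particles, weights, z, rng,
                             step_std=0.1, obs_std=0.5):
        """One assimilation cycle: predict with the behavior model,
        reweight by the observation likelihood, resample if degenerate."""
        N = len(particles)
        # Predict: a real system would use a pedestrian-behavior simulation.
        particles = particles + rng.normal(0.0, step_std, size=(N, 2))
        # Update: weight by likelihood of the observed position z.
        d2 = np.sum((particles - z) ** 2, axis=1)
        weights = weights * np.exp(-0.5 * d2 / obs_std ** 2)
        weights /= weights.sum()
        # Resample when the effective sample size collapses.
        if 1.0 / np.sum(weights ** 2) < N / 2:
            idx = rng.choice(N, size=N, p=weights)
            particles, weights = particles[idx], np.full(N, 1.0 / N)
        return particles, weights

    rng = np.random.default_rng(1)
    particles = rng.normal(size=(500, 2))
    weights = np.full(500, 1.0 / 500)
    particles, weights = particle_filter_step(
        particles, weights, z=np.array([0.3, -0.1]), rng=rng)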

  20. Inference regarding multiple structural changes in linear models with endogenous regressors

    Hall, Alastair R.; Han, Sanggohn; Boldea, Otilia

    2012-01-01

    This paper considers the linear model with endogenous regressors and multiple changes in the parameters at unknown times. It is shown that minimization of a Generalized Method of Moments criterion yields inconsistent estimators of the break fractions, but minimization of the Two Stage Least Squares (2SLS) criterion yields consistent estimators of these parameters. We develop a methodology for estimation and inference of the parameters of the model based on 2SLS. The analysis covers the cases where the reduced form is either stable or unstable. The methodology is illustrated via an application to the New Keynesian Phillips Curve for the US. PMID:23805021

  1. Adjusted permutation method for multiple attribute decision making with meta-heuristic solution approaches

    Hossein Karimi

    2011-04-01

    Full Text Available The permutation method of multiple attribute decision making has two significant deficiencies: high computational time and wrong priority output in some problem instances. In this paper, a novel permutation method called the adjusted permutation method (APM) is proposed to compensate for the deficiencies of the conventional permutation method. We propose Tabu search (TS) and particle swarm optimization (PSO) to find suitable solutions at a reasonable computational time for large problem instances. The method is examined using some numerical examples to evaluate its performance. The preliminary results show that both approaches provide competent solutions in relatively reasonable amounts of time, while TS performs better in solving APM.

  2. Hybrid approaches for multiple-species stochastic reaction–diffusion models

    Spill, Fabian; Guerrero, Pilar; Alarcon, Tomas; Maini, Philip K.; Byrne, Helen

    2015-01-01

    Reaction–diffusion models are used to describe systems in fields as diverse as physics, chemistry, ecology and biology. The fundamental quantities in such models are individual entities such as atoms and molecules, bacteria, cells or animals, which move and/or react in a stochastic manner. If the number of entities is large, accounting for each individual is inefficient, and often partial differential equation (PDE) models are used in which the stochastic behaviour of individuals is replaced by a description of the averaged, or mean behaviour of the system. In some situations the number of individuals is large in certain regions and small in others. In such cases, a stochastic model may be inefficient in one region, and a PDE model inaccurate in another. To overcome this problem, we develop a scheme which couples a stochastic reaction–diffusion system in one part of the domain with its mean field analogue, i.e. a discretised PDE model, in the other part of the domain. The interface in between the two domains occupies exactly one lattice site and is chosen such that the mean field description is still accurate there. In this way errors due to the flux between the domains are small. Our scheme can account for multiple dynamic interfaces separating multiple stochastic and deterministic domains, and the coupling between the domains conserves the total number of particles. The method preserves stochastic features such as extinction not observable in the mean field description, and is significantly faster to simulate on a computer than the pure stochastic model. - Highlights: • A novel hybrid stochastic/deterministic reaction–diffusion simulation method is given. • Can massively speed up stochastic simulations while preserving stochastic effects. • Can handle multiple reacting species. • Can handle moving boundaries

  3. Hybrid approaches for multiple-species stochastic reaction–diffusion models

    Spill, Fabian, E-mail: fspill@bu.edu [Department of Biomedical Engineering, Boston University, 44 Cummington Street, Boston, MA 02215 (United States); Department of Mechanical Engineering, Massachusetts Institute of Technology, 77 Massachusetts Avenue, Cambridge, MA 02139 (United States); Guerrero, Pilar [Department of Mathematics, University College London, Gower Street, London WC1E 6BT (United Kingdom); Alarcon, Tomas [Centre de Recerca Matematica, Campus de Bellaterra, Edifici C, 08193 Bellaterra (Barcelona) (Spain); Departament de Matemàtiques, Universitat Atonòma de Barcelona, 08193 Bellaterra (Barcelona) (Spain); Maini, Philip K. [Wolfson Centre for Mathematical Biology, Mathematical Institute, University of Oxford, Oxford OX2 6GG (United Kingdom); Byrne, Helen [Wolfson Centre for Mathematical Biology, Mathematical Institute, University of Oxford, Oxford OX2 6GG (United Kingdom); Computational Biology Group, Department of Computer Science, University of Oxford, Oxford OX1 3QD (United Kingdom)

    2015-10-15

    Reaction–diffusion models are used to describe systems in fields as diverse as physics, chemistry, ecology and biology. The fundamental quantities in such models are individual entities such as atoms and molecules, bacteria, cells or animals, which move and/or react in a stochastic manner. If the number of entities is large, accounting for each individual is inefficient, and often partial differential equation (PDE) models are used in which the stochastic behaviour of individuals is replaced by a description of the averaged, or mean behaviour of the system. In some situations the number of individuals is large in certain regions and small in others. In such cases, a stochastic model may be inefficient in one region, and a PDE model inaccurate in another. To overcome this problem, we develop a scheme which couples a stochastic reaction–diffusion system in one part of the domain with its mean field analogue, i.e. a discretised PDE model, in the other part of the domain. The interface in between the two domains occupies exactly one lattice site and is chosen such that the mean field description is still accurate there. In this way errors due to the flux between the domains are small. Our scheme can account for multiple dynamic interfaces separating multiple stochastic and deterministic domains, and the coupling between the domains conserves the total number of particles. The method preserves stochastic features such as extinction not observable in the mean field description, and is significantly faster to simulate on a computer than the pure stochastic model. - Highlights: • A novel hybrid stochastic/deterministic reaction–diffusion simulation method is given. • Can massively speed up stochastic simulations while preserving stochastic effects. • Can handle multiple reacting species. • Can handle moving boundaries.

  4. Stabilization of multiple rib fractures in a canine model.

    Huang, Ke-Nan; Xu, Zhi-Fei; Sun, Ju-Xian; Ding, Xin-Yu; Wu, Bin; Li, Wei; Qin, Xiong; Tang, Hua

    2014-12-01

    Operative stabilization is frequently used in the clinical treatment of multiple rib fractures (MRF); however, no ideal material exists for use in this fixation. This study investigates a newly developed biodegradable plate system for the stabilization of MRF. Silk fiber-reinforced polycaprolactone (SF/PCL) plates were developed for rib fracture stabilization and studied using a canine flail chest model. Adult mongrel dogs were divided into three groups: one group received the SF/PCL plates, one group received standard clinical steel plates, and the final group did not undergo operative fracture stabilization (n = 6 for each group). Radiographic, mechanical, and histologic examination was performed to evaluate the effectiveness of the biodegradable material for the stabilization of the rib fractures. No nonunions and no infections were found when using SF/PCL plates. The fracture sites collapsed in the untreated control group, leading to obvious chest wall deformity not encountered in the two groups that underwent operative stabilization. Our experimental study shows that the SF/PCL plate has the biocompatibility and mechanical strength suitable for fixation of MRF and is potentially ideal for the treatment of these injuries. Copyright © 2014 Elsevier Inc. All rights reserved.

  5. Multiple models guide strategies for agricultural nutrient reductions

    Scavia, Donald; Kalcic, Margaret; Muenich, Rebecca Logsdon; Read, Jennifer; Aloysius, Noel; Bertani, Isabella; Boles, Chelsie; Confesor, Remegio; DePinto, Joseph; Gildow, Marie; Martin, Jay; Redder, Todd; Robertson, Dale M.; Sowa, Scott P.; Wang, Yu-Chen; Yen, Haw

    2017-01-01

    In response to degraded water quality, federal policy makers in the US and Canada called for a 40% reduction in phosphorus (P) loads to Lake Erie, and state and provincial policy makers in the Great Lakes region set a load-reduction target for the year 2025. Here, we configured five separate SWAT (US Department of Agriculture's Soil and Water Assessment Tool) models to assess load reduction strategies for the agriculturally dominated Maumee River watershed, the largest P source contributing to toxic algal blooms in Lake Erie. Although several potential pathways may achieve the target loads, our results show that any successful pathway will require large-scale implementation of multiple practices. For example, one successful pathway involved targeting 50% of row cropland that has the highest P loss in the watershed with a combination of three practices: subsurface application of P fertilizers, planting cereal rye as a winter cover crop, and installing buffer strips. Achieving these levels of implementation will require local, state/provincial, and federal agencies to collaborate with the private sector to set shared implementation goals and to demand innovation and honest assessments of water quality-related programs, policies, and partnerships.

  6. LAI inversion from optical reflectance using a neural network trained with a multiple scattering model

    Smith, James A.

    1992-01-01

    The inversion of the leaf area index (LAI) canopy parameter from optical spectral reflectance measurements is obtained using a backpropagation artificial neural network trained using input-output pairs generated by a multiple scattering reflectance model. The problem of LAI estimation over sparse canopies (LAI < 1) is considered, where inversion errors can exceed 1000 percent for low LAI. Minimization methods applied to merit functions constructed from differences between measured reflectances and predicted reflectances using multiple-scattering models are unacceptably sensitive to a good initial guess for the desired parameter. In contrast, the neural network reported generally yields absolute percentage errors of <30 percent when weighting coefficients trained on one soil type were applied to predicted canopy reflectance at a different soil background.

  7. On the representability of complete genomes by multiple competing finite-context (Markov) models.

    Armando J Pinho

    Full Text Available A finite-context (Markov) model of order k yields the probability distribution of the next symbol in a sequence of symbols, given the recent past up to depth k. Markov modeling has long been applied to DNA sequences, for example to find gene-coding regions. With the first studies came the discovery that DNA sequences are non-stationary: distinct regions require distinct model orders. Since then, Markov and hidden Markov models have been extensively used to describe the gene structure of prokaryotes and eukaryotes. However, to our knowledge, a comprehensive study about the potential of Markov models to describe complete genomes is still lacking. We address this gap in this paper. Our approach relies on (i) multiple competing Markov models of different orders, (ii) careful programming techniques that allow orders as large as sixteen, (iii) adequate inverted repeat handling, and (iv) probability estimates suited to the wide range of context depths used. To measure how well a model fits the data at a particular position in the sequence we use the negative logarithm of the probability estimate at that position. The measure yields information profiles of the sequence, which are of independent interest. The average over the entire sequence, which amounts to the average number of bits per base needed to describe the sequence, is used as a global performance measure. Our main conclusion is that, from the probabilistic or information theoretic point of view and according to this performance measure, multiple competing Markov models explain entire genomes almost as well or even better than state-of-the-art DNA compression methods, such as XM, which rely on very different statistical models. This is surprising, because Markov models are local (short-range), contrasting with the statistical models underlying other methods, which exploit the extensive data repetitions in DNA sequences and therefore have a non-local character.
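
    A minimal sketch of a single such model: an order-k count table gives the probability of the next base, the negative log probability at each position yields the information profile, and the average is the bits-per-base performance measure. The add-one smoothing here is a simplification of the authors' estimator, and the competition between models of different orders is omitted:

    from collections import defaultdict
    from math import log2

    def bits_per_base(seq, k=4, alphabet="ACGT"):
        """Average code length under an adaptive order-k finite-context
        (Markov) model with add-one smoothed probability estimates."""
        counts = defaultdict(lambda: defaultdict(int))
        total_bits, n = 0.0, 0
        for i in range(k, len(seq)):
            ctx, sym = seq[i - k:i], seq[i]
            c = counts[ctx]
            # smoothed estimate of P(sym | ctx)
            p = (c[sym] + 1) / (sum(c.values()) + len(alphabet))
            total_bits += -log2(p)    # information content at position i
            n += 1
            c[sym] += 1               # adaptive update: learn as we go
        return total_bits / n

    print(bits_per_base("ACGTTGCAACGT" * 200, k=4))   # well below 2 bits/base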

  8. Dual worth trade-off method and its application for solving multiple criteria decision making problems

    Feng Junwen

    2006-01-01

    To overcome the limitations of the traditional surrogate worth trade-off (SWT) method and solve the multiple criteria decision making problem more efficiently and interactively, a new method labeled dual worth trade-off (DWT) method is proposed. The DWT method dynamically uses the duality theory related to the multiple criteria decision making problem and analytic hierarchy process technique to obtain the decision maker's solution preference information and finally find the satisfactory compromise solution of the decision maker. Through the interactive process between the analyst and the decision maker, trade-off information is solicited and treated properly, the representative subset of efficient solutions and the satisfactory solution to the problem are found. The implementation procedure for the DWT method is presented. The effectiveness and applicability of the DWT method are shown by a practical case study in the field of production scheduling.

  9. EPMLR: sequence-based linear B-cell epitope prediction method using multiple linear regression.

    Lian, Yao; Ge, Meng; Pan, Xian-Ming

    2014-12-19

    B-cell epitopes have been studied extensively due to their immunological applications, such as peptide-based vaccine development, antibody production, and disease diagnosis and therapy. Despite several decades of research, the accurate prediction of linear B-cell epitopes has remained a challenging task. In this work, based on the antigen's primary sequence information, a novel linear B-cell epitope prediction model was developed using multiple linear regression (MLR). A 10-fold cross-validation test on a large non-redundant dataset was performed to evaluate the performance of our model. To alleviate the problem caused by the noise of the negative dataset, 300 experiments utilizing 300 sub-datasets were performed. We achieved an overall sensitivity of 81.8%, precision of 64.1% and area under the receiver operating characteristic curve (AUC) of 0.728. We have presented a reliable method for the identification of linear B-cell epitopes using the antigen's primary sequence information. Moreover, a web server EPMLR has been developed for linear B-cell epitope prediction: http://www.bioinfo.tsinghua.edu.cn/epitope/EPMLR/
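
    A hedged sketch of the general approach: each sequence window is mapped to numeric features and a multiple linear regression is fit by least squares, after which residues in high-scoring windows are epitope candidates. The composition features, window length, and data below are illustrative stand-ins, not EPMLR's actual feature scheme:

    import numpy as np

    AA = "ACDEFGHIKLMNPQRSTVWY"

    def window_features(seq, w=9):
        """Amino-acid composition of each length-w window."""
        return np.array([[seq[i:i + w].count(a) / w for a in AA]
                         for i in range(len(seq) - w + 1)])

    # Hypothetical training windows labelled 1 (epitope) / 0 (non-epitope).
    rng = np.random.default_rng(0)
    X = rng.random((200, len(AA)))
    y = rng.integers(0, 2, 200).astype(float)

    # Multiple linear regression: one coefficient per feature.
    X1 = np.hstack([np.ones((len(X), 1)), X])
    beta, *_ = np.linalg.lstsq(X1, y, rcond=None)

    # Score a new antigen sequence window by window.
    F = window_features("MKTAYIAKQRQISFVKSHFSRQLEERLGLIEVQ")
    scores = np.hstack([np.ones((len(F), 1)), F]) @ beta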

  10. Base Isolation for Seismic Retrofitting of a Multiple Building Structure: Evaluation of Equivalent Linearization Method

    Massimiliano Ferraioli

    2016-01-01

    Full Text Available Although the most commonly used isolation systems exhibit nonlinear inelastic behaviour, the equivalent linear elastic analysis is commonly used in the design and assessment of seismic-isolated structures. The paper investigates whether the linear elastic model is suitable for the analysis of a seismically isolated multiple building structure. To this aim, its computed responses were compared with those calculated by nonlinear dynamic analysis. A common base isolation plane connects the isolation bearings supporting the adjacent structures. In this situation, the conventional equivalent linear elastic analysis may have some problems of accuracy because this method is calibrated on single base-isolated structures. Moreover, the torsional characteristics of the combined system are significantly different from those of separate isolated buildings. A number of numerical simulations and parametric studies under earthquake excitations were performed. The accuracy of the dynamic response obtained by the equivalent linear elastic model was quantified by the magnitude of the error with respect to the corresponding response considering the nonlinear behaviour of the isolation system. The maximum displacements at the isolation level, the maximum interstorey drifts, and the peak absolute acceleration were selected as the most important response measures. The influence of mass eccentricity, torsion, and higher-mode effects was finally investigated.

  11. Optimal planning approaches with multiple impulses for rendezvous based on hybrid genetic algorithm and control method

    JingRui Zhang

    2015-03-01

    Full Text Available In this article, we focus on the safe and effective completion of a rendezvous and docking task, considering planning approaches and control for fuel-optimal rendezvous with a target spacecraft running on a near-circular reference orbit. A variety of existing practical path constraints are considered, including constraints on the field of view, impulses, and passive safety. A rendezvous approach is calculated by using a hybrid genetic algorithm with those constraints. Furthermore, a control method of trajectory tracking is adopted to overcome external disturbances. Based on Clohessy–Wiltshire equations, we first construct the mathematical model of optimal planning approaches of multiple impulses with path constraints. Second, we introduce the principle of the hybrid genetic algorithm, with both stronger global searching ability and local searching ability. We additionally explain the application of this algorithm to the problem of trajectory planning. Then, we give three-impulse simulation examples to acquire an optimal rendezvous trajectory with the path constraints presented in this article. The effectiveness and applicability of the tracking control method are verified with the optimal trajectory above as the control objective through numerical simulation.
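
    The relative-motion backbone of such planning is the closed-form Clohessy–Wiltshire propagation; the sketch below shows the in-plane state transition only (the genetic-algorithm search, path constraints, and tracking controller are not shown):

    import numpy as np

    def cw_propagate(state, t, n):
        """Clohessy-Wiltshire propagation of in-plane relative motion
        about a circular reference orbit.

        state : [x, y, vx, vy] with x radial, y along-track
        n     : mean motion of the reference orbit (rad/s)
        """
        s, c = np.sin(n * t), np.cos(n * t)
        Phi = np.array([
            [4 - 3 * c,       0.0, s / n,           2 * (1 - c) / n],
            [6 * (s - n * t), 1.0, 2 * (c - 1) / n, (4 * s - 3 * n * t) / n],
            [3 * n * s,       0.0, c,               2 * s],
            [6 * n * (c - 1), 0.0, -2 * s,          4 * c - 3],
        ])
        return Phi @ state

    # A two-impulse transfer is targeted by solving the position block of
    # Phi for the departure velocity; a GA can then search impulse times.
    n = 2 * np.pi / 5554.0    # mean motion of a ~92.6-minute LEO orbit
    x1 = cw_propagate(np.array([100.0, 0.0, 0.0, -0.2]), 600.0, n)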

  12. LSHSIM: A Locality Sensitive Hashing based method for multiple-point geostatistics

    Moura, Pedro; Laber, Eduardo; Lopes, Hélio; Mesejo, Daniel; Pavanelli, Lucas; Jardim, João; Thiesen, Francisco; Pujol, Gabriel

    2017-10-01

    Reservoir modeling is a very important task that permits the representation of a geological region of interest, so as to generate a considerable number of possible scenarios. Since its inception, many methodologies have been proposed and, in the last two decades, multiple-point geostatistics (MPS) has been the dominant one. This methodology is strongly based on the concept of a training image (TI) and the use of its characteristics, which are called patterns. In this paper, we propose a new MPS method that combines the application of a technique called Locality Sensitive Hashing (LSH), which accelerates the search for patterns similar to a target one, with a Run-Length Encoding (RLE) compression technique that speeds up the calculation of the Hamming similarity. Experiments with both categorical and continuous images show that LSHSIM is computationally efficient and produces good-quality realizations. In particular, for categorical data, the results suggest that LSHSIM is faster than MS-CCSIM, one of the state-of-the-art methods.
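
    The hashing idea can be sketched with random bit sampling, a locality-sensitive family for Hamming distance: patterns that agree on the sampled bits collide in one bucket, so only that bucket is scanned for the most similar pattern. The RLE acceleration of the Hamming computation is omitted, and the patterns are synthetic:

    import numpy as np

    def lsh_buckets(patterns, n_bits, rng):
        """Index binary training-image patterns by a random-bit hash."""
        probe = rng.choice(patterns.shape[1], size=n_bits, replace=False)
        buckets = {}
        for i, p in enumerate(patterns):
            buckets.setdefault(tuple(p[probe]), []).append(i)
        return probe, buckets

    def best_match(target, patterns, probe, buckets):
        """Scan only the colliding bucket, ranked by Hamming distance."""
        cand = buckets.get(tuple(target[probe]), [])
        if not cand:
            return None
        return min(cand, key=lambda i: np.count_nonzero(patterns[i] != target))

    rng = np.random.default_rng(0)
    pats = rng.integers(0, 2, size=(10000, 25))  # flattened 5x5 binary patterns
    probe, buckets = lsh_buckets(pats, n_bits=8, rng=rng)
    idx = best_match(pats[42], pats, probe, buckets)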

  13. Structural modeling techniques by finite element method

    Kang, Yeong Jin; Kim, Geung Hwan; Ju, Gwan Jeong

    1991-01-01

    This book includes: an introduction and table of contents; Chapter 1, finite element idealization (introduction, summary of the finite element method, equilibrium and compatibility in the finite element solution, degrees of freedom, symmetry and antisymmetry, modeling guidelines, local analysis, example, references); Chapter 2, static analysis (structural geometry, finite element models, analysis procedure, modeling guidelines, references); and Chapter 3, dynamic analysis (models for dynamic analysis, dynamic analysis procedures, modeling guidelines).

  14. Efficient surrogate models for reliability analysis of systems with multiple failure modes

    Bichon, Barron J.; McFarland, John M.; Mahadevan, Sankaran

    2011-01-01

    Despite many advances in the field of computational reliability analysis, the efficient estimation of the reliability of a system with multiple failure modes remains a persistent challenge. Various sampling and analytical methods are available, but they typically require accepting a tradeoff between accuracy and computational efficiency. In this work, a surrogate-based approach is presented that simultaneously addresses the issues of accuracy, efficiency, and unimportant failure modes. The method is based on the creation of Gaussian process surrogate models that are required to be locally accurate only in the regions of the component limit states that contribute to system failure. This approach to constructing surrogate models is demonstrated to be both an efficient and accurate method for system-level reliability analysis. - Highlights: → Extends efficient global reliability analysis to systems with multiple failure modes. → Constructs locally accurate Gaussian process models of each response. → Highly efficient and accurate method for assessing system reliability. → Effectiveness is demonstrated on several test problems from the literature.

  15. A ¤flexible additive multiplicative hazard model

    Martinussen, T.; Scheike, T. H.

    2002-01-01

    Aalen's additive model; Counting process; Cox regression; Hazard model; Proportional excess hazard model; Time-varying effect

  16. A Method to Construct Plasma with Nonlinear Density Enhancement Effect in Multiple Internal Inductively Coupled Plasmas

    Chen Zhipeng; Li Hong; Liu Qiuyan; Luo Chen; Xie Jinlin; Liu Wandong

    2011-01-01

    A method is proposed to build up plasma based on a nonlinear enhancement phenomenon of plasma density under simultaneous discharge by multiple internal antennas. It turns out that the plasma density under multiple sources is higher than the linear summation of the density under each source. This effect helps to reduce the fast exponential decay of plasma density in a single internal inductively coupled plasma source and to generate a larger-area plasma with multiple internal inductively coupled plasma sources. After a careful study of the balance between the enhancement and the decay of plasma density in experiments, a plasma was built up by four sources, which proves the feasibility of this method. According to the method, more sources and a more intensive enhancement effect can be employed to further build up a high-density, large-area plasma for different applications. (low temperature plasma)

  17. Computer-Aided Modelling Methods and Tools

    Cameron, Ian; Gani, Rafiqul

    2011-01-01

    The development of models for a range of applications requires methods and tools. In many cases a reference model is required that allows the generation of application specific models that are fit for purpose. There are a range of computer aided modelling tools available that help to define the m...

  18. Robust anti-synchronization of uncertain chaotic systems based on multiple-kernel least squares support vector machine modeling

    Chen Qiang; Ren Xuemei; Na Jing

    2011-01-01

    Highlights: Model uncertainty of the system is approximated by multiple-kernel LSSVM. Approximation errors and disturbances are compensated in the controller design. Asymptotical anti-synchronization is achieved with model uncertainty and disturbances. Abstract: In this paper, we propose a robust anti-synchronization scheme based on multiple-kernel least squares support vector machine (MK-LSSVM) modeling for two uncertain chaotic systems. The multiple-kernel regression, which is a linear combination of basic kernels, is designed to approximate system uncertainties by constructing a multiple-kernel Lagrangian function and computing the corresponding regression parameters. Then, a robust feedback control based on MK-LSSVM modeling is presented and an improved update law is employed to estimate the unknown bound of the approximation error. The proposed control scheme can guarantee the asymptotic convergence of the anti-synchronization errors in the presence of system uncertainties and external disturbances. Numerical examples are provided to show the effectiveness of the proposed method.

  19. Modeling of CMUTs with Multiple Anisotropic Layers and Residual Stress

    Engholm, Mathias; Thomsen, Erik Vilain

    2014-01-01

    Usually the analytical approach for modeling CMUTs uses the single layer plate equation to obtain the deflection and does not take anisotropy and residual stress into account. A highly accurate model is developed for analytical characterization of CMUTs taking an arbitrary number of layers...... and residual stress into account. Based on the stress-strain relation of each layer and balancing stress resultants and bending moments, a general multilayered anisotropic plate equation is developed for plates with an arbitrary number of layers. The exact deflection profile is calculated for a circular...... clamped plate of anisotropic materials with residual bi-axial stress. From the deflection shape the critical stress for buckling is calculated and by using the Rayleigh-Ritz method the natural frequency is estimated....

  20. A business case method for business models

    Meertens, Lucas Onno; Starreveld, E.; Iacob, Maria Eugenia; Nieuwenhuis, Lambertus Johannes Maria; Shishkov, Boris

    2013-01-01

    Intuitively, business cases and business models are closely connected. However, a thorough literature review revealed no research on the combination of them. Besides that, little is written on the evaluation of business models at all. This makes it difficult to compare different business model alternatives and choose the best one. In this article, we develop a business case method to objectively compare business models. It is an eight-step method, starting with business drivers and ending wit...

  1. An object-oriented approach to evaluating multiple spectral models

    Majoras, R.E.; Richardson, W.M.; Seymour, R.S.

    1995-01-01

    A versatile, spectroscopy analysis engine has been developed by using object-oriented design and analysis techniques coupled with an object-oriented language, C++. This engine provides the spectroscopist with the choice of several different peak shape models that are tailored to the type of spectroscopy being performed. It also allows ease of development in adapting the engine to other analytical methods requiring more complex peak fitting in the future. This results in a program that can currently be used across a wide range of spectroscopy applications and anticipates inclusion of future advances in the field. (author) 6 refs.; 1 fig

  2. Use of ultrasonic array method for positioning multiple partial discharge sources in transformer oil.

    Xie, Qing; Tao, Junhan; Wang, Yongqiang; Geng, Jianghai; Cheng, Shuyi; Lü, Fangcheng

    2014-08-01

    Fast and accurate positioning of partial discharge (PD) sources in transformer oil is very important for the safe, stable operation of power systems because it allows timely elimination of insulation faults. There is usually more than one PD source once an insulation fault occurs in the transformer oil. This study, which has both theoretical and practical significance, proposes a method of identifying multiple PD sources in transformer oil. The method combines the two-sided correlation transformation algorithm for broadband signal focusing with the modified Gerschgorin disk estimator. The multiple signal classification method is used to determine the directions of arrival of signals from multiple PD sources. The ultrasonic array positioning method is based on multi-platform direction finding and global optimization searching. Both a 4 × 4 square planar ultrasonic sensor array and an ultrasonic array detection platform were built to test the method of identifying and positioning multiple PD sources. The obtained results verify the validity and the engineering practicability of this method.
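
    A narrowband, uniform-linear-array sketch of the MUSIC step (the paper's setting is broadband with a 4 × 4 planar array and focusing via the two-sided correlation transformation, so this shows only the core eigen-subspace idea):

    import numpy as np

    def music_spectrum(X, n_sources, angles, element_pos, wavelength):
        """MUSIC pseudospectrum from array snapshots X (elements x time)."""
        R = X @ X.conj().T / X.shape[1]       # sample covariance
        w, V = np.linalg.eigh(R)              # ascending eigenvalues
        En = V[:, :-n_sources]                # noise subspace
        k = 2 * np.pi / wavelength
        spec = []
        for th in angles:
            a = np.exp(1j * k * element_pos * np.sin(th))  # steering vector
            spec.append(1.0 / np.linalg.norm(En.conj().T @ a) ** 2)
        return np.array(spec)                 # peaks at source directions

    rng = np.random.default_rng(0)
    pos = np.arange(8) * 0.005                # 8 elements, half-wavelength apart
    lam, k = 0.01, 2 * np.pi / 0.01
    true = np.deg2rad([-20.0, 15.0])          # two simulated PD directions
    sig = rng.normal(size=(2, 400)) + 1j * rng.normal(size=(2, 400))
    A = np.exp(1j * k * pos[:, None] * np.sin(true)[None, :])
    X = A @ sig + 0.1 * (rng.normal(size=(8, 400)) +
                         1j * rng.normal(size=(8, 400)))
    spec = music_spectrum(X, 2, np.deg2rad(np.linspace(-90, 90, 361)), pos, lam)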

  3. The hybrid model for sampling multiple elastic scattering angular deflections based on Goudsmit-Saunderson theory

    Wasaye Muhammad Abdul

    2017-01-01

    Full Text Available An algorithm for the Monte Carlo simulation of electron multiple elastic scattering based on the framework of SuperMC (Super Monte Carlo simulation program for nuclear and radiation process) is presented. This paper describes efficient and accurate methods by which the multiple scattering angular deflections are sampled. The Goudsmit-Saunderson theory of multiple scattering has been used for sampling angular deflections. Differential cross-sections of electrons and positrons by neutral atoms have been calculated by using the Dirac partial wave program ELSEPA. The Legendre coefficients are accurately computed by using the Gauss-Legendre integration method. Finally, a novel hybrid method for sampling the angular distribution has been developed. The model uses an efficient rejection sampling method for low energy electrons (<500 keV) and long path lengths (>500 mean free paths). For small path lengths, a simple, efficient and accurate analytical distribution function has been proposed. The latter uses adjustable parameters determined from fitting of the Goudsmit-Saunderson angular distribution. A discussion of the sampling efficiency and accuracy of this newly developed algorithm is given. The efficiency of the rejection sampling algorithm is at least 50% for electron kinetic energies less than 500 keV and longer path lengths (>500 mean free paths). Monte Carlo simulation results are then compared with the measured angular distributions of Ross et al. The comparison shows that our results are in good agreement with experimental measurements.
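
    The rejection step can be sketched as follows; the angular density used here is an illustrative stand-in, not an actual Goudsmit-Saunderson distribution, and the efficiency depends on how tightly the envelope bounds the density:

    import numpy as np

    def rejection_sample(pdf, theta_max, pdf_max, n, rng):
        """Draw deflection angles from an arbitrary angular density by
        rejection against a uniform envelope of height pdf_max."""
        out = []
        while len(out) < n:
            theta = rng.uniform(0.0, theta_max, size=n)   # proposals
            u = rng.uniform(0.0, pdf_max, size=n)
            out.extend(theta[u < pdf(theta)])             # accepted angles
        return np.array(out[:n])

    # Stand-in small-angle density with a sin(theta) solid-angle factor.
    pdf = lambda t: np.exp(-(t / 0.05) ** 2) * np.sin(t)
    rng = np.random.default_rng(0)
    angles = rejection_sample(pdf, theta_max=0.3, pdf_max=0.025,
                              n=10000, rng=rng)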

  4. The Answering Process for Multiple-Choice Questions in Collaborative Learning: A Mathematical Learning Model Analysis

    Nakamura, Yasuyuki; Nishi, Shinnosuke; Muramatsu, Yuta; Yasutake, Koichi; Yamakawa, Osamu; Tagawa, Takahiro

    2014-01-01

    In this paper, we introduce a mathematical model for collaborative learning and the answering process for multiple-choice questions. The collaborative learning model is inspired by the Ising spin model and the model for answering multiple-choice questions is based on their difficulty level. An intensive simulation study predicts the possibility of…

  5. Ship Detection Based on Multiple Features in Random Forest Model for Hyperspectral Images

    Li, N.; Ding, L.; Zhao, H.; Shi, J.; Wang, D.; Gong, X.

    2018-04-01

    A novel method for detecting ships, which aims to make full use of both the spatial and spectral information from hyperspectral images, is proposed. Firstly, a band with a high signal-to-noise ratio in the near-infrared or short-wave infrared spectrum is used to segment land and sea with the Otsu threshold segmentation method. Secondly, multiple features, including spectral and texture features, are extracted from the hyperspectral images. Principal components analysis (PCA) is used to extract spectral features, and the Grey Level Co-occurrence Matrix (GLCM) is used to extract texture features. Finally, a Random Forest (RF) model is introduced to detect ships based on the extracted features. To illustrate the effectiveness of the method, we carry out experiments on EO-1 data, comparing a single feature with different combinations of multiple features. Compared with the traditional single-feature method and the Support Vector Machine (SVM) model, the proposed method can reliably detect ships against complex backgrounds and can effectively improve the detection accuracy of ships.
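
    A hedged sketch of the classification stage with scikit-learn, using synthetic stand-ins for the PCA spectral features and the precomputed GLCM texture statistics:

    import numpy as np
    from sklearn.decomposition import PCA
    from sklearn.ensemble import RandomForestClassifier

    # Hypothetical training pixels: spectra plus GLCM texture statistics.
    rng = np.random.default_rng(0)
    spectra = rng.random((2000, 100))   # 100 spectral bands per pixel
    texture = rng.random((2000, 4))     # e.g. contrast, homogeneity, ...
    labels = rng.integers(0, 2, 2000)   # 1 = ship, 0 = sea

    # Spectral features: leading principal components of the spectra.
    pca = PCA(n_components=10).fit(spectra)
    X = np.hstack([pca.transform(spectra), texture])

    rf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, labels)
    # At prediction time the same PCA + GLCM pipeline is applied to the sea
    # pixels that survived the Otsu land/sea mask.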

  6. Comparison of simple and multiple imputation methods using a risk model for surgical mortality as an example

    Luciana Neves Nunes

    2010-12-01

    sample size was 450 patients. The imputation methods applied were two single imputations and one multiple imputation, and the assumption was MAR (Missing at Random). RESULTS: The variable with missing data was serum albumin, with a missing rate of 27.1%. The logistic models adjusted by single imputation were similar, but differed from models obtained by multiple imputation with respect to the inclusion of variables. CONCLUSIONS: The results indicate that it is important to take into account the relationship of albumin to the other observed variables, because different models were obtained under single and multiple imputation. Single imputation underestimates the variability, generating narrower confidence intervals. It is important to consider the use of imputation methods when there is missing data, especially multiple imputation, which takes into account the variability between imputations for the model estimates.

  7. Exclusive description of multiple production on nuclei in the additive quark model. Multiplicity distributions in interactions with heavy nuclei

    Levchenko, B.B.; Nikolaev, N.N.

    1985-01-01

    In the framework of the additive quark model of multiple production on nuclei we calculate the multiplicity distributions of secondary particles and the correlations between secondary particles in πA and pA interactions with heavy nuclei. We show that intranuclear cascades are responsible for up to 50% of the nuclear increase of the multiplicity of fast particles. We analyze the sensitivity of the multiplicities and their correlations to the choice of the quark-hadronization function. We show that with good accuracy the yield of relativistic secondary particles from heavy and intermediate nuclei depends only on the number N_p of protons knocked out of the nucleus, and not on the mass number of the nucleus (N_p scaling)

  8. Comparative study between a QCD inspired model and a multiple diffraction model

    Luna, E.G.S.; Martini, A.F.; Menon, M.J.

    2003-01-01

    A comparative study between a QCD Inspired Model (QCDIM) and a Multiple Diffraction Model (MDM) is presented, with focus on the results for pp differential cross section at √s = 52.8 GeV. It is shown that the MDM predictions are in agreement with experimental data, except for the dip region and that the QCDIM describes only the diffraction peak region. Interpretations in terms of the corresponding eikonals are also discussed. (author)

  9. Use of Multiple Imputation Method to Improve Estimation of Missing Baseline Serum Creatinine in Acute Kidney Injury Research

    Peterson, Josh F.; Eden, Svetlana K.; Moons, Karel G.; Ikizler, T. Alp; Matheny, Michael E.

    2013-01-01

    Summary Background and objectives: Baseline creatinine (BCr) is frequently missing in AKI studies. Common surrogate estimates can misclassify AKI and adversely affect the study of related outcomes. This study examined whether multiple imputation improved accuracy of estimating missing BCr beyond current recommendations to apply an assumed estimated GFR (eGFR) of 75 ml/min per 1.73 m2 (eGFR 75). Design, setting, participants, & measurements: From 41,114 unique adult admissions (13,003 with and 28,111 without BCr data) at Vanderbilt University Hospital between 2006 and 2008, a propensity score model was developed to predict the likelihood of missing BCr. Propensity scoring identified 6502 patients with the highest likelihood of missing BCr among the 13,003 patients with known BCr to simulate a “missing” data scenario while preserving the actual reference BCr. Within this cohort (n=6502), the ability of various multiple-imputation approaches to estimate BCr and classify AKI was compared with that of eGFR 75. Results: All multiple-imputation methods except the basic one more closely approximated actual BCr than did eGFR 75. Total AKI misclassification was lower with multiple imputation (full multiple imputation + serum creatinine) (9.0%) than with eGFR 75 (12.3%; P<0.001), as was misclassification of AKI staging (full multiple imputation + serum creatinine) (15.3%) versus eGFR 75 (40.5%; P<0.001). Multiple imputation improved specificity and positive predictive value for detecting AKI at the expense of modestly decreasing sensitivity relative to eGFR 75. Conclusions: Multiple imputation can improve accuracy in estimating missing BCr and reduce misclassification of AKI beyond currently proposed methods. PMID:23037980
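
    The statistical payoff of multiple imputation is visible in Rubin's pooling rules, which add a between-imputation variance term that any single imputation (including the eGFR 75 substitution) ignores; a small sketch with made-up numbers:

    import numpy as np

    def pool_rubin(estimates, variances):
        """Combine per-imputation results with Rubin's rules."""
        estimates = np.asarray(estimates)
        m = len(estimates)
        qbar = estimates.mean()                   # pooled point estimate
        ubar = np.mean(variances)                 # within-imputation variance
        b = estimates.var(ddof=1)                 # between-imputation variance
        total = ubar + (1.0 + 1.0 / m) * b        # total variance
        return qbar, total

    # e.g. a regression coefficient estimated on 5 imputed datasets
    print(pool_rubin([0.42, 0.47, 0.40, 0.45, 0.44],
                     [0.010, 0.011, 0.009, 0.010, 0.012]))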

  10. Materials and nanosystems : interdisciplinary computational modeling at multiple scales

    Huber, S.E.

    2014-01-01

    Over the last five decades, computer simulation and numerical modeling have become valuable tools complementing the traditional pillars of science, experiment and theory. In this thesis, several applications of computer-based simulation and modeling shall be explored in order to address problems and open issues in chemical and molecular physics. Attention shall be paid especially to the different degrees of interrelatedness and multiscale-flavor, which may - at least to some extent - be regarded as inherent properties of computational chemistry. In order to do so, a variety of computational methods are used to study features of molecular systems which are of relevance in various branches of science and which correspond to different spatial and/or temporal scales. Proceeding from small to large measures, first, an application in astrochemistry, the investigation of spectroscopic and energetic aspects of carbonic acid isomers shall be discussed. In this respect, very accurate and hence at the same time computationally very demanding electronic structure methods like the coupled-cluster approach are employed. These studies are followed by the discussion of an application in the scope of plasma-wall interaction which is related to nuclear fusion research. There, the interactions of atoms and molecules with graphite surfaces are explored using density functional theory methods. The latter are computationally cheaper than coupled-cluster methods and thus allow the treatment of larger molecular systems, but yield less accuracy and especially reduced error control at the same time. The subsequently presented exploration of surface defects at low-index polar zinc oxide surfaces, which are of interest in materials science and surface science, is another surface science application. The necessity to treat even larger systems of several hundreds of atoms requires the use of approximate density functional theory methods. Thin gold nanowires consisting of several thousands of

  11. Mechatronic Systems Design Methods, Models, Concepts

    Janschek, Klaus

    2012-01-01

    In this textbook, fundamental methods for model-based design of mechatronic systems are presented in a systematic, comprehensive form. The method framework presented here comprises domain-neutral methods for modeling and performance analysis: multi-domain modeling (energy/port/signal-based), simulation (ODE/DAE/hybrid systems), robust control methods, stochastic dynamic analysis, and quantitative evaluation of designs using system budgets. The model framework is composed of analytical dynamic models for important physical and technical domains of realization of mechatronic functions, such as multibody dynamics, digital information processing and electromechanical transducers. Building on the modeling concept of a technology-independent generic mechatronic transducer, concrete formulations for electrostatic, piezoelectric, electromagnetic, and electrodynamic transducers are presented. More than 50 fully worked out design examples clearly illustrate these methods and concepts and enable independent study of th...

  12. Dynamic information architecture system (DIAS) : multiple model simulation management

    Simunich, K. L.; Sydelko, P.; Dolph, J.; Christiansen, J.

    2002-01-01

    Dynamic Information Architecture System (DIAS) is a flexible, extensible, object-based framework for developing and maintaining complex multidisciplinary simulations of a wide variety of application contexts. The modeling domain of a specific DIAS-based simulation is determined by (1) software Entity (domain-specific) objects that represent the real-world entities that comprise the problem space (atmosphere, watershed, human), and (2) simulation models and other data processing applications that express the dynamic behaviors of the domain entities. In DIAS, models communicate only with Entity objects, never with each other. Each Entity object has a number of Parameter and Aspect (of behavior) objects associated with it. The Parameter objects contain the state properties of the Entity object. The Aspect objects represent the behaviors of the Entity object and how it interacts with other objects. DIAS extends the "Object" paradigm by abstraction of the object's dynamic behaviors, separating the WHAT from the HOW. DIAS object class definitions contain an abstract description of the various aspects of the object's behavior (the WHAT), but no implementation details (the HOW). Separate DIAS models/applications carry the implementation of object behaviors (the HOW). Any model deemed appropriate, including existing legacy-type models written in other languages, can drive entity object behavior. The DIAS design promotes plug-and-play of alternative models, with minimal recoding of existing applications. The DIAS Context Builder object builds a construct or scenario for the simulation, based on developer specification and user inputs. Because DIAS is a discrete event simulation system, there is a Simulation Manager object through which all events are processed. Any class that registers to receive events must implement an event handler (method) to process the event during execution. Event handlers can schedule other events; create or remove Entities from the

  13. Dynamic information architecture system (DIAS) : multiple model simulation management.

    Simunich, K. L.; Sydelko, P.; Dolph, J.; Christiansen, J.

    2002-05-13

    Dynamic Information Architecture System (DIAS) is a flexible, extensible, object-based framework for developing and maintaining complex multidisciplinary simulations of a wide variety of application contexts. The modeling domain of a specific DIAS-based simulation is determined by (1) software Entity (domain-specific) objects that represent the real-world entities that comprise the problem space (atmosphere, watershed, human), and (2) simulation models and other data processing applications that express the dynamic behaviors of the domain entities. In DIAS, models communicate only with Entity objects, never with each other. Each Entity object has a number of Parameter and Aspect (of behavior) objects associated with it. The Parameter objects contain the state properties of the Entity object. The Aspect objects represent the behaviors of the Entity object and how it interacts with other objects. DIAS extends the "Object" paradigm by abstraction of the object's dynamic behaviors, separating the WHAT from the HOW. DIAS object class definitions contain an abstract description of the various aspects of the object's behavior (the WHAT), but no implementation details (the HOW). Separate DIAS models/applications carry the implementation of object behaviors (the HOW). Any model deemed appropriate, including existing legacy-type models written in other languages, can drive entity object behavior. The DIAS design promotes plug-and-play of alternative models, with minimal recoding of existing applications. The DIAS Context Builder object builds a construct or scenario for the simulation, based on developer specification and user inputs. Because DIAS is a discrete event simulation system, there is a Simulation Manager object through which all events are processed. Any class that registers to receive events must implement an event handler (method) to process the event during execution. Event handlers

  14. Generalized linear longitudinal mixed models with linear covariance structure and multiplicative random effects

    Holst, René; Jørgensen, Bent

    2015-01-01

    The paper proposes a versatile class of multiplicative generalized linear longitudinal mixed models (GLLMM) with additive dispersion components, based on explicit modelling of the covariance structure. The class incorporates a longitudinal structure into the random effects models and retains a marginal as well as a conditional interpretation. The estimation procedure is based on a computationally efficient quasi-score method for the regression parameters combined with a REML-like bias-corrected Pearson estimating function for the dispersion and correlation parameters. This avoids the multidimensional integral of the conventional GLMM likelihood and allows an extension of the robust empirical sandwich estimator for use with both association and regression parameters. The method is applied to a set of otolith data, used for age determination of fish.

  15. Method of Fusion Diagnosis for Dam Service Status Based on Joint Distribution Function of Multiple Points

    Zhenxiang Jiang

    2016-01-01

    Full Text Available Traditional methods of diagnosing dam service status apply to a single measuring point and reflect only the local status of the dam, without merging multisource data effectively, so they are not suitable for diagnosing overall service status. This study proposes a new multiple-point method for diagnosing dam service status based on a joint distribution function. The function, incorporating monitoring data from multiple points, can be established with a t-copula function. The probability of any combination of measurements, an important fused value across measuring combinations, can then be calculated, and a corresponding diagnostic criterion is established with classical small-probability theory. An engineering case study indicates that the fusion diagnosis method can be conducted in real time and detects abnormal points, thereby providing a new early-warning method for engineering safety.
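
    A minimal sketch of the core fusion idea, assuming a Student-t copula with an illustrative correlation matrix, degrees of freedom and warning quantile (none of these values come from the paper):

```python
import numpy as np
from scipy import stats

# Couple several measuring points with a Student-t copula and estimate the
# joint probability that all points simultaneously exceed their warning
# quantiles. All parameter values below are illustrative.

rng = np.random.default_rng(0)
R = np.array([[1.0, 0.7, 0.5],      # assumed correlation between 3 points
              [0.7, 1.0, 0.6],
              [0.5, 0.6, 1.0]])
nu = 5                               # assumed t-copula degrees of freedom
alpha = 0.99                         # per-point warning quantile

# Sample from the t-copula: T = Z / sqrt(W/nu), U = F_t(T)
n = 200_000
Z = rng.multivariate_normal(np.zeros(3), R, size=n)
W = rng.chisquare(nu, size=n) / nu
U = stats.t.cdf(Z / np.sqrt(W)[:, None], df=nu)   # copula samples in [0,1]^3

# Joint small-probability event: every point above its alpha-quantile.
p_joint = np.mean(np.all(U > alpha, axis=1))
print(f"joint exceedance probability ~ {p_joint:.5f} "
      f"(vs {(1-alpha)**3:.2e} under independence)")
```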

  16. The initial rise method extended to multiple trapping levels in thermoluminescent materials

    Furetta, C. [CICATA-Legaria, Instituto Politecnico Nacional, 11500 Mexico D.F. (Mexico); Guzman, S. [Instituto de Ciencias Nucleares, Universidad Nacional Autonoma de Mexico, A.P. 70-543, 04510 Mexico D.F. (Mexico); Ruiz, B. [Instituto de Ciencias Nucleares, Universidad Nacional Autonoma de Mexico, A.P. 70-543, 04510 Mexico D.F. (Mexico); Departamento de Agricultura y Ganaderia, Universidad de Sonora, A.P. 305, 83190 Hermosillo, Sonora (Mexico); Cruz-Zaragoza, E., E-mail: ecruz@nucleares.unam.m [Instituto de Ciencias Nucleares, Universidad Nacional Autonoma de Mexico, A.P. 70-543, 04510 Mexico D.F. (Mexico)

    2011-02-15

    The well known Initial Rise Method (IR) is commonly used to determine the activation energy when only a single glow peak is present and analysed in a phosphor material. However, when the glow curve is more complex, a wide peak with shoulders appears in the structure. The Initial Rise Method is then not directly applicable, because multiple trapping levels are involved and the thermoluminescent analysis becomes difficult to perform. This paper takes a complex glow curve structure as an example and shows that the calculation is still possible using the IR method. The aim of the paper is to extend the well known Initial Rise Method (IR) to the case of multiple trapping levels. The IR method is applied to minerals extracted from Nopal cactus and Oregano spices, because the shape of their thermoluminescent glow curves suggests a trap distribution instead of a single trapping level.
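
    For a single trap, the initial-rise analysis reduces to a straight-line fit: on the low-temperature rising edge, I(T) ≈ C·exp(-E/(k_B·T)), so ln(I) plotted against 1/T has slope -E/k_B. A minimal sketch on synthetic data follows; the paper's extension to multiple trapping levels is not reproduced here.

```python
import numpy as np

# Standard initial-rise fit on the rising edge of a glow peak.
# The data are synthetic and illustrative.

k_B = 8.617e-5                      # Boltzmann constant, eV/K
E_true = 1.1                        # assumed activation energy, eV
T = np.linspace(330.0, 370.0, 25)   # initial-rise temperatures, K
I = 1e14 * np.exp(-E_true / (k_B * T))

slope, intercept = np.polyfit(1.0 / T, np.log(I), 1)
E_fit = -slope * k_B
print(f"activation energy ~ {E_fit:.3f} eV")   # recovers 1.100 eV
```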

  17. The initial rise method extended to multiple trapping levels in thermoluminescent materials.

    Furetta, C; Guzmán, S; Ruiz, B; Cruz-Zaragoza, E

    2011-02-01

    The well known Initial Rise Method (IR) is commonly used to determine the activation energy when only a single glow peak is present and analysed in a phosphor material. However, when the glow curve is more complex, a wide peak with shoulders appears in the structure. The Initial Rise Method is then not directly applicable, because multiple trapping levels are involved and the thermoluminescent analysis becomes difficult to perform. This paper takes a complex glow curve structure as an example and shows that the calculation is still possible using the IR method. The aim of the paper is to extend the well known Initial Rise Method (IR) to the case of multiple trapping levels. The IR method is applied to minerals extracted from Nopal cactus and Oregano spices, because the shape of their thermoluminescent glow curves suggests a trap distribution instead of a single trapping level. Copyright © 2010 Elsevier Ltd. All rights reserved.

  18. The initial rise method extended to multiple trapping levels in thermoluminescent materials

    Furetta, C.; Guzman, S.; Ruiz, B.; Cruz-Zaragoza, E.

    2011-01-01

    The well known Initial Rise Method (IR) is commonly used to determine the activation energy when only a single glow peak is present and analysed in a phosphor material. However, when the glow curve is more complex, a wide peak with shoulders appears in the structure. The Initial Rise Method is then not directly applicable, because multiple trapping levels are involved and the thermoluminescent analysis becomes difficult to perform. This paper takes a complex glow curve structure as an example and shows that the calculation is still possible using the IR method. The aim of the paper is to extend the well known Initial Rise Method (IR) to the case of multiple trapping levels. The IR method is applied to minerals extracted from Nopal cactus and Oregano spices, because the shape of their thermoluminescent glow curves suggests a trap distribution instead of a single trapping level.

  19. VIKOR Method for Interval Neutrosophic Multiple Attribute Group Decision-Making

    Yu-Han Huang

    2017-11-01

    Full Text Available In this paper, we extend the VIKOR (VIsekriterijumska optimizacija i KOmpromisno Resenje) method to multiple attribute group decision-making (MAGDM) with interval neutrosophic numbers (INNs). Firstly, the basic concepts of INNs are briefly presented. The method first aggregates all individual decision-makers' assessment information based on an interval neutrosophic weighted averaging (INWA) operator, and then employs the extended classical VIKOR method to solve MAGDM problems with INNs. The validity and stability of this method are verified by example analysis and sensitivity analysis, and its superiority is illustrated by a comparison with existing methods.
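
    For orientation, the classical crisp VIKOR core that the paper extends to INNs can be sketched in a few lines; the decision matrix, weights and strategy parameter below are illustrative.

```python
import numpy as np

# Classical (crisp) VIKOR: group utility S, individual regret R, and the
# compromise index Q. Benefit-type criteria assumed; data illustrative.

X = np.array([[7.0, 9.0, 5.0],     # alternatives x criteria
              [8.0, 7.0, 8.0],
              [9.0, 6.0, 7.0]])
w = np.array([0.4, 0.35, 0.25])    # assumed criterion weights
v = 0.5                            # weight of the "majority" strategy

f_star, f_minus = X.max(axis=0), X.min(axis=0)       # ideal / anti-ideal
D = w * (f_star - X) / (f_star - f_minus)            # weighted normalized gaps
S, R = D.sum(axis=1), D.max(axis=1)                  # group utility / regret

Q = (v * (S - S.min()) / (S.max() - S.min())
     + (1 - v) * (R - R.min()) / (R.max() - R.min()))
print("ranking (best first):", np.argsort(Q))        # smaller Q is better
```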

  20. Verification of road databases using multiple road models

    Ziems, Marcel; Rottensteiner, Franz; Heipke, Christian

    2017-08-01

    In this paper a new approach for automatic road database verification based on remote sensing images is presented. In contrast to existing methods, the applicability of the new approach is not restricted to specific road types, context areas or geographic regions. This is achieved by combining several state-of-the-art road detection and road verification approaches that work well under different circumstances. Each one serves as an independent module representing a unique road model and a specific processing strategy. All modules provide independent solutions for the verification problem of each road object stored in the database in the form of two probability distributions, the first one for the state of a database object (correct or incorrect), and the second one for the state of the underlying road model (applicable or not applicable). In accordance with the Dempster-Shafer theory, both distributions are mapped to a new state space comprising the classes correct, incorrect and unknown. Statistical reasoning is applied to obtain the optimal state of a road object. A comparison with state-of-the-art road detection approaches using benchmark datasets shows that in general the proposed approach provides results with greater completeness. Additional experiments reveal that a highly reliable semi-automatic approach for road database verification can be designed based on the proposed method.
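
    The combination step can be illustrated with Dempster's rule over the three classes named above, treating 'unknown' as the full frame of discernment; the module outputs below are illustrative, not values from the paper.

```python
# Dempster's rule over focal sets {C}=correct, {I}=incorrect, and
# Theta='unknown' (full frame). Mass values are illustrative.

def combine(m1, m2):
    """Combine two basic mass assignments by Dempster's rule."""
    c = m1["C"] * m2["C"] + m1["C"] * m2["U"] + m1["U"] * m2["C"]
    i = m1["I"] * m2["I"] + m1["I"] * m2["U"] + m1["U"] * m2["I"]
    u = m1["U"] * m2["U"]
    k = m1["C"] * m2["I"] + m1["I"] * m2["C"]   # conflicting mass
    norm = 1.0 - k
    return {"C": c / norm, "I": i / norm, "U": u / norm}

module_a = {"C": 0.6, "I": 0.1, "U": 0.3}   # confident, model applicable
module_b = {"C": 0.3, "I": 0.2, "U": 0.5}   # weaker evidence
print(combine(module_a, module_b))          # pooled belief per road object
```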

  1. Modeling the Potential Effects of New Tobacco Products and Policies: A Dynamic Population Model for Multiple Product Use and Harm

    Vugrin, Eric D.; Rostron, Brian L.; Verzi, Stephen J.; Brodsky, Nancy S.; Brown, Theresa J.; Choiniere, Conrad J.; Coleman, Blair N.; Paredes, Antonio; Apelberg, Benjamin J.

    2015-01-01

    Background Recent declines in US cigarette smoking prevalence have coincided with increases in use of other tobacco products. Multiple product tobacco models can help assess the population health impacts associated with use of a wide range of tobacco products. Methods and Findings We present a multi-state, dynamical systems population structure model that can be used to assess the effects of tobacco product use behaviors on population health. The model incorporates transition behaviors, such as initiation, cessation, switching, and dual use, related to the use of multiple products. The model tracks product use prevalence and mortality attributable to tobacco use for the overall population and by sex and age group. The model can also be used to estimate differences in these outcomes between scenarios by varying input parameter values. We demonstrate model capabilities by projecting future cigarette smoking prevalence and smoking-attributable mortality and then simulating the effects of introduction of a hypothetical new lower-risk tobacco product under a variety of assumptions about product use. Sensitivity analyses were conducted to examine the range of population impacts that could occur due to differences in input values for product use and risk. We demonstrate that potential benefits from cigarette smokers switching to the lower-risk product can be offset over time through increased initiation of this product. Model results show that population health benefits are particularly sensitive to product risks and initiation, switching, and dual use behaviors. Conclusion Our model incorporates the variety of tobacco use behaviors and risks that occur with multiple products. As such, it can evaluate the population health impacts associated with the introduction of new tobacco products or policies that may result in product switching or dual use. Further model development will include refinement of data inputs for non-cigarette tobacco products and inclusion of health
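
    The flavour of such a multi-state transition model can be sketched with a simple Markov-style prevalence update; the states and annual transition probabilities below are purely illustrative and are not the paper's calibrated inputs.

```python
import numpy as np

# Illustrative compartmental update for multiple-product tobacco use.

states = ["never", "cigarette", "new_product", "dual", "former"]
P = np.array([                      # row -> column annual transition probs
    [0.95, 0.03, 0.02, 0.00, 0.00], # never: initiation into either product
    [0.00, 0.88, 0.04, 0.04, 0.04], # cigarette: switching, dual use, quitting
    [0.00, 0.03, 0.90, 0.02, 0.05], # new product
    [0.00, 0.10, 0.10, 0.75, 0.05], # dual use
    [0.00, 0.02, 0.02, 0.00, 0.96], # former: relapse
])
assert np.allclose(P.sum(axis=1), 1.0)

x = np.array([0.60, 0.20, 0.05, 0.02, 0.13])   # initial prevalence vector
for year in range(10):
    x = x @ P                                   # one annual update
print(dict(zip(states, np.round(x, 3))))        # prevalence after 10 years
```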

  2. Integrating Multiple Geophysical Methods to Quantify Alpine Groundwater- Surface Water Interactions: Cordillera Blanca, Peru

    Glas, R. L.; Lautz, L.; McKenzie, J. M.; Baker, E. A.; Somers, L. D.; Aubry-Wake, C.; Wigmore, O.; Mark, B. G.; Moucha, R.

    2016-12-01

    Groundwater- surface water interactions in alpine catchments are often poorly understood as groundwater and hydrologic data are difficult to acquire in these remote areas. The Cordillera Blanca of Peru is a region where dry-season water supply is increasingly stressed due to the accelerated melting of glaciers throughout the range, affecting millions of people country-wide. The alpine valleys of the Cordillera Blanca have shown potential for significant groundwater storage and discharge to valley streams, which could buffer the dry-season variability of streamflow throughout the watershed as glaciers continue to recede. Known as pampas, the clay-rich, low-relief valley bottoms are interfingered with talus deposits, providing a likely pathway for groundwater recharged at the valley edges to be stored and slowly released to the stream throughout the year by springs. Multiple geophysical methods were used to determine areas of groundwater recharge and discharge as well as aquifer geometry of the pampa system. Seismic refraction tomography, vertical electrical sounding (VES), electrical resistivity tomography (ERT), and horizontal-to-vertical spectral ratio (HVSR) seismic methods were used to determine the physical properties of the unconsolidated valley sediments, the depth to saturation, and the depth to bedrock for a representative section of the Quilcayhuanca Valley in the Cordillera Blanca. Depth to saturation and lithological boundaries were constrained by comparing geophysical results to continuous records of water levels and sediment core logs from a network of seven piezometers installed to depths of up to 6 m. Preliminary results show an average depth to bedrock for the study area of 25 m, which varies spatially along with water table depths across the valley. The conceptual model of groundwater flow and storage derived from these geophysical data will be used to inform future groundwater flow models of the area, allowing for the prediction of groundwater

  3. Predictive model of Amorphophallus muelleri growth in some agroforestry in East Java by multiple regression analysis

    BUDIMAN

    2012-01-01

    Full Text Available Budiman, Arisoesilaningsih E. 2012. Predictive model of Amorphophallus muelleri growth in some agroforestry in East Java by multiple regression analysis. Biodiversitas 13: 18-22. The aim of this research was to determine multiple regression models of the vegetative and corm growth of Amorphophallus muelleri Blume across age variations and habitat conditions of agroforestry in East Java. A descriptive exploratory research method was conducted by systematic random sampling at five agroforestries on four plantations in East Java: Saradan, Bojonegoro, Nganjuk and Blitar. In each agroforestry, we observed A. muelleri vegetative and corm growth at four growing ages (1, 2, 3 and 4 years old, respectively) as well as environmental variables such as altitude, vegetation, climate and soil conditions. Data were analyzed using descriptive statistics to compare A. muelleri habitats in the five agroforestries. Meanwhile, the influence and contribution of each environmental variable to A. muelleri vegetative and corm growth were determined using multiple regression analysis in SPSS 17.0. The multiple regression models of A. muelleri vegetative and corm growth, generated from agroforestry characteristics and age, showed high validity, with R2 = 88-99%. The regression models showed that age, monthly temperature, percentage of radiation and soil calcium (Ca) content, either simultaneously or partially, determined A. muelleri vegetative and corm growth. Based on these models, A. muelleri corms reach optimal growth after four years of cultivation and are then ready to be harvested. Additionally, the soil Ca content should reach 25.3 me.hg-1, as in the Sugihwaras agroforestry, with maximal radiation of 60%.
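
    The fitting step itself is ordinary multiple linear regression; a minimal sketch with synthetic stand-ins for the study's predictors (age, temperature, radiation, soil Ca):

```python
import numpy as np

# Multiple linear regression of corm growth on environmental predictors.
# Data are synthetic placeholders, not the survey data.

rng = np.random.default_rng(1)
n = 40
age = rng.integers(1, 5, n)             # years of cultivation
temp = rng.uniform(22, 30, n)           # monthly temperature, deg C
radiation = rng.uniform(40, 70, n)      # percentage of radiation
ca = rng.uniform(10, 30, n)             # soil Ca content

corm = 0.8*age + 0.05*temp + 0.02*radiation + 0.03*ca + rng.normal(0, 0.1, n)

X = np.column_stack([np.ones(n), age, temp, radiation, ca])
beta, *_ = np.linalg.lstsq(X, corm, rcond=None)
resid = corm - X @ beta
r2 = 1 - resid.var() / corm.var()
print("coefficients:", np.round(beta, 3), " R^2 =", round(r2, 3))
```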

  4. The impact of secure messaging on workflow in primary care: Results of a multiple-case, multiple-method study.

    Hoonakker, Peter L T; Carayon, Pascale; Cartmill, Randi S

    2017-04-01

    Secure messaging is a relatively new addition to health information technology (IT). Several studies have examined the impact of secure messaging on (clinical) outcomes, but very few studies have examined the impact on workflow in primary care clinics. In this study we examined the impact of secure messaging on the workflow of clinicians, staff and patients. We used a multiple case study design with multiple data collection methods (observation, interviews and survey). Results show that secure messaging has the potential to improve communication and information flow and the organization of work in primary care clinics, partly due to the possibility of asynchronous communication. However, secure messaging can also have a negative effect on communication and increase workload, especially if patients send messages that are not appropriate for the secure messaging medium (for example, messages that are too long, complex, ambiguous, or inappropriate). Results show that clinicians are ambivalent about secure messaging. Secure messaging can add to their workload, especially if message volume is high, and currently they are not compensated for these activities. Staff members are, especially compared to clinicians, relatively positive about secure messaging, and patients are overall very satisfied with secure messaging. Finally, clinicians, staff and patients think that secure messaging can have a positive effect on quality of care and patient safety. Secure messaging is a tool that has the potential to improve communication and information flow. However, the potential of secure messaging to improve workflow depends on the way it is implemented and used. Copyright © 2017 Elsevier B.V. All rights reserved.

  5. Comparison between Two Assessment Methods; Modified Essay Questions and Multiple Choice Questions

    Assadi S.N.* MD

    2015-09-01

    Full Text Available Aims Using the best assessment methods is an important factor in the educational development of health students. Modified essay questions and multiple choice questions are two prevalent methods of assessing students. The aim of this study was to compare the modified essay questions and multiple choice questions methods in occupational health engineering and work laws courses. Materials & Methods This semi-experimental study was performed from 2013 to 2014 on occupational health students of Mashhad University of Medical Sciences. The class taking the occupational health and work laws courses in 2013 was considered group A and the class of 2014 group B. Each group had 50 students. The group A students were assessed by the modified essay questions method and the group B students by the multiple choice questions method. Data were analyzed in SPSS 16 software by paired t-test and odds ratio. Findings The mean grade of the occupational health and work laws courses was 18.68±0.91 in group A (modified essay questions) and 18.78±0.86 in group B (multiple choice questions), which was not significantly different (t=-0.41; p=0.684). The mean grades of the chemical chapter (p<0.001) in occupational health engineering and of the harmful work law (p<0.001) and other (p=0.015) chapters in work laws were significantly different between the two groups. Conclusion The modified essay questions and multiple choice questions methods have nearly the same value for assessing students in the occupational health engineering and work laws courses.

  6. Problem solving based learning model with multiple representations to improve student's mental modelling ability on physics

    Haili, Hasnawati; Maknun, Johar; Siahaan, Parsaoran

    2017-08-01

    Physics is a subject related to students' daily experience. Therefore, before studying it formally in class, students already have a visualization and prior knowledge of natural phenomena and can broaden it themselves. The learning process in class should aim to detect, process, construct, and use students' mental models, so that students' mental models agree with and are built on the right concepts. A previous study held in MAN 1 Muna showed that in the learning process the teacher did not pay attention to students' mental models. As a consequence, the learning process had not tried to build students' mental modelling ability (MMA). The purpose of this study is to describe the improvement of students' MMA as an effect of a problem solving based learning model with a multiple representations approach. This study is a pre-experimental design with a one-group pretest-posttest. It was conducted in class XI IPA of MAN 1 Muna in 2016/2017. Data collection used a problem solving test on the concept of the kinetic theory of gases and interviews to assess students' MMA. The result of this study is a classification of students' MMA into three categories: High Mental Modelling Ability (H-MMA) for scores above 7, Medium Mental Modelling Ability (M-MMA) for scores above 3 up to 7, and Low Mental Modelling Ability (L-MMA) for 0 ≤ x ≤ 3. The results show that a problem solving based learning model with a multiple representations approach can be an alternative to be applied in improving students' MMA.

  7. MULTIPLE CRITERIA METHODS WITH FOCUS ON ANALYTIC HIERARCHY PROCESS AND GROUP DECISION MAKING

    Lidija Zadnik-Stirn

    2010-12-01

    Full Text Available Managing natural resources is a group multiple criteria decision making problem. In this paper the analytic hierarchy process is the chosen method for handling natural resource problems. The single decision maker problem is discussed, and three methods for deriving the priority vector are presented: the eigenvector method, the data envelopment analysis method, and the logarithmic least squares method. Further, the group analytic hierarchy process is discussed, and six methods for the aggregation of individual judgments or priorities are compared: the weighted arithmetic mean method, the weighted geometric mean method, and four methods based on data envelopment analysis. A case study on land use in Slovenia is presented. The conclusions review consistency, sensitivity analyses, and some future directions of research.
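
    Two of the ingredients named above, the eigenvector method for a single decision maker and geometric-mean aggregation for the group case, can be sketched as follows; the pairwise comparison matrices are illustrative.

```python
import numpy as np

# AHP priority vector as the principal eigenvector of a reciprocal pairwise
# comparison matrix, plus element-wise geometric-mean group aggregation.

def priority_vector(A):
    vals, vecs = np.linalg.eig(A)
    k = np.argmax(vals.real)                  # principal eigenvalue
    w = np.abs(vecs[:, k].real)
    return w / w.sum()

A1 = np.array([[1.0, 3.0, 5.0],
               [1/3, 1.0, 2.0],
               [1/5, 1/2, 1.0]])
A2 = np.array([[1.0, 2.0, 4.0],
               [1/2, 1.0, 3.0],
               [1/4, 1/3, 1.0]])

G = (A1 * A2) ** 0.5                          # geometric-mean aggregation
print("DM1:", np.round(priority_vector(A1), 3))
print("group:", np.round(priority_vector(G), 3))
```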

  8. A Fast Multiple-Kernel Method With Applications to Detect Gene-Environment Interaction.

    Marceau, Rachel; Lu, Wenbin; Holloway, Shannon; Sale, Michèle M; Worrall, Bradford B; Williams, Stephen R; Hsu, Fang-Chi; Tzeng, Jung-Ying

    2015-09-01

    Kernel machine (KM) models are a powerful tool for exploring associations between sets of genetic variants and complex traits. Although most KM methods use a single kernel function to assess the marginal effect of a variable set, KM analyses involving multiple kernels have become increasingly popular. Multikernel analysis allows researchers to study more complex problems, such as assessing gene-gene or gene-environment interactions, incorporating variance-component based methods for population substructure into rare-variant association testing, and assessing the conditional effects of a variable set adjusting for other variable sets. The KM framework is robust, powerful, and provides efficient dimension reduction for multifactor analyses, but requires the estimation of high dimensional nuisance parameters. Traditional estimation techniques, including regularization and the "expectation-maximization (EM)" algorithm, have a large computational cost and are not scalable to large sample sizes needed for rare variant analysis. Therefore, under the context of gene-environment interaction, we propose a computationally efficient and statistically rigorous "fastKM" algorithm for multikernel analysis that is based on a low-rank approximation to the nuisance effect kernel matrices. Our algorithm is applicable to various trait types (e.g., continuous, binary, and survival traits) and can be implemented using any existing single-kernel analysis software. Through extensive simulation studies, we show that our algorithm has similar performance to an EM-based KM approach for quantitative traits while running much faster. We also apply our method to the Vitamin Intervention for Stroke Prevention (VISP) clinical trial, examining gene-by-vitamin effects on recurrent stroke risk and gene-by-age effects on change in homocysteine level. © 2015 WILEY PERIODICALS, INC.
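
    The paper's speed-up rests on a low-rank approximation of the nuisance kernel matrices; a generic Nyström sketch conveys the idea (this is not the authors' fastKM code, and the kernel, data and rank below are illustrative):

```python
import numpy as np

# Nystrom low-rank approximation: K ~ C W^+ C^T from m << n landmark
# columns, so downstream solves cost O(n m^2) instead of O(n^3).

rng = np.random.default_rng(2)
n, m = 1000, 50
X = rng.normal(size=(n, 10))

def rbf(A, B, gamma=0.1):
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

idx = rng.choice(n, size=m, replace=False)     # landmark points
C = rbf(X, X[idx])                             # n x m
W = C[idx]                                     # m x m
K_approx = C @ np.linalg.pinv(W) @ C.T         # rank-m approximation

K_exact = rbf(X, X)
err = np.linalg.norm(K_exact - K_approx) / np.linalg.norm(K_exact)
print(f"relative Frobenius error ~ {err:.3f}")
```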

  9. Modeling of Particle Acceleration at Multiple Shocks Via Diffusive Shock Acceleration: Preliminary Results

    Parker, L. N.; Zank, G. P.

    2013-12-01

    Successful forecasting of energetic particle events in space weather models require algorithms for correctly predicting the spectrum of ions accelerated from a background population of charged particles. We present preliminary results from a model that diffusively accelerates particles at multiple shocks. Our basic approach is related to box models (Protheroe and Stanev, 1998; Moraal and Axford, 1983; Ball and Kirk, 1992; Drury et al., 1999) in which a distribution of particles is diffusively accelerated inside the box while simultaneously experiencing decompression through adiabatic expansion and losses from the convection and diffusion of particles outside the box (Melrose and Pope, 1993; Zank et al., 2000). We adiabatically decompress the accelerated particle distribution between each shock by either the method explored in Melrose and Pope (1993) and Pope and Melrose (1994) or by the approach set forth in Zank et al. (2000) where we solve the transport equation by a method analogous to operator splitting. The second method incorporates the additional loss terms of convection and diffusion and allows for the use of a variable time between shocks. We use a maximum injection energy (Emax) appropriate for quasi-parallel and quasi-perpendicular shocks (Zank et al., 2000, 2006; Dosch and Shalchi, 2010) and provide a preliminary application of the diffusive acceleration of particles by multiple shocks with frequencies appropriate for solar maximum (i.e., a non-Markovian process).

  10. Privacy Protection Method for Multiple Sensitive Attributes Based on Strong Rule

    Tong Yi

    2015-01-01

    Full Text Available At present, most studies on data publishing consider only a single sensitive attribute, and work on multiple sensitive attributes is still scarce. Moreover, almost all existing studies on multiple sensitive attributes have not taken the inherent relationships between sensitive attributes into account, so an adversary can use background knowledge about these relationships to attack the privacy of users. This paper presents an attack model based on the association rules between sensitive attributes and, accordingly, presents a data publication method for multiple sensitive attributes. Through proof and analysis, the new model can prevent an adversary from using background knowledge about association rules to attack privacy, and it is able to produce high-quality released information. Finally, this paper verifies the above conclusions with experiments.

  11. Numerical modelling of multiple scattering between two elastical particles

    Bjørnø, Irina; Jensen, Leif Bjørnø

    1998-01-01

    Multiple acoustical signal interactions with sediment particles in the vicinity of the seabed may significantly change the course of sediment concentration profiles determined by inversion from acoustical backscattering measurements. The scattering properties of high concentrations of sediments in suspension have been studied extensively since Foldy's formulation of his theory for isotropic scattering by randomly distributed scatterers. However, a number of important problems related to multiple scattering are still far from finding their solutions. A particular, but still unsolved, problem is the question of proximity thresholds for the influence of multiple scattering in terms of particle properties like volume fraction, average distance between particles or other related parameters. A few available experimental data indicate a significance of multiple scattering in suspensions where the concentration

  12. 231 Using Multiple Regression Analysis in Modelling the Role of ...

    of Internal Revenue, Tourism Bureau and hotel records. The multiple regression … additional guest facilities such as restaurant, a swimming pool or child care and social function … and provide good quality service to the public. Conclusion.

  13. Generalized framework for context-specific metabolic model extraction methods

    Semidán Robaina Estévez

    2014-09-01

    Full Text Available Genome-scale metabolic models are increasingly applied to investigate the physiology not only of simple prokaryotes, but also eukaryotes, such as plants, characterized with compartmentalized cells of multiple types. While genome-scale models aim at including the entirety of known metabolic reactions, mounting evidence has indicated that only a subset of these reactions is active in a given context, including: developmental stage, cell type, or environment. As a result, several methods have been proposed to reconstruct context-specific models from existing genome-scale models by integrating various types of high-throughput data. Here we present a mathematical framework that puts all existing methods under one umbrella and provides the means to better understand their functioning, highlight similarities and differences, and to help users in selecting a most suitable method for an application.

  14. Optimization Method of Fusing Model Tree into Partial Least Squares

    Yu Fang

    2017-01-01

    Full Text Available Partial Least Squares (PLS) cannot adapt to the characteristics of the data of many fields due to its features of multiple independent variables, multiple dependent variables and non-linearity. However, a Model Tree (MT), which is made up of many multiple linear segments, has good adaptability to nonlinear functions. Based on this, a new method combining PLS and MT to analyse and predict data is proposed, which builds an MT from the principal components and the explanatory variables (the dependent variables) extracted from PLS, and repeatedly extracts residual information to build further model trees until a satisfactory accuracy condition is met. Using data on the maxingshigan decoction of the monarch drug to treat asthma or cough, together with two sample sets from the UCI Machine Learning Repository, the experimental results show that the new method improves both explanatory and predictive ability.
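
    A hedged sketch of the fusion loop described above, with an ordinary regression tree standing in for the model tree and synthetic data and stopping thresholds:

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.tree import DecisionTreeRegressor

# Fit PLS, then repeatedly fit trees to the remaining residuals on the PLS
# component scores until the fit is "good enough". Illustrative only.

rng = np.random.default_rng(3)
X = rng.normal(size=(200, 8))
y = np.sin(X[:, 0]) + 0.5 * X[:, 1] ** 2 + rng.normal(0, 0.1, 200)

pls = PLSRegression(n_components=3).fit(X, y)
scores = pls.transform(X)                     # PLS components as tree inputs
pred = pls.predict(X).ravel()                 # linear PLS baseline

trees = []
for _ in range(5):                            # residual-refinement loop
    resid = y - pred
    if np.mean(resid ** 2) < 0.02:            # "satisfactory accuracy" stop
        break
    t = DecisionTreeRegressor(max_depth=3).fit(scores, resid)
    trees.append(t)
    pred = pred + t.predict(scores)

print("MSE:", round(float(np.mean((y - pred) ** 2)), 4), "trees used:", len(trees))
```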

  15. Reduction of interferences in graphite furnace atomic absorption spectrometry by multiple linear regression modelling

    Grotti, Marco; Abelmoschi, Maria Luisa; Soggia, Francesco; Tiberiade, Christian; Frache, Roberto

    2000-12-01

    The multivariate effects of Na, K, Mg and Ca as nitrates on the electrothermal atomisation of manganese, cadmium and iron were studied by multiple linear regression modelling. Since the models proved to efficiently predict the effects of the considered matrix elements in a wide range of concentrations, they were applied to correct the interferences occurring in the determination of trace elements in seawater after pre-concentration of the analytes. In order to obtain a statistically significant number of samples, a large volume of the certified seawater reference materials CASS-3 and NASS-3 was treated with Chelex-100 resin; then, the chelating resin was separated from the solution, divided into several sub-samples, each of them was eluted with nitric acid and analysed by electrothermal atomic absorption spectrometry (for trace element determinations) and inductively coupled plasma optical emission spectrometry (for matrix element determinations). To minimise any other systematic error besides that due to matrix effects, accuracy of the pre-concentration step and contamination levels of the procedure were checked by inductively coupled plasma mass spectrometric measurements. Analytical results obtained by applying the multiple linear regression models were compared with those obtained with other calibration methods, such as external calibration using acid-based standards, external calibration using matrix-matched standards and the analyte addition technique. Empirical models proved to efficiently reduce interferences occurring in the analysis of real samples, allowing an improvement of accuracy better than for other calibration methods.
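
    The correction idea can be sketched as follows: regress the relative analyte response on the matrix-element concentrations, then divide a measured signal by the predicted suppression factor. All coefficients and data below are synthetic placeholders, not the paper's calibration.

```python
import numpy as np

# Multiple linear regression model of matrix interference, then correction
# of a measured signal by the predicted suppression factor. Illustrative.

rng = np.random.default_rng(4)
n = 30
M = rng.uniform(0, 500, size=(n, 4))          # Na, K, Mg, Ca in mg/L

# Relative response (signal with matrix / signal without), synthetic truth:
true_beta = np.array([-2e-4, -1e-4, -3e-4, -1.5e-4])
response = 1.0 + M @ true_beta + rng.normal(0, 0.01, n)

X = np.column_stack([np.ones(n), M])
beta, *_ = np.linalg.lstsq(X, response, rcond=None)

# Correct a new measurement given its (independently measured) matrix levels:
matrix_new = np.array([300.0, 150.0, 80.0, 120.0])
suppression = float(np.array([1.0, *matrix_new]) @ beta)
signal_measured = 0.82
print("corrected signal:", round(signal_measured / suppression, 3))
```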

  16. Methods and models used in comparative risk studies

    Devooght, J.

    1983-01-01

    Comparative risk studies make use of a large number of methods and models based upon assumptions that are incompletely formulated or upon value judgements. Owing to the multidimensionality of risks and benefits, the economic and social context may notably influence the final result. Five classes of models are briefly reviewed: accounting of fluxes of effluents, radiation and energy; transport models and health effects; systems reliability and Bayesian analysis; economic analysis of reliability and cost-risk-benefit analysis; and decision theory in the presence of uncertainty and multiple objectives. The purpose and prospects of comparative studies are assessed in view of probably diminishing returns for large generic comparisons [fr]

  17. Determination of osteoporosis risk factors using a multiple logistic regression model in postmenopausal Turkish women.

    Akkus, Zeki; Camdeviren, Handan; Celik, Fatma; Gur, Ali; Nas, Kemal

    2005-09-01

    To determine the risk factors of osteoporosis using a multiple binary logistic regression method and to assess the risk variables for osteoporosis, which is a major and growing health problem in many countries. We present a case-control study, consisting of 126 postmenopausal healthy women as the control group and 225 postmenopausal osteoporotic women as the case group. The study was carried out in the Department of Physical Medicine and Rehabilitation, Dicle University, Diyarbakir, Turkey between 1999-2002. The data from the 351 participants were collected using a standard questionnaire that contains 43 variables. A multiple logistic regression model was then used to evaluate the data and to find the best regression model. We classified 80.1% (281/351) of the participants correctly using the regression model. Furthermore, the specificity of the model was 67% (84/126) in the control group, while the sensitivity was 88% (197/225) in the case group. Using the Kolmogorov-Smirnov test, we found the distribution of the standardized residual values for the final model to be exponential (p=0.193). The receiver operating characteristic curve was found to successfully predict patients at risk for osteoporosis. This study suggests that low levels of dietary calcium intake, physical activity and education, and a longer duration of menopause are independent predictors of the risk of low bone density in our population. Adequate dietary calcium intake in combination with maintaining daily physical activity, increasing educational level, and decreasing birth rate and duration of breast-feeding may contribute to healthy bones and play a role in the practical prevention of osteoporosis in Southeast Anatolia. In addition, the findings of the present study indicate that the use of a multivariate statistical method such as multiple logistic regression for osteoporosis, which may be influenced by many variables, is better than univariate statistical evaluation.
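
    The modelling step is a standard binary logistic regression reported through sensitivity and specificity; a minimal sketch on synthetic stand-in predictors:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import confusion_matrix

# Binary logistic regression for case/control status, reported via
# sensitivity and specificity. Data are synthetic placeholders.

rng = np.random.default_rng(5)
n = 351
calcium = rng.uniform(200, 1200, n)        # dietary Ca intake, mg/day
activity = rng.uniform(0, 10, n)           # physical activity score
menopause = rng.uniform(1, 30, n)          # years since menopause

logit = 2.0 - 0.003 * calcium - 0.2 * activity + 0.08 * menopause
y = (rng.uniform(size=n) < 1 / (1 + np.exp(-logit))).astype(int)

X = np.column_stack([calcium, activity, menopause])
model = LogisticRegression(max_iter=1000).fit(X, y)

tn, fp, fn, tp = confusion_matrix(y, model.predict(X)).ravel()
print("sensitivity:", round(tp / (tp + fn), 2),
      "specificity:", round(tn / (tn + fp), 2))
```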

  18. Risk score modeling of multiple gene to gene interactions using aggregated-multifactor dimensionality reduction

    Dai Hongying

    2013-01-01

    Full Text Available Abstract Background Multifactor Dimensionality Reduction (MDR) has been widely applied to detect gene-gene (GxG) interactions associated with complex diseases. Existing MDR methods summarize disease risk by a dichotomous predisposing model (high-risk/low-risk) from one optimal GxG interaction, which does not take the accumulated effects from multiple GxG interactions into account. Results We propose an Aggregated-Multifactor Dimensionality Reduction (A-MDR) method that exhaustively searches for and detects significant GxG interactions to generate an epistasis-enriched gene network. An aggregated epistasis-enriched risk score, which takes into account multiple GxG interactions simultaneously, replaces the dichotomous predisposing risk variable and provides higher resolution in the quantification of disease susceptibility. We evaluate this new A-MDR approach in a broad range of simulations. Also, we present the results of an application of the A-MDR method to a data set derived from Juvenile Idiopathic Arthritis patients treated with methotrexate (MTX) that revealed several GxG interactions in the folate pathway that were associated with treatment response. The epistasis-enriched risk score that pooled information from 82 significant GxG interactions distinguished MTX responders from non-responders with 82% accuracy. Conclusions The proposed A-MDR is innovative in the MDR framework to investigate aggregated effects among GxG interactions. New measures (pOR, pRR and pChi) are proposed to detect multiple GxG interactions.

  19. Modeling multiple visual words assignment for bag-of-features based medical image retrieval

    Wang, Jim Jing-Yan

    2012-01-01

    In this paper, we investigate the bag-of-features based medical image retrieval methods, which represent an image as a collection of local features, such as image patches and key points with SIFT descriptors. To improve the bag-of-features method, we first model the assignment of local descriptors as contribution functions, and then propose a new multiple assignment strategy. By assuming the local feature can be reconstructed by its neighboring visual words in the vocabulary, we solve the reconstruction weights as a QP problem and then use the solved weights as contribution functions, which results in a new assignment method called the QP assignment. We carry out our experiments on ImageCLEFmed datasets. Experimental results show that our proposed method exceeds the performance of traditional solutions and works well for bag-of-features based medical image retrieval tasks.

  20. Modeling multiple visual words assignment for bag-of-features based medical image retrieval

    Wang, Jim Jing-Yan; Almasri, Islam

    2012-01-01

    In this paper, we investigate the bag-of-features based medical image retrieval methods, which represent an image as a collection of local features, such as image patches and key points with SIFT descriptors. To improve the bag-of-features method, we first model the assignment of local descriptors as contribution functions, and then propose a new multiple assignment strategy. By assuming the local feature can be reconstructed by its neighboring visual words in the vocabulary, we solve the reconstruction weights as a QP problem and then use the solved weights as contribution functions, which results in a new assignment method called the QP assignment. We carry out our experiments on ImageCLEFmed datasets. Experimental results show that our proposed method exceeds the performance of traditional solutions and works well for bag-of-features based medical image retrieval tasks.
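
    The QP assignment step described above can be sketched as a small constrained least-squares problem; the vocabulary, descriptor and constraint set below are illustrative, and the paper's exact formulation may differ.

```python
import numpy as np
from scipy.optimize import minimize

# Reconstruct a local descriptor x from its k nearest visual words B
# (columns of the vocabulary), solving
#   min ||x - B w||^2   s.t.  sum(w) = 1,  w >= 0,
# and use the solved weights w as soft assignment contributions.

rng = np.random.default_rng(6)
d, k = 64, 5
B = rng.normal(size=(d, k))                  # k neighboring visual words
x = B @ np.array([0.6, 0.3, 0.1, 0.0, 0.0]) + rng.normal(0, 0.01, d)

obj = lambda w: np.sum((x - B @ w) ** 2)
res = minimize(obj, x0=np.full(k, 1.0 / k),
               method="SLSQP",
               bounds=[(0.0, None)] * k,
               constraints=[{"type": "eq", "fun": lambda w: w.sum() - 1.0}])
print("assignment weights:", np.round(res.x, 3))   # sums to 1
```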

  1. An Extended TOPSIS Method for the Multiple Attribute Decision Making Problems Based on Interval Neutrosophic Set

    Pingping Chi

    2013-03-01

    Full Text Available The interval neutrosophic set (INS) can more easily express incomplete, indeterminate and inconsistent information, and TOPSIS is one of the most commonly used and effective methods for multiple attribute decision making; however, in general it can only process attribute values given as crisp numbers. In this paper, we have extended TOPSIS to INSs, and with respect to multiple attribute decision making problems in which the attribute weights are unknown and the attribute values take the form of INSs, we propose an extended TOPSIS method. Firstly, the definition of INS and the operational laws are given, and the distance between INSs is defined. Then, the attribute weights are determined based on the maximizing deviation method, and an extended TOPSIS method is developed to rank the alternatives. Finally, an illustrative example is given to verify the developed approach and to demonstrate its practicality and effectiveness.
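
    For orientation, the two crisp building blocks that the paper lifts to INSs, maximizing-deviation weights and TOPSIS closeness, can be sketched as follows; the decision matrix is illustrative (benefit criteria only).

```python
import numpy as np

# (1) Maximizing-deviation weights: w_j proportional to the total pairwise
#     deviation of alternatives on criterion j.
# (2) Classical TOPSIS closeness to the ideal solution.

X = np.array([[0.70, 0.50, 0.80],
              [0.60, 0.90, 0.40],
              [0.85, 0.60, 0.55]])

# (1) sum of |x_ij - x_kj| over all pairs i, k
dev = np.abs(X[:, None, :] - X[None, :, :]).sum(axis=(0, 1))
w = dev / dev.sum()

# (2) TOPSIS on the weighted normalized matrix
V = w * X / np.linalg.norm(X, axis=0)
v_pos, v_neg = V.max(axis=0), V.min(axis=0)
d_pos = np.linalg.norm(V - v_pos, axis=1)
d_neg = np.linalg.norm(V - v_neg, axis=1)
closeness = d_neg / (d_pos + d_neg)
print("weights:", np.round(w, 3), "ranking (best first):", np.argsort(-closeness))
```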

  2. Novel multiple criteria decision making methods based on bipolar neutrosophic sets and bipolar neutrosophic graphs

    Akram, Muhammad; Sarwar, Musavarah

    2016-01-01

    In this research study, we introduce the concept of bipolar neutrosophic graphs. We present the dominating and independent sets of bipolar neutrosophic graphs. We describe novel multiple criteria decision making methods based on bipolar neutrosophic sets and bipolar neutrosophic graphs. We also develop an algorithm for computing domination in bipolar neutrosophic graphs.

  3. Magic Finger Teaching Method in Learning Multiplication Facts among Deaf Students

    Thai, Liong; Yasin, Mohd. Hanafi Mohd

    2016-01-01

    Deaf students face problems in mastering multiplication facts. This study aims to identify the effectiveness of the Magic Finger Teaching Method (MFTM) and students' perception of MFTM. The research employs a quasi-experimental, non-equivalent pre-test and post-test control group design. Pre-test, post-test and questionnaires were used. As…

  4. A Simple and Convenient Method of Multiple Linear Regression to Calculate Iodine Molecular Constants

    Cooper, Paul D.

    2010-01-01

    A new procedure using a student-friendly least-squares multiple linear-regression technique utilizing a function within Microsoft Excel is described that enables students to calculate molecular constants from the vibronic spectrum of iodine. This method is advantageous pedagogically as it calculates molecular constants for ground and excited…
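
    The regression itself is easy to reproduce outside Excel. For bands originating from v'' = 0, the transition wavenumbers follow nu(v') = nu_el + we'(v'+1/2) - we'xe'(v'+1/2)^2, so a multiple linear regression on (v'+1/2) and (v'+1/2)^2 returns the excited-state constants; the band positions below are synthetic, generated from assumed constants rather than measured iodine data.

```python
import numpy as np

# Multiple linear regression of vibronic band positions on (v'+1/2) and
# (v'+1/2)^2 to recover excited-state molecular constants. Illustrative.

nu_el, we, wexe = 15600.0, 125.0, 0.75        # assumed constants, cm^-1
v = np.arange(15, 40)                          # observed upper-state levels
x1 = v + 0.5
nu = nu_el + we * x1 - wexe * x1**2 + np.random.default_rng(7).normal(0, 0.3, v.size)

A = np.column_stack([np.ones_like(x1), x1, x1**2])
c, *_ = np.linalg.lstsq(A, nu, rcond=None)
print(f"nu_el ~ {c[0]:.1f} cm^-1, we' ~ {c[1]:.2f} cm^-1, we'xe' ~ {-c[2]:.3f} cm^-1")
```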

  5. Non-Abelian Kubo formula and the multiple time-scale method

    Zhang, X.; Li, J.

    1996-01-01

    The non-Abelian Kubo formula is derived from the kinetic theory. That expression is compared with the one obtained using the eikonal for a Chern-Simons theory. The multiple time-scale method is used to study the non-Abelian Kubo formula, and the damping rate for longitudinal color waves is computed. copyright 1996 Academic Press, Inc

  6. Creep compliance and percent recovery of Oklahoma certified binder using the multiple stress creep recovery (MSCR) method.

    2015-04-01

    A laboratory study was conducted to develop guidelines for the Multiple Stress Creep Recovery (MSCR) test method for local conditions prevailing in Oklahoma. The study consisted of commonly used binders in Oklahoma, namely PG 64-22, PG 70-28, and...
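
    The per-cycle MSCR quantities are simple arithmetic on the creep-recovery strain trace (1 s creep followed by 9 s recovery per cycle in AASHTO T 350-style testing); the strain values below are illustrative, not Oklahoma binder data.

```python
# Percent recovery and non-recoverable creep compliance (Jnr) per MSCR cycle.

def mscr_cycle(strain_start, strain_peak, strain_end, stress_kpa):
    eps_creep = strain_peak - strain_start        # strain accumulated in creep
    eps_nonrec = strain_end - strain_start        # strain left after recovery
    percent_recovery = 100.0 * (eps_creep - eps_nonrec) / eps_creep
    jnr = eps_nonrec / stress_kpa                 # non-recoverable compliance, 1/kPa
    return percent_recovery, jnr

# One cycle at 3.2 kPa: strain 0 -> 1.8% at end of creep -> 1.2% after recovery
r, jnr = mscr_cycle(0.0, 0.018, 0.012, 3.2)
print(f"percent recovery ~ {r:.1f}%  Jnr ~ {jnr:.5f} 1/kPa")
```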

  7. An Additive-Multiplicative Cox-Aalen Regression Model

    Scheike, Thomas H.; Zhang, Mei-Jie

    2002-01-01

    Aalen model; additive risk model; counting processes; Cox regression; survival analysis; time-varying effects

  8. Uncertainty and Preference Modelling for Multiple Criteria Vehicle Evaluation

    Qiuping Yang

    2010-12-01

    Full Text Available A general framework for vehicle assessment is proposed based on both mass survey information and the evidential reasoning (ER) approach. Several methods for uncertainty and preference modelling are developed within the framework, including the measurement of uncertainty caused by missing information, the estimation of missing information in original surveys, the use of nonlinear functions for data mapping, and the use of nonlinear functions as utility functions to combine distributed assessments into a single index. The results of the investigation show that various measures can be used to represent the different preferences of decision makers towards the same feedback from respondents. Based on the ER approach, credible and informative analysis can be conducted through a complete understanding of the assessment problem in question and a full exploration of the available information.

  9. Interconnection blocks: a method for providing reusable, rapid, multiple, aligned and planar microfluidic interconnections

    Sabourin, D; Snakenborg, D; Dufva, M

    2009-01-01

    In this paper a method is presented for creating 'interconnection blocks' that are re-usable and provide multiple, aligned and planar microfluidic interconnections. Interconnection blocks made from polydimethylsiloxane allow rapid testing of microfluidic chips and unobstructed microfluidic observation. The interconnection block method is scalable, flexible and supports high interconnection density. The average pressure limit of the interconnection block was near 5.5 bar and all individual results were well above the 2 bar threshold considered applicable to most microfluidic applications

  10. Merging daily sea surface temperature data from multiple satellites using a Bayesian maximum entropy method

    Tang, Shaolei; Yang, Xiaofeng; Dong, Di; Li, Ziwei

    2015-12-01

    Sea surface temperature (SST) is an important variable for understanding interactions between the ocean and the atmosphere. SST fusion is crucial for acquiring SST products of high spatial resolution and coverage. This study introduces a Bayesian maximum entropy (BME) method for blending daily SSTs from multiple satellite sensors. A new spatiotemporal covariance model of an SST field is built to integrate not only single-day SSTs but also time-adjacent SSTs. In addition, AVHRR 30-year SST climatology data are introduced as soft data at the estimation points to improve the accuracy of blended results within the BME framework. The merged SSTs, with a spatial resolution of 4 km and a temporal resolution of 24 hours, are produced in the Western Pacific Ocean region to demonstrate and evaluate the proposed methodology. Comparisons with in situ drifting buoy observations show that the merged SSTs are accurate and the bias and root-mean-square errors for the comparison are 0.15°C and 0.72°C, respectively.

  11. Clustering Multiple Sclerosis Subgroups with Multifractal Methods and Self-Organizing Map Algorithm

    Karaca, Yeliz; Cattani, Carlo

    Magnetic resonance imaging (MRI) is the most sensitive method to detect chronic nervous system diseases such as multiple sclerosis (MS). In this paper, Brownian motion Hölder regularity functions (polynomial, periodic (sine), exponential) for 2D images were applied as multifractal methods to MR brain images, aiming to easily identify distressed regions in MS patients. With these regions, we propose an MS classification based on the multifractal method using the Self-Organizing Map (SOM) algorithm. Thus, we obtained a cluster analysis by identifying pixels from distressed regions in MR images through multifractal methods and by diagnosing subgroups of MS patients through artificial neural networks.

  12. A Multiple Criteria Decision Making Method Based on Relative Value Distances

    Shyur Huan-jyh

    2015-12-01

    Full Text Available This paper proposes a new multiple criteria decision-making method called ERVD (election based on relative value distances). An s-shaped value function is adopted to replace the expected utility function to describe the risk-averse and risk-seeking behavior of decision makers. Comparisons and experiments contrasting with the TOPSIS (Technique for Order Preference by Similarity to the Ideal Solution) method are carried out to verify the feasibility of using the proposed method to represent decision makers' preferences in the decision making process. Our experimental results show that the proposed approach is an appropriate and effective MCDM method.
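
    The s-shaped value function at the heart of ERVD is the prospect-theory form: concave for gains, convex and steeper for losses. A sketch with commonly used illustrative exponents and loss-aversion factor, not necessarily the paper's values:

```python
import numpy as np

# Prospect-theory style s-shaped value function: losses loom larger than
# equal-sized gains. Parameter values are illustrative.

def value(x, alpha=0.88, beta=0.88, lam=2.25):
    x = np.asarray(x, dtype=float)
    return np.where(x >= 0, np.abs(x) ** alpha, -lam * np.abs(x) ** beta)

deviations = np.array([-0.4, -0.1, 0.0, 0.1, 0.4])   # distance from reference
print(np.round(value(deviations), 3))   # losses weigh more than equal gains
```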

  13. Testing Group Mean Differences of Latent Variables in Multilevel Data Using Multiple-Group Multilevel CFA and Multilevel MIMIC Modeling.

    Kim, Eun Sook; Cao, Chunhua

    2015-01-01

    Considering that group comparisons are common in social science, we examined two latent group mean testing methods when groups of interest were either at the between or within level of multilevel data: multiple-group multilevel confirmatory factor analysis (MG ML CFA) and multilevel multiple-indicators multiple-causes modeling (ML MIMIC). The performance of these methods were investigated through three Monte Carlo studies. In Studies 1 and 2, either factor variances or residual variances were manipulated to be heterogeneous between groups. In Study 3, which focused on within-level multiple-group analysis, six different model specifications were considered depending on how to model the intra-class group correlation (i.e., correlation between random effect factors for groups within cluster). The results of simulations generally supported the adequacy of MG ML CFA and ML MIMIC for multiple-group analysis with multilevel data. The two methods did not show any notable difference in the latent group mean testing across three studies. Finally, a demonstration with real data and guidelines in selecting an appropriate approach to multilevel multiple-group analysis are provided.

  14. Modified multiple time scale method for solving strongly nonlinear damped forced vibration systems

    Razzak, M. A.; Alam, M. Z.; Sharif, M. N.

    2018-03-01

    In this paper, a modified multiple time scale (MTS) method is employed to solve strongly nonlinear forced vibration systems. Only the first-order approximation is considered in order to avoid complexity. The formulation and the determination of the solution procedure are easy and straightforward. The classical multiple time scale (MS) method and the multiple scales Lindstedt-Poincare (MSLP) method do not give the desired results for strongly damped forced vibration systems with strong damping effects. The main aim of this paper is to remove these limitations. Two examples are considered to illustrate the effectiveness and convenience of the present procedure. The approximate external frequencies and the corresponding approximate solutions are determined by the present method. The results agree well with the corresponding numerical solutions (considered to be exact) and are better than other existing results. For weak nonlinearities with a weak damping effect, the absolute relative error of the first-order approximate external frequency in this paper is only 0.07% when the amplitude A = 1.5, while the relative error given by the MSLP method is a surprising 28.81%. Furthermore, for strong nonlinearities with a strong damping effect, the absolute relative error found in this article is only 0.02%, whereas the relative error obtained by the MSLP method is 24.18%. Therefore, the present method is not only valid for weakly nonlinear damped forced systems, but also gives better results for strongly nonlinear systems with both small and strong damping effects.

  15. Laboratory model study of newly deposited dredger fills using improved multiple-vacuum preloading technique

    Jingjin Liu

    2017-10-01

    Full Text Available Problems continue to be encountered in the field with the traditional vacuum preloading method during the treatment of newly deposited dredger fills. In this paper, an improved multiple-vacuum preloading method was developed to consolidate newly deposited dredger fills that are hydraulically placed in seawater for land reclamation in the Lingang Industrial Zone of Tianjin City, China. With this multiple-vacuum preloading method, newly deposited dredger fills could be treated effectively by adopting a novel moisture separator and a rapid improvement technique without a sand cushion. A series of model tests was conducted in the laboratory to compare the results from the multiple-vacuum preloading method and the traditional one. Ten piezometers and settlement plates were installed to measure the variations in excess pore water pressure and moisture content, and vane shear strength was measured at different positions. The testing results indicate that the water discharge-time curves obtained by the traditional vacuum preloading method can be divided into three phases: rapid growth, slow growth, and steady state. According to the process by which fluid flow concentrates along tiny ripples and builds larger channels inside the soil during the whole vacuum loading process, the fluctuations of pore water pressure during each loading step are divided into three phases: steady, rapid dissipation, and slow dissipation. An optimal loading pattern that yields the best treatment effect was proposed for calculating the water discharge and pore water pressure of soil treated using the improved multiple-vacuum preloading method. For the newly deposited dredger fills at the Lingang Industrial Zone of Tianjin City, the best loading step was 20 kPa, and loading of 40-50 kPa produced the highest drainage consolidation. The measured moisture content and vane shear strength were discussed in terms of the effect of reinforcement, both of which indicate

  16. Single-electron multiplication statistics as a combination of Poissonian pulse height distributions using constraint regression methods

    Ballini, J.-P.; Cazes, P.; Turpin, P.-Y.

    1976-01-01

    Analysing the histogram of anode pulse amplitudes allows a discussion of the hypotheses that have been proposed to account for the statistical processes of secondary multiplication in a photomultiplier. In an earlier work, good agreement was obtained between experimental and reconstructed spectra, assuming a first-dynode distribution composed of two Poisson distributions with distinct mean values. This first approximation led to a search for a method that could give the weights of several Poisson distributions with distinct mean values. Three methods are briefly described: classical linear regression, constrained regression (d'Esopo's method), and regression on variables subject to error. The use of these methods yields an approximation of the frequency function that represents the dispersion of the point mean gain around the overall first-dynode mean gain value. Comparison between this function and the one employed in the Polya distribution allows the statement that the latter is inadequate to describe the statistical process of secondary multiplication. Numerous spectra obtained with two kinds of photomultiplier working under different physical conditions have been analysed. Two points are then discussed: whether the frequency function represents the dynode structure and the interdynode collection process, and whether the model (the multiplication process of all dynodes but the first is Poissonian) is valid whatever the photomultiplier and the conditions of use. (Auth.)
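
    The constrained-regression idea can be sketched with nonnegative least squares: express the observed pulse-height histogram as a nonnegative mixture of candidate Poisson distributions with distinct means. The means and data below are illustrative, and NNLS here stands in for the constrained-regression variants the paper compares.

```python
import numpy as np
from scipy.optimize import nnls
from scipy.stats import poisson

# Solve for nonnegative mixture weights of candidate Poisson pulse-height
# distributions. Candidate means and the histogram are illustrative.

k = np.arange(0, 30)                          # pulse-height channels
means = np.array([2.0, 4.0, 6.0, 9.0, 12.0])  # candidate first-dynode gains
A = np.column_stack([poisson.pmf(k, m) for m in means])

true_w = np.array([0.0, 0.7, 0.0, 0.3, 0.0])  # "two Poisson" ground truth
b = A @ true_w + np.random.default_rng(8).normal(0, 1e-4, k.size)

w, resid = nnls(A, b)
print("weights:", np.round(w / w.sum(), 3))   # recovers ~0.7 / 0.3 split
```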

  17. The implementation of multiple intelligences based teaching model to improve mathematical problem solving ability for student of junior high school

    Fasni, Nurli; Fatimah, Siti; Yulanda, Syerli

    2017-05-01

    This research aims to achieve several purposes: to know whether the mathematical problem solving ability of students who have learned mathematics using the Multiple Intelligences based teaching model is higher than that of students who have learned mathematics using cooperative learning; to know the improvement in the mathematical problem solving ability of students who have learned mathematics using the Multiple Intelligences based teaching model; to know the improvement in the mathematical problem solving ability of students who have learned mathematics using cooperative learning; and to know the attitude of the students towards the Multiple Intelligences based teaching model. The method employed here is a quasi-experiment controlled by a pre-test and a post-test. The population of this research is all of grade VII of SMP Negeri 14 Bandung in the even term of 2013/2014, from which two classes were taken as the samples of this research. One class was taught using the Multiple Intelligences based teaching model and the other was taught using cooperative learning. The data of this research were obtained from a test of mathematical problem solving, a scale questionnaire of student attitudes, and observation. The results show that the mathematical problem solving ability of the students who have learned mathematics using the Multiple Intelligences based teaching model is higher than that of the students who have learned mathematics using cooperative learning; the mathematical problem solving abilities of the students who have learned mathematics using cooperative learning and using the Multiple Intelligences based teaching model are at an intermediate level; and the students showed a positive attitude towards learning mathematics using the Multiple Intelligences based teaching model. As a recommendation for future work, the Multiple Intelligences based teaching model can be tested on other subjects and other abilities.

  18. Use of multiple methods to determine factors affecting quality of care of patients with diabetes.

    Khunti, K

    1999-10-01

    The process of care of patients with diabetes is complex; however, GPs are playing a greater role in its management. Despite the research evidence, the quality of care of patients with diabetes is variable. In order to improve care, information is required on the obstacles faced by practices in improving care. Qualitative and quantitative methods can be used for formation of hypotheses and the development of survey procedures. However, to date few examples exist in general practice research on the use of multiple methods using both quantitative and qualitative techniques for hypothesis generation. We aimed to determine information on all factors that may be associated with delivery of care to patients with diabetes. Factors for consideration on delivery of diabetes care were generated by multiple qualitative methods including brainstorming with health professionals and patients, a focus group and interviews with key informants which included GPs and practice nurses. Audit data showing variations in care of patients with diabetes were used to stimulate the brainstorming session. A systematic literature search focusing on quality of care of patients with diabetes in primary care was also conducted. Fifty-four potential factors were identified by multiple methods. Twenty (37.0%) were practice-related factors, 14 (25.9%) were patient-related factors and 20 (37.0%) were organizational factors. A combination of brainstorming and the literature review identified 51 (94.4%) factors. Patients did not identify factors in addition to those identified by other methods. The complexity of delivery of care to patients with diabetes is reflected in the large number of potential factors identified in this study. This study shows the feasibility of using multiple methods for hypothesis generation. Each evaluation method provided unique data which could not otherwise be easily obtained. This study highlights a way of combining various traditional methods in an attempt to overcome the

  19. The multiple deficit model of dyslexia: what does it mean for identification and intervention?

    Ring, Jeremiah; Black, Jeffrey L

    2018-04-24

    Research demonstrates that phonological skills provide the basis of reading acquisition and are a primary processing deficit in dyslexia. This consensus has led to the development of effective methods of reading intervention. However, a single phonological deficit is not sufficient to account for the heterogeneity of individuals with dyslexia, and recent research provides evidence that supports a multiple-deficit model of reading disorders. Two studies are presented that investigate (1) the prevalence of phonological and cognitive processing deficit profiles in children with significant reading disability and (2) the effects of those same phonological and cognitive processing skills on reading development in a sample of children that received treatment for dyslexia. The results are discussed in the context of implications for identification and an intervention approach that accommodates multiple deficits within a comprehensive skills-based reading program.

  20. Tools and Models for Integrating Multiple Cellular Networks

    Gerstein, Mark [Yale Univ., New Haven, CT (United States). Gerstein Lab.

    2015-11-06

    In this grant, we have systematically investigated the integrated networks that are responsible for the coordination of activity between metabolic pathways in prokaryotes. We have developed several computational tools to analyze the topology of the integrated networks consisting of metabolic, regulatory, and physical interaction networks. The tools are all open source, available for download from GitHub, and can be incorporated in the Knowledgebase. Here, we summarize our work as follows. Understanding the topology of the integrated networks is the first step toward understanding their dynamics and evolution. For Aim 1 of this grant, we have developed a novel algorithm to determine and measure the hierarchical structure of transcriptional regulatory networks [1]. The hierarchy captures the direction of information flow in the network. The algorithm is generally applicable to regulatory networks in prokaryotes, yeast and higher organisms. Integrated datasets are extremely beneficial in understanding the biology of a system in a compact manner due to the conflation of multiple layers of information. Therefore, for Aim 2 of this grant, we have developed several tools and carried out analysis for integrating system-wide genomic information. To make use of the structural data, we have developed DynaSIN for protein-protein interaction networks with various dynamical interfaces [2]. We then examined the association between network topology and phenotypic effects such as gene essentiality. In particular, we have organized the E. coli and S. cerevisiae transcriptional regulatory networks into hierarchies. We then correlated gene phenotypic effects by tinkering with different layers to elucidate which layers were more tolerant to perturbations [3]. In the context of evolution, we also developed a workflow to guide the comparison between different types of biological networks across various species using the concept of rewiring [4]. Furthermore, we have developed
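
    As a rough illustration of the kind of hierarchy analysis described in this record (not the published algorithm of reference [1]), the sketch below assigns levels to a toy directed regulatory network by longest path from the regulators with no incoming edges; the network and all names are invented.

```python
# Hypothetical sketch: assign hierarchy levels in a toy regulatory network.
# This is NOT the algorithm from the cited work [1]; it only illustrates the
# idea of organizing a directed network into levels of information flow.
from collections import defaultdict

edges = [("tf1", "tf2"), ("tf1", "tf3"), ("tf2", "gene1"),
         ("tf3", "gene1"), ("tf3", "gene2")]

children = defaultdict(list)
indegree = defaultdict(int)
nodes = set()
for src, dst in edges:
    children[src].append(dst)
    indegree[dst] += 1
    nodes.update((src, dst))

# Top layer: regulators with no incoming edges; propagate levels downward
# along a topological order (level = longest path from the top layer).
level = {n: 0 for n in nodes if indegree[n] == 0}
queue = list(level)
remaining = dict(indegree)
while queue:
    n = queue.pop()
    for c in children[n]:
        level[c] = max(level.get(c, 0), level[n] + 1)
        remaining[c] -= 1
        if remaining[c] == 0:
            queue.append(c)

for n in sorted(nodes, key=lambda x: level[x]):
    print(n, "level", level[n])
```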

  1. Lyssavirus infection: 'low dose, multiple exposure' in the mouse model.

    Banyard, Ashley C; Healy, Derek M; Brookes, Sharon M; Voller, Katja; Hicks, Daniel J; Núñez, Alejandro; Fooks, Anthony R

    2014-03-06

    The European bat lyssaviruses (EBLV-1 and EBLV-2) are zoonotic pathogens present within bat populations across Europe. The maintenance and transmission of lyssaviruses within bat colonies is poorly understood. Cases of repeated isolation of lyssaviruses from bat roosts have raised questions regarding the maintenance and intraspecies transmissibility of these viruses within colonies. Furthermore, the significance of seropositive bats in colonies remains unclear. Due to the protected nature of European bat species, and hence restrictions on working with the natural host for lyssaviruses, this study analysed the outcome following repeat inoculation of low doses of lyssaviruses in a murine model. A standardized dose of virus, EBLV-1, EBLV-2 or a 'street strain' of rabies (RABV), was administered via a peripheral route to attempt to mimic what is hypothesized as natural infection. Each mouse (n=10/virus/group/dilution) received four inoculations, two doses in each footpad over a period of four months, alternating footpad with each inoculation. Mice were tail bled between inoculations to evaluate antibody responses to infection. Mice succumbed to infection after each inoculation, with 26.6% of mice developing clinical disease following the initial exposure across all dilutions (RABV, 32.5% (n=13/40); EBLV-1, 35% (n=13/40); EBLV-2, 12.5% (n=5/40)). Interestingly, the lowest dose caused clinical disease in some mice upon first exposure (RABV, 20% (n=2/10) after first inoculation; RABV, 12.5% (n=1/8) after second inoculation; EBLV-2, 10% (n=1/10) after primary inoculation). Furthermore, five mice developed clinical disease following the second exposure to live virus (RABV, n=1; EBLV-1, n=1; EBLV-2, n=3), although histopathological examination indicated that the primary inoculation was the most probable cause of death, based on the levels of inflammation and the virus antigen distribution observed. All the remaining mice (RABV, n=26; EBLV-1, n=26; EBLV-2, n=29) survived the tertiary and

  2. Analysis of underlying and multiple-cause mortality data: the life table methods.

    Moussa, M A

    1987-02-01

    Stochastic compartment model concepts are employed to analyse and construct several types of life table: complete and abbreviated total-mortality life tables; multiple-decrement life tables for a disease, under both the underlying-cause and pattern-of-failure definitions of mortality risk; cause-elimination life tables, which quantify the effect on the saved population through the gain in life expectancy resulting from eliminating a mortality risk; cause-delay life tables, designed to translate a clinically observed increase in survival time into the population gain in life expectancy that would occur if a treatment protocol were made available to the general population; and life tables for disease dependency in multiple-cause data.
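
    The core life-table mechanics behind these constructions can be sketched briefly. The following toy computation, with invented death rates and cause shares, builds an abbreviated life table under piecewise-constant hazards and shows the cause-elimination gain in life expectancy; it is not the compartment-model formulation of the cited paper.

```python
# Minimal abbreviated life-table sketch with cause elimination (illustrative
# numbers only; not the stochastic compartment-model formulation).
import math

age_width = 10                                      # 10-year age intervals
all_cause = [0.02, 0.03, 0.06, 0.12, 0.25, 0.50]    # assumed death rates per year
cause_frac = [0.10, 0.15, 0.20, 0.25, 0.25, 0.20]   # assumed share due to one cause

def life_expectancy(rates):
    """Expectancy at age 0 assuming a constant hazard within each interval."""
    e, surv = 0.0, 1.0
    for m in rates:
        p = math.exp(-m * age_width)     # probability of surviving the interval
        # person-years lived in the interval per person alive at its start:
        # integral of exp(-m*t) from 0 to width = (1 - p) / m
        e += surv * (1 - p) / m
        surv *= p
    return e

e0 = life_expectancy(all_cause)
# Cause elimination: remove the cause's share of the hazard in each interval.
e0_elim = life_expectancy([m * (1 - f) for m, f in zip(all_cause, cause_frac)])
print(f"e0 = {e0:.1f} y, after elimination = {e0_elim:.1f} y, "
      f"gain = {e0_elim - e0:.1f} y")
```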

  3. Uncertainty analysis of the Operational Simplified Surface Energy Balance (SSEBop) model at multiple flux tower sites

    Chen, Mingshi; Senay, Gabriel B.; Singh, Ramesh K.; Verdin, James P.

    2016-01-01

    Evapotranspiration (ET) is an important component of the water cycle – ET from the land surface returns approximately 60% of global precipitation to the atmosphere. ET also plays an important role in energy transport among the biosphere, atmosphere, and hydrosphere. Current regional to global and daily to annual ET estimation relies mainly on surface energy balance (SEB) ET models or statistical and empirical methods driven by remote sensing data and various climatological databases. These models have uncertainties due to inevitable input errors, poorly defined parameters, and inadequate model structures. The eddy covariance measurements of water, energy, and carbon fluxes at the AmeriFlux tower sites provide an opportunity to assess the ET modeling uncertainties. In this study, we focused on uncertainty analysis of the Operational Simplified Surface Energy Balance (SSEBop) model for ET estimation at multiple AmeriFlux tower sites with diverse land cover characteristics and climatic conditions. The 8-day composite 1-km MODerate resolution Imaging Spectroradiometer (MODIS) land surface temperature (LST) product was used as the temperature input for the SSEBop algorithms. The other input data were taken from the AmeriFlux database. Results of statistical analysis indicated that the SSEBop model performed well in estimating ET, with an R2 of 0.86 between estimated ET and eddy covariance measurements at 42 AmeriFlux tower sites during 2001–2007. It was encouraging to see that the best performance was observed for croplands, where R2 was 0.92 with a root mean square error of 13 mm/month. The uncertainties or random errors from input variables and parameters of the SSEBop model led to monthly ET estimates with relative errors less than 20% across multiple flux tower sites distributed across different biomes. This uncertainty of the SSEBop model lies within the error range of other SEB models, suggesting systematic error or bias of the SSEBop model is within
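
    The agreement statistics reported here (R2 and RMSE against eddy-covariance measurements) can be reproduced mechanically; the sketch below uses synthetic monthly ET values, since the study's actual inputs are MODIS LST and AmeriFlux data.

```python
# Sketch: agreement statistics between modeled ET and eddy-covariance ET
# (synthetic numbers; the study's real data come from MODIS LST + AmeriFlux).
import numpy as np

et_tower = np.array([22.0, 35.0, 60.0, 95.0, 120.0, 80.0, 40.0])  # mm/month (assumed)
et_model = np.array([25.0, 30.0, 66.0, 90.0, 112.0, 85.0, 47.0])  # mm/month (assumed)

ss_res = np.sum((et_tower - et_model) ** 2)                # residual sum of squares
ss_tot = np.sum((et_tower - et_tower.mean()) ** 2)         # total sum of squares
r2 = 1.0 - ss_res / ss_tot
rmse = np.sqrt(np.mean((et_tower - et_model) ** 2))
print(f"R^2 = {r2:.2f}, RMSE = {rmse:.1f} mm/month")
```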

  4. Model Uncertainty Quantification Methods In Data Assimilation

    Pathiraja, S. D.; Marshall, L. A.; Sharma, A.; Moradkhani, H.

    2017-12-01

    Data Assimilation involves utilising observations to improve model predictions in a seamless and statistically optimal fashion. Its applications are wide-ranging, from improving weather forecasts to tracking targets, as in the Apollo 11 mission. The use of Data Assimilation methods in high-dimensional complex geophysical systems is an active area of research, in which there exist many opportunities to enhance existing methodologies. One of the central challenges is model uncertainty quantification; the outcome of any Data Assimilation study is strongly dependent on the uncertainties assigned to both observations and models. I focus on developing improved model uncertainty quantification methods that are applicable to challenging real-world scenarios. These include methods for cases where the system states are only partially observed, where there is little prior knowledge of the model errors, and where the model error statistics are likely to be highly non-Gaussian.
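
    The central point, that the analysis should weight the model forecast against the observation according to their respective uncertainties, is captured by the scalar Kalman update; the following is the standard textbook formula, not the methods developed in this work.

```python
# Scalar Kalman-style update: the analysis is a precision-weighted blend of
# model forecast and observation (textbook formula, not the cited methods).
def assimilate(x_forecast, var_forecast, y_obs, var_obs):
    gain = var_forecast / (var_forecast + var_obs)    # Kalman gain in 1-D
    x_analysis = x_forecast + gain * (y_obs - x_forecast)
    var_analysis = (1.0 - gain) * var_forecast        # uncertainty always shrinks
    return x_analysis, var_analysis

# Example: an uncertain forecast (variance 4.0) is pulled strongly toward a
# more precise observation (variance 1.0).
print(assimilate(10.0, 4.0, 13.0, 1.0))   # -> (12.4, 0.8)
```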

  5. Rapid installation of numerical models in multiple parent codes

    Brannon, R.M.; Wong, M.K.

    1996-10-01

    A set of "model interface guidelines", called MIG, is offered as a means to more rapidly install numerical models (such as stress-strain laws) into any parent code (hydrocode, finite element code, etc.) without having to modify the model subroutines. The model developer (who creates the model package in compliance with the guidelines) specifies the model's input and storage requirements in a standardized way. For portability, database management (such as saving user inputs and field variables) is handled by the parent code. To date, MIG has proved viable in beta installations of several diverse models in vectorized and parallel codes written in different computer languages. A MIG-compliant model can be installed in different codes without modifying the model's subroutines. By maintaining one model for many codes, MIG facilitates code-to-code comparisons and reduces duplication of effort, potentially reducing the cost of installing and sharing models.
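
    A minimal sketch of the separation of concerns the guidelines describe, with all names hypothetical: the model package declares its input and per-point storage requirements and performs the update, while the parent code owns all storage and I/O.

```python
# Hypothetical sketch of the MIG-style separation of concerns: the model
# package only declares requirements and computes; the parent code owns all
# storage and I/O. Names and structure are illustrative, not from the paper.

class ElasticModel:
    """A 'model package': declares inputs/state, never manages storage itself."""
    input_names = ["youngs_modulus"]   # user inputs the parent must read
    state_names = ["stress"]           # per-point field variables to allocate

    def update(self, inputs, state, strain_increment):
        # 1-D linear elasticity: stress += E * d(strain)
        state["stress"] += inputs["youngs_modulus"] * strain_increment

# The "parent code" (a host finite-element or hydrocode driver) does storage:
model = ElasticModel()
inputs = {name: 200e9 for name in model.input_names}   # read from input deck
state = {name: 0.0 for name in model.state_names}      # allocated per point
for d_eps in [1e-4, 2e-4, -1e-4]:                      # a toy strain history
    model.update(inputs, state, d_eps)
print(state)   # parent saves/restores state for restarts, not the model
```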

  6. Billing for pharmacists' cognitive services in physicians' offices: multiple methods of reimbursement.

    Scott, Mollie Ashe; Hitch, William J; Wilson, Courtenay Gilmore; Lugo, Amy M

    2012-01-01

    Objective: To evaluate the charges and reimbursement for pharmacist services using multiple methods of billing and determine the number of patients that must be managed by a pharmacist to cover the cost of salary and fringe benefits. Setting: Large teaching ambulatory clinic in North Carolina. Main outcome measures: Annual charges and reimbursement, patient no-show rate, clinic capacity, number of patients seen monthly and annually, and number of patients that must be seen to pay for a pharmacist's salary and benefits. Results: A total of 6,930 patient encounters were documented during the study period. Four different clinics were managed by the pharmacists, including anticoagulation, pharmacotherapy, osteoporosis, and wellness clinics. "Incident to" level 1 billing was used for the anticoagulation and pharmacotherapy clinics, whereas level 4 codes were used for the osteoporosis clinic. The wellness clinic utilized a negotiated fee-for-service model. Mean annual charges were $65,022, and the mean reimbursement rate was 47%. The mean charge and collection per encounter were $41 and $19, respectively. Eleven encounters per day were necessary to generate enough charges to pay for the cost of the pharmacist. Considering actual reimbursement rates, the number of patient encounters necessary increased to 24 per day. "What if" sensitivity analysis indicated that billing at the level of service provided instead of level 1 decreased the number of patients needed to be seen daily. Billing a level 4 visit necessitated that five patients would need to be seen daily to generate adequate charges. Taking into account the 47% reimbursement rate, 10 level 4 encounters per day were necessary to generate appropriate reimbursement to pay for the pharmacist. Conclusion: Unique opportunities for pharmacists to provide direct patient care in the ambulatory setting continue to develop. Use of a combination of billing methods resulted in sustainable reimbursement. The ability to bill at the level of service provided instead of a level 1 visit would
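
    The break-even arithmetic implied by the abstract can be made explicit. In the sketch below, the charge per encounter and the 47% reimbursement rate come from the abstract, while the pharmacist's annual cost and the number of clinic days are assumed figures chosen for illustration.

```python
# Break-even arithmetic implied by the abstract. The pharmacist's annual cost
# and clinic days are assumed figures; the per-encounter charge and the 47%
# reimbursement rate are taken from the abstract.
charge_per_encounter = 41.0       # mean charge per encounter ($)
collection_rate = 0.47            # mean reimbursement rate
collection_per_encounter = charge_per_encounter * collection_rate   # ~ $19
annual_cost = 115_000.0           # assumed salary + fringe ($)
clinic_days = 260                 # assumed working days per year

by_charges = annual_cost / (charge_per_encounter * clinic_days)
by_collections = annual_cost / (collection_per_encounter * clinic_days)
print(f"encounters/day to cover cost: {by_charges:.0f} by charges, "
      f"{by_collections:.0f} by actual collections")
# With these assumptions the result is on the order of the abstract's
# reported 11 (charges) vs. 24 (collections) encounters per day.
```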

  7. MULTIPLE OBJECTS

    A. A. Bosov

    2015-04-01

    Purpose. The development of complicated production and management processes, information systems, computer science, and applied objects of systems theory requires improved mathematical methods and new approaches for research into application systems. The variety and diversity of subject systems make it necessary to develop a model that generalizes classical sets and their development, sets of sets. Multiple objects, unlike sets, are constructed by multiple structures and are represented by structure and content. The aim of the work is the analysis of the multiple structures generating multiple objects, and the further development of operations on these objects in application systems. Methodology. To achieve the objectives of the research, the structure of multiple objects is represented as a constructive trio consisting of media, signatures and axiomatics. A multiple object is determined by structure and content, and is represented by a hybrid superposition composed of sets, multisets, ordered sets (lists) and heterogeneous sets (sequences, tuples). Findings. The paper studies the properties and characteristics of the components of hybrid multiple objects of complex systems, proposes assessments of their complexity, and states the rules of internal and external operations on objects. A relation of arbitrary order over multiple objects is introduced, and the description of functions and mappings on objects of multiple structures is defined. Originality. The paper develops multiple structures that generate multiple objects. Practical value. The transition from abstract to subject multiple structures requires the transformation of the system and multiple objects. Transformation involves three successive stages: specification (binding to the domain), interpretation (multiple sites) and particularization (goals). The proposed approach describes systems based on hybrid sets

  8. Group Targets Tracking Using Multiple Models GGIW-CPHD Based on Best-Fitting Gaussian Approximation and Strong Tracking Filter

    Yun Wang

    2016-01-01

    The gamma Gaussian inverse Wishart cardinalized probability hypothesis density (GGIW-CPHD) algorithm has commonly been used to track group targets in the presence of cluttered measurements and missed detections. A multiple-models GGIW-CPHD algorithm based on the best-fitting Gaussian approximation method (BFG) and the strong tracking filter (STF) is proposed to address the defect that the tracking error of the GGIW-CPHD algorithm increases when the group targets are maneuvering. The best-fitting Gaussian approximation method implements the fusion of multiple models, using the strong tracking filter to correct the predicted covariance matrix of the GGIW component. The corresponding likelihood functions are derived to update the probabilities of the multiple tracking models. The simulation results show that the proposed tracking algorithm, MM-GGIW-CPHD, can effectively deal with the combination and spawning of groups, and that the tracking error of group targets in the maneuvering stage is decreased.

  9. A Method for Model Checking Feature Interactions

    Pedersen, Thomas; Le Guilly, Thibaut; Ravn, Anders Peter

    2015-01-01

    This paper presents a method to check for feature interactions in a system assembled from independently developed concurrent processes as found in many reactive systems. The method combines and refines existing definitions and adds a set of activities. The activities describe how to populate the definitions with models to ensure that all interactions are captured. The method is illustrated on a home automation example with model checking as the analysis tool. In particular, the modelling formalism is timed automata and the analysis uses UPPAAL to find interactions.

  10. Detection-Discrimination Method for Multiple Repeater False Targets Based on Radar Polarization Echoes

    Z. W. ZONG

    2014-04-01

    Multiple repeater false targets (RFTs), created by the digital radio frequency memory (DRFM) system of a jammer, are widely used in practice to exhaust the limited tracking and discrimination resources of defence radar. In this paper, a common characteristic of the radar polarization echoes of multiple RFTs is used for target recognition. Based on the echoes from two receiving polarization channels, the instantaneous polarization ratio (IPR) is defined and its variance is derived by employing a Taylor series expansion. A detection-discrimination method is designed based on probability grids. Using data from a microwave anechoic chamber, the detection threshold of the method is confirmed. Theoretical analysis and simulations indicate that the method is valid and feasible. Furthermore, the estimation performance of the IPRs of RFTs under the influence of signal-to-noise ratio (SNR) is also covered.
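
    As a heavily simplified illustration of the underlying idea only (the paper's IPR definition, Taylor-series variance and probability-grid discriminator are more involved), the toy sketch below compares the ratio of two polarization channels for an assumed true target and an assumed repeater false target.

```python
# Toy illustration of a two-channel polarization ratio test. All amplitudes
# and noise levels are invented; this only shows the basic idea that a
# repeater retransmits both channels with a ratio unlike a true target's.
import numpy as np

rng = np.random.default_rng(0)
n, noise = 1000, 0.05

def ipr_samples(h, v):
    """Per-pulse ratio of the two receiving polarization channel amplitudes."""
    return np.abs(h + noise * rng.standard_normal(n)) / \
           np.abs(v + noise * rng.standard_normal(n))

cases = [("true target", 1.0, 0.4),     # distinct H/V response (assumed)
         ("false target", 0.8, 0.75)]   # near-equal retransmitted channels
for label, h, v in cases:
    r = ipr_samples(h, v)
    print(f"{label}: mean IPR = {r.mean():.2f}, var = {r.var():.4f}")
```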

  11. Structural equation modeling methods and applications

    Wang, Jichuan

    2012-01-01

    A reference guide for applications of SEM using Mplus. Structural Equation Modeling: Applications Using Mplus is intended as both a teaching resource and a reference guide. Written in non-mathematical terms, this book focuses on the conceptual and practical aspects of Structural Equation Modeling (SEM). Basic concepts and examples of various SEM models are demonstrated along with recently developed advanced methods, such as mixture modeling and model-based power analysis and sample size estimation for SEM. The statistical modeling program Mplus is also featured and provides researchers with a

  12. A Hidden Markov Model Representing the Spatial and Temporal Correlation of Multiple Wind Farms

    Fang, Jiakun; Su, Chi; Hu, Weihao

    2015-01-01

    Accommodating the increasing amount of wind energy, with its stochastic nature, is a major issue for power system reliability. This paper proposes a methodology to characterize the spatiotemporal correlation of multiple wind farms. First, a hierarchical clustering method based on self-organizing maps is adopted to categorize the similar output patterns of several wind farms into joint states. Then a hidden Markov model (HMM) is designed to describe the temporal correlations among these joint states. Unlike the conventional Markov chain model, the accumulated wind power is taken into consideration. The proposed statistical modeling framework is compatible with sequential power system reliability analysis. A case study on optimal sizing and location of fast-response regulation sources is presented.
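
    A minimal sketch of the two-stage idea follows: discretize joint wind-farm outputs into states, then estimate state-to-state transition probabilities. Plain binning and an empirical transition matrix stand in for the paper's self-organizing maps and hidden Markov model, and the data are synthetic.

```python
# Sketch of the two-stage idea: map joint wind-farm outputs to discrete
# states, then estimate state-to-state transition probabilities. Binning and
# an empirical transition matrix stand in for the paper's SOM + HMM.
import numpy as np

rng = np.random.default_rng(1)
T = 500
farm_a = np.clip(np.cumsum(rng.normal(0, 0.05, T)) % 1.0, 0, 1)  # synthetic output
farm_b = np.clip(farm_a + rng.normal(0, 0.1, T), 0, 1)           # correlated neighbor

bins = 3                                   # low / medium / high output
state = np.digitize(farm_a, [1/3, 2/3]) * bins + np.digitize(farm_b, [1/3, 2/3])

n_states = bins * bins                     # joint states of the two farms
counts = np.zeros((n_states, n_states))
for s, s_next in zip(state[:-1], state[1:]):
    counts[s, s_next] += 1
trans = counts / np.maximum(counts.sum(axis=1, keepdims=True), 1)
print(np.round(trans, 2))                  # row-stochastic joint-state transitions
```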

  13. Waste generated in high-rise buildings construction: a quantification model based on statistical multiple regression.

    Parisi Kern, Andrea; Ferreira Dias, Michele; Piva Kulakowski, Marlova; Paulo Gomes, Luciana

    2015-05-01

    Reducing construction waste is becoming a key environmental issue in the construction industry. The quantification of waste generation rates in the construction sector is an invaluable management tool in supporting mitigation actions. However, the quantification of waste can be a difficult process because of the specific characteristics and the wide range of materials used in different construction projects. Large variations are observed in the methods used to predict the amount of waste generated because of the range of variables involved in construction processes and the different contexts in which these methods are employed. This paper proposes a statistical model to determine the amount of waste generated in the construction of high-rise buildings by assessing the influence of the design process and production system, often mentioned as the major culprits behind the generation of waste in construction. Multiple regression was used to conduct a case study based on multiple sources of data from eighteen residential buildings. The resulting statistical model related the dependent variable (the amount of waste generated) to independent variables associated with the design and the production system used. The best regression model obtained from the sample data resulted in an adjusted R² value of 0.694, which means that it explains approximately 69% of the variation in waste generation in similar constructions. Most independent variables showed a low determination coefficient when assessed in isolation, which emphasizes the importance of assessing their joint influence on the response (dependent) variable.
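
    A minimal multiple-regression sketch with adjusted R², in the spirit of the model described, is shown below; the data are synthetic stand-ins for the eighteen buildings' design- and production-system variables.

```python
# Minimal multiple-regression sketch with adjusted R^2. The data are
# synthetic; the real predictors were design- and production-system
# variables measured on eighteen residential buildings.
import numpy as np

rng = np.random.default_rng(2)
n, k = 18, 2                                   # 18 buildings, 2 predictors
X = rng.uniform(0, 1, (n, k))
waste = 5.0 + 3.0 * X[:, 0] + 1.5 * X[:, 1] + rng.normal(0, 0.5, n)

A = np.column_stack([np.ones(n), X])           # add an intercept column
coef, *_ = np.linalg.lstsq(A, waste, rcond=None)
resid = waste - A @ coef
r2 = 1 - (resid @ resid) / np.sum((waste - waste.mean()) ** 2)
adj_r2 = 1 - (1 - r2) * (n - 1) / (n - k - 1)  # penalize for predictor count
print(f"coefficients = {np.round(coef, 2)}, adjusted R^2 = {adj_r2:.3f}")
```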

  14. A method for determining the analytical form of a radionuclide depth distribution using multiple gamma spectrometry measurements

    Dewey, Steven Clifford, E-mail: sdewey001@gmail.com [United States Air Force School of Aerospace Medicine, Occupational Environmental Health Division, Health Physics Branch, Radiation Analysis Laboratories, 2350 Gillingham Drive, Brooks City-Base, TX 78235 (United States); Whetstone, Zachary David, E-mail: zacwhets@umich.edu [Radiological Health Engineering Laboratory, Department of Nuclear Engineering and Radiological Sciences, University of Michigan, 2355 Bonisteel Boulevard, 1906 Cooley Building, Ann Arbor, MI 48109-2104 (United States); Kearfott, Kimberlee Jane, E-mail: kearfott@umich.edu [Radiological Health Engineering Laboratory, Department of Nuclear Engineering and Radiological Sciences, University of Michigan, 2355 Bonisteel Boulevard, 1906 Cooley Building, Ann Arbor, MI 48109-2104 (United States)

    2011-06-15

    When characterizing environmental radioactivity, whether in the soil or within concrete building structures undergoing remediation or decommissioning, it is highly desirable to know the radionuclide depth distribution. This is typically modeled using continuous analytical expressions, whose forms are believed to best represent the true source distributions. In situ gamma ray spectroscopic measurements are combined with these models to fully describe the source. Currently, the choice of analytical expressions is based upon prior experimental core sampling results at similar locations, any known site history, or radionuclide transport models. This paper presents a method, employing multiple in situ measurements at a single site, for determining the analytical form that best represents the true depth distribution present. The measurements can be made using a variety of geometries, each of which has a different sensitivity variation with source spatial distribution. Using non-linear least squares numerical optimization methods, the results can be fit to a collection of analytical models and the parameters of each model determined. The analytical expression that results in the fit with the lowest residual is selected as the most accurate representation. A cursory examination is made of the effects of measurement errors on the method. Highlights: A new method for determining radionuclide distribution as a function of depth is presented. Multiple measurements are used, with enough measurements to determine the unknowns in the analytical functions that might describe the distribution. The measurements must be as independent as possible, which is achieved through special collimation of the detector. Although the effects of measurement errors may be significant on the results, an improvement over other methods is anticipated.
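
    The model-selection step can be sketched as follows: fit several candidate analytical depth profiles by non-linear least squares and keep the form with the lowest residual. The data here are synthetic one-dimensional responses, not the multi-geometry gamma spectrometry measurements the method actually uses.

```python
# Sketch of the model-selection step: fit candidate analytical depth
# profiles to synthetic data and keep the form with the lowest residual.
import numpy as np
from scipy.optimize import curve_fit

depth = np.linspace(0, 10, 15)                   # depth grid in cm (assumed)
truth = 2.0 * np.exp(-0.6 * depth)               # true profile: exponential
obs = truth + np.random.default_rng(3).normal(0, 0.05, depth.size)

candidates = {
    "exponential": lambda z, a, b: a * np.exp(-b * z),
    "linear":      lambda z, a, b: np.maximum(a - b * z, 0.0),
    "gaussian":    lambda z, a, b: a * np.exp(-(z / b) ** 2),
}

best = None
for name, f in candidates.items():
    try:
        p, _ = curve_fit(f, depth, obs, p0=[1.0, 1.0], maxfev=5000)
        ss = np.sum((obs - f(depth, *p)) ** 2)   # residual sum of squares
        print(f"{name}: residual SS = {ss:.4f}")
        if best is None or ss < best[1]:
            best = (name, ss)
    except RuntimeError:
        print(f"{name}: fit did not converge")
print("selected form:", best[0])
```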

  15. Stochastic modeling of pitting corrosion: A new model for initiation and growth of multiple corrosion pits

    Valor, A.; Caleyo, F.; Alfonso, L.; Rivas, D.; Hallen, J.M.

    2007-01-01

    In this work, a new stochastic model capable of simulating pitting corrosion is developed and validated. Pitting corrosion is modeled as the combination of two stochastic processes: pit initiation and pit growth. Pit generation is modeled as a nonhomogeneous Poisson process, in which induction time for pit initiation is simulated as the realization of a Weibull process. In this way, the exponential and Weibull distributions can be considered as the possible distributions for pit initiation time. Pit growth is simulated using a nonhomogeneous Markov process. Extreme value statistics is used to find the distribution of maximum pit depths resulting from the combination of the initiation and growth processes for multiple pits. The proposed model is validated using several published experiments on pitting corrosion. It is capable of reproducing the experimental observations with higher quality than the stochastic models available in the literature for pitting corrosion
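
    A Monte Carlo simplification of the two-process idea is sketched below: Weibull-distributed pit-initiation times combined with a simple stochastic power-law growth law, with the maximum pit depth taken over all pits. The paper's nonhomogeneous Poisson/Markov formulation is richer, and all parameters here are invented.

```python
# Monte Carlo sketch of the two-process idea: Weibull pit-initiation times
# plus a simple stochastic power-law growth, taking the maximum pit depth
# over all pits. A simplification of the cited Poisson/Markov formulation.
import numpy as np

rng = np.random.default_rng(4)
n_pits, horizon = 50, 20.0                    # pits per coupon, years

def max_depth(n_trials=2000):
    t_init = 2.0 * rng.weibull(1.5, (n_trials, n_pits))   # initiation times (y)
    active = np.maximum(horizon - t_init, 0.0)            # growth time per pit
    # depth ~ k * t^0.5 with multiplicative noise (illustrative growth law)
    depth = 0.8 * np.sqrt(active) * rng.lognormal(0.0, 0.2, (n_trials, n_pits))
    return depth.max(axis=1)                              # deepest pit per coupon

d = max_depth()
print(f"mean max depth = {d.mean():.2f} mm, "
      f"99th percentile = {np.percentile(d, 99):.2f} mm")
```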

  17. Stencil method: a Markov model for transport in porous media

    Delgoshaie, A. H.; Tchelepi, H.; Jenny, P.

    2016-12-01

    In porous media the transport of fluid is dominated by flow-field heterogeneity resulting from the underlying transmissibility field. Since the transmissibility is highly uncertain, many realizations of a geological model are used to describe the statistics of the transport phenomena in a Monte Carlo framework. One possible way to avoid the high computational cost of physics-based Monte Carlo simulations is to model the velocity field as a Markov process and use Markov chain Monte Carlo. In previous works, multiple Markov models for discrete velocity processes have been proposed. These models can be divided into two general classes: Markov models in time and Markov models in space. Both of these choices have been shown to be effective to some extent. However, some studies have suggested that the Markov property cannot be confirmed for a temporal Markov process; therefore there is no consensus about the validity and value of Markov models in time. Moreover, previous spatial Markov models have only been used for modeling transport on structured networks and cannot readily be applied to model transport in unstructured networks. In this work we propose a novel approach for constructing a Markov model in time (the stencil method) for a discrete velocity process. The results from the stencil method are compared to previously proposed spatial Markov models for structured networks. The stencil method is also applied to unstructured networks and can successfully describe the dispersion of particles in this setting. Our conclusion is that both temporal Markov models and spatial Markov models for discrete velocity processes can be valid for a range of model parameters. Moreover, we show that the stencil model can be more efficient in many practical settings and is suited to modeling dispersion on both structured and unstructured networks.
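
    A generic temporal Markov model for a discrete velocity process (not the stencil construction itself) can be sketched as follows: each particle switches among discrete velocity states according to a transition matrix, and the spread of the displacements exhibits dispersion.

```python
# Sketch of a discrete-velocity Markov process in time: each particle's
# velocity state evolves by a transition matrix and displacement disperses.
# A generic temporal Markov model, not the stencil construction itself.
import numpy as np

rng = np.random.default_rng(5)
velocities = np.array([0.2, 1.0, 2.5])        # assumed discrete velocity states
P = np.array([[0.80, 0.15, 0.05],             # row-stochastic transition matrix
              [0.10, 0.80, 0.10],
              [0.05, 0.15, 0.80]])

n_particles, n_steps, dt = 5000, 200, 0.1
state = rng.integers(0, 3, n_particles)
x = np.zeros(n_particles)
for _ in range(n_steps):
    x += velocities[state] * dt
    # sample each particle's next state from its row of P (inverse CDF)
    u = rng.random(n_particles)
    cdf = np.cumsum(P[state], axis=1)
    state = (u[:, None] > cdf).sum(axis=1)
print(f"plume mean = {x.mean():.2f}, spread (std) = {x.std():.2f}")
```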

  18. Investigation of colistin sensitivity via three different methods in Acinetobacter baumannii isolates with multiple antibiotic resistance.

    Sinirtaş, Melda; Akalin, Halis; Gedikoğlu, Suna

    2009-09-01

    In recent years there has been an increase in life-threatening infections caused by Acinetobacter baumannii with multiple antibiotic resistance, which has led to the use of polymyxins, especially colistin, being reconsidered. The aim of this study was to investigate the colistin sensitivity of A. baumannii isolates with multiple antibiotic resistance via different methods, and to evaluate the disk diffusion method for colistin against multi-resistant Acinetobacter isolates in comparison to the E-test and Phoenix system. The study was carried out on 100 strains of A. baumannii (colonization or infection) isolated from the microbiological samples of different patients followed in the clinics and intensive care units of Uludağ University Medical School between the years 2004 and 2005. Strains were identified and characterized for their antibiotic sensitivity by the Phoenix system (Becton Dickinson, Sparks, MD, USA). In all studied A. baumannii strains, susceptibility to colistin was determined to be 100% with the disk diffusion, E-test, and broth microdilution methods. Results of the E-test and broth microdilution method, which are accepted as reference methods, were found to be 100% consistent with the results of the disk diffusion tests; no very major or major error was identified upon comparison of the tests. The sensitivity and the positive predictive value of the disk diffusion method were found to be 100%. Colistin resistance in A. baumannii was not detected in our region, and disk diffusion method results are in accordance with those of the E-test and broth microdilution methods.

  19. Traffic Management by Using Admission Control Methods in Multiple Node IMS Network

    Filip Chamraz

    2016-01-01

    The paper deals with admission control (AC) methods as a possible solution for traffic management in IMS networks (IP Multimedia Subsystem), from the point of view of efficient redistribution of the available network resources and maintaining the parameters of Quality of Service (QoS). The paper specifically aims at selecting the most appropriate method for a specific type of traffic, and at a traffic management concept using AC methods on multiple nodes. The potential benefits and disadvantages of the chosen solution are evaluated.
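
    The basic admission-control idea can be reduced to a capacity check; the toy function below, with invented parameters, accepts a new session only if it fits within capacity minus a QoS reserve. The paper's methods and their comparison across IMS nodes are considerably more elaborate.

```python
# Toy capacity-based admission control check, illustrating the general AC
# idea only (the paper compares several AC methods across IMS nodes).
def admit(current_load_kbps, request_kbps, capacity_kbps, qos_reserve=0.1):
    """Accept a session only if it fits within capacity minus a QoS reserve."""
    usable = capacity_kbps * (1.0 - qos_reserve)
    return current_load_kbps + request_kbps <= usable

print(admit(current_load_kbps=8000, request_kbps=500, capacity_kbps=10000))  # True
print(admit(current_load_kbps=8800, request_kbps=500, capacity_kbps=10000))  # False
```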

  20. Mathematical model quantifies multiple daylight exposure and burial events for rock surfaces using luminescence dating

    Freiesleben, Trine Holm; Sohbati, Reza; Murray, Andrew

    2015-01-01

    Interest in the optically stimulated luminescence (OSL) dating of rock surfaces has increased significantly over the last few years, as the potential of the method has been explored. It has been realized that luminescence-depth profiles show qualitative evidence for multiple daylight exposure and burial events. To quantify both burial and exposure events, a new mathematical model is developed by expanding the existing models of the evolution of luminescence-depth profiles to include repeated sequential events of burial and exposure to daylight. This new model is applied to an infrared stimulated … events. This study confirms the suggestion that rock surfaces contain a record of exposure and burial history, and that these events can be quantified. The burial age of rock surfaces can thus be dated with confidence, based on a knowledge of their pre-burial light exposure; it may also be possible
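
    A sketch of alternating exposure and burial acting on a normalized luminescence-depth profile is given below, in the spirit of first-order bleaching/regeneration models; the functional forms and parameter values are assumptions, not the model developed in this study.

```python
# Sketch of alternating exposure/burial on a normalized luminescence-depth
# profile, in the spirit of first-order bleaching/regeneration models.
# The functional forms and all parameter values are assumptions.
import numpy as np

x = np.linspace(0, 30, 200)          # depth into the rock surface (mm)
L = np.ones_like(x)                  # start fully saturated

def expose(L, t, sigma_phi=2.0, mu=0.5):
    # light attenuates with depth, so bleaching decays as exp(-mu * x)
    return L * np.exp(-sigma_phi * t * np.exp(-mu * x))

def bury(L, t, tau=50.0):
    # dose accumulates uniformly with depth toward saturation (first order)
    return 1.0 - (1.0 - L) * np.exp(-t / tau)

L = expose(L, t=5.0)                 # first daylight exposure event
L = bury(L, t=20.0)                  # first burial event
L = expose(L, t=1.0)                 # brief second exposure
# The near-surface now records the short recent exposure superimposed on the
# partly regenerated profile left by the earlier, longer exposure + burial.
print(np.round(L[::40], 3))
```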