WorldWideScience

Sample records for two-stage parallel model

  1. Stage-by-Stage and Parallel Flow Path Compressor Modeling for a Variable Cycle Engine

    Science.gov (United States)

    Kopasakis, George; Connolly, Joseph W.; Cheng, Larry

    2015-01-01

    This paper covers the development of stage-by-stage and parallel flow path compressor modeling approaches for a Variable Cycle Engine. The stage-by-stage compressor modeling approach is an extension of a technique for lumped volume dynamics and performance characteristic modeling. It was developed to improve the accuracy of axial compressor dynamics over lumped volume dynamics modeling. The stage-by-stage compressor model presented here is formulated into a parallel flow path model that includes both axial and rotational dynamics. This is done to enable the study of compressor and propulsion system dynamic performance under flow distortion conditions. The approaches utilized here are generic and should be applicable for the modeling of any axial flow compressor design.
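    A minimal numerical sketch of the lumped-volume idea this abstract builds on: an inter-stage volume stores mass, and pressure follows from the ideal-gas law. The gas properties, volume size, and valve-like outflow law below are illustrative assumptions, not the authors' compressor model.

```python
# Lumped-volume filling dynamics sketch (assumptions: ideal gas, fixed
# temperature, hypothetical outflow law; not the paper's actual model).
import numpy as np

R, T, V = 287.0, 300.0, 0.05   # gas constant [J/kg/K], temperature [K], volume [m^3]

def w_out(p, p_down, k=1e-4):
    """Hypothetical valve-like outflow law driven by pressure difference."""
    return k * (p - p_down)

def simulate(w_in=1.0, p0=1.0e5, p_down=1.0e5, dt=1e-3, steps=5000):
    p = p0
    hist = []
    for _ in range(steps):
        # mass balance in the lumped volume: dm/dt = w_in - w_out
        dm = (w_in - w_out(p, p_down)) * dt
        # ideal gas at fixed T: p = m R T / V, so dp = dm R T / V
        p += dm * R * T / V
        hist.append(p)
    return np.array(hist)

print(simulate()[-1])   # settles near the equilibrium back-pressure
```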

  2. Research on Two-channel Interleaved Two-stage Paralleled Buck DC-DC Converter for Plasma Cutting Power Supply

    DEFF Research Database (Denmark)

    Yang, Xi-jun; Qu, Hao; Yao, Chen

    2014-01-01

    For high-power plasma power supplies, the multi-channel interleaved multi-stage paralleled Buck DC-DC converter is the first choice because of its high efficiency and flexibility. In this paper, a two-channel interleaved two-stage paralleled Buck DC-DC converter powered by a three-phase AC supply...

  3. Design and control of a decoupled two degree of freedom translational parallel micro-positioning stage.

    Science.gov (United States)

    Lai, Lei-Jie; Gu, Guo-Ying; Zhu, Li-Min

    2012-04-01

    This paper presents a novel decoupled two degrees of freedom (2-DOF) translational parallel micro-positioning stage. The stage consists of a monolithic compliant mechanism driven by two piezoelectric actuators. The end-effector of the stage is connected to the base by four independent kinematic limbs. Two types of compound flexure module are serially connected to provide 2-DOF for each limb. The compound flexure modules and mirror symmetric distribution of the four limbs significantly reduce the input and output cross couplings and the parasitic motions. Based on the stiffness matrix method, static and dynamic models are constructed and optimal design is performed under certain constraints. The finite element analysis results are then given to validate the design model and a prototype of the XY stage is fabricated for performance tests. Open-loop tests show that maximum static and dynamic cross couplings between the two linear motions are below 0.5% and -45 dB, which are low enough to utilize single-input-single-output control strategies. Finally, according to the identified dynamic model, an inversion-based feedforward controller in conjunction with a proportional-integral-derivative controller is applied to compensate for the nonlinearities and uncertainties. The experimental results show that good positioning and tracking performances are achieved, which verifies the effectiveness of the proposed mechanism and controller design. The resonant frequencies of the loaded stage at 2 kg and 5 kg are 105 Hz and 68 Hz, respectively. Therefore, the performance of the stage is reasonably good in terms of a 200 N load capacity. © 2012 American Institute of Physics
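    A hedged sketch of the control structure described above: an inversion-based feedforward term computed from an assumed second-order stage model, summed with a PID correction. Only the 105 Hz loaded resonance comes from the abstract; the damping ratio, gains, and reference are illustrative, not the identified model from the paper.

```python
# Inversion-based feedforward + PID sketch for an assumed plant
# G(s) = w0^2 / (s^2 + 2 z w0 s + w0^2); all numbers illustrative.
import numpy as np

w0, z = 2*np.pi*105.0, 0.05       # assumed resonance (105 Hz from abstract) and damping
dt = 1e-5

def plant_step(x, v, u):
    """One Euler step of x'' + 2 z w0 x' + w0^2 x = w0^2 u."""
    a = w0**2 * (u - x) - 2*z*w0*v
    return x + v*dt, v + a*dt

def feedforward(r, r_d, r_dd):
    """u_ff = G^{-1} r = (r'' + 2 z w0 r' + w0^2 r) / w0^2."""
    return (r_dd + 2*z*w0*r_d + w0**2*r) / w0**2

kp, ki, kd = 2.0, 500.0, 1e-3     # illustrative PID gains
x = v = integ = e_prev = 0.0
t = np.arange(0, 0.05, dt)
ref = 1e-6*np.sin(2*np.pi*50*t)                    # 50 Hz tracking reference [m]
ref_d = np.gradient(ref, dt)
ref_dd = np.gradient(ref_d, dt)

for k in range(len(t)):
    e = ref[k] - x
    integ += e*dt
    u = feedforward(ref[k], ref_d[k], ref_dd[k]) \
        + kp*e + ki*integ + kd*(e - e_prev)/dt     # feedforward + PID correction
    e_prev = e
    x, v = plant_step(x, v, u)
```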

  4. Two-step two-stage fission gas release model

    International Nuclear Information System (INIS)

    Kim, Yong-soo; Lee, Chan-bock

    2006-01-01

    Based on a recent theoretical model, a two-step two-stage model is developed that incorporates two-stage diffusion processes (grain-lattice and grain-boundary diffusion) coupled with a two-step burn-up factor for the low and high burn-up regimes. The FRAPCON-3 code and its in-pile data sets have been used for the benchmarking and validation of this model. Results reveal that its predictions are in better agreement with the experimental measurements than those of any model contained in the FRAPCON-3 code, such as ANS 5.4, modified ANS 5.4, and the Forsberg-Massih model, over the whole burn-up range up to 70,000 MWd/MTU. (author)

  5. Many-Objective Particle Swarm Optimization Using Two-Stage Strategy and Parallel Cell Coordinate System.

    Science.gov (United States)

    Hu, Wang; Yen, Gary G; Luo, Guangchun

    2017-06-01

    It is a daunting challenge to balance the convergence and diversity of an approximate Pareto front in a many-objective optimization evolutionary algorithm. A novel algorithm, named many-objective particle swarm optimization with the two-stage strategy and parallel cell coordinate system (PCCS), is proposed in this paper to improve the comprehensive performance in terms of the convergence and diversity. In the proposed two-stage strategy, the convergence and diversity are separately emphasized at different stages by a single-objective optimizer and a many-objective optimizer, respectively. A PCCS is exploited to manage the diversity, such as maintaining a diverse archive, identifying the dominance resistant solutions, and selecting the diversified solutions. In addition, a leader group is used for selecting the global best solutions to balance the exploitation and exploration of a population. The experimental results illustrate that the proposed algorithm outperforms six chosen state-of-the-art designs in terms of the inverted generational distance and hypervolume over the DTLZ test suite.

  6. Stage-by-Stage and Parallel Flow Path Compressor Modeling for a Variable Cycle Engine, NASA Advanced Air Vehicles Program - Commercial Supersonic Technology Project - AeroServoElasticity

    Science.gov (United States)

    Kopasakis, George; Connolly, Joseph W.; Cheng, Larry

    2015-01-01

    This paper covers the development of stage-by-stage and parallel flow path compressor modeling approaches for a Variable Cycle Engine. The stage-by-stage compressor modeling approach is an extension of a technique for lumped volume dynamics and performance characteristic modeling. It was developed to improve the accuracy of axial compressor dynamics over lumped volume dynamics modeling. The stage-by-stage compressor model presented here is formulated into a parallel flow path model that includes both axial and rotational dynamics. This is done to enable the study of compressor and propulsion system dynamic performance under flow distortion conditions. The approaches utilized here are generic and should be applicable for the modeling of any axial flow compressor design, as well as for accurate time domain simulations. The objective of this work is as follows: given the parameters describing the conditions of atmospheric disturbances, and utilizing the derived formulations, directly compute the transfer function poles and zeros describing these disturbances for acoustic velocity, temperature, pressure, and density. Time domain simulations of representative atmospheric turbulence can then be developed by utilizing these computed transfer functions together with the disturbance frequencies of interest.

  7. Minimizing makespan in a two-stage flow shop with parallel batch-processing machines and re-entrant jobs

    Science.gov (United States)

    Huang, J. D.; Liu, J. J.; Chen, Q. X.; Mao, N.

    2017-06-01

    Against a background of heat-treatment operations in mould manufacturing, a two-stage flow-shop scheduling problem is described for minimizing makespan with parallel batch-processing machines and re-entrant jobs. The weights and release dates of jobs are non-identical, but job processing times are equal. A mixed-integer linear programming model is developed and tested with small-scale scenarios. Given that the problem is NP hard, three heuristic construction methods with polynomial complexity are proposed. The worst case of the new constructive heuristic is analysed in detail. A method for computing lower bounds is proposed to test heuristic performance. Heuristic efficiency is tested with sets of scenarios. Compared with the two improved heuristics, the performance of the new constructive heuristic is superior.

  8. Two-Stage Series-Resonant Inverter

    Science.gov (United States)

    Stuart, Thomas A.

    1994-01-01

    Two-stage inverter includes variable-frequency, voltage-regulating first stage and fixed-frequency second stage. Lightweight circuit provides regulated power and is invulnerable to output short circuits. Does not require a large capacitor across the ac bus, as parallel-resonant designs do. Particularly suitable for use in ac-power-distribution system of aircraft.

  9. Parallel-Batch Scheduling with Two Models of Deterioration to Minimize the Makespan

    Directory of Open Access Journals (Sweden)

    Cuixia Miao

    2014-01-01

    We consider the bounded parallel-batch scheduling problem with two models of deterioration, in which the processing time of job j is p_j = a_j + αt in the first model and p_j = a + α_j t in the second, where t is the job's start time. The objective is to minimize the makespan. We present O(n log n)-time algorithms for the single-machine problems. We also propose fully polynomial-time approximation schemes to solve the identical-parallel-machine problem and the uniform-parallel-machine problem, respectively.
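    To make the first deterioration model concrete: a batch started at time t occupies the machine for max a_j + αt, giving the recurrence t_next = (1+α)t + max a_j. The sketch below evaluates that recurrence under a simple largest-first grouping rule, an illustrative heuristic rather than necessarily the paper's O(n log n) algorithm.

```python
# Makespan recurrence for bounded parallel batching with p_j = a_j + alpha*t.
# Batch formation uses a largest-first grouping rule as an illustration.

def makespan(a, alpha, b):
    """a: basic processing times, alpha: deterioration rate, b: batch capacity."""
    a = sorted(a, reverse=True)
    batches = [a[i:i+b] for i in range(0, len(a), b)]
    t = 0.0
    for batch in batches:
        # all jobs in a batch run together; the batch takes as long as its
        # largest job evaluated at the batch start time t
        t += max(batch) + alpha * t      # i.e. t <- (1+alpha)*t + max(batch)
    return t

print(makespan([4, 2, 7, 5, 1, 3], alpha=0.1, b=2))   # -> 14.87
```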

  10. A Dual-Stage Two-Phase Model of Selective Attention

    Science.gov (United States)

    Hubner, Ronald; Steinhauser, Marco; Lehle, Carola

    2010-01-01

    The dual-stage two-phase (DSTP) model is introduced as a formal and general model of selective attention that includes both an early and a late stage of stimulus selection. Whereas at the early stage information is selected by perceptual filters whose selectivity is relatively limited, at the late stage stimuli are selected more efficiently on a…

  11. Parallel two-phase-flow-induced vibrations in fuel pin model

    International Nuclear Information System (INIS)

    Hara, Fumio; Yamashita, Tadashi

    1978-01-01

    This paper reports the experimental results of vibrations of a fuel pin model (herein meaning the essential form of a fuel pin from the standpoint of vibration) in a parallel air-and-water two-phase flow. The essential part of the experimental apparatus consisted of a flat elastic strip made of stainless steel, both ends of which were firmly supported in a circular channel conveying the two-phase fluid. Vibrational strain of the fuel pin model, pressure fluctuation of the two-phase flow and two-phase-flow void signals were measured. Statistical measures such as power spectral density, variance and correlation function were calculated. The authors obtained (1) the relation between variance of vibrational strain and two-phase-flow velocity, (2) the relation between variance of vibrational strain and two-phase-flow pressure fluctuation, (3) frequency characteristics of variance of vibrational strain against the dominant frequency of the two-phase-flow pressure fluctuation, and (4) frequency characteristics of variance of vibrational strain against the dominant frequency of two-phase-flow void signals. The authors conclude that there exist two kinds of excitation mechanisms in vibrations of a fuel pin model inserted in a parallel air-and-water two-phase flow; namely, (1) parametric excitation, which occurs when the fundamental natural frequency of the fuel pin model is related to the dominant travelling frequency of water slugs in the two-phase flow by the ratios 1/2, 1/1, 3/2 and so on; and (2) vibrational resonance, which occurs when the fundamental frequency coincides with the dominant frequency of the two-phase-flow pressure fluctuation. (auth.)
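    The ratio sequence 1/2, 1/1, 3/2 reported for mechanism (1) is exactly what a Mathieu-type parametric instability predicts; the block below states that standard equation as an illustration by analogy, not as the authors' derivation.

```latex
% Mathieu-type parametric excitation: a pin of natural frequency \omega_0
% whose effective stiffness is modulated at the slug-passing frequency \Omega,
\ddot{x} + \omega_0^{2}\left[1 + \varepsilon\cos(\Omega t)\right]x = 0 ,
% develops instability tongues near \Omega = 2\omega_0/n, \; n = 1, 2, 3,\dots,
% i.e. frequency ratios \omega_0/\Omega = 1/2,\ 1,\ 3/2,\dots as observed.
```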

  12. Analysis and Modeling of Circulating Current in Two Parallel-Connected Inverters

    DEFF Research Database (Denmark)

    Maheshwari, Ram Krishan; Gohil, Ghanshyamsinh Vijaysinh; Bede, Lorand

    2015-01-01

    Parallel-connected inverters are gaining attention for high power applications because of the limited power handling capability of the power modules. Moreover, the parallel-connected inverters may have low total harmonic distortion of the ac current if they are operated with the interleaved pulse-width modulation (PWM). However, the interleaved PWM causes a circulating current between the inverters, which in turn causes additional losses. A model describing the dynamics of the circulating current is presented in this study which shows that the circulating current depends on the common-mode voltage. Using this model, the circulating current between two parallel-connected inverters is analysed. The peak and root mean square (rms) values of the normalised circulating current are calculated for different PWM methods, which makes this analysis a valuable tool to design a filter for the circulating current...

  13. A Comparison of Two-Stage Approaches for Fitting Nonlinear Ordinary Differential Equation Models with Mixed Effects.

    Science.gov (United States)

    Chow, Sy-Miin; Bendezú, Jason J; Cole, Pamela M; Ram, Nilam

    2016-01-01

    Several approaches exist for estimating the derivatives of observed data for model exploration purposes, including functional data analysis (FDA; Ramsay & Silverman, 2005), generalized local linear approximation (GLLA; Boker, Deboeck, Edler, & Peel, 2010), and generalized orthogonal local derivative approximation (GOLD; Deboeck, 2010). These derivative estimation procedures can be used in a two-stage process to fit mixed effects ordinary differential equation (ODE) models. While the performance and utility of these routines for estimating linear ODEs have been established, they have not yet been evaluated in the context of nonlinear ODEs with mixed effects. We compared properties of the GLLA and GOLD to an FDA-based two-stage approach denoted herein as functional ordinary differential equation with mixed effects (FODEmixed) in a Monte Carlo (MC) study using a nonlinear coupled oscillators model with mixed effects. Simulation results showed that overall, the FODEmixed outperformed both the GLLA and GOLD across all the embedding dimensions considered, but a novel use of a fourth-order GLLA approach combined with very high embedding dimensions yielded estimation results that almost paralleled those from the FODEmixed. We discuss the strengths and limitations of each approach and demonstrate how output from each stage of FODEmixed may be used to inform empirical modeling of young children's self-regulation.
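    A minimal sketch of the GLLA idea referenced above, following Boker et al. (2010): time-delay embed the series, then project the windows onto polynomial loadings to estimate derivatives. The embedding dimension and order below are illustrative choices, not the paper's settings.

```python
# Generalized local linear approximation (GLLA) sketch.
import math
import numpy as np

def glla(x, dt, embed=5, order=2):
    n = len(x) - embed + 1
    X = np.column_stack([x[i:i+n] for i in range(embed)])      # delay embedding
    offsets = (np.arange(embed) - (embed - 1)/2.0) * dt        # centered time offsets
    # polynomial loading matrix: column k holds offsets^k / k!
    L = np.column_stack([offsets**k / math.factorial(k) for k in range(order + 1)])
    W = L @ np.linalg.inv(L.T @ L)
    return X @ W            # columns: smoothed x, x', x'', ...

t = np.linspace(0, 4*np.pi, 400)
d = glla(np.sin(t), t[1] - t[0])
print(d[:3, 1])             # first-derivative estimates, close to cos(t)
```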

  14. Two Phase Flow Split Model for Parallel Channels | Iloeje | Nigerian ...

    African Journals Online (AJOL)

    The model and code are capable of handling single and two phase flows, steady states and transients, up to ten parallel flow paths, simple and complicated geometries, including the boilers of fossil steam generators and nuclear power plants. A test calculation has been made with a simplified three-channel system ...

  15. Energy demand in Portuguese manufacturing: a two-stage model

    International Nuclear Information System (INIS)

    Borges, A.M.; Pereira, A.M.

    1992-01-01

    We use a two-stage model of factor demand to estimate the parameters determining energy demand in Portuguese manufacturing. In the first stage, a capital-labor-energy-materials framework is used to analyze the substitutability between energy as a whole and other factors of production. In the second stage, total energy demand is decomposed into oil, coal and electricity demands. The two stages are fully integrated since the energy composite used in the first stage and its price are obtained from the second stage energy sub-model. The estimates obtained indicate that energy demand in manufacturing responds significantly to price changes. In addition, estimation results suggest that there are important substitution possibilities among energy forms and between energy and other factors of production. The role of price changes in energy-demand forecasting, as well as in energy policy in general, is clearly established. (author)

  16. Meta-analysis of Gaussian individual patient data: Two-stage or not two-stage?

    Science.gov (United States)

    Morris, Tim P; Fisher, David J; Kenward, Michael G; Carpenter, James R

    2018-04-30

    Quantitative evidence synthesis through meta-analysis is central to evidence-based medicine. For well-documented reasons, the meta-analysis of individual patient data is held in higher regard than aggregate data. With access to individual patient data, the analysis is not restricted to a "two-stage" approach (combining estimates and standard errors) but can estimate parameters of interest by fitting a single model to all of the data, a so-called "one-stage" analysis. There has been debate about the merits of one- and two-stage analysis. Arguments for one-stage analysis have typically noted that a wider range of models can be fitted and overall estimates may be more precise. The two-stage side has emphasised that the models that can be fitted in two stages are sufficient to answer the relevant questions, with less scope for mistakes because there are fewer modelling choices to be made in the two-stage approach. For Gaussian data, we consider the statistical arguments for flexibility and precision in small-sample settings. Regarding flexibility, several of the models that can be fitted only in one stage may not be of serious interest to most meta-analysis practitioners. Regarding precision, we consider fixed- and random-effects meta-analysis and see that, for a model making certain assumptions, the number of stages used to fit this model is irrelevant; the precision will be approximately equal. Meta-analysts should choose modelling assumptions carefully. Sometimes relevant models can only be fitted in one stage. Otherwise, meta-analysts are free to use whichever procedure is most convenient to fit the identified model. © 2018 The Authors. Statistics in Medicine Published by John Wiley & Sons Ltd.
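    As a concrete illustration of the two-stage route discussed above, here is a minimal fixed-effect sketch on simulated Gaussian data (all numbers simulated, not from the paper): stage one estimates each study's treatment effect, stage two pools by inverse-variance weighting.

```python
# Two-stage IPD meta-analysis sketch: per-study estimates, then pooling.
import numpy as np

rng = np.random.default_rng(1)
studies = []
for _ in range(5):                        # five hypothetical studies
    x = rng.integers(0, 2, 200)           # treatment indicator
    y = 0.5*x + rng.normal(0, 1, 200)     # true effect 0.5
    studies.append((x, y))

# stage 1: per-study effect estimate and variance (difference in means)
est, var = [], []
for x, y in studies:
    d = y[x == 1].mean() - y[x == 0].mean()
    v = y[x == 1].var(ddof=1)/(x == 1).sum() + y[x == 0].var(ddof=1)/(x == 0).sum()
    est.append(d)
    var.append(v)

# stage 2: inverse-variance (fixed-effect) pooling
w = 1/np.array(var)
pooled = np.sum(w*np.array(est))/np.sum(w)
se = np.sqrt(1/np.sum(w))
print(f"pooled effect = {pooled:.3f} (SE {se:.3f})")
```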

  17. On A Two-Stage Supply Chain Model In The Manufacturing Industry ...

    African Journals Online (AJOL)

    We model a two-stage supply chain where the upstream stage (stage 2) always meets demand from the downstream stage (stage 1). Demand is stochastic, hence shortages will occasionally occur at stage 2. Stage 2 must fill these shortages by expediting using overtime production and/or backordering. We derive optimal ...

  18. Incorrectness of conventional one-dimensional parallel thermal resistance circuit model for two-dimensional circular composite pipes

    International Nuclear Information System (INIS)

    Wong, K.-L.; Hsien, T.-L.; Chen, W.-L.; Yu, S.-J.

    2008-01-01

    This study proves that two-dimensional steady state heat transfer problems of composite circular pipes cannot be appropriately solved by the conventional one-dimensional parallel thermal resistance circuits (PTRC) model because its interface temperatures are not unique. Thus, the PTRC model is definitely different from its conventionally recognized analogy, the parallel electrical resistance circuits (PERC) model, which has unique node electric voltages. Two typical composite circular pipe examples are solved by CFD software, and the numerical results are compared with those obtained by the PTRC model. This shows that the PTRC model generates large errors. Thus, this conventional model, introduced in most heat transfer textbooks, cannot be applied to two-dimensional composite circular pipes. On the contrary, an alternative one-dimensional separately series thermal resistance circuit (SSTRC) model is proposed and applied to a two-dimensional composite circular pipe with isothermal boundaries, and acceptable results are returned.
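    For reference, the series (SSTRC-style) radial calculation for a two-layer pipe looks like the sketch below; the geometry and conductivities are made-up values, used only to show the standard cylindrical-shell resistance formula.

```python
# Series radial resistance of a two-layer composite pipe (per unit length).
import math

def layer_resistance(r_in, r_out, k, L=1.0):
    """Conduction resistance of a cylindrical shell: R = ln(r2/r1) / (2 pi k L)."""
    return math.log(r_out/r_in) / (2*math.pi*k*L)

r = [0.02, 0.025, 0.03]        # radii [m]: bore, layer interface, outer surface
k = [50.0, 0.5]                # layer conductivities [W/m/K]
R_total = sum(layer_resistance(r[i], r[i+1], k[i]) for i in range(2))
q = (150.0 - 30.0) / R_total   # heat rate per metre for a 120 K difference
print(R_total, q)
```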

  19. The inaccuracy of conventional one-dimensional parallel thermal resistance circuit model for two-dimensional composite walls

    International Nuclear Information System (INIS)

    Wong, K.-L.; Hsien, T.-L.; Hsiao, M.-C.; Chen, W.-L.; Lin, K.-C.

    2008-01-01

    This investigation shows that two-dimensional steady state heat transfer problems of composite walls should not be solved by the conventional one-dimensional parallel thermal resistance circuits (PTRC) model because the interface temperatures are not unique. Thus the PTRC model cannot be used like its conventionally recognized analogy, the parallel electrical resistance circuits (PERC) model, which has unique node electric voltages. Two typical composite wall examples, solved by CFD software, are used to demonstrate the incorrectness. The numerical results are compared with those obtained by the PTRC model, and very large differences are observed between their results. This proves that the application of the conventional heat transfer PTRC model to two-dimensional composite walls, introduced in most heat transfer textbooks, is totally incorrect. An alternative one-dimensional separately series thermal resistance circuit (SSTRC) model is proposed and applied to the two-dimensional composite walls with isothermal boundaries. Results with acceptable accuracy can be obtained by the new model.

  20. Parallel Factor-Based Model for Two-Dimensional Direction Estimation

    Directory of Open Access Journals (Sweden)

    Nizar Tayem

    2017-01-01

    Two-dimensional (2D) Direction-of-Arrival (DOA) estimation for elevation and azimuth angles assuming noncoherent, mixture of coherent and noncoherent, and coherent sources using extended three parallel uniform linear arrays (ULAs) is proposed. Most of the existing schemes have drawbacks in estimating 2D DOA for multiple narrowband incident sources as follows: use of a large number of snapshots, estimation failure problem for elevation and azimuth angles in the range of typical mobile communication, and estimation of coherent sources. Moreover, the DOA estimation for multiple sources requires complex pair-matching methods. The algorithm proposed in this paper is based on a first-order data matrix to overcome these problems. The main contributions of the proposed method are as follows: (1) it avoids the estimation failure problem using a new antenna configuration and estimates elevation and azimuth angles for coherent sources; (2) it reduces the estimation complexity by constructing Toeplitz data matrices, which are based on a single or a few snapshots; (3) it derives a parallel factor (PARAFAC) model to avoid pair-matching problems between multiple sources. Simulation results demonstrate the effectiveness of the proposed algorithm.

  21. Two-stage residual inclusion estimation: addressing endogeneity in health econometric modeling.

    Science.gov (United States)

    Terza, Joseph V; Basu, Anirban; Rathouz, Paul J

    2008-05-01

    The paper focuses on two estimation methods that have been widely used to address endogeneity in empirical research in health economics and health services research: two-stage predictor substitution (2SPS) and two-stage residual inclusion (2SRI). 2SPS is the rote extension (to nonlinear models) of the popular linear two-stage least squares estimator. The 2SRI estimator is similar except that in the second-stage regression, the endogenous variables are not replaced by first-stage predictors. Instead, first-stage residuals are included as additional regressors. In a generic parametric framework, we show that 2SRI is consistent and 2SPS is not. Results from a simulation study and an illustrative example also recommend against 2SPS and favor 2SRI. Our findings are important given that there are many prominent examples of the application of inconsistent 2SPS in the recent literature. This study can be used as a guide by future researchers in health economics who are confronted with endogeneity in their empirical work.
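    A hedged sketch of 2SRI on simulated data (data-generating values invented for illustration): stage one regresses the endogenous regressor on the instrument; stage two includes the first-stage residual as an extra regressor in the nonlinear outcome model.

```python
# Two-stage residual inclusion (2SRI) sketch with a probit second stage.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 5000
z = rng.normal(size=n)                    # instrument
u = rng.normal(size=n)                    # unobserved confounder
x = 0.8*z + u + rng.normal(size=n)        # endogenous regressor
ystar = 1.0*x - 1.5*u + rng.normal(size=n)
y = (ystar > 0).astype(float)             # binary outcome

# stage 1: linear regression of x on the instrument
stage1 = sm.OLS(x, sm.add_constant(z)).fit()
resid = x - stage1.fittedvalues

# stage 2 (2SRI): probit of y on x AND the first-stage residual
X2 = sm.add_constant(np.column_stack([x, resid]))
print(sm.Probit(y, X2).fit(disp=0).params)

# The 2SPS alternative would replace x by stage1.fittedvalues instead,
# which the paper shows is inconsistent in nonlinear models.
```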

  22. THE MATHEMATICAL MODEL DEVELOPMENT OF THE ETHYLBENZENE DEHYDROGENATION PROCESS KINETICS IN A TWO-STAGE ADIABATIC CONTINUOUS REACTOR

    Directory of Open Access Journals (Sweden)

    V. K. Bityukov

    2015-01-01

    The article is devoted to the mathematical modeling of the kinetics of ethylbenzene dehydrogenation in a two-stage adiabatic reactor with a catalytic bed operating on continuous technology. The analysis of the chemical reactions taking place in parallel with the main reaction of styrene formation has been carried out; on this basis a number of assumptions were made, from which a kinetic scheme describing the mechanism of the chemical reactions during the dehydrogenation process was developed. A mathematical model of the dehydrogenation process is developed, describing the dynamics of the chemical reactions taking place in each of the two stages of the reactor block at a constant temperature. The rate constants of the direct and reverse reactions of formation and consumption of each component of the reacting mixture were estimated. The dynamics of the starting material concentration (the ethylbenzene feed) was obtained, as well as the formation dynamics of styrene and all byproducts of dehydrogenation (benzene, toluene, ethylene, carbon, hydrogen, etc.). The calculated variations of the component composition of the reaction mixture during its passage through the first and second stages of the reactor showed that the proposed mathematical description adequately reproduces the kinetics of the process under investigation. This demonstrates the advantage of the developed model, as well as the validity of the values found for the rate constants, which enables the use of the model for calculating the kinetics of ethylbenzene dehydrogenation under a nonisothermal mode in order to determine the optimal temperature trajectory of the reactor operation. In the future, this will reduce energy and resource consumption, increase the volume of produced styrene and improve the economic indexes of the process.

  23. Adaptive Urban Stormwater Management Using a Two-stage Stochastic Optimization Model

    Science.gov (United States)

    Hung, F.; Hobbs, B. F.; McGarity, A. E.

    2014-12-01

    In many older cities, stormwater results in combined sewer overflows (CSOs) and consequent water quality impairments. Because of the expense of traditional approaches for controlling CSOs, cities are considering the use of green infrastructure (GI) to reduce runoff and pollutants. Examples of GI include tree trenches, rain gardens, green roofs, and rain barrels. However, the cost and effectiveness of GI are uncertain, especially at the watershed scale. We present a two-stage stochastic extension of the Stormwater Investment Strategy Evaluation (StormWISE) model (A. McGarity, JWRPM, 2012, 111-24) to explicitly model and optimize these uncertainties in an adaptive management framework. A two-stage model represents the immediate commitment of resources ("here & now") followed by later investment and adaptation decisions ("wait & see"). A case study is presented for Philadelphia, which intends to extensively deploy GI over the next two decades (PWD, "Green City, Clean Water - Implementation and Adaptive Management Plan," 2011). After first-stage decisions are made, the model updates the stochastic objective and constraints (learning). We model two types of "learning" about GI cost and performance. One assumes that learning occurs over time, is automatic, and does not depend on what has been done in stage one (basic model). The other considers learning resulting from active experimentation and learning-by-doing (advanced model). Both require expert probability elicitations, and learning from research and monitoring is modelled by Bayesian updating (as in S. Jacobi et al., JWRPM, 2013, 534-43). The model allocates limited financial resources to GI investments over time to achieve multiple objectives with a given reliability. Objectives include minimizing construction and O&M costs; achieving nutrient, sediment, and runoff volume targets; and community concerns, such as aesthetics, CO2 emissions, heat islands, and recreational values. CVaR (Conditional Value at Risk) and
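    The "here & now"/"wait & see" structure described above has a standard deterministic-equivalent form once uncertainty is reduced to scenarios. The sketch below illustrates it on a toy problem; all costs, probabilities, and targets are invented and far simpler than StormWISE.

```python
# Scenario-form two-stage stochastic LP:
#   min  c*x + sum_s p_s * q * y_s
#   s.t. x + y_s >= d_s   (meet the runoff-reduction target in every scenario)
from scipy.optimize import linprog

c, q = 10.0, 25.0                                 # first-stage vs recourse unit cost
scen = [(0.3, 40.0), (0.5, 60.0), (0.2, 90.0)]    # (probability, target d_s)

obj = [c] + [p*q for p, _ in scen]                # variables: x, y_1, y_2, y_3
A_ub, b_ub = [], []
for s, (_, d) in enumerate(scen):
    row = [-1.0] + [0.0]*len(scen)
    row[1 + s] = -1.0                             # -(x + y_s) <= -d_s
    A_ub.append(row)
    b_ub.append(-d)

res = linprog(obj, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None)]*4)
print(res.x)    # here-and-now x, then the three wait-and-see recourse decisions
```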

  24. A comprehensive study of task coalescing for selecting parallelism granularity in a two-stage bidiagonal reduction

    KAUST Repository

    Haidar, Azzam

    2012-05-01

    We present new high performance numerical kernels combined with advanced optimization techniques that significantly increase the performance of parallel bidiagonal reduction. Our approach is based on developing efficient fine-grained computational tasks as well as reducing overheads associated with their high-level scheduling during the so-called bulge chasing procedure that is an essential phase of a scalable bidiagonalization procedure. In essence, we coalesce multiple tasks in a way that reduces the time needed to switch execution context between the scheduler and useful computational tasks. At the same time, we maintain the crucial information about the tasks and their data dependencies between the coalescing groups. This is the necessary condition to preserve numerical correctness of the computation. We show our annihilation strategy based on multiple applications of single orthogonal reflectors. Despite non-trivial characteristics in computational complexity and memory access patterns, our optimization approach smoothly applies to the annihilation scenario. The coalescing positively influences another equally important aspect of the bulge chasing stage: the memory reuse. For the tasks within the coalescing groups, the data is retained in high levels of the cache hierarchy and, as a consequence, operations that are normally memory-bound increase their ratio of computation to off-chip communication and become compute-bound which renders them amenable to efficient execution on multicore architectures. The performance for the new two-stage bidiagonal reduction is staggering. Our implementation results in up to 50-fold and 12-fold improvement (∼130 Gflop/s) compared to the equivalent routines from LAPACK V3.2 and Intel MKL V10.3, respectively, on an eight socket hexa-core AMD Opteron multicore shared-memory system with a matrix size of 24000 x 24000. Last but not least, we provide a comprehensive study on the impact of the coalescing group size in terms of cache

  25. Research on Multi-Person Parallel Modeling Method Based on Integrated Model Persistent Storage

    Science.gov (United States)

    Qu, MingCheng; Wu, XiangHu; Tao, YongChao; Liu, Ying

    2018-03-01

    This paper studies a multi-person parallel modeling method based on persistent storage of an integrated model. The integrated model refers to a set of MDDT modeling graphics systems, which can carry out multi-angle, multi-level and multi-stage descriptions of aerospace general embedded software. Persistent storage refers to converting the data model in memory into a storage model and converting the storage model back into a data model in memory, where the data model refers to the object model and the storage model is a binary stream. Multi-person parallel modeling refers to the need for multi-person collaboration, separation of roles, and even real-time remote synchronized modeling.

  26. Modelling of Two-Stage Methane Digestion With Pretreatment of Biomass

    Science.gov (United States)

    Dychko, A.; Remez, N.; Opolinskyi, I.; Kraychuk, S.; Ostapchuk, N.; Yevtieieva, L.

    2018-04-01

    Systems of anaerobic digestion should be used for the processing of organic waste. Managing the process of anaerobic recycling of organic waste requires reliable prediction of biogas production. The developed mathematical model of the organic waste digestion process makes it possible to determine the rate of biogas output in the two-stage process of anaerobic digestion, taking the first stage into account. Verification of Konto's model, based on the studied anaerobic processing of organic waste, is implemented. The dependences of biogas output and its rate on time are established and may be used to predict the process of anaerobic processing of organic waste.

  27. Multitasking TORT under UNICOS: Parallel performance models and measurements

    International Nuclear Information System (INIS)

    Barnett, A.; Azmy, Y.Y.

    1999-01-01

    The existing parallel algorithms in the TORT discrete ordinates code were updated to function in a UNICOS environment. A performance model for the parallel overhead was derived for the existing algorithms. The largest contributors to the parallel overhead were identified and a new algorithm was developed. A parallel overhead model was also derived for the new algorithm. The parallel performance models were compared to applications of the code to two TORT standard test problems and a large production problem, and they agree well with the measured parallel overhead.

  28. Design considerations for single-stage and two-stage pneumatic pellet injectors

    International Nuclear Information System (INIS)

    Gouge, M.J.; Combs, S.K.; Fisher, P.W.; Milora, S.L.

    1988-09-01

    Performance of single-stage pneumatic pellet injectors is compared with several models for one-dimensional, compressible fluid flow. Agreement is quite good for models that reflect actual breech chamber geometry and incorporate nonideal effects such as gas friction. Several methods of improving the performance of single-stage pneumatic pellet injectors in the near term are outlined. The design and performance of two-stage pneumatic pellet injectors are discussed, and initial data from the two-stage pneumatic pellet injector test facility at Oak Ridge National Laboratory are presented. Finally, a concept for a repeating two-stage pneumatic pellet injector is described. 27 refs., 8 figs., 3 tabs

  29. A two-phase inspection model for a single component system with three-stage degradation

    International Nuclear Information System (INIS)

    Wang, Huiying; Wang, Wenbin; Peng, Rui

    2017-01-01

    This paper presents a two-phase inspection schedule and an age-based replacement policy for a single plant item contingent on a three-stage degradation process. The two-phase inspection schedule can be observed in practice. The three stages are defined as the normal working stage, the low-grade defective stage and the critical defective stage. When an inspection detects that an item is in the low-grade defective stage, we may delay the preventive replacement action if the time to the age-based replacement is less than or equal to a threshold level. However, if it is above this threshold level, the item will be replaced immediately. If the item is found in the critical defective stage, it is replaced immediately. A hybrid bee colony algorithm is developed to find the optimal solution for the proposed model, which has multiple decision variables. A numerical example is conducted to show the efficiency of this algorithm, and simulations are conducted to verify the correctness of the model. - Highlights: • A two-phase inspection model is studied. • The failure process has three stages. • Delayed replacement is considered.

  30. Coupled Model of channels in parallel and neutron kinetics in two dimensions

    International Nuclear Information System (INIS)

    Cecenas F, M.; Campos G, R.M.; Valle G, E. del

    2004-01-01

    In this work an arrangement of thermal-hydraulic channels is presented that represents the four quadrants of a BWR core. The channels are coupled to a two-dimensional neutronics model that generates the radial power profile of the reactor. Although the neutronics model is two-dimensional, it is supplemented with additional axial information by considering the axial power profiles for each thermal-hydraulic channel. The steady state is obtained by imposing the same pressure drop on all the channels as a boundary condition. This condition is satisfied by iterating on the coolant flow in each channel until the pressure drops in all the channels are equal. This steady state is then perturbed by modifying the cross-section values corresponding to an assembly. The parallel calculation of the neutronics and the thermal-hydraulics is carried out with PVM (Parallel Virtual Machine) by means of a master-slave scheme on a local network of computers. (Author)
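    The equal-pressure-drop iteration described above can be sketched in a few lines: find per-channel flows such that all channels see the same pressure drop while total flow is conserved. The quadratic channel characteristic and loss coefficients below are assumptions for illustration.

```python
# Flow-split iteration: equalize pressure drop across parallel channels.
import numpy as np

k = np.array([2.0, 3.0, 4.0, 5.0])     # hypothetical loss coefficients, dP = k w^2
W_total = 10.0                         # total coolant flow to distribute

def flows_for_dp(dp):
    return np.sqrt(dp / k)             # invert dP = k w^2 per channel

# bisection on the common pressure drop until the channel flows sum to W_total
lo, hi = 0.0, 1e4
for _ in range(60):
    mid = 0.5*(lo + hi)
    if flows_for_dp(mid).sum() > W_total:
        hi = mid
    else:
        lo = mid

w = flows_for_dp(lo)
print(w, k*w**2)                       # flows, and the (equal) pressure drops
```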

  31. Transfer function modeling of parallel connected two three-phase induction motor implementation using LabView platform

    DEFF Research Database (Denmark)

    Gunabalan, R.; Sanjeevikumar, P.; Blaabjerg, Frede

    2015-01-01

    This paper presents the transfer function modeling and stability analysis of two induction motors of the same ratings and parameters connected in parallel. The induction motors are controlled by a single inverter and the entire drive system is modeled using transfer functions in LabView. Further...

  32. A two stage data envelopment analysis model with undesirable output

    Science.gov (United States)

    Shariff Adli Aminuddin, Adam; Izzati Jaini, Nur; Mat Kasim, Maznah; Nawawi, Mohd Kamal Mohd

    2017-09-01

    The dependent relationship among the decision making units (DMUs) is usually assumed to be non-existent in the development of Data Envelopment Analysis (DEA) models. The dependency can be represented by the multi-stage DEA model, where the outputs from the preceding stage become the inputs of the subsequent stage. The multi-stage DEA model evaluates both the efficiency score for each stage and the overall efficiency of the whole process. The existing multi-stage DEA models do not address the integration of undesirable outputs, for which higher input generates lower output, unlike a normal desirable output. This research attempts to address the inclusion of such undesirable outputs and investigates the theoretical implications and potential applications towards the development of the multi-stage DEA model.

  33. Two-Stage Battery Energy Storage System (BESS) in AC Microgrids with Balanced State-of-Charge and Guaranteed Small-Signal Stability

    Directory of Open Access Journals (Sweden)

    Bing Xie

    2018-02-01

    In this paper, a two-stage battery energy storage system (BESS) is implemented to enhance the operation condition of conventional battery storage systems in a microgrid. Particularly, the designed BESS is composed of two stages, i.e., Stage I: integration of dispersed energy storage units (ESUs) using parallel DC/DC converters, and Stage II: aggregated ESUs in grid-connected operation. Different from a conventional BESS consisting of a battery management system (BMS) and power conditioning system (PCS), the developed two-stage architecture enables additional operation and control flexibility in balancing the state-of-charge (SoC) of each ESU and ensures guaranteed small-signal stability, especially in extremely weak grid conditions. The above benefits are achieved by separating the control functions between the two stages. In Stage I, a localized power sharing scheme based on the SoC of each particular ESU is developed to manage the SoC and avoid over-charge or over-discharge issues; on the other hand, in Stage II, an additional virtual impedance loop is implemented in the grid-interactive DC/AC inverters to enhance the stability margin with multiple parallel-connected inverters integrating at the point of common coupling (PCC) simultaneously. A simulation model based on MATLAB/Simulink is established, and simulation results verify the effectiveness of the proposed BESS architecture and the corresponding control diagram.
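    A minimal sketch of the kind of Stage I SoC-based sharing described above: each ESU takes a share of the discharge power proportional to a power of its SoC (and inversely when charging), so higher-SoC units discharge harder and the SoCs converge. The exponent n is a hypothetical droop-shaping choice, not the paper's control law.

```python
# SoC-proportional power sharing sketch for dispersed energy storage units.
def soc_weights(socs, n=2, discharging=True):
    """Share weights ~ SoC^n when discharging, ~ (1 - SoC)^n when charging."""
    w = [s**n if discharging else (1 - s)**n for s in socs]
    total = sum(w)
    return [x/total for x in w]

socs = [0.9, 0.6, 0.3]
print([round(4000*a) for a in soc_weights(socs)])   # split a 4 kW discharge
```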

  34. Numerical simulation of brain tumor growth model using two-stage ...

    African Journals Online (AJOL)

    In recent years, the study of glioma growth has been an active field of research. Mathematical models that describe the proliferation and diffusion properties of the growth have been developed by many researchers. In this work, the performance analysis of the two-stage Gauss-Seidel (TSGS) method to solve the glioma growth ...

  35. Solving no-wait two-stage flexible flow shop scheduling problem with unrelated parallel machines and rework time by the adjusted discrete Multi Objective Invasive Weed Optimization and fuzzy dominance approach

    Energy Technology Data Exchange (ETDEWEB)

    Jafarzadeh, Hassan; Moradinasab, Nazanin; Gerami, Ali

    2017-07-01

    An adjusted discrete Multi-Objective Invasive Weed Optimization (DMOIWO) algorithm, which uses a fuzzy dominance approach for ordering, is proposed to solve the no-wait two-stage flexible flow shop scheduling problem. Design/methodology/approach: The no-wait two-stage flexible flow shop scheduling problem, considering sequence-dependent setup times and probable rework in both stations, different ready times for all jobs, rework times for both stations, and unrelated parallel machines, is investigated in a multi-objective manner with the simultaneous minimization of the maximum job completion time and the average latency. In this study, the parameter setting has been carried out using the Taguchi method based on a quality indicator for better performance of the algorithm. Findings: The results of this algorithm have been compared with those of conventional multi-objective algorithms, and they clearly indicate the greater performance of the proposed algorithm. Originality/value: This study provides an efficient method for solving the multi-objective no-wait two-stage flexible flow shop scheduling problem by considering sequence-dependent setup times, probable rework in both stations, different ready times for all jobs, rework times for both stations and unrelated parallel machines, which are the real constraints.

  36. A method of paralleling computer calculation for two-dimensional kinetic plasma model

    International Nuclear Information System (INIS)

    Brazhnik, V.A.; Demchenko, V.V.; Dem'yanov, V.G.; D'yakov, V.E.; Ol'shanskij, V.V.; Panchenko, V.I.

    1987-01-01

    A method for parallelizing computer calculations, and the OSIRIS program complex that realizes it for numerical plasma simulation by the macroparticle method, are described. The calculation can be carried out either on one BESM-6 computer or simultaneously on two, which is provided by a package of interacting programs functioning on each computer. Program interaction on each computer is based on event techniques realized in OS DISPAK. Parallel calculation on two BESM-6 computers accelerates the computation by a factor of 1.5.

  37. Models of parallel computation: a survey and classification

    Institute of Scientific and Technical Information of China (English)

    ZHANG Yunquan; CHEN Guoliang; SUN Guangzhong; MIAO Qiankun

    2007-01-01

    In this paper, the state-of-the-art parallel computational model research is reviewed. We will introduce various models that were developed during the past decades. According to their targeting architecture features, especially memory organization, we classify these parallel computational models into three generations. These models and their characteristics are discussed based on the three-generation classification. We believe that with the ever increasing speed gap between the CPU and memory systems, incorporating non-uniform memory hierarchy into computational models will become unavoidable. With the emergence of multi-core CPUs, the parallelism hierarchy of current computing platforms becomes more and more complicated. Describing this complicated parallelism hierarchy in future computational models becomes more and more important. A semi-automatic toolkit that can extract model parameters and their values on real computers can reduce the model analysis complexity, thus allowing more complicated models with more parameters to be adopted. Hierarchical memory and hierarchical parallelism will be two very important features that should be considered in future model design and research.

  38. On the robustness of two-stage estimators

    KAUST Repository

    Zhelonkin, Mikhail

    2012-04-01

    The aim of this note is to provide a general framework for the analysis of the robustness properties of a broad class of two-stage models. We derive the influence function, the change-of-variance function, and the asymptotic variance of a general two-stage M-estimator, and provide their interpretations. We illustrate our results in the case of the two-stage maximum likelihood estimator and the two-stage least squares estimator. © 2011.

  39. Analytical model for vibration prediction of two parallel tunnels in a full-space

    Science.gov (United States)

    He, Chao; Zhou, Shunhua; Guo, Peijun; Di, Honggui; Zhang, Xiaohui

    2018-06-01

    This paper presents a three-dimensional analytical model for the prediction of ground vibrations from two parallel tunnels embedded in a full-space. The two tunnels are modelled as cylindrical shells of infinite length, and the surrounding soil is modelled as a full-space with two cylindrical cavities. A virtual interface is introduced to divide the soil into the right layer and the left layer. By transforming the cylindrical waves into plane waves, the solution of wave propagation in the full-space with two cylindrical cavities is obtained. The transformations from plane waves to cylindrical waves are then used to satisfy the boundary conditions on the tunnel-soil interfaces. The proposed model provides a highly efficient tool to predict the ground vibration induced by underground railways, which accounts for the dynamic interaction between neighbouring tunnels. An analysis of the vibration fields produced over a range of frequencies and soil properties is conducted. When the distance between the two tunnels is smaller than three times the tunnel diameter, the interaction between neighbouring tunnels is highly significant, at times in the order of 20 dB. It is therefore necessary to consider the interaction between neighbouring tunnels for the prediction of ground vibrations induced by underground railways.

  40. Two-Stage Fuzzy Portfolio Selection Problem with Transaction Costs

    Directory of Open Access Journals (Sweden)

    Yanju Chen

    2015-01-01

    This paper studies a two-period portfolio selection problem. The problem is formulated as a two-stage fuzzy portfolio selection model with transaction costs, in which the future returns of risky security are characterized by possibility distributions. The objective of the proposed model is to achieve the maximum utility in terms of the expected value and variance of the final wealth. Given the first-stage decision vector and a realization of fuzzy return, the optimal value expression of the second-stage programming problem is derived. As a result, the proposed two-stage model is equivalent to a single-stage model, and the analytical optimal solution of the two-stage model is obtained, which helps us to discuss the properties of the optimal solution. Finally, some numerical experiments are performed to demonstrate the new modeling idea and the effectiveness. The computational results provided by the proposed model show that the more risk-averse investor will invest more wealth in the risk-free security. They also show that the optimal invested amount in risky security increases as the risk-free return decreases and the optimal utility increases as the risk-free return increases, whereas the optimal utility increases as the transaction costs decrease. In most instances the utilities provided by the proposed two-stage model are larger than those provided by the single-stage model.

  41. A coupling model for the two-stage core calculation method with subchannel analysis for boiling water reactors

    International Nuclear Information System (INIS)

    Mitsuyasu, Takeshi; Aoyama, Motoo; Yamamoto, Akio

    2017-01-01

    Highlights: • A coupling model of the two-stage core calculation with subchannel analysis. • BWR fuel assembly parameters are assumed and verified. • The model was evaluated for heterogeneous problems. - Abstract: The two-stage core analysis method is widely used for BWR core analysis. The purpose of this study is to develop a core analysis model coupled with subchannel analysis within the two-stage calculation scheme, which uses an assembly-based thermal-hydraulics calculation in the core analysis. The model changes the 2D lattice physics scheme and couples it with a 3D subchannel analysis, which evaluates the thermal-hydraulic characteristics within the coolant flow area divided into subchannel regions. In order to couple these two analyses, some BWR fuel assembly parameters are assumed and verified. The developed model is evaluated for heterogeneous problems with and without a control rod, and is especially effective for the control-rod-inserted condition. The present model can incorporate the subchannel effect into the current two-stage core calculation method.

  42. Frequency analysis of a two-stage planetary gearbox using two different methodologies

    Science.gov (United States)

    Feki, Nabih; Karray, Maha; Khabou, Mohamed Tawfik; Chaari, Fakher; Haddar, Mohamed

    2017-12-01

    This paper is focused on the characterization of the frequency content of vibration signals issued from a two-stage planetary gearbox. To achieve this goal, two different methodologies are adopted: the lumped-parameter modeling approach and the phenomenological modeling approach. The two methodologies aim to describe the complex vibrations generated by a two-stage planetary gearbox. The phenomenological model describes directly the vibrations as measured by a sensor fixed outside the fixed ring gear with respect to an inertial reference frame, while results from a lumped-parameter model are referenced with respect to a rotating frame and then transferred into an inertial reference frame. Two different case studies of the two-stage planetary gear are adopted to describe the vibration and the corresponding spectra using both models. Each case presents a specific geometry and a specific spectral structure.
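    A quick numeric sketch of the spectral bookkeeping involved: for a stage with a stationary ring gear, the gear mesh frequency follows from the carrier speed, and the carrier of the first stage typically drives the sun of the second. The tooth counts and input speed below are invented for illustration.

```python
# Mesh frequencies of a two-stage planetary train with fixed ring gears.
def stage_mesh(f_sun, Zs, Zr):
    """Fixed ring: carrier speed f_c = f_sun * Zs/(Zs+Zr); mesh freq = Zr * f_c."""
    f_carrier = f_sun * Zs / (Zs + Zr)
    return f_carrier, Zr * f_carrier

f1_c, f1_mesh = stage_mesh(f_sun=25.0, Zs=20, Zr=70)   # stage 1, 25 Hz input
f2_c, f2_mesh = stage_mesh(f_sun=f1_c, Zs=16, Zr=64)   # stage 2 driven by carrier 1
print(f1_mesh, f2_mesh)    # two distinct mesh lines in the spectrum
```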

  43. A two-stage stochastic programming model for the optimal design of distributed energy systems

    International Nuclear Information System (INIS)

    Zhou, Zhe; Zhang, Jianyun; Liu, Pei; Li, Zheng; Georgiadis, Michael C.; Pistikopoulos, Efstratios N.

    2013-01-01

    Highlights: ► The optimal design of distributed energy systems under uncertainty is studied. ► A stochastic model is developed using genetic algorithm and Monte Carlo method. ► The proposed system possesses inherent robustness under uncertainty. ► The inherent robustness is due to energy storage facilities and grid connection. -- Abstract: A distributed energy system is a multi-input and multi-output energy system with substantial energy, economic and environmental benefits. The optimal design of such a complex system under energy demand and supply uncertainty poses significant challenges in terms of both modelling and corresponding solution strategies. This paper proposes a two-stage stochastic programming model for the optimal design of distributed energy systems. A two-stage decomposition based solution strategy is used to solve the optimization problem, with a genetic algorithm performing the search on the first stage variables and a Monte Carlo method dealing with uncertainty in the second stage. The model is applied to the planning of a distributed energy system in a hotel. Detailed computational results are presented and compared with those generated by a deterministic model. The impacts of demand and supply uncertainty on the optimal design of distributed energy systems are systematically investigated using the proposed modelling framework and solution approach.

  44. An inexact mixed risk-aversion two-stage stochastic programming model for water resources management under uncertainty.

    Science.gov (United States)

    Li, W; Wang, B; Xie, Y L; Huang, G H; Liu, L

    2015-02-01

    Uncertainties exist in the water resources system, while traditional two-stage stochastic programming is risk-neutral and compares the random variables (e.g., total benefit) to identify the best decisions. To deal with the risk issues, a risk-aversion inexact two-stage stochastic programming model is developed for water resources management under uncertainty. The model is a hybrid methodology of interval-parameter programming, the conditional value-at-risk measure, and a general two-stage stochastic programming framework. The method extends the traditional two-stage stochastic programming method by enabling uncertainties presented as probability density functions and discrete intervals to be effectively incorporated within the optimization framework. It could not only provide information on the benefits of the allocation plan to the decision makers but also measure the extreme expected loss on the second-stage penalty cost. The developed model was applied to a hypothetical case of water resources management. Results showed that it could help managers generate feasible and balanced risk-aversion allocation plans, and analyze the trade-offs between system stability and economy.
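    For readers unfamiliar with the risk measure named above, the standard Rockafellar-Uryasev form of conditional value-at-risk for a loss L at confidence level α is shown below (the general textbook definition, not the paper's specific formulation):

```latex
% Conditional value-at-risk (CVaR) in Rockafellar-Uryasev form:
\mathrm{CVaR}_{\alpha}(L) \;=\; \min_{\eta \in \mathbb{R}}
\Big\{ \eta + \tfrac{1}{1-\alpha}\,\mathbb{E}\big[(L-\eta)^{+}\big] \Big\}
% Minimizing this term alongside expected cost yields the risk-averse
% two-stage formulation the abstract describes.
```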

  45. A production planning model considering uncertain demand using two-stage stochastic programming in a fresh vegetable supply chain context.

    Science.gov (United States)

    Mateo, Jordi; Pla, Lluis M; Solsona, Francesc; Pagès, Adela

    2016-01-01

    Production planning models are attracting growing interest for use in the primary sector of the economy. The proposed model relies on the formulation of a location model representing a set of farms susceptible of being selected by a grocery shop brand to supply local fresh products under seasonal contracts. The main aim is to minimize overall procurement costs and meet future demand. This kind of problem is rather common in fresh vegetable supply chains, where producers are located in proximity either to processing plants or retailers. The proposed two-stage stochastic model determines which suppliers should be selected for production contracts to ensure high quality products and minimal time from farm to table. Moreover, Lagrangian relaxation and parallel computing algorithms are proposed to solve these instances efficiently in a reasonable computational time. The results obtained show computational gains from our algorithmic proposals compared with the plain CPLEX solver. Furthermore, the results confirm the competitive advantages of using the proposed model for purchase managers in the fresh vegetable industry.

  8. Parallel phase model : a programming model for high-end parallel machines with manycores.

    Energy Technology Data Exchange (ETDEWEB)

    Wu, Junfeng (Syracuse University, Syracuse, NY); Wen, Zhaofang; Heroux, Michael Allen; Brightwell, Ronald Brian

    2009-04-01

    This paper presents a parallel programming model, Parallel Phase Model (PPM), for next-generation high-end parallel machines based on a distributed memory architecture consisting of a networked cluster of nodes with a large number of cores on each node. PPM has a unified high-level programming abstraction that facilitates the design and implementation of parallel algorithms to exploit both the parallelism of the many cores and the parallelism at the cluster level. The programming abstraction will be suitable for expressing both fine-grained and coarse-grained parallelism. It includes a few high-level parallel programming language constructs that can be added as an extension to an existing (sequential or parallel) programming language such as C; and the implementation of PPM also includes a light-weight runtime library that runs on top of an existing network communication software layer (e.g. MPI). Design philosophy of PPM and details of the programming abstraction are also presented. Several unstructured applications that inherently require high-volume random fine-grained data accesses have been implemented in PPM with very promising results.

  9. Maximum likelihood estimation of signal detection model parameters for the assessment of two-stage diagnostic strategies.

    Science.gov (United States)

    Lirio, R B; Dondériz, I C; Pérez Abalo, M C

    1992-08-01

    The methodology of Receiver Operating Characteristic curves based on the signal detection model is extended to evaluate the accuracy of two-stage diagnostic strategies. A computer program is developed for the maximum likelihood estimation of parameters that characterize the sensitivity and specificity of two-stage classifiers according to this extended methodology. Its use is briefly illustrated with data collected in a two-stage screening for auditory defects.
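
    A hedged sketch of the kind of estimation the record describes, reduced to the simplest equal-variance signal detection model: the sensitivity d' and criterion c of one classifier stage are fitted by maximum likelihood from hit and false-alarm counts. The counts are invented; the paper's extended methodology covers the coupled two-stage case.

    import numpy as np
    from scipy.optimize import minimize
    from scipy.stats import norm, binom

    hits, n_signal = 78, 100        # assumed screening outcomes (signal trials)
    fas, n_noise = 22, 100          # assumed false alarms (noise trials)

    def neg_log_lik(params):
        d, c = params
        p_hit = norm.cdf(d / 2 - c)     # P(positive | signal present)
        p_fa = norm.cdf(-d / 2 - c)     # P(positive | signal absent)
        return -(binom.logpmf(hits, n_signal, p_hit)
                 + binom.logpmf(fas, n_noise, p_fa))

    fit = minimize(neg_log_lik, x0=[1.0, 0.0], method="Nelder-Mead")
    d_prime, criterion = fit.x
    print(f"d' = {d_prime:.2f}, c = {criterion:.2f}")

    For a two-stage strategy, one such likelihood would be written per stage and maximized jointly over the shared decision thresholds.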

  10. Sensitivity Analysis in Two-Stage DEA

    Directory of Open Access Journals (Sweden)

    Athena Forghani

    2015-07-01

    Full Text Available Data envelopment analysis (DEA) is a method for measuring the efficiency of peer decision making units (DMUs) which use a set of inputs to produce a set of outputs. In some cases, DMUs have a two-stage structure, in which the first stage utilizes inputs to produce outputs used as the inputs of the second stage to produce final outputs. One important issue in two-stage DEA is the sensitivity of the results of an analysis to perturbations in the data. The current paper looks into a combined model for two-stage DEA and applies the sensitivity analysis to DMUs on the entire frontier. In fact, necessary and sufficient conditions for preserving a DMU's efficiency classification are developed when various data changes are applied to all DMUs.
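
    For concreteness, a sketch of the basic building block behind such analyses: the standard CCR multiplier LP that scores one DMU, solvable with scipy. In a two-stage setting the same LP can be run with (inputs, intermediates) for the first stage and (intermediates, outputs) for the second. The data below are made up.

    import numpy as np
    from scipy.optimize import linprog

    X = np.array([[2.0, 3.0], [4.0, 1.0], [3.0, 2.0]])   # inputs, one row per DMU
    Y = np.array([[1.0], [2.0], [1.5]])                  # outputs

    def ccr_efficiency(o, X, Y):
        """max u.y_o  s.t.  v.x_o = 1,  u.y_j - v.x_j <= 0 for all j,  u, v >= 0."""
        n, m = X.shape
        s = Y.shape[1]
        c = np.concatenate([-Y[o], np.zeros(m)])         # decision vector z = [u, v]
        A_ub = np.hstack([Y, -X])                        # one frontier row per DMU j
        b_ub = np.zeros(n)
        A_eq = np.concatenate([np.zeros(s), X[o]])[None, :]
        res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=[1.0],
                      bounds=(0, None))
        return -res.fun                                  # efficiency score in (0, 1]

    print([round(ccr_efficiency(o, X, Y), 3) for o in range(len(X))])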

  12. Building fast well-balanced two-stage numerical schemes for a model of two-phase flows

    Science.gov (United States)

    Thanh, Mai Duc

    2014-06-01

    We present a set of well-balanced two-stage schemes for an isentropic model of two-phase flows arising from the modeling of deflagration-to-detonation transition in granular materials. The first stage absorbs the source term in nonconservative form into the equilibria. In the second stage, these equilibria are composed into a numerical flux formed by a convex combination of the numerical flux of a stable Lax-Friedrichs-type scheme and that of a higher-order Richtmyer-type scheme. Numerical schemes constructed in this way are expected to possess an attractive property: they are fast and stable. Tests show that the method works for parameter values up to the CFL number, so any value of the parameter between zero and this value is expected to work as well. All the schemes in this family are shown to capture stationary waves and preserve the positivity of the volume fractions. The special values of the parameter 0, 1/2, 1/(1+CFL), and CFL in this family define the Lax-Friedrichs-type, FAST1, FAST2, and FAST3 schemes, respectively. These schemes are shown to give desirable accuracy. The errors and the CPU times of these schemes and of a Roe-type scheme are calculated and compared. The constructed schemes are shown to be well-balanced and faster than the Roe-type scheme.
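
    A sketch of the second-stage flux construction, demonstrated on the scalar Burgers equation (the isentropic two-phase system and the first, well-balancing stage are omitted for brevity): the numerical flux is a convex combination of a Lax-Friedrichs-type flux and a two-step Richtmyer-type flux, with the parameter value 1/(1+CFL) corresponding to the FAST2 member of the family.

    import numpy as np

    f = lambda u: 0.5 * u**2          # Burgers flux

    def combined_flux(uL, uR, dt, dx, theta):
        lf = 0.5 * (f(uL) + f(uR)) - 0.5 * (dx / dt) * (uR - uL)     # Lax-Friedrichs
        u_mid = 0.5 * (uL + uR) - 0.5 * (dt / dx) * (f(uR) - f(uL))  # Richtmyer predictor
        return (1.0 - theta) * lf + theta * f(u_mid)

    nx = 200
    dx = 1.0 / nx
    dt = 0.9 * dx                     # CFL = 0.9 for max|u| = 1
    u = np.where(np.linspace(0.0, 1.0, nx) < 0.5, 1.0, 0.0)  # Riemann initial data
    cfl = dt / dx * np.abs(u).max()
    theta = 1.0 / (1.0 + cfl)         # FAST2 choice of the parameter
    for _ in range(100):              # conservative update sweep
        F = combined_flux(u[:-1], u[1:], dt, dx, theta)
        u[1:-1] -= dt / dx * (F[1:] - F[:-1])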

  13. Calibrationless Parallel Magnetic Resonance Imaging: A Joint Sparsity Model

    Directory of Open Access Journals (Sweden)

    Angshul Majumdar

    2013-12-01

    Full Text Available State-of-the-art parallel MRI techniques either explicitly or implicitly require certain parameters to be estimated, e.g., the sensitivity map for SENSE and SMASH and interpolation weights for GRAPPA and SPIRiT. Thus all these techniques are sensitive to the calibration (parameter estimation) stage. In this work, we have proposed a parallel MRI technique that does not require any calibration but yields reconstruction results that are at par with (or even better than) state-of-the-art methods in parallel MRI. Our proposed method requires solving non-convex analysis and synthesis prior joint-sparsity problems; this work also derives the algorithms for solving them. Experimental validation was carried out on two datasets—an eight-channel brain and an eight-channel Shepp-Logan phantom. Two sampling methods were used—Variable Density Random sampling and non-Cartesian Radial sampling. For the brain data an acceleration factor of 4 was used, and for the phantom an acceleration factor of 6. The reconstruction results were quantitatively evaluated based on the Normalised Mean Squared Error between the reconstructed image and the original. The qualitative evaluation was based on the actual reconstructed images. We compared our work with four state-of-the-art parallel imaging techniques: two calibrated methods—CS SENSE and l1SPIRiT—and two calibration-free techniques—Distributed CS and SAKE. Our method yields better reconstruction results than all of them.

  14. On a model of three-dimensional bursting and its parallel implementation

    Science.gov (United States)

    Tabik, S.; Romero, L. F.; Garzón, E. M.; Ramos, J. I.

    2008-04-01

    A mathematical model for the simulation of three-dimensional bursting phenomena and its parallel implementation are presented. The model consists of four nonlinearly coupled partial differential equations that include fast and slow variables, and exhibits bursting in the absence of diffusion. The differential equations have been discretized by means of a second-order accurate in both space and time, linearly-implicit finite difference method on equally-spaced grids. The resulting system of linear algebraic equations at each time level has been solved by means of the Preconditioned Conjugate Gradient (PCG) method. Three different parallel implementations of the proposed mathematical model have been developed; two of these implementations, i.e., the MPI and the PETSc codes, are based on a message passing paradigm, while the third one, i.e., the OpenMP code, is based on a shared address space paradigm. These three implementations are evaluated on two current high performance parallel architectures, i.e., a dual-processor cluster and a Shared Distributed Memory (SDM) system. A novel representation of the results that emphasizes the most relevant factors affecting the performance of the parallel implementations is proposed. The comparative analysis of the computational results shows that the MPI and the OpenMP implementations are about twice as efficient as the PETSc code on the SDM system. It is also shown that, for the conditions reported here, the nonlinear dynamics of the three-dimensional bursting phenomena exhibits three stages characterized by asynchronous, synchronous and then asynchronous oscillations, before a quiescent state is reached. It is also shown that the fast system reaches steady state in much less time than the slow variables.
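
    A minimal sketch of the linear-algebra kernel named above: a Jacobi-preconditioned conjugate gradient solve, here applied to a generic sparse 2-D Laplacian stand-in rather than the linearly-implicit bursting discretization.

    import numpy as np
    import scipy.sparse as sp
    from scipy.sparse.linalg import cg, LinearOperator

    n = 64
    lap1d = sp.diags([-1, 2, -1], [-1, 0, 1], shape=(n, n))
    A = sp.kronsum(lap1d, lap1d).tocsr()          # 2-D Laplacian, symmetric positive definite
    b = np.ones(A.shape[0])

    # Jacobi preconditioner: M approximates A^{-1} by the inverse of the diagonal.
    inv_diag = 1.0 / A.diagonal()
    M = LinearOperator(A.shape, matvec=lambda x: inv_diag * x)

    x, info = cg(A, b, M=M)
    print("converged" if info == 0 else f"cg returned {info}")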

  15. Utility-free heuristic models of two-option choice can mimic predictions of utility-stage models under many conditions

    Science.gov (United States)

    Piantadosi, Steven T.; Hayden, Benjamin Y.

    2015-01-01

    Economists often model choices as if decision-makers assign each option a scalar value variable, known as utility, and then select the option with the highest utility. It remains unclear whether as-if utility models describe real mental and neural steps in choice. Although choices alone cannot prove the existence of a utility stage, utility transformations are often taken to provide the most parsimonious or psychologically plausible explanation for choice data. Here, we show that it is possible to mathematically transform a large set of common utility-stage two-option choice models (specifically ones in which dimensions can be decomposed into additive functions) into a heuristic model (specifically, a dimensional prioritization heuristic) that has no utility computation stage. We then show that under a range of plausible assumptions, both classes of model predict similar neural responses. These results highlight the difficulties in using neuroeconomic data to infer the existence of a value stage in choice. PMID:25914613

  17. Assessing vanadium and arsenic exposure of people living near a petrochemical complex with two-stage dispersion models

    International Nuclear Information System (INIS)

    Chio, Chia-Pin; Yuan, Tzu-Hsuen; Shie, Ruei-Hao; Chan, Chang-Chuan

    2014-01-01

    Highlights: • Two-stage dispersion models can estimate exposures to hazardous air pollutants. • Spatial distribution of V levels is derived for sources without known emission rates. • A distance-to-source gradient is found for V levels from a petrochemical complex. • Two-stage dispersion is useful for modeling air pollution in resource-limited areas. - Abstract: The goal of this study is to demonstrate that it is possible to construct a two-stage dispersion model empirically for the purpose of estimating air pollution levels in the vicinity of petrochemical plants. We studied oil refineries and coal-fired power plants in the No. 6 Naphtha Cracking Complex, a 2,603-ha area situated on the central west coast of Taiwan. The pollutants targeted were vanadium (V) from oil refineries and arsenic (As) from coal-fired power plants. We applied a backward fitting method to determine emission rates of V and As, with 192 PM10 filters originally collected between 2009 and 2012. Our first-stage model estimated the emission rates of V and As (median and 95% confidence intervals) at 0.0202 (0.0040–0.1063) and 0.1368 (0.0398–0.4782) g/s, respectively. In our second-stage model, the predicted zone-average concentrations showed a strong correlation with V, but a poor correlation with As. Our findings show that two-stage dispersion models are relatively precise for estimating V levels at residents’ addresses near the petrochemical complex, but they did not work as well for As levels. In conclusion, our model-based approach can be widely used for modeling exposure to air pollution from industrial areas in countries with limited resources.
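
    A toy version of the first-stage backward fitting: in Gaussian-plume-type dispersion models the receptor concentration is linear in the emission rate Q, so Q can be back-calculated from filter measurements by least squares. The dispersion factors, receptor layout and measurements below are stand-ins, not the study's meteorology.

    import numpy as np

    def plume_factor(x, y, u=3.0, a=0.08, b=0.06):
        """Assumed ground-level dispersion factor chi/Q for a receptor at (x, y) metres."""
        sig_y, sig_z = a * x, b * x               # crude power-law dispersion widths
        return np.exp(-y**2 / (2 * sig_y**2)) / (np.pi * u * sig_y * sig_z)

    # Receptor locations (m) and measured concentrations (g/m3), both assumed.
    xy = np.array([[500, 0], [800, 100], [1200, -150], [2000, 50]])
    c_obs = np.array([2.1, 0.9, 0.35, 0.12]) * 1e-6

    fac = plume_factor(xy[:, 0], xy[:, 1])
    Q_hat = (c_obs @ fac) / (fac @ fac)           # least-squares emission rate (g/s)
    print(f"estimated Q = {Q_hat:.4f} g/s")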

  18. A two-stage stochastic rule-based model to determine pre-assembly buffer content

    Science.gov (United States)

    Gunay, Elif Elcin; Kula, Ufuk

    2018-01-01

    This study considers the instant decision-making needs of automobile manufacturers for resequencing vehicles before final assembly (FA). We propose a rule-based two-stage stochastic model to determine the number of spare vehicles that should be kept in the pre-assembly buffer to restore the sequence altered by paint defects and upstream department constraints. The first stage of the model decides the spare vehicle quantities, while the second stage recovers the scrambled sequence with respect to pre-defined rules. The problem is solved by a sample average approximation (SAA) algorithm. We conduct a numerical study to compare the solutions of the heuristic model with optimal ones and provide the following insights: (i) as the mismatch between the paint entrance and scheduled sequences decreases, the rule-based heuristic model recovers the scrambled sequence as well as the optimal resequencing model; (ii) the rule-based model is more sensitive to the mismatch between the paint entrance and scheduled sequences when recovering the scrambled sequence; (iii) as the defect rate increases, the difference in recovery effectiveness between the rule-based heuristic and optimal solutions increases; (iv) as buffer capacity increases, the recovery effectiveness of the optimization model outperforms the heuristic model; (v) as expected, the rule-based model holds more inventory than the optimization model.
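
    A sketch of the sample average approximation (SAA) idea for the first-stage decision, with the resequencing recourse collapsed to an assumed newsvendor-style penalty; the defect-count distribution and cost figures are invented for illustration.

    import numpy as np

    rng = np.random.default_rng(7)

    def second_stage(spares, defects):
        """Assumed recourse: unrecovered sequence positions incur a penalty."""
        unrecovered = np.maximum(defects - spares, 0)
        return 10.0 * unrecovered          # penalty per scrambled position

    def saa_solve(n_scenarios=500, max_spares=20, holding=1.0):
        defects = rng.binomial(n=60, p=0.08, size=n_scenarios)  # assumed paint defects
        costs = [holding * s + second_stage(s, defects).mean()
                 for s in range(max_spares + 1)]
        return int(np.argmin(costs)), min(costs)

    # Repeating over independent sample batches gauges the SAA estimator's spread.
    print([saa_solve() for _ in range(10)])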

  19. Hourly cooling load forecasting using time-indexed ARX models with two-stage weighted least squares regression

    International Nuclear Information System (INIS)

    Guo, Yin; Nazarian, Ehsan; Ko, Jeonghan; Rajurkar, Kamlakar

    2014-01-01

    Highlights: • Developed hourly-indexed ARX models for robust cooling-load forecasting. • Proposed a two-stage weighted least-squares regression approach. • Considered the effect of outliers as well as trend of cooling load and weather patterns. • Included higher order terms and day type patterns in the forecasting models. • Demonstrated better accuracy compared with some ARX and ANN models. - Abstract: This paper presents a robust hourly cooling-load forecasting method based on time-indexed autoregressive with exogenous inputs (ARX) models, in which the coefficients are estimated through a two-stage weighted least squares regression. The prediction method includes a combination of two separate time-indexed ARX models to improve prediction accuracy of the cooling load over different forecasting periods. The two-stage weighted least-squares regression approach in this study is robust to outliers and suitable for fast and adaptive coefficient estimation. The proposed method is tested on a large-scale central cooling system in an academic institution. The numerical case studies show the proposed prediction method performs better than some ANN and ARX forecasting models for the given test data set
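
    A hedged sketch of a two-stage weighted least-squares fit for one time-indexed ARX model: an ordinary least-squares pass supplies residuals, whose magnitudes set the weights that down-weight outliers in the second pass. The regressors (lagged load plus a temperature input) and the inverse-residual weighting rule are illustrative choices, not the paper's exact estimator.

    import numpy as np

    def two_stage_wls(y, X, eps=1e-6):
        beta_ols, *_ = np.linalg.lstsq(X, y, rcond=None)      # stage 1: OLS
        resid = y - X @ beta_ols
        w = 1.0 / np.maximum(np.abs(resid), eps)              # outlier down-weighting
        sw = np.sqrt(w)
        beta_wls, *_ = np.linalg.lstsq(sw[:, None] * X, sw * y, rcond=None)
        return beta_wls                                       # stage 2: weighted fit

    rng = np.random.default_rng(3)
    T = rng.normal(25, 4, 200)                                # exogenous temperature
    load = np.zeros(200)
    for t in range(1, 200):
        load[t] = 0.7 * load[t - 1] + 2.0 * T[t] + rng.normal(0, 1)
    load[50] += 40                                            # inject an outlier
    X = np.column_stack([np.ones(199), load[:-1], T[1:]])
    print(two_stage_wls(load[1:], X))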

  20. GENERALISED MODEL BASED CONFIDENCE INTERVALS IN TWO STAGE CLUSTER SAMPLING

    Directory of Open Access Journals (Sweden)

    Christopher Ouma Onyango

    2010-09-01

    Full Text Available Chambers and Dorfman (2002) constructed bootstrap confidence intervals in model based estimation for finite population totals assuming that auxiliary values are available throughout a target population and that the auxiliary values are independent. They also assumed that the cluster sizes are known throughout the target population. We now extend this to two-stage sampling in which the cluster sizes are known only for the sampled clusters, and we therefore predict the unobserved part of the population total. Jan and Elinor (2008) have done similar work, but unlike them, we use a general model in which the auxiliary values are not necessarily independent. We demonstrate that the asymptotic properties of our proposed estimator and its coverage rates are better than those constructed under the model-assisted local polynomial regression model.
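
    A minimal sketch of a bootstrap interval for a finite-population total under two-stage cluster sampling, resampling whole clusters to respect the design. For brevity it assumes a simple expansion estimator with fully enumerated sampled clusters, which is cruder than the paper's general model; the data are synthetic.

    import numpy as np

    rng = np.random.default_rng(11)
    N_clusters = 100                                   # clusters in the population
    sample = [rng.normal(50, 10, rng.integers(5, 15))  # observed units per cluster
              for _ in range(12)]                      # 12 sampled clusters

    def total_estimate(clusters):
        # Expansion estimator: scale the mean cluster total up to the population.
        cluster_totals = np.array([c.sum() for c in clusters])
        return N_clusters * cluster_totals.mean()

    boots = []
    for _ in range(2000):
        idx = rng.integers(len(sample), size=len(sample))   # resample clusters
        boots.append(total_estimate([sample[i] for i in idx]))
    lo, hi = np.percentile(boots, [2.5, 97.5])
    print(f"95% bootstrap CI for the total: ({lo:.0f}, {hi:.0f})")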

  1. Two-fluid and parallel compressibility effects in tokamak plasmas

    International Nuclear Information System (INIS)

    Sugiyama, L.E.; Park, W.

    1998-01-01

    The MHD, or single fluid, model for a plasma has long been known to provide a surprisingly good description of much of the observed nonlinear dynamics of confined plasmas, considering its simple nature compared to the complexity of the real system. On the other hand, some of the supposed agreement arises from the lack of the detailed measurements that are needed to distinguish MHD from more sophisticated models that incorporate slower time scale processes. At present, a number of factors combine to make models beyond MHD of practical interest. Computational considerations still favor fluid rather than particle models for description of the full plasma, and suggest an approach that starts from a set of fluid-like equations that extends MHD to slower time scales and more accurate parallel dynamics. This paper summarizes a set of two-fluid equations for toroidal (tokamak) geometry that has been developed and tested as the MH3D-T code [1] and some results from the model. The electrons and ions are described as separate fluids. The code and its original MHD version, MH3D [2], are the first numerical, initial value models in toroidal geometry that include the full 3D (fluid) compressibility and electromagnetic effects. Previous nonlinear MHD codes for toroidal geometry have, in practice, neglected the plasma density evolution, on the grounds that MHD plasmas are only weakly compressible and that the background density variation is weaker than the temperature variation. Analytically, the common use of toroidal plasma models based on aspect ratio expansion, such as reduced MHD, has reinforced this impression, since this ordering reduces plasma compressibility effects. For two-fluid plasmas, the density evolution cannot be neglected in principle, since it provides the basic driving energy for the diamagnetic drifts of the electrons and ions perpendicular to the magnetic field. It also strongly influences the parallel dynamics, in combination with the parallel thermal

  2. Parallel solutions of the two-group neutron diffusion equations

    International Nuclear Information System (INIS)

    Zee, K.S.; Turinsky, P.J.

    1987-01-01

    Recent efforts to adapt various numerical solution algorithms to parallel computer architectures have addressed the possibility of substantially reducing the running time of few-group neutron diffusion calculations. The authors have developed an efficient iterative parallel algorithm and an associated computer code for the rapid solution of the finite difference method representation of the two-group neutron diffusion equations on the CRAY X/MP-48 supercomputer having multi-CPUs and vector pipelines. For realistic simulation of light water reactor cores, the code employs a macroscopic depletion model with trace capability for selected fission product transients and critical boron. In addition, moderator and fuel temperature feedback models are incorporated into the code. The physics models used in the code were benchmarked against qualified codes and proved accurate. This work is an extension of previous work in that various feedback effects are accounted for in the system; the entire code is structured to accommodate extensive vectorization; and an additional parallelism by multitasking is achieved not only for the solution of the matrix equations associated with the inner iterations but also for the other segments of the code, e.g., outer iterations.

  3. Impact of two-stage turbocharging architectures on pumping losses of automotive engines based on an analytical model

    International Nuclear Information System (INIS)

    Galindo, J.; Serrano, J.R.; Climent, H.; Varnier, O.

    2010-01-01

    The present work is an analytical study of two-stage turbocharging configuration performance. The aim of this work is to understand the influence of different two-stage-architecture parameters in order to optimize the use of exhaust manifold gas energy and to aid the decision-making process. An analytical model giving the relationship between the global compression ratio and the global expansion ratio is developed as a function of basic engine and turbocharging system parameters. Having an analytical solution, the influence of different variables, such as the expansion ratio between the HP and LP turbines, intercooler efficiency, turbocharger efficiencies, cooling fluid temperature and exhaust temperature, is studied independently. Engine simulations with the proposed analytical model have been performed to analyze the influence of these different parameters on brake thermal efficiency and pumping mean effective pressure. The results obtained show the overall performance of the two-stage system for the whole operative range and characterize the optimum control of the elements for each operative condition. The model was also used to compare single-stage and two-stage architecture performance for the same engine operative conditions. The benefits and limits, in terms of breathing capabilities and brake thermal efficiency, of each type of system are presented and analyzed.
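
    A worked sketch of the kind of relationship such an analytical model captures: for two compressor stages with an intercooler, the outlet temperature and total specific work follow from standard compressor relations. The efficiencies, pressure-ratio split and intercooler effectiveness below are assumed values, not the paper's calibration.

    gamma, cp = 1.4, 1005.0            # air properties (J/kg.K)

    def stage_outlet_T(T_in, pr, eta):
        """Outlet temperature of one compressor stage with isentropic efficiency eta."""
        return T_in * (1.0 + (pr**((gamma - 1) / gamma) - 1.0) / eta)

    def two_stage_work(T1, pr_total, split, eta_lp, eta_hp, cool_eff, T_coolant):
        pr_lp, pr_hp = pr_total**split, pr_total**(1.0 - split)
        T2 = stage_outlet_T(T1, pr_lp, eta_lp)                # after LP stage
        T2c = T2 - cool_eff * (T2 - T_coolant)                # after intercooler
        T3 = stage_outlet_T(T2c, pr_hp, eta_hp)               # after HP stage
        return cp * ((T2 - T1) + (T3 - T2c)), T3              # work (J/kg), outlet T

    w, T_out = two_stage_work(T1=300.0, pr_total=4.0, split=0.5,
                              eta_lp=0.75, eta_hp=0.75,
                              cool_eff=0.8, T_coolant=320.0)
    print(f"specific work = {w/1000:.1f} kJ/kg, outlet T = {T_out:.0f} K")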

  4. An adaptive two-stage analog/regression model for probabilistic prediction of small-scale precipitation in France

    Science.gov (United States)

    Chardon, Jérémy; Hingray, Benoit; Favre, Anne-Catherine

    2018-01-01

    Statistical downscaling models (SDMs) are often used to produce local weather scenarios from large-scale atmospheric information. SDMs include transfer functions which are based on a statistical link identified from observations between local weather and a set of large-scale predictors. As physical processes driving surface weather vary in time, the most relevant predictors and the regression link are likely to vary in time too. This is well known for precipitation, for instance, and the link is thus often estimated after some seasonal stratification of the data. In this study, we present a two-stage analog/regression model where the regression link is estimated from atmospheric analogs of the current prediction day. Atmospheric analogs are identified from fields of geopotential heights at 1000 and 500 hPa. For the regression stage, two generalized linear models are further used to model the probability of precipitation occurrence and the distribution of non-zero precipitation amounts, respectively. The two-stage model is evaluated for the probabilistic prediction of small-scale precipitation over France. It noticeably improves the skill of the prediction for both precipitation occurrence and amount. As the analog days vary from one prediction day to another, the atmospheric predictors selected in the regression stage and the values of the corresponding regression coefficients can vary from one prediction day to another. The model thus allows for day-to-day adaptive and tailored downscaling. It can also reveal specific predictors for peculiar and non-frequent weather configurations.
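
    A hedged sketch of the two-stage analog/regression idea: the K days whose geopotential fields are closest to the prediction day are selected as analogs, and the occurrence GLM is fitted on those analog days only (the amounts GLM would be handled analogously). The predictor fields, labels, K and the use of scikit-learn's logistic GLM are synthetic stand-ins for the paper's setup.

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(5)
    n_days, n_grid = 3000, 50
    Z500 = rng.normal(size=(n_days, n_grid))          # archive of predictor fields
    predictors = rng.normal(size=(n_days, 3))         # day-level regression inputs
    rain = (rng.random(n_days) < 0.4).astype(int)     # wet/dry flags (synthetic)

    def predict_occurrence(day_field, day_predictors, K=200):
        # Stage 1: atmospheric analogs by Euclidean distance on the field.
        d = np.linalg.norm(Z500 - day_field, axis=1)
        analogs = np.argsort(d)[:K]
        # Stage 2: occurrence GLM fitted on the analog days only.
        glm = LogisticRegression().fit(predictors[analogs], rain[analogs])
        return glm.predict_proba(day_predictors[None, :])[0, 1]

    p = predict_occurrence(rng.normal(size=n_grid), rng.normal(size=3))
    print(f"P(precipitation) = {p:.2f}")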

  5. Accuracy of the One-Stage and Two-Stage Impression Techniques: A Comparative Analysis.

    Science.gov (United States)

    Jamshidy, Ladan; Mozaffari, Hamid Reza; Faraji, Payam; Sharifi, Roohollah

    2016-01-01

    Introduction. One of the main steps of impression is the selection and preparation of an appropriate tray. Hence, the present study aimed to analyze and compare the accuracy of one- and two-stage impression techniques. Materials and Methods. A resin laboratory-made model, as the first molar, was prepared by a standard method for full crowns with a processed preparation finish line of 1 mm depth and a convergence angle of 3-4°. Impressions were made 20 times with the one-stage technique and 20 times with the two-stage technique using an appropriate tray. To measure the marginal gap, the distance between the restoration margin and the preparation finish line of the plaster dies was vertically determined in the mid mesial, distal, buccal, and lingual (MDBL) regions by a stereomicroscope using a standard method. Results. The results of the independent test showed that the mean value of the marginal gap obtained by the one-stage impression technique was higher than that of the two-stage impression technique. Further, there was no significant difference between the one- and two-stage impression techniques in the mid buccal region, but a significant difference was reported between the two impression techniques in the MDL regions and in general. Conclusion. The findings of the present study indicated higher accuracy for the two-stage impression technique than for the one-stage impression technique.

  6. Plastic collapse behavior for thin tube with two parallel cracks

    International Nuclear Information System (INIS)

    Moon, Seong In; Chang, Yoon Suk; Kim, Young Jin; Lee, Jin Ho; Song, Myung Ho; Choi, Young Hwan; Kim, Joung Soo

    2004-01-01

    The current plugging criterion is known to be too conservative for some locations and types of defects. Many defects detected during in-service inspection take the form of multiple cracks at the top of the tube sheet, but there is no reliable plugging criterion for steam generator tubes with multiple cracks. Most of the previous studies on multiple cracks are confined to elastic analyses, and only a few studies have been done on steam generator tubes failed by plastic collapse. Therefore, it is necessary to develop models which can be used to estimate the failure behavior of steam generator tubes with multiple cracks. The objective of this study is to verify the applicability of the optimum local failure prediction models proposed in the previous study. For this, plastic collapse tests are performed with tube specimens containing two parallel through-wall cracks. The plastic collapse load of steam generator tubes containing two parallel through-wall cracks is also estimated by using the proposed optimum global failure model, and the applicability is investigated by comparing the estimated results with the experimental results. Also, the interaction effect between the two cracks was evaluated to explain the plastic collapse behavior.

  7. Depth-Averaged Non-Hydrostatic Hydrodynamic Model Using a New Multithreading Parallel Computing Method

    Directory of Open Access Journals (Sweden)

    Ling Kang

    2017-03-01

    Full Text Available Compared to the hydrostatic hydrodynamic model, the non-hydrostatic hydrodynamic model can accurately simulate flows that feature vertical accelerations. The model’s low computational efficiency severely restricts its wider application. This paper proposes a non-hydrostatic hydrodynamic model based on a multithreading parallel computing method. The horizontal momentum equation is obtained by integrating the Navier–Stokes equations from the bottom to the free surface. The vertical momentum equation is approximated by the Keller-box scheme. A two-step method is used to solve the model equations. A parallel strategy based on block decomposition computation is utilized. The original computational domain is subdivided into two subdomains that are physically connected via a virtual boundary technique. Two sub-threads are created and tasked with the computation of the two subdomains. The producer–consumer model and the thread lock technique are used to achieve synchronous communication between sub-threads. The validity of the model was verified by solitary wave propagation experiments over a flat bottom and a slope, followed by two sinusoidal wave propagation experiments over a submerged breakwater. The parallel computing method proposed here was found to effectively enhance computational efficiency and save 20%–40% of the computation time compared to serial computing. The parallel speedup and parallel efficiency are approximately 1.45 and 72%, respectively. The parallel computing method makes a contribution to the popularization of non-hydrostatic models.
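
    A minimal sketch of the block-decomposition threading scheme: two sub-threads each advance one subdomain and meet at barriers so that the virtual-boundary (ghost) cells stay consistent between steps. The three-point smoothing stencil is a stand-in for the model's two-step solver; Python's threading module replaces the paper's producer–consumer machinery.

    import threading
    import numpy as np

    n, steps = 200, 100
    eta = np.random.default_rng(9).normal(0.0, 0.01, n)   # free-surface field
    blocks = [slice(1, n // 2), slice(n // 2, n - 1)]     # two connected subdomains
    barrier = threading.Barrier(len(blocks))

    def worker(block):
        for _ in range(steps):
            barrier.wait()                                # neighbours' writes visible
            new = 0.25 * eta[block.start - 1:block.stop - 1] \
                + 0.50 * eta[block] \
                + 0.25 * eta[block.start + 1:block.stop + 1]
            barrier.wait()                                # everyone has read the old field
            eta[block] = new                              # write back own subdomain only

    threads = [threading.Thread(target=worker, args=(b,)) for b in blocks]
    for t in threads: t.start()
    for t in threads: t.join()
    print(eta[:5])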

  9. Comparative effectiveness of one-stage versus two-stage basilic vein transposition arteriovenous fistulas.

    Science.gov (United States)

    Ghaffarian, Amir A; Griffin, Claire L; Kraiss, Larry W; Sarfati, Mark R; Brooke, Benjamin S

    2018-02-01

    Basilic vein transposition (BVT) fistulas may be performed as either a one-stage or two-stage operation, although there is debate as to which technique is superior. This study was designed to evaluate the comparative clinical efficacy and cost-effectiveness of one-stage vs two-stage BVT. We identified all patients at a single large academic hospital who had undergone creation of either a one-stage or two-stage BVT between January 2007 and January 2015. Data evaluated included patient demographics, comorbidities, medication use, reasons for abandonment, and interventions performed to maintain patency. Costs were derived from the literature, and effectiveness was expressed in quality-adjusted life-years (QALYs). We analyzed primary and secondary functional patency outcomes as well as survival during follow-up between one-stage and two-stage BVT procedures using multivariate Cox proportional hazards models and Kaplan-Meier analysis with log-rank tests. The incremental cost-effectiveness ratio was used to determine cost savings. We identified 131 patients in whom 57 (44%) one-stage BVT and 74 (56%) two-stage BVT fistulas were created during the study period by 8 different vascular surgeons, each of whom performed both procedures. There was no significant difference in the mean age, male gender, white race, diabetes, coronary disease, or medication profile among patients undergoing one- vs two-stage BVT. After fistula transposition, the median follow-up time was 8.3 months (interquartile range, 3-21 months). Primary patency rates of one-stage BVT were 56% at 12-month follow-up, whereas primary patency rates of two-stage BVT were 72% at 12-month follow-up. Patients undergoing two-stage BVT also had significantly higher rates of secondary functional patency at 12 months (57% for one-stage BVT vs 80% for two-stage BVT) and 24 months (44% for one-stage BVT vs 73% for two-stage BVT) of follow-up (P < .001 using log-rank test). However, there was no significant difference

  11. Intelligent spatial ecosystem modeling using parallel processors

    International Nuclear Information System (INIS)

    Maxwell, T.; Costanza, R.

    1993-01-01

    Spatial modeling of ecosystems is essential if one's modeling goals include developing a relatively realistic description of past behavior and predictions of the impacts of alternative management policies on future ecosystem behavior. Development of these models has been limited in the past by the large amount of input data required and the difficulty of even large mainframe serial computers in dealing with large spatial arrays. These two limitations have begun to erode with the increasing availability of remote sensing data and GIS systems to manipulate it, and the development of parallel computer systems which allow computation of large, complex, spatial arrays. Although many forms of dynamic spatial modeling are highly amenable to parallel processing, the primary focus in this project is on process-based landscape models. These models simulate spatial structure by first compartmentalizing the landscape into some geometric design and then describing flows within compartments and spatial processes between compartments according to location-specific algorithms. The authors are currently building and running parallel spatial models at the regional scale for the Patuxent River region in Maryland, the Everglades in Florida, and Barataria Basin in Louisiana. The authors are also planning a project to construct a series of spatially explicit linked ecological and economic simulation models aimed at assessing the long-term potential impacts of global climate change

  12. Two-stage model of development of heterogeneous uranium-lead systems in zircon

    International Nuclear Information System (INIS)

    Mel'nikov, N.N.; Zevchenkov, O.A.

    1985-01-01

    The behaviour of the isotope systems of multiphase zircons under two-stage disturbance is considered. The results of the calculations show that linear correlations on the concordia diagram can be explained by two-stage opening of the U-Pb systems of cogenetic zircons, if the zircon is considered physically heterogeneous, with its different parts losing different fractions of the accumulated radiogenic lead. The ''metamorphism ages'' obtained from such two-stage opening zircons are intermediate and have no geochronological significance, while the ''crystallization ages'' remain rather close to the real ones. In some cases, two-stage opening zircons can be diagnosed by the discordance of their crystal components.

  13. CFD modeling of two-stage ignition in a rapid compression machine: Assessment of zero-dimensional approach

    Energy Technology Data Exchange (ETDEWEB)

    Mittal, Gaurav [Department of Mechanical Engineering, The University of Akron, Akron, OH 44325 (United States); Raju, Mandhapati P. [General Motor R and D Tech Center, Warren, MI 48090 (United States); Sung, Chih-Jen [Department of Mechanical Engineering, University of Connecticut, Storrs, CT 06269 (United States)

    2010-07-15

    In modeling rapid compression machine (RCM) experiments, a zero-dimensional approach is commonly used along with an associated heat loss model. The adequacy of such an approach has not been validated for hydrocarbon fuels. The existence of multi-dimensional effects inside an RCM due to the boundary layer, roll-up vortex, non-uniform heat release, and piston crevice could result in deviation from the zero-dimensional assumption, particularly for hydrocarbons exhibiting two-stage ignition and strong thermokinetic interactions. The objective of this investigation is to assess the adequacy of the zero-dimensional approach in modeling RCM experiments under conditions of two-stage ignition and negative temperature coefficient (NTC) response. Computational fluid dynamics simulations are conducted for n-heptane ignition in an RCM, and the validity of the zero-dimensional approach is assessed through comparisons over the entire NTC region. Results show that the zero-dimensional model based on the approach of 'adiabatic volume expansion' performs very well in predicting the first-stage ignition delays, although quantitative discrepancy in the prediction of the total ignition delays and the pressure rise of the first-stage ignition is noted even when the roll-up vortex is suppressed and a well-defined homogeneous core is retained within the RCM. Furthermore, the discrepancy is pressure dependent and decreases as the compressed pressure is increased. Also, as the ignition response becomes single-stage at higher compressed temperatures, the discrepancy from the zero-dimensional simulations reduces. Despite some quantitative discrepancy, the zero-dimensional modeling approach is deemed satisfactory from the viewpoint of ignition delay simulation. (author)

  14. A diffusion model for two parallel queues with processor sharing: transient behavior and asymptotics

    Directory of Open Access Journals (Sweden)

    Charles Knessl

    1999-01-01

    Full Text Available We consider two identical, parallel M/M/1 queues. Both queues are fed by a Poisson arrival stream of rate λ and have service rates equal to μ. When both queues are non-empty, the two systems behave independently of each other. However, when one of the queues becomes empty, the corresponding server helps in the other queue. This is called head-of-the-line processor sharing. We study this model in the heavy traffic limit, where ρ=λ/μ→1. We formulate the heavy traffic diffusion approximation and explicitly compute the time-dependent probability distribution of the diffusion approximation to the joint queue length process. We then evaluate the solution asymptotically for large values of space and/or time. This leads to simple expressions that show how the process achieves its steady state and other transient aspects.
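
    An illustrative discrete-event simulation of the model before any diffusion approximation: two M/M/1 queues in which an idle server helps the other queue, doubling its effective service rate. Parameters are set near heavy traffic (ρ close to 1) to mimic the regime the record analyses.

    import numpy as np

    def simulate(lam=0.95, mu=1.0, T=50_000, seed=2):
        rng = np.random.default_rng(seed)
        t, q = 0.0, np.array([0, 0])
        area = np.zeros(2)                       # time-integrals of queue lengths
        while t < T:
            # Each non-empty queue serves at rate mu, plus mu more if the other is empty.
            srv = np.where(q > 0, mu, 0.0) + np.where((q > 0) & (q[::-1] == 0), mu, 0.0)
            rates = np.concatenate([[lam, lam], srv])
            total = rates.sum()
            dt = rng.exponential(1.0 / total)
            area += q * dt
            t += dt
            event = rng.choice(4, p=rates / total)
            if event < 2:
                q[event] += 1                    # arrival to queue 0 or 1
            else:
                q[event - 2] -= 1                # departure from queue 0 or 1
        return area / t                          # time-average queue lengths

    print(simulate())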

  15. A preventive maintenance model with a two-level inspection policy based on a three-stage failure process

    International Nuclear Information System (INIS)

    Wang, Wenbin; Zhao, Fei; Peng, Rui

    2014-01-01

    Inspection is always an important preventive maintenance (PM) activity and can have different depths and cover all or part of a plant system. This paper introduces a two-level inspection policy model for a single-component plant system based on a three-stage failure process. Such a failure process divides the system's life into three stages: the good, minor defective and severe defective stages. The first level of inspection, the minor inspection, can only identify the minor defective stage with a certain probability, but can always reveal the severe defective stage. The major inspection can however identify both defective stages perfectly. Once the system is found to be in the minor defective stage, a shortened inspection interval is adopted. If however the system is found to be in the severe defective stage, we may delay the maintenance action if the time to the next planned PM window is less than a threshold level, but otherwise replace immediately. This corresponds to a maintenance policy well adopted in practice, such as periodic inspections with planned PMs. A numerical example is presented to demonstrate the proposed model by comparing it with other models. - Highlights: • The system's deterioration goes through a three-stage process, namely, normal, minor defective and severe defective. • Two levels of inspection are proposed, e.g., minor and major inspections. • Once the minor defective stage is found, instead of taking a maintenance action, a shortened inspection interval is recommended. • When the severe defective stage is found, we delay the maintenance according to the threshold to the next PM. • The decision variables are the inspection intervals and the threshold to PM

  16. A novel two-level dynamic parallel data scheme for large 3-D SN calculations

    International Nuclear Information System (INIS)

    Sjoden, G.E.; Shedlock, D.; Haghighat, A.; Yi, C.

    2005-01-01

    We introduce a new dynamic parallel memory optimization scheme for executing large scale 3-D discrete ordinates (Sn) simulations on distributed memory parallel computers. In order for parallel transport codes to be truly scalable, they must use parallel data storage, where only the variables that are locally computed are locally stored. Even with parallel data storage for the angular variables, cumulative storage requirements for large discrete ordinates calculations can be prohibitive. To address this problem, Memory Tuning has been implemented into the PENTRAN 3-D parallel discrete ordinates code as an optimized, two-level ('large' array, 'small' array) parallel data storage scheme. Memory Tuning can be described as the process of parallel data memory optimization. Memory Tuning dynamically minimizes the amount of required parallel data in allocated memory on each processor using a statistical sampling algorithm. This algorithm is based on the integral average and standard deviation of the number of fine meshes contained in each coarse mesh in the global problem. Because PENTRAN only stores the locally computed problem phase space, optimal two-level memory assignments can be unique on each node, depending upon the parallel decomposition used (hybrid combinations of angular, energy, or spatial). As demonstrated in the two large discrete ordinates models presented (a storage cask and an OECD MOX Benchmark), Memory Tuning can save a substantial amount of memory per parallel processor, allowing one to accomplish very large scale Sn computations. (authors)

  17. A two-stage storage routing model for green roof runoff detention.

    Science.gov (United States)

    Vesuviano, Gianni; Sonnenwald, Fred; Stovin, Virginia

    2014-01-01

    Green roofs have been adopted in urban drainage systems to control the total quantity and volumetric flow rate of runoff. Modern green roof designs are multi-layered, their main components being vegetation, substrate and, in almost all cases, a separate drainage layer. Most current hydrological models of green roofs combine the modelling of the separate layers into a single process; these models have limited predictive capability for roofs not sharing the same design. An adaptable, generic, two-stage model for a system consisting of a granular substrate over a hard plastic 'egg box'-style drainage layer and fibrous protection mat is presented. The substrate and drainage layer/protection mat are modelled separately by previously verified sub-models. Controlled storm events are applied to a green roof system in a rainfall simulator. The modelled runoff time series is compared to the monitored runoff for each storm event. The modelled runoff profiles are accurate (mean Rt² = 0.971), but further characterization of the substrate component is required for the model to be generically applicable to other roof configurations with a different substrate.
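
    A sketch of the two-stage storage routing structure: rainfall passes through a substrate store and then a drainage-layer store, each drained by a power-law storage-discharge relation. The k and n parameters and the storm profile are assumed values, not the paper's fitted sub-models.

    import numpy as np

    def route(rain, dt=60.0, k1=5e-4, n1=2.0, k2=3e-3, n2=1.5):
        """rain: rainfall rate per step (mm/s); returns roof runoff (mm/s)."""
        s1 = s2 = 0.0                       # storages (mm)
        runoff = []
        for r in rain:
            q1 = k1 * s1**n1                # substrate percolation (mm/s)
            s1 = max(s1 + (r - q1) * dt, 0.0)
            q2 = k2 * s2**n2                # drainage layer outflow (mm/s)
            s2 = max(s2 + (q1 - q2) * dt, 0.0)
            runoff.append(q2)
        return np.array(runoff)

    storm = np.r_[np.full(15, 0.01), np.zeros(45)]   # 15-min block storm (mm/s)
    print(route(storm).max())

    The two reservoirs in series reproduce the characteristic detention behaviour: the drainage-layer peak lags and is attenuated relative to the rainfall input.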

  18. A Three-Stage Optimization Algorithm for the Stochastic Parallel Machine Scheduling Problem with Adjustable Production Rates

    Directory of Open Access Journals (Sweden)

    Rui Zhang

    2013-01-01

    Full Text Available We consider a parallel machine scheduling problem with random processing/setup times and adjustable production rates. The objective functions to be minimized consist of two parts: the first part is related to the due date performance (i.e., the tardiness of the jobs), while the second part is related to the setting of machine speeds. Therefore, the decision variables include both the production schedule (sequences of jobs) and the production rate of each machine. The optimization process, however, is significantly complicated by the stochastic factors in the manufacturing system. To address the difficulty, a simulation-based three-stage optimization framework is presented in this paper for obtaining high-quality robust solutions to the integrated scheduling problem. The first stage (crude optimization) is featured by the ordinal optimization theory, the second stage (finer optimization) is implemented with a metaheuristic called differential evolution, and the third stage (fine-tuning) is characterized by a perturbation-based local search. Finally, computational experiments are conducted to verify the effectiveness of the proposed approach. Sensitivity analysis and practical implications are also discussed.
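
    A skeleton of the three-stage framework on a toy stochastic objective: crude ordinal screening with a few cheap simulation replications, differential evolution over the survivors' region, then a perturbation-based local search on the incumbent. The objective function below is a stand-in for the scheduling simulation.

    import numpy as np
    from scipy.optimize import differential_evolution

    rng = np.random.default_rng(4)

    def noisy_cost(x, n_reps):
        x = np.asarray(x)
        return np.mean([(x**2).sum() + rng.normal(0, 0.5) for _ in range(n_reps)])

    # Stage 1 (ordinal): rank many candidates with a few cheap replications.
    cands = rng.uniform(-5, 5, (200, 2))
    elite = cands[np.argsort([noisy_cost(c, n_reps=3) for c in cands])[:10]]

    # Stage 2 (finer): differential evolution inside the elite bounding box.
    lo, hi = elite.min(0), elite.max(0)
    res = differential_evolution(lambda x: noisy_cost(x, n_reps=20),
                                 bounds=list(zip(lo, hi)), seed=4, maxiter=30)

    # Stage 3 (fine-tuning): random perturbations around the incumbent.
    best, best_f = res.x, noisy_cost(res.x, n_reps=50)
    for _ in range(100):
        trial = best + rng.normal(0, 0.05, size=2)
        f = noisy_cost(trial, n_reps=50)
        if f < best_f:
            best, best_f = trial, f
    print(best, best_f)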

  19. A two-stage value chain model for vegetable marketing chain efficiency evaluation: A transaction cost approach

    OpenAIRE

    Lu Hualiang

    2006-01-01

    We applied a two-stage value chain model to investigate the effects of input application and occasional transaction costs on vegetable marketing chain efficiencies with a farm household-level data set. In the first stage, the production efficiencies with the combination of resource endowments, capital and managerial inputs, and production techniques were evaluated; then at the second stage, the marketing technical efficiencies were determined under the marketing value of the vegetables for th...

  20. Unified Singularity Modeling and Reconfiguration of 3rTPS Metamorphic Parallel Mechanisms with Parallel Constraint Screws

    Directory of Open Access Journals (Sweden)

    Yufeng Zhuang

    2015-01-01

    Full Text Available This paper presents a unified singularity modeling and reconfiguration analysis of variable topologies of a class of metamorphic parallel mechanisms with parallel constraint screws. The new parallel mechanisms consist of three reconfigurable rTPS limbs that have two working phases stemming from the reconfigurable Hooke (rT) joint. While one phase has full mobility, the other supplies a constraint force to the platform. Based on these, the platform constraint screw systems show that the new metamorphic parallel mechanisms have four topologies obtained by altering the limb phases, with mobility change among 1R2T (one rotation with two translations), 2R2T, and 3R2T, and mobility 6. Geometric conditions of the mechanism design are investigated, with some special topologies illustrated considering the limb arrangement. Following this and the actuation scheme analysis, a unified Jacobian matrix is formed using screw theory to include the change between geometric constraints and actuation constraints in the topology reconfiguration. Various singular configurations are identified by analyzing screw dependency in the Jacobian matrix. The work in this paper provides a basis for singularity-free workspace analysis and optimal design of the class of metamorphic parallel mechanisms with parallel constraint screws, which show simple geometric constraints with potentially simple kinematics and dynamics properties.

  1. Parallel processing of two-dimensional Sn transport calculations

    International Nuclear Information System (INIS)

    Uematsu, M.

    1997-01-01

    A parallel processing method for the two-dimensional S n transport code DOT3.5 has been developed to achieve a drastic reduction in computation time. In the proposed method, parallelization is achieved with angular domain decomposition and/or space domain decomposition. The calculational speed of parallel processing by angular domain decomposition is largely influenced by frequent communications between processing elements. To assess parallelization efficiency, sample problems with up to 32 x 32 spatial meshes were solved with a Sun workstation using the PVM message-passing library. As a result, parallel calculation using 16 processing elements, for example, was found to be nine times as fast as that with one processing element. As for parallel processing by geometry segmentation, the influence of processing element communications on computation time is small; however, discontinuity at the segment boundary degrades convergence speed. To accelerate the convergence, an alternate sweep of angular flux in conjunction with space domain decomposition and a two-step rescaling method consisting of segmentwise rescaling and ordinary pointwise rescaling have been developed. By applying the developed method, the number of iterations needed to obtain a converged flux solution was reduced by a factor of 2. As a result, parallel calculation using 16 processing elements was found to be 5.98 times as fast as the original DOT3.5 calculation

  2. New Parallel Algorithms for Landscape Evolution Model

    Science.gov (United States)

    Jin, Y.; Zhang, H.; Shi, Y.

    2017-12-01

    Most landscape evolution models (LEMs) developed in the last two decades solve the diffusion equation to simulate the transportation of surface sediments. This numerical approach is difficult to parallelize due to the computation of the drainage area for each node, which requires a huge amount of communication if run in parallel. In order to overcome this difficulty, we developed two parallel algorithms for LEMs with a stream net. One algorithm handles the partition of the grid with traditional methods and applies an efficient global reduction algorithm to the computation of drainage areas and transport rates for the stream net; the other algorithm is based on a new partition algorithm, which partitions the nodes in catchments between processes first, and then partitions the cells according to the partition of nodes. Both methods focus on decreasing communication between processes and take advantage of massive computing techniques, and numerical experiments show that they are both adequate to handle large-scale problems with millions of cells. We implemented the two algorithms in our program based on the widely used finite element library deal.II, so that it can be easily coupled with ASPECT.

  3. Analysis of multigrid methods on massively parallel computers: Architectural implications

    Science.gov (United States)

    Matheson, Lesley R.; Tarjan, Robert E.

    1993-01-01

    We study the potential performance of multigrid algorithms running on massively parallel computers with the intent of discovering whether presently envisioned machines will provide an efficient platform for such algorithms. We consider the domain parallel version of the standard V cycle algorithm on model problems, discretized using finite difference techniques in two and three dimensions on block structured grids of size 10⁶ and 10⁹, respectively. Our models of parallel computation were developed to reflect the computing characteristics of the current generation of massively parallel multicomputers. These models are based on an interconnection network of 256 to 16,384 message passing, 'workstation size' processors executing in an SPMD mode. The first model accomplishes interprocessor communications through a multistage permutation network. The communication cost is a logarithmic function which is similar to the costs in a variety of different topologies. The second model allows single stage communication costs only. Both models were designed with information provided by machine developers and utilize implementation derived parameters. With the medium grain parallelism of the current generation and the high fixed cost of an interprocessor communication, our analysis suggests an efficient implementation requires the machine to support the efficient transmission of long messages (up to 1000 words), or the high initiation cost of a communication must be significantly reduced through an alternative optimization technique. Furthermore, with variable length message capability, our analysis suggests the low diameter multistage networks provide little or no advantage over a simple single stage communications network.

  4. Two-stage stochastic programming model for the regional-scale electricity planning under demand uncertainty

    International Nuclear Information System (INIS)

    Huang, Yun-Hsun; Wu, Jung-Hua; Hsu, Yu-Ju

    2016-01-01

    Traditional electricity supply planning models regard electricity demand as a deterministic parameter and require the total power output to satisfy the aggregate electricity demand. Today, however, electric system planners face tremendously complex environments full of uncertainties, in which electricity demand is a key source of uncertainty. In addition, electricity demand patterns are considerably different for different regions. This paper developed a multi-region optimization model based on a two-stage stochastic programming framework to incorporate the demand uncertainty. Furthermore, the decision tree method and a Monte Carlo simulation approach are integrated into the model to simplify electricity demands into the form of nodes and to determine their values and probabilities. The proposed model was successfully applied to a real case study (i.e., Taiwan's electricity sector) to show its applicability. Detailed simulation results are presented and compared with those generated by a deterministic model. Finally, a long-term electricity development roadmap at the regional level could be provided on the basis of our simulation results. - Highlights: • A multi-region, two-stage stochastic programming model has been developed. • The decision tree and Monte Carlo simulation are integrated into the framework. • Taiwan's electricity sector is used to illustrate the applicability of the model. • The results under deterministic and stochastic cases are shown for comparison. • Optimal portfolios of regional generation technologies can be identified.

  5. Two-stage Lagrangian modeling of ignition processes in ignition quality tester and constant volume combustion chambers

    KAUST Repository

    Alfazazi, Adamu; Kuti, Olawole Abiola; Naser, Nimal; Chung, Suk-Ho; Sarathy, Mani

    2016-01-01

    The ignition characteristics of isooctane and n-heptane in an ignition quality tester (IQT) were simulated using a two-stage Lagrangian (TSL) model, which is a zero-dimensional (0-D) reactor network method. The TSL model was also used to simulate the ignition delay of n-dodecane and n-heptane in a constant volume combustion chamber (CVCC).

  6. Parallelization of elliptic solver for solving 1D Boussinesq model

    Science.gov (United States)

    Tarwidi, D.; Adytia, D.

    2018-03-01

    In this paper, a parallel implementation of an elliptic solver for the 1D Boussinesq model is presented. The numerical solution of the Boussinesq model is obtained by applying a staggered grid scheme to the continuity, momentum, and elliptic equations of the model. The tridiagonal system emerging from the numerical scheme of the elliptic equation is solved by the cyclic reduction algorithm. The parallel implementation of cyclic reduction is executed on multicore processors with shared memory architectures using OpenMP. To measure the performance of the parallel program, the number of grid points is varied from 2^8 to 2^14. Two test cases, the propagation of a solitary wave and of a standing wave, are proposed to evaluate the parallel program. The numerical results are verified against the analytical solutions for solitary and standing waves. The best speedup for the solitary and standing wave test cases is about 2.07 with 2^14 grid points and 1.86 with 2^13 grid points, respectively, both obtained using 8 threads. Moreover, the best efficiency of the parallel program is 76.2% and 73.5% for the solitary and standing wave test cases, respectively.
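
    For reference, a plain-Python sketch of cyclic reduction for n = 2^k - 1 unknowns is given below (with a[0] = c[n-1] = 0 by convention); the inner loops at each level touch disjoint rows, which is exactly what an OpenMP version parallelizes.

      import numpy as np

      def cyclic_reduction(a, b, c, d):
          # Solve a tridiagonal system (sub-diagonal a, diagonal b,
          # super-diagonal c, right-hand side d) for n = 2**k - 1 unknowns.
          # Rows updated within each level are disjoint -> parallelizable.
          a, b, c, d = [np.array(v, dtype=float) for v in (a, b, c, d)]
          n = len(b)
          k = int(round(np.log2(n + 1)))
          assert n == 2 ** k - 1, "this sketch assumes n = 2**k - 1"

          for level in range(k - 1):                 # forward reduction
              s = 2 ** level
              for i in range(2 * s - 1, n, 2 * s):   # independent rows
                  al = -a[i] / b[i - s]
                  ga = -c[i] / b[i + s]
                  b[i] += al * c[i - s] + ga * a[i + s]
                  d[i] += al * d[i - s] + ga * d[i + s]
                  a[i] = al * a[i - s]
                  c[i] = ga * c[i + s]

          x = np.zeros(n)
          for level in range(k - 1, -1, -1):         # back substitution
              s = 2 ** level
              for i in range(s - 1, n, 2 * s):       # independent rows
                  left = x[i - s] if i - s >= 0 else 0.0
                  right = x[i + s] if i + s < n else 0.0
                  x[i] = (d[i] - a[i] * left - c[i] * right) / b[i]
          return x

      # -x[i-1] + 2 x[i] - x[i+1] = 1 on 7 points; exact answer i(8-i)/2
      print(cyclic_reduction([0] + [-1] * 6, [2] * 7, [-1] * 6 + [0], [1] * 7))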

  7. Parallelization of a hydrological model using the message passing interface

    Science.gov (United States)

    Wu, Yiping; Li, Tiejian; Sun, Liqun; Chen, Ji

    2013-01-01

    With increasing knowledge about natural processes, hydrological models such as the Soil and Water Assessment Tool (SWAT) are becoming larger and more complex, with increasing computation time. Additionally, other procedures such as model calibration, which may require thousands of model iterations, can increase running time and thus further hinder rapid modeling and analysis. Using the widely applied SWAT as an example, this study demonstrates how to parallelize a serial hydrological model in a Windows® environment using a parallel programming technology, the Message Passing Interface (MPI). With a case study, we derived the optimal values for the two parameters (the number of processes and the corresponding percentage of work to be distributed to the master process) of the parallel SWAT (P-SWAT) on an ordinary personal computer and a workstation. Our study indicates that model execution time can be reduced by 42%–70% (a speedup of 1.74–3.36) using two to five processes with a proper task-distribution scheme between the master and slave processes. Although the computation time decreases as the number of processes grows (from two to five), the gain diminishes because of the accompanying increase in message passing between the master and all slave processes. Our case study demonstrates that P-SWAT with a five-process run may reach the maximum speedup, and the performance can be quite stable (fairly independent of project size). Overall, P-SWAT can substantially reduce the computation time for an individual model run, for manual and automatic calibration procedures, and for the optimization of best management practices. In particular, the parallelization method we used and the scheme for deriving the optimal parameters in this study can be valuable and easily applied to other hydrological or environmental models.
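
    The master/slave pattern described here can be sketched with mpi4py; the subbasin "model" is a placeholder and the master's reduced share of work is illustrative, not P-SWAT's tuned percentage.

      from mpi4py import MPI

      def run_subbasin(sub_id):
          return sub_id * sub_id        # placeholder for one subbasin run

      comm = MPI.COMM_WORLD
      rank, size = comm.Get_rank(), comm.Get_size()
      assert size > 1, "run with e.g.: mpiexec -n 4 python p_swat_sketch.py"

      subbasins = list(range(100))
      share = len(subbasins) // (2 * size)   # master's reduced share (assumed)
      master_part, rest = subbasins[:share], subbasins[share:]
      chunks = [rest[i::size - 1] for i in range(size - 1)]

      if rank == 0:
          for worker, chunk in enumerate(chunks, start=1):
              comm.send(chunk, dest=worker, tag=1)
          results = [run_subbasin(s) for s in master_part]
          for _ in chunks:
              results.extend(comm.recv(source=MPI.ANY_SOURCE, tag=2))
          print("collected", len(results), "subbasin outputs")
      else:
          chunk = comm.recv(source=0, tag=1)
          comm.send([run_subbasin(s) for s in chunk], dest=0, tag=2)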

  8. Two-stage precipitation of plutonium trifluoride

    International Nuclear Information System (INIS)

    Luerkens, D.W.

    1984-04-01

    Plutonium trifluoride was precipitated using a two-stage precipitation system. A series of precipitation experiments identified the significant process variables affecting precipitate characteristics. A mathematical precipitation model was developed, based on the formation of plutonium fluoride complexes. The precipitation model relates all process variables, in a single equation, to a single parameter that can be used to control particle characteristics.

  9. Parallel Boltzmann machines : a mathematical model

    NARCIS (Netherlands)

    Zwietering, P.J.; Aarts, E.H.L.

    1991-01-01

    A mathematical model is presented for the description of parallel Boltzmann machines. The framework is based on the theory of Markov chains and combines a number of previously known results into one generic model. It is argued that parallel Boltzmann machines maximize a function consisting of a

  10. Parallel computation for biological sequence comparison: comparing a portable model to the native model for the Intel Hypercube.

    Science.gov (United States)

    Nadkarni, P M; Miller, P L

    1991-01-01

    A parallel program for inter-database sequence comparison was developed on the Intel Hypercube using two models of parallel programming. One version was built using machine-specific Hypercube parallel programming commands. The other version was built using Linda, a machine-independent parallel programming language. The two versions of the program provide a case study comparing these two approaches to parallelization in an important biological application area. Benchmark tests with both programs gave comparable results with a small number of processors. As the number of processors was increased, the Linda version was somewhat less efficient. The Linda version was also run without change on Network Linda, a virtual parallel machine running on a network of desktop workstations.

  11. A Parallel Computational Model for Multichannel Phase Unwrapping Problem

    Science.gov (United States)

    Imperatore, Pasquale; Pepe, Antonio; Lanari, Riccardo

    2015-05-01

    In this paper, a parallel model for the solution of the computationally intensive multichannel phase unwrapping (MCh-PhU) problem is proposed. Firstly, the Extended Minimum Cost Flow (EMCF) algorithm for solving the MCh-PhU problem is revised within the rigorous mathematical framework of discrete calculus, thus permitting its topological structure to be captured in terms of meaningful discrete differential operators. Secondly, emphasis is placed on the methodological and practical aspects that lead to a parallel reformulation of the EMCF algorithm. Thus, a novel dual-level parallel computational model, in which the parallelism is hierarchically implemented at two different (i.e., process and thread) levels, is presented. The validity of our approach has been demonstrated through a series of experiments that have revealed a significant speedup. Therefore, the attained high-performance prototype is suitable for the solution of large-scale phase unwrapping problems in reasonable time frames, with a significant impact on the systematic exploitation of the existing, and rapidly growing, large archives of SAR data.
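
    As a structural illustration only (in CPython the thread level pays off mainly when the per-tile work releases the GIL), the dual-level pattern can be sketched with processes over channels and threads over tiles; all names and the per-tile work are hypothetical.

      from concurrent.futures import ThreadPoolExecutor
      from multiprocessing import Pool

      def unwrap_tile(tile):
          return sum(tile)                       # stand-in for per-tile work

      def process_channel(channel):
          # thread level: split one channel's tiles across a thread pool
          tiles = [channel[i:i + 2] for i in range(0, len(channel), 2)]
          with ThreadPoolExecutor(max_workers=4) as threads:
              return list(threads.map(unwrap_tile, tiles))

      if __name__ == "__main__":
          channels = [list(range(8)) for _ in range(6)]   # fake channel stack
          with Pool(processes=3) as procs:                # process level
              print(procs.map(process_channel, channels))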

  12. A path-level exact parallelization strategy for sequential simulation

    Science.gov (United States)

    Peredo, Oscar F.; Baeza, Daniel; Ortiz, Julián M.; Herrero, José R.

    2018-01-01

    Sequential Simulation is a well known method in geostatistical modelling. Following the Bayesian approach for simulation of conditionally dependent random events, the Sequential Indicator Simulation (SIS) method draws simulated values for K categories (categorical case) or classes defined by K different thresholds (continuous case). Similarly, the Sequential Gaussian Simulation (SGS) method draws simulated values from a multivariate Gaussian field. In this work, a path-level approach to parallelizing the SIS and SGS methods is presented. A first stage of re-arrangement of the simulation path is performed, followed by a second stage of parallel simulation of non-conflicting nodes. A key advantage of the proposed parallelization method is that it generates realizations identical to those of the original non-parallelized methods. Case studies are presented using two sequential simulation codes from GSLIB: SISIM and SGSIM. Execution time and speedup results are shown for large-scale domains, with many categories and maximum kriging neighbours in each case, achieving high speedups in the best scenarios using 16 threads of execution on a single machine.
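
    A conservative sketch of the batching idea: walk the path in order and close a batch as soon as the next node's neighbourhood overlaps the current batch, so conflicting nodes keep their original relative order. The paper's re-arrangement strategy is more elaborate; the distance-based conflict test here is purely illustrative.

      def conflict(p, q, radius=2.0):
          # stand-in for "q lies inside p's kriging search neighbourhood"
          return abs(p - q) <= radius

      def batch_path(path, radius=2.0):
          # close the current batch whenever the next node conflicts with
          # it, preserving the simulation order of conflicting nodes
          batches = [[path[0]]]
          for node in path[1:]:
              if any(conflict(node, other, radius) for other in batches[-1]):
                  batches.append([node])
              else:
                  batches[-1].append(node)
          return batches

      # each inner list can be simulated in parallel, one batch at a time
      print(batch_path([0.0, 5.0, 1.0, 9.0, 3.0, 7.0]))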

  13. A Two-Stage Queue Model to Optimize Layout of Urban Drainage System considering Extreme Rainstorms

    OpenAIRE

    He, Xinhua; Hu, Wenfa

    2017-01-01

    Extreme rainstorms are a main cause of urban floods when the urban drainage system cannot discharge stormwater successfully. This paper investigates the distribution features of rainstorms and the draining process of urban drainage systems, and uses a two-stage single-counter queue method M/M/1→M/D/1 to model the urban drainage system. The model emphasizes the randomness of extreme rainstorms, the fuzziness of the draining process, and the construction and operation cost of the drainage system. Its two objectives are total c...
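
    The building blocks of such a model are standard queueing results, which can be checked numerically; the arrival and service rates below are illustrative, not the paper's data.

      def mm1_sojourn(lam, mu):
          assert lam < mu, "stage must be stable"
          return 1.0 / (mu - lam)              # mean time in an M/M/1 system

      def md1_sojourn(lam, mu):
          rho = lam / mu
          assert rho < 1.0, "stage must be stable"
          # Pollaczek-Khinchine waiting time for deterministic service,
          # plus the service time itself
          return rho / (2.0 * mu * (1.0 - rho)) + 1.0 / mu

      lam, mu = 0.8, 1.0    # arrival and service rates (invented)
      print("stage 1 (M/M/1) mean sojourn:", mm1_sojourn(lam, mu))
      print("stage 2 (M/D/1) mean sojourn:", md1_sojourn(lam, mu))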

  14. A Decision-making Model for a Two-stage Production-delivery System in SCM Environment

    Science.gov (United States)

    Feng, Ding-Zhong; Yamashiro, Mitsuo

    A decision-making model is developed for an optimal production policy in a two-stage production-delivery system that incorporates a fixed-quantity supply of finished goods to a buyer at a fixed interval of time. First, a general cost model is formulated considering both the supplier (of raw materials) and buyer (of finished products) sides. Then an optimal solution to the problem is derived on the basis of the cost model. Using the proposed model and its optimal solution, one can determine the optimal production lot size for each stage, the optimal number of transport runs for semi-finished goods, and the optimal quantity of semi-finished goods transported each time to meet the lumpy demand of consumers. Also, we examine the sensitivity of raw material ordering and production lot size to changes in ordering cost, transportation cost, and manufacturing setup cost. A pragmatic computational approach is proposed to obtain integer approximate solutions for operational situations. Finally, we give some numerical examples.
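
    The flavour of the optimization can be shown with a toy EOQ-style enumeration over the number of semi-finished deliveries per lot; this is not the paper's cost model, and all cost figures are invented.

      import math

      D = 1200.0              # annual demand (invented)
      S1, S2 = 400.0, 150.0   # setup costs of the two stages (invented)
      F = 25.0                # fixed transport cost per delivery (invented)
      h = 2.0                 # holding cost per unit per year (invented)

      best = None
      for n in range(1, 21):              # semi-finished deliveries per lot
          K = S1 + S2 + n * F             # fixed cost incurred each cycle
          H = h * (1.0 + 1.0 / n)         # effective holding-cost rate
          Q = math.sqrt(2.0 * D * K / H)  # EOQ-style lot size for this n
          cost = D * K / Q + H * Q / 2.0
          if best is None or cost < best[0]:
              best = (cost, n, Q)
      print("cost %.1f per year with n=%d deliveries, lot size %.0f" % best)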

  15. Modeling two-stage bunch compression with wakefields: Macroscopic properties and microbunching instability

    Directory of Open Access Journals (Sweden)

    R. A. Bosch

    2008-09-01

    In a two-stage compression and acceleration system, where each stage compresses a chirped bunch in a magnetic chicane, wakefields affect high-current bunches. The longitudinal wakes affect the macroscopic energy and current profiles of the compressed bunch and cause microbunching at short wavelengths. For macroscopic wavelengths, impedance formulas and tracking simulations show that the wakefields can be dominated by the resistive impedance of coherent edge radiation. For this case, we calculate the minimum initial bunch length that can be compressed without producing an upright tail in phase space and associated current spike. Formulas are also obtained for the jitter in the bunch arrival time downstream of the compressors that results from the bunch-to-bunch variation of current, energy, and chirp. Microbunching may occur at short wavelengths where the longitudinal space-charge wakes dominate or at longer wavelengths dominated by edge radiation. We model this range of wavelengths with frequency-dependent impedance before and after each stage of compression. The growth of current and energy modulations is described by analytic gain formulas that agree with simulations.

  16. Two-Stage orders sequencing system for mixed-model assembly

    Science.gov (United States)

    Zemczak, M.; Skolud, B.; Krenczyk, D.

    2015-11-01

    In the paper, the authors focus on the NP-hard problem of orders sequencing, formulated similarly to the Car Sequencing Problem (CSP). The object of the research is an assembly line in an automotive industry company, on which a few different product models, each in a certain number of versions, are assembled on shared resources arranged in a line. Such a production type is usually described as mixed-model production, and it arose from the necessity of manufacturing customized products on the basis of very specific orders from single clients. Producers are nowadays obliged to let each client specify a large number of features of the product they are willing to buy, as competition in the automotive market is intense. Because the problem is NP-hard, only satisfactory solutions are sought in the given time period, as no method for finding the optimal solution is known. Most researchers who applied inexact methods (e.g. evolutionary algorithms) to sequencing problems dropped the research after the testing phase, as they were not able to obtain reproducible results and met problems while determining the quality of the received solutions. Therefore a new approach to solving the problem, presented in this paper as a sequencing system, is being developed. The sequencing system consists of a set of determined rules implemented in a computer environment. The system itself works in two stages. The first is connected with the determination of the place in the storage buffer to which certain production orders should be sent. In the second stage, precise sets of sequences are determined and evaluated for certain parts of the storage buffer under certain criteria.

  17. One-stage and two-stage penile buccal mucosa urethroplasty

    Directory of Open Access Journals (Sweden)

    G. Barbagli

    2016-03-01

    The paper provides the reader with a detailed description of current techniques of one-stage and two-stage penile buccal mucosa urethroplasty, together with the preoperative patient evaluation, paying attention to the use of diagnostic tools. The one-stage penile urethroplasty using a buccal mucosa graft with the application of glue is first shown and discussed. Two-stage penile urethroplasty is then reported: a detailed description of first-stage urethroplasty according to the Johanson technique is given, followed by second-stage urethroplasty using a buccal mucosa graft and glue. Finally, the postoperative course and follow-up are addressed.

  18. One- and two-stage Arrhenius models for pharmaceutical shelf life prediction.

    Science.gov (United States)

    Fan, Zhewen; Zhang, Lanju

    2015-01-01

    One of the most challenging aspects of pharmaceutical development is the demonstration and estimation of chemical stability. It is imperative that pharmaceutical products be stable for two or more years. Long-term stability studies are required to support such a shelf life claim at registration. However, during drug development, to facilitate formulation and dosage form selection, an accelerated stability study with stressed storage conditions is preferred, to quickly obtain a good prediction of shelf life under ambient storage conditions. Such a prediction typically uses the Arrhenius equation, which describes the relationship between degradation rate and temperature (and humidity). Existing methods usually rely on the assumption of normality of the errors. In addition, shelf life projection is usually based on the confidence band of a regression line. However, the coverage probability of a method is often overlooked or under-reported. In this paper, we introduce two nonparametric bootstrap procedures for shelf life estimation based on accelerated stability testing, and compare them with a one-stage nonlinear Arrhenius prediction model. Our simulation results demonstrate that the one-stage nonlinear Arrhenius method has significantly lower coverage than nominal levels. Our bootstrap methods gave better coverage and led to shelf life predictions closer to those based on long-term stability data.
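
    The accelerated-testing idea is easy to sketch: fit ln(k) against 1/T at stressed temperatures and extrapolate to the ambient condition. The rates below are invented, and a zero-order degradation model is assumed for the shelf life conversion.

      import numpy as np

      temps_C = np.array([40.0, 50.0, 60.0])    # stressed conditions (invented)
      rates = np.array([0.010, 0.022, 0.046])   # degradation, %/month (invented)

      inv_T = 1.0 / (temps_C + 273.15)
      slope, intercept = np.polyfit(inv_T, np.log(rates), 1)  # ln k = ln A - Ea/(R*T)

      k_ambient = np.exp(intercept + slope / 298.15)          # extrapolate to 25 C
      spec_limit = 0.5                          # % degradant allowed (invented)
      print("predicted shelf life: %.1f months" % (spec_limit / k_ambient))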

  19. Towards a streaming model for nested data parallelism

    DEFF Research Database (Denmark)

    Madsen, Frederik Meisner; Filinski, Andrzej

    2013-01-01

    The language-integrated cost semantics for nested data parallelism pioneered by NESL provides an intuitive, high-level model for predicting performance and scalability of parallel algorithms with reasonable accuracy. However, this predictability is obtained through a uniform, parallelism-flattening execution strategy […] processable in a streaming fashion. This semantics is directly compatible with previously proposed piecewise execution models for nested data parallelism, but allows the expected space usage to be reasoned about directly at the source-language level. The language definition and implementation are still very much work in progress.

  20. Modelling of Two-Stage Anaerobic Treating Wastewater from a Molasses-Based Ethanol Distillery with the IWA Anaerobic Digestion Model No.1

    Directory of Open Access Journals (Sweden)

    Kittikhun Taruyanon

    2010-03-01

    This paper presents the application of the ADM1 model to simulate the dynamic behaviour of a two-stage anaerobic treatment process for the wastewater generated by an ethanol distillery. The laboratory-scale process, comprising an anaerobic continuous stirred tank reactor (CSTR) and an upflow anaerobic sludge blanket (UASB) connected in series, was used to treat wastewater from the ethanol distillery process. The CSTR and UASB hydraulic retention times (HRT) were 12 and 70 hours, respectively. The model was developed based on the basic structure of ADM1 and implemented with the simulation software AQUASIM. The simulated results were compared with measured data obtained from the laboratory-scale two-stage anaerobic treatment process. The sensitivity analysis identified the maximum specific uptake rate (km) and half-saturation constant (Ks) of acetate degraders and sulfate-reducing bacteria as the kinetic parameters that most strongly affected the process behaviour; these were further estimated. The study concluded that the model could predict, with reasonable accuracy, the dynamic behaviour of a two-stage anaerobic treatment process treating ethanol distillery wastewater of varying influent strength.
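
    The sensitive uptake terms have the Monod form km·X·S/(Ks+S); a minimal single-reactor sketch (illustrative parameter values, not the paper's estimates) can be integrated as follows.

      from scipy.integrate import solve_ivp

      km, Ks, Y, D, S_in = 8.0, 0.5, 0.5, 2.0, 10.0   # invented values

      def cstr(t, y):
          S, X = y
          uptake = km * X * S / (Ks + S)       # Monod uptake term
          return [D * (S_in - S) - uptake,     # substrate balance
                  Y * uptake - D * X]          # biomass balance

      sol = solve_ivp(cstr, (0.0, 50.0), [10.0, 0.1])
      print("effluent S = %.2f, biomass X = %.2f" % (sol.y[0, -1], sol.y[1, -1]))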

  1. Two-Stage Fuzzy Portfolio Selection Problem with Transaction Costs

    OpenAIRE

    Chen, Yanju; Wang, Ye

    2015-01-01

    This paper studies a two-period portfolio selection problem. The problem is formulated as a two-stage fuzzy portfolio selection model with transaction costs, in which the future returns of risky securities are characterized by possibility distributions. The objective of the proposed model is to achieve the maximum utility in terms of the expected value and variance of the final wealth. Given the first-stage decision vector and a realization of fuzzy return, the optimal value expression of the s...

  2. Development of design technology on thermal-hydraulic performance in tight-lattice rod bundle. 4. Large paralleled simulation by the advanced two-fluid model code

    International Nuclear Information System (INIS)

    Misawa, Takeharu; Yoshida, Hiroyuki; Akimoto, Hajime

    2008-01-01

    The Japan Atomic Energy Agency (JAEA) has been developing the Innovative Water Reactor for Flexible Fuel Cycle (FLWR). For the thermal design of the FLWR, it is necessary to develop an analytical method to predict its boiling transition. JAEA has been developing the three-dimensional two-fluid model analysis code ACE-3D, which adopts a boundary-fitted coordinate system to simulate flow in complex-shaped channels. In this paper, as part of the development of ACE-3D for application to rod bundle analysis, the introduction of parallelization into ACE-3D and assessments of the code are presented. In the analysis of a large-scale domain such as a rod bundle, even the two-fluid model requires a computational cost that exceeds the memory capacity of a single CPU. Parallelization was therefore introduced into ACE-3D to divide the data for large-scale domains among a large number of CPUs, and it was confirmed that the analysis of a large-scale domain such as a rod bundle can be performed while maintaining parallel performance even when using a large number of CPUs. ACE-3D adopts two-phase flow models, some of which depend on channel geometry. Therefore, analyses of domains simulating an individual subchannel and a 37-rod bundle were performed and compared with experiments. The results of both analyses show qualitative agreement with past experimental results. (author)

  3. High-Performance Psychometrics: The Parallel-E Parallel-M Algorithm for Generalized Latent Variable Models. Research Report. ETS RR-16-34

    Science.gov (United States)

    von Davier, Matthias

    2016-01-01

    This report presents results on a parallel implementation of the expectation-maximization (EM) algorithm for multidimensional latent variable models. The developments presented here are based on code that parallelizes both the E step and the M step of the parallel-E parallel-M algorithm. Examples presented in this report include item response…

  4. Tutorial: Parallel Computing of Simulation Models for Risk Analysis.

    Science.gov (United States)

    Reilly, Allison C; Staid, Andrea; Gao, Michael; Guikema, Seth D

    2016-10-01

    Simulation models are widely used in risk analysis to study the effects of uncertainties on outcomes of interest in complex problems. Often, these models are computationally complex and time consuming to run. This latter point may be at odds with time-sensitive evaluations or may limit the number of parameters that are considered. In this article, we give an introductory tutorial focused on parallelizing simulation code to better leverage modern computing hardware, enabling risk analysts to better utilize simulation-based methods for quantifying uncertainty in practice. This article is aimed primarily at risk analysts who use simulation methods but do not yet utilize parallelization to decrease the computational burden of these models. The discussion is focused on conceptual aspects of embarrassingly parallel computer code and software considerations. Two complementary examples are shown using the languages MATLAB and R. A brief discussion of hardware considerations is located in the Appendix. © 2016 Society for Risk Analysis.
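
    For readers working in Python rather than MATLAB or R, a minimal sketch of the embarrassingly parallel pattern the tutorial describes might look as follows; the loss model is a placeholder.

      from multiprocessing import Pool
      import random

      def one_replication(seed):
          rng = random.Random(seed)          # per-task RNG: reproducible
          return sum(rng.gauss(1.0, 0.3) for _ in range(1000))

      if __name__ == "__main__":
          with Pool() as pool:               # replications are independent
              losses = sorted(pool.map(one_replication, range(10000)))
          print("95th percentile loss:", losses[int(0.95 * len(losses))])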

  5. Effects of Parallel Channel Interactions on Two-Phase Flow Split in ...

    African Journals Online (AJOL)

    The tests would aid the development of a realistic transient computer model for tracking the distribution of two-phase flows into the multiple parallel channels of a nuclear reactor during Loss of Coolant Accidents (LOCA), and were performed at the General Electric Nuclear Energy Division Laboratory, California. The test ...

  6. Reliability allocation problem in a series-parallel system

    International Nuclear Information System (INIS)

    Yalaoui, Alice; Chu, Chengbin; Chatelet, Eric

    2005-01-01

    In order to improve system reliability, designers may introduce different technologies in parallel in a system. When each technology is composed of components in series, the configuration belongs to the series-parallel systems. This type of system has not been studied as much as the parallel-series architecture, and there exist no methods dedicated to reliability allocation in series-parallel systems with different technologies. We propose in this paper theoretical and practical results for the allocation problem in a series-parallel system. Two resolution approaches are developed. Firstly, a one-stage problem is studied and the results are exploited for the multi-stage problem; a theoretical condition for obtaining the optimal allocation is derived. Since this condition is too restrictive, we secondly propose an alternative approach based on an approximated function and the results of the one-stage study. This second approach is applied to numerical examples.
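
    The reliability of such a configuration follows directly from the series and parallel composition rules; the sketch below evaluates a candidate design (component reliabilities are invented), which is the quantity an allocation method must optimize.

      from math import prod

      def branch_reliability(components):
          return prod(components)                   # series: all must work

      def stage_reliability(branches):
          # parallel technologies: stage fails only if every branch fails
          return 1.0 - prod(1.0 - branch_reliability(b) for b in branches)

      def system_reliability(stages):
          return prod(stage_reliability(s) for s in stages)  # stages in series

      stage = [[0.95, 0.97], [0.90, 0.99, 0.98]]    # two candidate technologies
      print("system reliability: %.4f" % system_reliability([stage, stage]))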

  7. Finite mixture model applied in the analysis of a turbulent bistable flow on two parallel circular cylinders

    Energy Technology Data Exchange (ETDEWEB)

    Paula, A.V. de, E-mail: vagtinski@mecanica.ufrgs.br [PROMEC – Programa de Pós Graduação em Engenharia Mecânica, UFRGS – Universidade Federal do Rio Grande do Sul, Porto Alegre, RS (Brazil); Möller, S.V., E-mail: svmoller@ufrgs.br [PROMEC – Programa de Pós Graduação em Engenharia Mecânica, UFRGS – Universidade Federal do Rio Grande do Sul, Porto Alegre, RS (Brazil)

    2013-11-15

    This paper presents a study of the bistable phenomenon which occurs in the turbulent flow impinging on circular cylinders placed side-by-side. Time series of axial and transversal velocity, obtained with the constant temperature hot wire anemometry technique in an aerodynamic channel, are used as input data in a finite mixture model to classify the observed data according to a family of probability density functions. Wavelet transforms are applied to analyze the unsteady turbulent signals. Results of flow visualization show that the flow is predominantly two-dimensional. A double-well energy model is suggested to describe the behavior of the bistable phenomenon in this case. -- Highlights: ► Bistable flow on two parallel cylinders is studied with hot wire anemometry as a first step toward applying the analysis to tube bank flow. ► The method of maximum likelihood estimation is applied to hot wire experimental series to classify the data according to probability density functions in a mixture model approach. ► Results show no evident correlation between the changes of flow modes and time. ► An energy model suggests the presence of more than two flow modes.

  8. PDDP, A Data Parallel Programming Model

    Directory of Open Access Journals (Sweden)

    Karen H. Warren

    1996-01-01

    PDDP, the parallel data distribution preprocessor, is a data parallel programming model for distributed memory parallel computers. PDDP implements High Performance Fortran-compatible data distribution directives and parallelism expressed by the use of Fortran 90 array syntax, the FORALL statement, and the WHERE construct. Distributed data objects belong to a global name space; other data objects are treated as local and replicated on each processor. PDDP allows the user to program in a shared memory style and generates code that is portable to a variety of parallel machines. For interprocessor communication, PDDP uses the fastest communication primitives on each platform.

  9. Methods to model-check parallel systems software

    International Nuclear Information System (INIS)

    Matlin, O. S.; McCune, W.; Lusk, E.

    2003-01-01

    We report on an effort to develop methodologies for formal verification of parts of the Multi-Purpose Daemon (MPD) parallel process management system. MPD is a distributed collection of communicating processes. While the individual components of the collection execute simple algorithms, their interaction leads to unexpected errors that are difficult to uncover by conventional means. Two verification approaches are discussed here: the standard model checking approach using the software model checker SPIN, and the nonstandard use of a general-purpose first-order resolution-style theorem prover, OTTER, to conduct the traditional state space exploration. We compare the modeling methodologies and analyze the performance and scalability of the two methods with respect to verification of MPD.

  10. Cellular automata a parallel model

    CERN Document Server

    Mazoyer, J

    1999-01-01

    Cellular automata can be viewed both as computational models and modelling systems of real processes. This volume emphasises the first aspect. In articles written by leading researchers, sophisticated massive parallel algorithms (firing squad, life, Fischer's primes recognition) are treated. Their computational power and the specific complexity classes they determine are surveyed, while some recent results in relation to chaos from a new dynamic systems point of view are also presented. Audience: This book will be of interest to specialists of theoretical computer science and the parallelism challenge.

  11. Approximation algorithms for the parallel flow shop problem

    NARCIS (Netherlands)

    X. Zhang (Xiandong); S.L. van de Velde (Steef)

    2012-01-01

    We consider the NP-hard problem of scheduling n jobs in m two-stage parallel flow shops so as to minimize the makespan. This problem decomposes into two subproblems: assigning the jobs to parallel flow shops; and scheduling the jobs assigned to the same flow shop by use of Johnson's rule.

  12. Parallelized Genetic Identification of the Thermal-Electrochemical Model for Lithium-Ion Battery

    Directory of Open Access Journals (Sweden)

    Liqiang Zhang

    2013-01-01

    The parameters of a well-predicting model can be used as health characteristics for a lithium-ion battery. This article reports a parallelized parameter identification of the thermal-electrochemical model, which significantly reduces the time consumed by parameter identification. Since the P2D model has the most predictability, it is chosen for further research and expanded into the thermal-electrochemical model by coupling the thermal effect and temperature-dependent parameters. A Genetic Algorithm is then used for parameter identification, but it takes too much time because of the long simulation time of the model. For this reason, a computer cluster was built from surplus computing resources in our laboratory based on the Parallel Computing Toolbox and Distributed Computing Server in MATLAB. The performance of two parallelization methods, namely Single Program Multiple Data (SPMD) and the parallel FOR loop (PARFOR), is investigated, and a parallelized GA identification is then proposed. With model simulations running in parallel, the parameter identification could be sped up more than a dozen times, and the identification result is better than that from the serial GA. This conclusion is validated by model parameter identification of a real LiFePO4 battery.
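
    In Python, the same pattern (parallel fitness evaluation of each GA generation) can be sketched with a process pool; the "model" below is a stand-in for the battery simulation, and all GA settings are illustrative.

      from multiprocessing import Pool
      import random

      TRUE = (1.5, 0.2)                    # "unknown" parameters to recover

      def model_error(params):
          k, r = params                    # stand-in for the P2D simulation
          return (k - TRUE[0]) ** 2 + (r - TRUE[1]) ** 2

      def evolve(scored, keep=10, sigma=0.05):
          scored.sort(key=lambda sp: sp[0])        # rank by fitness (error)
          parents = [p for _, p in scored[:keep]]
          return [tuple(g + random.gauss(0, sigma) for g in random.choice(parents))
                  for _ in range(len(scored))]

      if __name__ == "__main__":
          random.seed(1)
          population = [(random.uniform(0, 3), random.uniform(0, 1))
                        for _ in range(40)]
          with Pool() as pool:
              for _ in range(20):
                  errors = pool.map(model_error, population)  # parallel fitness
                  population = evolve(list(zip(errors, population)))
              errors = pool.map(model_error, population)
          best_err, best_params = min(zip(errors, population))
          print("best error %.5f at params %s" % (best_err, best_params))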

  13. Parallel Machine Scheduling with Batch Delivery to Two Customers

    Directory of Open Access Journals (Sweden)

    Xueling Zhong

    2015-01-01

    In some make-to-order supply chains, the manufacturer needs to process and deliver products for customers at different locations. To coordinate production and distribution operations at the detailed scheduling level, we study a parallel machine scheduling model with batch delivery to two customers by a vehicle routing method. In this model, the supply chain consists of a processing facility with m parallel machines and two customers. A set of jobs containing n1 jobs from customer 1 and n2 jobs from customer 2 is first processed in the processing facility and then delivered to the customers directly, without intermediate inventory. The problem is to find a joint schedule of production and distribution such that the tradeoff between the maximum arrival time of the jobs and the total distribution cost is minimized. The distribution cost of a delivery shipment consists of a fixed charge and a variable cost proportional to the total distance of the route taken by the shipment. We provide polynomial time heuristics with worst-case performance analysis for the problem. If m=2 and (n1-b)(n2-b)<0, we propose a heuristic with a worst-case ratio bound of 3/2, where b is the capacity of the delivery shipment. Otherwise, the worst-case ratio bound of the heuristic we propose is 2-2/(m+1).

  14. Two-state ion heating at quasi-parallel shocks

    International Nuclear Information System (INIS)

    Thomsen, M.F.; Gosling, J.T.; Bame, S.J.; Onsager, T.G.; Russell, C.T.

    1990-01-01

    In a previous study of ion heating at quasi-parallel shocks, the authors showed a case in which the ion distributions downstream from the shock alternated between a cooler, denser, core/shoulder type and a hotter, less dense, more Maxwellian type. In this paper they further document the alternating occurrence of two different ion states downstream from several quasi-parallel shocks. Three separate lines of evidence are presented to show that the two states are not related in an evolutionary sense, but rather both are produced alternately at the shock: (1) the asymptotic downstream plasma parameters (density, ion temperature, and flow speed) are intermediate between those characterizing the two different states closer to the shock, suggesting that the asymptotic state is produced by a mixing of the two initial states; (2) examples of apparently interpenetrating (i.e., mixing) distributions can be found during transitions from one state to the other; and (3) examples of both types of distributions can be found at actual crossings of the shock ramp. The alternation between the two different types of ion distribution provides direct observational support for the idea that the dissipative dynamics of at least some quasi-parallel shocks is non-stationary and cyclic in nature, as demonstrated by recent numerical simulations. Typical cycle times between intervals of similar ion heating states are ∼2 upstream ion gyroperiods. Both the simulations and the in situ observations indicate that a process of coherent ion reflection is commonly an important part of the dissipation at quasi-parallel shocks

  15. Iteration schemes for parallelizing models of superconductivity

    Energy Technology Data Exchange (ETDEWEB)

    Gray, P.A. [Michigan State Univ., East Lansing, MI (United States)

    1996-12-31

    The time dependent Lawrence-Doniach model, valid for high fields and high values of the Ginzburg-Landau parameter, is often used for studying vortex dynamics in layered high-T_c superconductors. When solving these equations numerically, the added degrees of complexity due to the coupling and nonlinearity of the model often warrant the use of high-performance computers for their solution. However, the interdependence between the layers can be manipulated so as to allow parallelization of the computations at an individual layer level. The reduced parallel tasks may then be solved independently using a heterogeneous cluster of networked workstations connected together with Parallel Virtual Machine (PVM) software. Here, this parallelization of the model is discussed and several computational implementations of varying degrees of parallelism are presented. Computational results are also given which contrast properties of convergence speed, stability, and consistency of these implementations. Included in these results are models involving the motion of vortices due to an applied current and pinning effects due to various material properties.

  16. A two-stage preventive maintenance optimization model incorporating two-dimensional extended warranty

    International Nuclear Information System (INIS)

    Su, Chun; Wang, Xiaolin

    2016-01-01

    In practice, customers can decide whether to buy an extended warranty or not, at the time of item sale or at the end of the basic warranty. In this paper, by taking into account the moments of customers purchasing two-dimensional extended warranty, the optimization of imperfect preventive maintenance for repairable items is investigated from the manufacturer's perspective. A two-dimensional preventive maintenance strategy is proposed, under which the item is preventively maintained according to a specified age interval or usage interval, whichever occurs first. It is highlighted that when the extended warranty is purchased upon the expiration of the basic warranty, the manufacturer faces a two-stage preventive maintenance optimization problem. Moreover, in the second stage, the possibility of reducing the servicing cost over the extended warranty period is explored by classifying customers on the basis of their usage rates and then providing them with customized preventive maintenance programs. Numerical examples show that offering customized preventive maintenance programs can reduce the manufacturer's warranty cost, while a larger saving in warranty cost comes from encouraging customers to buy the extended warranty at the time of item sale. - Highlights: • A two-dimensional PM strategy is investigated. • Imperfect PM strategy is optimized by considering both two-dimensional BW and EW. • Customers are categorized based on their usage rates throughout the BW period. • Servicing cost of the EW is reduced by offering customized PM programs. • Customers buying the EW at the time of sale is preferred for the manufacturer.

  17. A two-stage predictive model to simultaneous control of trihalomethanes in water treatment plants and distribution systems: adaptability to treatment processes.

    Science.gov (United States)

    Domínguez-Tello, Antonio; Arias-Borrego, Ana; García-Barrera, Tamara; Gómez-Ariza, José Luis

    2017-10-01

    Trihalomethanes (TTHMs) and other disinfection by-products (DBPs) are formed in drinking water by the reaction of chlorine with organic precursors contained in the source water, in two consecutive and linked stages: formation starts at the treatment plant and continues, in a second stage, along the distribution system (DS) by reaction of residual chlorine with organic precursors not removed. Following this approach, this study aimed at developing a two-stage empirical model for predicting the formation of TTHMs in the water treatment plant and subsequently their evolution along the water distribution system (WDS). The aim of the two-stage model was to improve the predictive capability for a wide range of water treatment and distribution scenarios. The two-stage model was developed using multiple regression analysis from a database (January 2007 to July 2012) covering three different treatment processes (conventional and advanced) in the water supply system of the Aljaraque area (southwest of Spain). The new model was then validated using a recent database from the same water supply system (January 2011 to May 2015). The validation results indicated no significant difference between predicted and observed TTHM values (R² 0.874, analytical variance …) for the distribution systems studied, proving the adaptability of the new model to the boundary conditions. Finally, the predictive capability of the new model was compared with 17 other models selected from the literature, showing satisfactory predictions and excellent adaptability to treatment processes.

  18. An experimental study of two-phase flow instability on two parallel channel with low steam quality

    International Nuclear Information System (INIS)

    Jiang Shengyao; Wu shaorong; Bo Jinhai; Yao Meisheng; Han Bing; Zhang Youjie

    1988-01-01

    An experimental result on two-phase flow instability in a two-parallel-channel natural circulation system with low steam quality is presented. A comparison of the instability in a single channel with that in the parallel channels is given. The effect of unequal inlet resistance coefficients and unequal power on the parallel channel instability is described, and the behaviour of the instability with equal exit steam quality in the two channels is investigated.

  19. Optics of two-stage photovoltaic concentrators with dielectric second stages

    Science.gov (United States)

    Ning, Xiaohui; O'Gallagher, Joseph; Winston, Roland

    1987-04-01

    Two-stage photovoltaic concentrators with Fresnel lenses as primaries and dielectric totally internally reflecting nonimaging concentrators as secondaries are discussed. The general design principles of such two-stage systems are given. Their optical properties are studied and analyzed in detail using computer ray trace procedures. It is found that the two-stage concentrator offers not only a higher concentration or increased acceptance angle, but also a more uniform flux distribution on the photovoltaic cell than the point focusing Fresnel lens alone. Experimental measurements with a two-stage prototype module are presented and compared to the analytical predictions.

  1. Optimisation of a parallel ocean general circulation model

    OpenAIRE

    M. I. Beare; D. P. Stevens

    1997-01-01

    This paper presents the development of a general-purpose parallel ocean circulation model, for use on a wide range of computer platforms, from traditional scalar machines to workstation clusters and massively parallel processors. Parallelism is provided, as a modular option, via high-level message-passing routines, thus hiding the technical intricacies from the user. An initial implementation highlights that the parallel efficiency of the model is adversely affected by...

  2. Application of the DMRG in two dimensions: a parallel tempering algorithm

    Science.gov (United States)

    Hu, Shijie; Zhao, Jize; Zhang, Xuefeng; Eggert, Sebastian

    The Density Matrix Renormalization Group (DMRG) is known to be a powerful algorithm for treating one-dimensional systems. When the DMRG is applied in two dimensions, however, the convergence becomes much less reliable and typically "metastable states" may appear, which are unfortunately quite robust even when keeping a very high number of DMRG states. To overcome this problem we have now successfully developed a parallel tempering DMRG algorithm. Similar to parallel tempering in quantum Monte Carlo, this algorithm allows the systematic switching of DMRG states between different model parameters, which is very efficient for solving convergence problems. Using this method we have determined the phase diagram of the XXZ model on the anisotropic triangular lattice, which can be realized by hardcore bosons in optical lattices. SFB Transregio 49 of the Deutsche Forschungsgemeinschaft (DFG) and the Allianz für Hochleistungsrechnen Rheinland-Pfalz (AHRP).

  3. Single-stage-to-orbit versus two-stage-to-orbit: A cost perspective

    Science.gov (United States)

    Hamaker, Joseph W.

    1996-03-01

    This paper considers the possible life-cycle costs of single-stage-to-orbit (SSTO) and two-stage-to-orbit (TSTO) reusable launch vehicles (RLV's). The analysis parametrically addresses the issue such that the preferred economic choice comes down to the relative complexity of the TSTO compared to the SSTO. The analysis defines the boundary complexity conditions at which the two configurations have equal life-cycle costs, and finally, makes a case for the economic preference of SSTO over TSTO.

  4. Two-Stage Electricity Demand Modeling Using Machine Learning Algorithms

    Directory of Open Access Journals (Sweden)

    Krzysztof Gajowniczek

    2017-10-01

    Forecasting of electricity demand has become one of the most important areas of research in the electric power industry, as it is a critical component of cost-efficient power system management and planning. In this context, accurate and robust load forecasting is supposed to play a key role in reducing generation costs, and deals with the reliability of the power system. However, due to demand peaks in the power system, forecasts are inaccurate and prone to large errors. In this paper, our contributions comprise a proposed data-mining scheme for demand modeling through peak detection, as well as the use of this information to feed the forecasting system. For this purpose, we have taken a different approach from that of time series forecasting, representing it as a two-stage pattern recognition problem. We have developed a peak classification model followed by a forecasting model to estimate an aggregated demand volume. We have utilized a set of machine learning algorithms to benefit from both accurate detection of the peaks and precise forecasts, as applied to the Polish power system. The key finding is that the algorithms can detect 96.3% of electricity peaks (load values equal to or above the 99th percentile of the load distribution) and deliver accurate forecasts, with a mean absolute percentage error (MAPE) of 3.10% and a resistant mean absolute percentage error (r-MAPE) of 2.70% for the 24 h forecasting horizon.
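
    The two-stage structure can be sketched on synthetic data: a peak flag is produced first and then used as a feature by the forecaster. In the toy below the flag is taken from the observed series, so it only illustrates the plumbing, not a real out-of-sample forecast.

      import numpy as np

      rng = np.random.default_rng(0)
      hours = np.arange(2000) % 24
      load = 100 + 20 * np.sin(hours * 2 * np.pi / 24) + rng.normal(0, 5, 2000)

      # stage 1: peak detection -- here, the top 1% of the observed loads;
      # a real system would predict this flag with a trained classifier
      is_peak = load >= np.percentile(load, 99)

      # stage 2: forecast each hour with the conditional mean given the flag
      forecast = np.empty_like(load)
      for h in range(24):
          for flag in (False, True):
              mask = (hours == h) & (is_peak == flag)
              if mask.any():
                  forecast[mask] = load[mask].mean()

      mape = 100 * np.mean(np.abs(load - forecast) / load)
      print("in-sample MAPE: %.2f%%" % mape)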

  5. Parallel Computing Characteristics of Two-Phase Thermal-Hydraulics code, CUPID

    International Nuclear Information System (INIS)

    Lee, Jae Ryong; Yoon, Han Young

    2013-01-01

    The parallelized CUPID code has proved able to reproduce multi-dimensional thermal-hydraulic analyses, validated against various conceptual problems and experimental data. In this paper, the characteristics of the parallelized CUPID code were investigated. Both single- and two-phase simulations are taken into account. Since the scalability of a parallel simulation is known to be better for fine mesh systems, two types of mesh system are considered. In addition, the dependence on the preconditioner of the matrix solver was also compared. The scalability for single-phase flow is better than that for two-phase flow owing to the smaller number of iterations needed to solve the pressure matrix. The CUPID code was investigated for parallel performance in terms of scalability. The CUPID code was parallelized with the domain decomposition method, and the MPI library was adopted to communicate the information at the interface cells. As the number of mesh cells increases, the scalability improves. For a given mesh, the single-phase flow simulation with a diagonal preconditioner shows the best speedup. However, for the two-phase flow simulation, the ILU preconditioner is recommended since it reduces the overall simulation time.

  6. Development and testing of a two stage granular filter to improve collection efficiency

    Energy Technology Data Exchange (ETDEWEB)

    Rangan, R.S.; Prakash, S.G.; Chakravarti, S.; Rao, S.R.

    1999-07-01

    A circulating bed granular filter (CBGF) with a single filtration stage was tested with a PFB combustor in the Coal Research Facility of BHEL R and D in Hyderabad during the years 1993–95. Filter outlet dust loading varied between 20–50 mg/Nm³ for an inlet dust loading of 5–8 g/Nm³. The results were reported in Fluidized Bed Combustion-Volume 2, ASME 1995. Though the outlet consists predominantly of fine particulates below 2 microns, it is still beyond present-day gas turbine specifications for particulate concentration. In order to enhance the collection efficiency, a two-stage granular filtration concept was evolved, wherein the filter depth is divided between two stages, accommodated in two separate vertically mounted units. The design also incorporates BHEL's scale-up concept of multiple parallel stages. The two-stage concept minimizes reentrainment of captured dust by providing clean granules in the upper stage, from where gases finally exit the filter. The design ensures that dusty gases come in contact with granules having a higher dust concentration at the bottom of the two-stage unit, where most of the cleaning is completed. A second filtration stage of cleaned granules is provided in the top unit (where the granules are returned to the system after dedusting), minimizing reentrainment. Tests were conducted to determine the optimum granule-to-dust ratio (G/D ratio), which decides the granule circulation rate required for the desired collection efficiency. The data bring out the importance of pre-separation and the limitation on inlet dust loading for any continuous system of granular filtration. Collection efficiencies obtained were much higher (outlet dust being 3–9 mg/Nm³) than in the single-stage filter tested earlier for similar dust loading at the inlet. The results indicate that two-stage granular filtration has a high potential for HTHT application with fewer risks as compared to other systems under development.

  7. Theoretical investigations on two-phase flow instability in parallel channels under axial non-uniform heating

    International Nuclear Information System (INIS)

    Lu, Xiaodong; Wu, Yingwei; Zhou, Linglan; Tian, Wenxi; Su, Guanghui; Qiu, Suizheng; Zhang, Hong

    2014-01-01

    Highlights: • We developed a model based on the homogeneous flow model to analyze two-phase flow instability in parallel channels. • The influence of axial non-uniform heating on the system stability has been investigated. • Influences of various factors on system instability under cosine heat flux have been studied. • The system under top-peaked heat flux is the most stable system. - Abstract: Two-phase flow instability in parallel channels heated by an axially non-uniform heat flux has been theoretically studied in this paper. The system control equations of the parallel channels were established based on the homogeneous flow model in the two-phase region. A semi-implicit finite-difference scheme and the staggered mesh method were used to discretize the equations, and the difference equations were solved by the chasing method. Cosine, bottom-peaked and top-peaked heat fluxes were used to study the influence of non-uniform heating on the two-phase flow instability of the parallel channel system. The marginal stability boundaries (MSB) of the parallel channels and three-dimensional instability spaces (or instability reefs) under different heat flux conditions have been obtained. Compared with axially uniform heating, axially non-uniform heating affects the system stability. Cosine and bottom-peaked heat fluxes destabilize the system in the high inlet subcooling region, while the opposite effect is found in the low inlet subcooling region. Top-peaked heat flux, however, enhances the system stability over the whole region. In addition, for cosine heat flux, increasing the system pressure or inlet resistance coefficient strengthens the system stability, while increasing the heating power destabilizes it. The influence of the inlet subcooling number on the system stability is multi-valued under cosine heat flux.

  8. Two-stage double-effect ammonia/lithium nitrate absorption cycle

    International Nuclear Information System (INIS)

    Ventas, R.; Lecuona, A.; Vereda, C.; Legrand, M.

    2016-01-01

    Highlights: • A two-stage double-effect cycle with NH3-LiNO3 is proposed. • The cycle operates at lower pressures than the conventional one. • The adiabatic absorber offers better performance than the diabatic version. • Evaporator external inlet temperatures higher than −10 °C avoid crystallization. • Maximum COP is 1.25 for a driving water inlet temperature of 100 °C. - Abstract: The two-stage configuration of a double-effect absorption cycle using ammonia/lithium nitrate as the working fluid is studied by means of a thermodynamic model. The maximum pressure of this cycle configuration is the same as that of the single-effect cycle, up to 15.8 bar, an advantage over the conventional double-effect configuration with three pressure levels, which exhibits a much higher maximum pressure. The performance of the cycle and the limitation imposed by crystallization of the working fluid are determined for both adiabatic and diabatic absorber cycles. Both cycles offer similar COP; however, the adiabatic variant shows a larger margin against crystallization. This cycle can produce cold for external inlet evaporator temperatures down to −10 °C, although at this limit crystallization could occur at high inlet generator temperatures. The maximum COP is 1.25 for an external inlet generator temperature of 100 °C. This cycle shows a better COP than a typical double-effect cycle with an in-parallel configuration for the range of moderate temperatures under study and using the same working fluid. Comparisons with double-effect cycles using H2O/LiBr and NH3/H2O as working fluids are also offered, highlighting the present configuration's advantages regarding COP, evaporation and condensation temperatures, as well as crystallization.

  9. Two-stage collaborative global optimization design model of the CHPG microgrid

    Science.gov (United States)

    Liao, Qingfen; Xu, Yeyan; Tang, Fei; Peng, Sicheng; Yang, Zheng

    2017-06-01

    With the continuous development of technology and the reduction of investment costs, the proportion of renewable energy in the power grid is becoming higher and higher because of its clean and environmentally friendly characteristics, which may require larger-capacity energy storage devices and thus increase the cost. A two-stage collaborative global optimization design model of the combined-heat-power-and-gas (abbreviated as CHPG) microgrid is proposed in this paper, to minimize the cost by using virtual storage without extending the existing storage system. P2G technology is used as virtual multi-energy storage in the CHPG, as it can coordinate the operation of the electricity network and the natural gas network at the same time. Demand response is also a kind of virtual storage, including economic guidance for the DGs and heat pumps on the demand side and priority scheduling of controllable loads. The two kinds of virtual storage coordinate to smooth the high-frequency and low-frequency fluctuations of renewable energy, respectively, and simultaneously achieve a lower-cost operation scheme. Finally, the feasibility and superiority of the proposed design model are demonstrated in a simulation of a CHPG microgrid.

  10. Performance modeling of parallel algorithms for solving neutron diffusion problems

    International Nuclear Information System (INIS)

    Azmy, Y.Y.; Kirk, B.L.

    1995-01-01

    Neutron diffusion calculations are the most common computational methods used in the design, analysis, and operation of nuclear reactors and related activities. Here, mathematical performance models are developed for the parallel algorithm used to solve the neutron diffusion equation on message passing and shared memory multiprocessors represented by the Intel iPSC/860 and the Sequent Balance 8000, respectively. The performance models are validated through several test problems, and these models are used to estimate the performance of each of the two considered architectures in situations typical of practical applications, such as fine meshes and a large number of participating processors. While message passing computers are capable of producing speedup, the parallel efficiency deteriorates rapidly as the number of processors increases. Furthermore, the speedup fails to improve appreciably for massively parallel computers so that only small- to medium-sized message passing multiprocessors offer a reasonable platform for this algorithm. In contrast, the performance model for the shared memory architecture predicts very high efficiency over a wide range of number of processors reasonable for this architecture. Furthermore, the model efficiency of the Sequent remains superior to that of the hypercube if its model parameters are adjusted to make its processors as fast as those of the iPSC/860. It is concluded that shared memory computers are better suited for this parallel algorithm than message passing computers
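
    A generic model of this kind can be written down in a few lines, using the familiar form time = serial work / P + communication(P); all parameter values below are illustrative stand-ins, not the paper's measured iPSC/860 or Sequent parameters.

      import math

      def comm_cost(P, alpha=5e-4, beta=1e-6, words=1e4):
          # per-iteration exchange: log-depth network, alpha start-up
          # cost and beta per-word transfer cost (all values assumed)
          return math.log2(P) * (alpha + beta * words)

      def predicted_speedup(P, work=1e8, t_flop=1e-7):
          t_serial = work * t_flop
          t_parallel = t_serial / P + comm_cost(P)
          return t_serial / t_parallel

      for P in (4, 16, 64, 256):
          s = predicted_speedup(P)
          print("P=%4d  speedup=%7.1f  efficiency=%5.2f" % (P, s, s / P))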

  11. On the robustness of two-stage estimators

    KAUST Repository

    Zhelonkin, Mikhail; Genton, Marc G.; Ronchetti, Elvezio

    2012-01-01

    The aim of this note is to provide a general framework for the analysis of the robustness properties of a broad class of two-stage models. We derive the influence function, the change-of-variance function, and the asymptotic variance of a general

  12. The Potsdam Parallel Ice Sheet Model (PISM-PIK) - Part 1: Model description

    Science.gov (United States)

    Winkelmann, R.; Martin, M. A.; Haseloff, M.; Albrecht, T.; Bueler, E.; Khroulev, C.; Levermann, A.

    2011-09-01

    We present the Potsdam Parallel Ice Sheet Model (PISM-PIK), developed at the Potsdam Institute for Climate Impact Research to be used for simulations of large-scale ice sheet-shelf systems. It is derived from the Parallel Ice Sheet Model (Bueler and Brown, 2009). Velocities are calculated by superposition of two shallow stress balance approximations within the entire ice covered region: the shallow ice approximation (SIA) is dominant in grounded regions and accounts for shear deformation parallel to the geoid. The plug-flow type shallow shelf approximation (SSA) dominates the velocity field in ice shelf regions and serves as a basal sliding velocity in grounded regions. Ice streams can be identified diagnostically as regions with a significant contribution of membrane stresses to the local momentum balance. All lateral boundaries in PISM-PIK are free to evolve, including the grounding line and ice fronts. Ice shelf margins in particular are modeled using Neumann boundary conditions for the SSA equations, reflecting a hydrostatic stress imbalance along the vertical calving face. The ice front position is modeled using a subgrid-scale representation of calving front motion (Albrecht et al., 2011) and a physically-motivated calving law based on horizontal spreading rates. The model is tested in experiments from the Marine Ice Sheet Model Intercomparison Project (MISMIP). A dynamic equilibrium simulation of Antarctica under present-day conditions is presented in Martin et al. (2011).

  13. Two-stage Lagrangian modeling of ignition processes in ignition quality tester and constant volume combustion chambers

    KAUST Repository

    Alfazazi, Adamu

    2016-08-10

    The ignition characteristics of isooctane and n-heptane in an ignition quality tester (IQT) were simulated using a two-stage Lagrangian (TSL) model, which is a zero-dimensional (0-D) reactor network method. The TSL model was also used to simulate the ignition delay of n-dodecane and n-heptane in a constant volume combustion chamber (CVCC), which is archived in the engine combustion network (ECN) library (http://www.ca.sandia.gov/ecn). A detailed chemical kinetic model for gasoline surrogates from the Lawrence Livermore National Laboratory (LLNL) was utilized for the simulation of n-heptane and isooctane. Additional simulations were performed using an optimized gasoline surrogate mechanism from RWTH Aachen University. Validations of the simulated data were also performed with experimental results from an IQT at KAUST. For simulation of n-dodecane in the CVCC, two n-dodecane kinetic models from the literature were utilized. The primary aim of this study is to test the ability of TSL to replicate ignition timings in the IQT and the CVCC. The agreement between the model and the experiment is acceptable except for isooctane in the IQT and n-heptane and n-dodecane in the CVCC. The ability of the simulations to replicate observable trends in ignition delay times with regard to changes in ambient temperature and pressure allows the model to provide insights into the reactions contributing towards ignition. Thus, the TSL model was further employed to investigate the physical and chemical processes responsible for controlling the overall ignition under various conditions. The effects of exothermicity, ambient pressure, and ambient oxygen concentration on first stage ignition were also studied. Increasing ambient pressure and oxygen concentration was found to shorten the overall ignition delay time, but does not affect the timing of the first stage ignition. Additionally, the temperature at the end of the first stage ignition was found to increase at higher ambient pressure

  14. A comparison of two-component and quadratic models to assess survival of irradiated stage-7 oocytes of Drosophila melanogaster

    International Nuclear Information System (INIS)

    Peres, C.A.; Koo, J.O.

    1981-01-01

    In this paper, the quadratic model S/S₀ = exp(−αD − βD²), where S and S₀ are defined as before, is proposed to analyse data of this kind. It is shown that the same biological interpretation can be given to the parameters α and A and to the parameters β and B. Furthermore, it is shown that the quadratic model involves one probabilistic stage more than the two-component model, and therefore the quadratic model would perhaps be more appropriate as a dose-response model for survival of irradiated stage-7 oocytes of Drosophila melanogaster. In order to apply these results, the data presented by Sankaranarayanan and Sankaranarayanan and Volkers are reanalysed using the quadratic model. It is shown that the quadratic model fits better than the two-component model to the data in most situations. (orig./AJ)
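
    A minimal curve-fitting sketch of the quadratic dose-response form above; the dose points, noise level, and parameter values are synthetic, not the Drosophila measurements.

    ```python
    # Fit the quadratic survival model S/S0 = exp(-alpha*D - beta*D^2)
    # to synthetic dose-survival data. All numbers are illustrative.
    import numpy as np
    from scipy.optimize import curve_fit

    def quadratic_survival(D, alpha, beta):
        return np.exp(-alpha * D - beta * D**2)

    dose = np.array([0.0, 0.5, 1.0, 2.0, 4.0, 8.0])   # hypothetical doses
    surv = quadratic_survival(dose, 0.12, 0.03)
    surv *= np.random.default_rng(0).normal(1.0, 0.02, dose.size)  # noise

    (alpha, beta), cov = curve_fit(quadratic_survival, dose, surv, p0=(0.1, 0.01))
    print(f"alpha = {alpha:.3f} +/- {np.sqrt(cov[0, 0]):.3f}")
    print(f"beta  = {beta:.3f} +/- {np.sqrt(cov[1, 1]):.3f}")
    ```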

  15. A Smoothing Algorithm for a New Two-Stage Stochastic Model of Supply Chain Based on Sample Average Approximation

    OpenAIRE

    Liu Yang; Yao Xiong; Xiao-jiao Tong

    2017-01-01

    We construct a new two-stage stochastic model of supply chain with multiple factories and distributors for perishable product. By introducing a second-order stochastic dominance (SSD) constraint, we can describe the preference consistency of the risk taker while minimizing the expected cost of company. To solve this problem, we convert it into a one-stage stochastic model equivalently; then we use sample average approximation (SAA) method to approximate the expected values of the underlying r...

  16. Parallel computing of a climate model on the dawn 1000 by domain decomposition method

    Science.gov (United States)

    Bi, Xunqiang

    1997-12-01

    In this paper the parallel computing of a grid-point nine-level atmospheric general circulation model on the Dawn 1000 is introduced. The model was developed by the Institute of Atmospheric Physics (IAP), Chinese Academy of Sciences (CAS). The Dawn 1000 is a MIMD massive parallel computer made by National Research Center for Intelligent Computer (NCIC), CAS. A two-dimensional domain decomposition method is adopted to perform the parallel computing. The potential ways to increase the speed-up ratio and exploit more resources of future massively parallel supercomputation are also discussed.
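
    A minimal sketch of the kind of two-dimensional block decomposition described above, assuming a hypothetical 64 × 128 (latitude × longitude) grid split over a 4 × 8 process mesh.

    ```python
    # Illustrative 2-D (latitude x longitude) domain decomposition of a
    # grid-point model among pr x pc processes. Grid and mesh sizes are
    # hypothetical.
    import numpy as np

    def decompose(n, parts):
        """Split n grid points into `parts` nearly equal contiguous chunks."""
        base, extra = divmod(n, parts)
        sizes = [base + (i < extra) for i in range(parts)]
        starts = np.cumsum([0] + sizes[:-1])
        return [(int(s), int(s) + sz) for s, sz in zip(starts, sizes)]

    nlat, nlon, pr, pc = 64, 128, 4, 8
    lat_chunks, lon_chunks = decompose(nlat, pr), decompose(nlon, pc)
    for rank in range(pr * pc):
        i, j = divmod(rank, pc)
        (la0, la1), (lo0, lo1) = lat_chunks[i], lon_chunks[j]
        if rank < 3:                         # show a few subdomains
            print(f"rank {rank}: lat[{la0}:{la1}] x lon[{lo0}:{lo1}]")
    ```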

  17. Co-simulation of dynamic systems in parallel and serial model configurations

    International Nuclear Information System (INIS)

    Sweafford, Trevor; Yoon, Hwan Sik

    2013-01-01

    Recent advancements in simulation software and computation hardware make it realizable to simulate complex dynamic systems comprised of multiple submodels developed in different modeling languages. The so-called co-simulation enables one to study various aspects of a complex dynamic system with heterogeneous submodels in a cost-effective manner. Among several different model configurations for co-simulation, the synchronized parallel configuration is regarded to expedite the simulation process by simulating multiple submodels concurrently on a multi-core processor. In this paper, computational accuracy as well as computation time are studied for three different co-simulation frameworks: integrated, serial, and parallel. For this purpose, analytical evaluations of the three different methods are made using the explicit Euler method and then they are applied to two-DOF mass-spring systems. The results show that while the parallel simulation configuration produces the same accurate results as the integrated configuration, results of the serial configuration show a slight deviation. It is also shown that the computation time can be reduced by running the simulation in the parallel configuration. Therefore, it can be concluded that the synchronized parallel simulation methodology is the best for both simulation accuracy and time efficiency.
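
    The serial-versus-parallel finding is easy to reproduce on a toy two-DOF mass-spring system: with a shared explicit-Euler step, the parallel (Jacobi-style) exchange uses the same old states as the integrated solver and matches it exactly, while the serial (Gauss-Seidel-style) exchange feeds one submodel fresher data and deviates slightly. All parameters below are hypothetical.

    ```python
    # Explicit-Euler co-simulation of a two-DOF mass-spring system in the
    # integrated/parallel and serial configurations. Parameters are hypothetical.
    m1 = m2 = 1.0
    k1 = k2 = 10.0          # wall spring and coupling spring stiffnesses
    h, steps = 1e-3, 5000

    def acc1(x1, x2):       # submodel 1: wall spring plus coupling spring
        return (-k1 * x1 + k2 * (x2 - x1)) / m1

    def acc2(x2, x1):       # submodel 2: coupling spring only
        return -k2 * (x2 - x1) / m2

    def simulate(serial):
        x1, v1, x2, v2 = 1.0, 0.0, 0.0, 0.0
        for _ in range(steps):
            x1n, v1n = x1 + h * v1, v1 + h * acc1(x1, x2)
            # serial: submodel 2 already sees the updated x1; parallel: the old x1
            x2n, v2n = x2 + h * v2, v2 + h * acc2(x2, x1n if serial else x1)
            x1, v1, x2, v2 = x1n, v1n, x2n, v2n
        return x1, x2

    print("parallel (matches integrated):", simulate(serial=False))
    print("serial (slight deviation):    ", simulate(serial=True))
    ```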

  18. An Enhanced Droop Control Scheme for Resilient Active Power Sharing in Paralleled Two-Stage PV Inverter Systems

    DEFF Research Database (Denmark)

    Liu, Hongpeng; Yang, Yongheng; Wang, Xiongfei

    2016-01-01

    Traditional droop-controlled systems assume that the generators are able to provide sufficient power as required. This is, however, not always true, especially in renewable systems, where the energy sources (e.g., a photovoltaic source) may not be able to provide enough power (or may even lose power generation) due to the intermittency. In that case, an unbalance in active power generation may occur among the paralleled systems. Additionally, most droop-controlled systems have been assumed to be a single dc-ac inverter with a fixed dc input source. The dc-dc converter as the front-end of a two...

  19. A Hybrid Parallel Execution Model for Logic Based Requirement Specifications (Invited Paper)

    Directory of Open Access Journals (Sweden)

    Jeffrey J. P. Tsai

    1999-05-01

    It is well known that undiscovered errors in a requirements specification are extremely expensive to fix when discovered in the software maintenance phase. Errors in the requirements phase can be reduced through the validation and verification of the requirements specification. Many logic-based requirements specification languages have been developed to achieve these goals. However, the execution and reasoning of a logic-based requirements specification can be very slow. An effective way to improve their performance is to execute and reason about the logic-based requirements specification in parallel. In this paper, we present a hybrid model to facilitate the parallel execution of a logic-based requirements specification language. A logic-based specification is first processed by a data dependency analysis technique which can find all the mode combinations that exist within a specification clause. This mode information is used to support a novel hybrid parallel execution model, which combines both top-down and bottom-up evaluation strategies. This new execution model can find the failure in the deepest node of the search tree at the early stage of the evaluation; thus it can reduce the total number of nodes searched in the tree, the total processes needed to be generated, and the total communication channels needed in the search process. A simulator has been implemented to analyze the execution behavior of the new model. Experiments show significant improvement based on several criteria.

  20. Turbine stage model

    International Nuclear Information System (INIS)

    Kazantsev, A.A.

    2009-01-01

    A model of turbine stage for calculations of NPP turbine department dynamics in real time was developed. The simulation results were compared with manufacturer calculations for NPP low-speed and fast turbines. The comparison results have shown that the model is valid for real time simulation of all modes of turbines operation. The model allows calculating turbine stage parameters with 1% accuracy. It was shown that the developed turbine stage model meets the accuracy requirements if the data of turbine blades setting angles for all turbine stages are available [ru

  1. Parallel Algorithm for Solving TOV Equations for Sequence of Cold and Dense Nuclear Matter Models

    Science.gov (United States)

    Ayriyan, Alexander; Buša, Ján; Grigorian, Hovik; Poghosyan, Gevorg

    2018-04-01

    We introduce a parallel algorithm for the simulation of neutron star configurations for a set of equation of state (EoS) models. The performance of the parallel algorithm has been investigated for a testing set of EoS models on two computational systems. It scales when used with MPI on modern CPUs, and this investigation also allowed us to compare two different types of computational nodes.
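
    A minimal sketch of the underlying computation, assuming a simple polytropic equation of state in geometrized units (G = c = 1); each central pressure plays the role of one configuration that an MPI rank would own, with a plain loop standing in for the distribution. All parameter values are illustrative.

    ```python
    # Solve the TOV equations for a hypothetical polytrope P = K*rho^gamma and
    # sweep over central pressures (one "configuration" per MPI rank in the
    # parallel setting; a serial loop stands in for that here).
    import numpy as np
    from scipy.integrate import solve_ivp

    K, gamma = 100.0, 2.0

    def eps(P):                              # energy density of the polytrope
        rho = (P / K) ** (1.0 / gamma)
        return rho + P / (gamma - 1.0)

    def tov_rhs(r, y):
        P, m = y
        e = eps(max(P, 0.0))
        dPdr = -(e + P) * (m + 4.0 * np.pi * r**3 * P) / (r * (r - 2.0 * m))
        dmdr = 4.0 * np.pi * r**2 * e
        return [dPdr, dmdr]

    def surface(r, y):                       # stop when pressure reaches ~0
        return y[0] - 1e-12
    surface.terminal = True

    for Pc in (1e-4, 5e-4, 1e-3):
        sol = solve_ivp(tov_rhs, (1e-6, 100.0), [Pc, 0.0], events=surface,
                        rtol=1e-8, atol=1e-12, max_step=0.1)
        R, M = sol.t[-1], sol.y[1][-1]
        print(f"Pc={Pc:.1e}  R={R:6.2f}  M={M:6.3f}  (geometrized units)")
    ```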

  2. Comparisons of single-stage and two-stage approaches to genomic selection.

    Science.gov (United States)

    Schulz-Streeck, Torben; Ogutu, Joseph O; Piepho, Hans-Peter

    2013-01-01

    Genomic selection (GS) is a method for predicting breeding values of plants or animals using many molecular markers that is commonly implemented in two stages. In plant breeding the first stage usually involves computation of adjusted means for genotypes which are then used to predict genomic breeding values in the second stage. We compared two classical stage-wise approaches, which either ignore or approximate correlations among the means by a diagonal matrix, and a new method, to a single-stage analysis for GS using ridge regression best linear unbiased prediction (RR-BLUP). The new stage-wise method rotates (orthogonalizes) the adjusted means from the first stage before submitting them to the second stage. This makes the errors approximately independently and identically normally distributed, which is a prerequisite for many procedures that are potentially useful for GS such as machine learning methods (e.g. boosting) and regularized regression methods (e.g. lasso). This is illustrated in this paper using componentwise boosting. The componentwise boosting method minimizes squared error loss using least squares and iteratively and automatically selects markers that are most predictive of genomic breeding values. Results are compared with those of RR-BLUP using fivefold cross-validation. The new stage-wise approach with rotated means was slightly more similar to the single-stage analysis than the classical two-stage approaches based on non-rotated means for two unbalanced datasets. This suggests that rotation is a worthwhile pre-processing step in GS for the two-stage approaches for unbalanced datasets. Moreover, the predictive accuracy of stage-wise RR-BLUP was higher (5.0-6.1%) than that of componentwise boosting.
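
    A minimal sketch of the rotation idea on synthetic data (not the authors' implementation): if the first-stage adjusted means have error covariance V = LLᵀ, premultiplying by L⁻¹ makes the errors approximately i.i.d. before the second-stage ridge regression.

    ```python
    # Stage-wise GS sketch: rotate (whiten) first-stage adjusted means so their
    # errors become i.i.d. before second-stage ridge regression (RR-BLUP-like).
    # All data and the ridge parameter are synthetic / illustrative.
    import numpy as np

    rng = np.random.default_rng(1)
    n, p = 50, 200                       # genotypes, markers (hypothetical)
    X = rng.integers(0, 3, (n, p)).astype(float)    # marker scores 0/1/2
    beta = rng.normal(0, 0.1, p)
    V = 0.2 * np.eye(n) + 0.05           # correlated errors of adjusted means
    means = X @ beta + rng.multivariate_normal(np.zeros(n), V)

    # rotation: L^-1 y has approximately i.i.d. errors when cov(y) = V = L L'
    L = np.linalg.cholesky(V)
    y_rot = np.linalg.solve(L, means)
    X_rot = np.linalg.solve(L, X)

    lam = 1.0                            # ridge parameter, hypothetical
    beta_hat = np.linalg.solve(X_rot.T @ X_rot + lam * np.eye(p), X_rot.T @ y_rot)
    gebv = X @ beta_hat                  # predicted genomic breeding values
    print("corr(true, predicted):", np.corrcoef(X @ beta, gebv)[0, 1].round(3))
    ```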

  3. An adaptive two-stage dose-response design method for establishing proof of concept.

    Science.gov (United States)

    Franchetti, Yoko; Anderson, Stewart J; Sampson, Allan R

    2013-01-01

    We propose an adaptive two-stage dose-response design where a prespecified adaptation rule is used to add and/or drop treatment arms between the stages. We extend the multiple comparison procedures-modeling (MCP-Mod) approach into a two-stage design. In each stage, we use the same set of candidate dose-response models and test for a dose-response relationship or proof of concept (PoC) via model-associated statistics. The stage-wise test results are then combined to establish "global" PoC using a conditional error function. Our simulation studies showed good and more robust power in our design method compared to conventional and fixed designs.
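
    The paper combines the stage-wise results through a conditional error function; as a simpler illustrative stand-in, the sketch below combines stage-wise p-values with the weighted inverse-normal rule, a common two-stage combination test.

    ```python
    # Weighted inverse-normal combination of two stage-wise p-values
    # (an illustrative stand-in for the paper's conditional error function).
    from scipy.stats import norm

    def combine(p1, p2, w1=0.5):
        """Combined p-value for stage p-values p1, p2 with stage-1 weight w1."""
        z = (w1 ** 0.5) * norm.isf(p1) + ((1 - w1) ** 0.5) * norm.isf(p2)
        return norm.sf(z)

    print(combine(0.04, 0.10))   # two moderately significant stages
    print(combine(0.20, 0.01))   # weak stage 1 rescued by a strong stage 2
    ```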

  4. Mathematical modeling of a continuous alcoholic fermentation process in a two-stage tower reactor cascade with flocculating yeast recycle.

    Science.gov (United States)

    de Oliveira, Samuel Conceição; de Castro, Heizir Ferreira; Visconti, Alexandre Eliseu Stourdze; Giudici, Reinaldo

    2015-03-01

    Experiments of continuous alcoholic fermentation of sugarcane juice with flocculating yeast recycle were conducted in a system of two 0.22-L tower bioreactors in series, operated at a range of dilution rates (D₁ = D₂ = 0.27–0.95 h⁻¹), constant recycle ratio (α = F_R/F = 4.0) and a sugar concentration in the feed stream (S₀) around 150 g/L. The data obtained in these experimental conditions were used to adjust the parameters of a mathematical model previously developed for the single-stage process. This model considers each of the tower bioreactors as a perfectly mixed continuous reactor and the kinetics of cell growth and product formation takes into account the limitation by substrate and the inhibition by ethanol and biomass, as well as the substrate consumption for cellular maintenance. The model predictions agreed satisfactorily with the measurements taken in both stages of the cascade. The major differences with respect to the kinetic parameters previously estimated for a single-stage system were observed for the maximum specific growth rate, for the inhibition constants of cell growth and for the specific rate of substrate consumption for cell maintenance. Mathematical models were validated and used to simulate alternative operating conditions as well as to analyze the performance of the two-stage process against that of the single-stage process.
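
    A heavily simplified sketch of such a cascade model: two stirred tanks in series with Monod growth, ethanol and biomass inhibition, Luedeking-Piret production, substrate-limited maintenance, and flocculation crudely represented by a cell-retention factor. The kinetic form and every parameter value are assumptions for illustration, not the paper's fitted model.

    ```python
    # Two tower reactors in series, integrated to steady state. All kinetic
    # forms and parameter values here are illustrative assumptions.
    from scipy.integrate import solve_ivp

    mu_max, Ks = 0.4, 5.0            # 1/h, g/L
    P_max, X_max = 90.0, 80.0        # ethanol / biomass inhibition limits, g/L
    Yxs, m_s = 0.05, 0.2             # yield g/g, maintenance g/(g h)
    a, b = 4.0, 0.1                  # Luedeking-Piret coefficients
    D, S0, R = 0.3, 150.0, 0.8       # dilution 1/h, feed sugar g/L, retention

    def mu(S, P, X):
        return mu_max * S / (Ks + S) * max(1 - P / P_max, 0) * max(1 - X / X_max, 0)

    def cascade(t, y):
        X1, S1, P1, X2, S2, P2 = y
        mu1, mu2 = mu(S1, P1, X1), mu(S2, P2, X2)
        return [mu1 * X1 - D * (1 - R) * X1,                          # cells 1
                D * (S0 - S1) - (mu1 / Yxs + m_s * S1 / (Ks + S1)) * X1,
                -D * P1 + (a * mu1 + b) * X1,                         # ethanol 1
                D * (1 - R) * (X1 - X2) + mu2 * X2,                   # cells 2
                D * (S1 - S2) - (mu2 / Yxs + m_s * S2 / (Ks + S2)) * X2,
                D * (P1 - P2) + (a * mu2 + b) * X2]                   # ethanol 2

    sol = solve_ivp(cascade, (0.0, 500.0), [10.0, 100.0, 10.0, 10.0, 50.0, 30.0],
                    method="LSODA")
    X1, S1, P1, X2, S2, P2 = sol.y[:, -1]
    print(f"stage 1: S={S1:5.1f} P={P1:5.1f} g/L | stage 2: S={S2:5.1f} P={P2:5.1f} g/L")
    ```

    With these invented parameters the second tower finishes the conversion left over by the first, which is the qualitative role of the second stage in the cascade.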

  5. Shared Variable Oriented Parallel Precompiler for SPMD Model

    Institute of Scientific and Technical Information of China (English)

    1995-01-01

    For the moment, commercial parallel computer systems with distributed memory architecture are usually provided with parallel FORTRAN or parallel C compilers, which are just traditional sequential FORTRAN or C compilers expanded with communication statements. Programmers suffer from writing parallel programs with communication statements. The Shared Variable Oriented Parallel Precompiler (SVOPP) proposed in this paper can automatically generate appropriate communication statements based on shared variables for the SPMD (Single Program Multiple Data) computation model and greatly ease parallel programming with high communication efficiency. The core function of the parallel C precompiler has been successfully verified on a transputer-based parallel computer. Its prominent performance shows that SVOPP is probably a breakthrough in parallel programming techniques.

  6. Development of a new dynamic turbulent model, applications to two-dimensional and plane parallel flows

    International Nuclear Information System (INIS)

    Laval, Jean Philippe

    1999-01-01

    We developed a turbulence model based on an asymptotic development of the Navier-Stokes equations under the hypothesis of non-local interactions at small scales. This model provides expressions for the turbulent Reynolds sub-grid stresses via estimates of the sub-grid velocities, rather than velocity correlations as is usually done. The model involves the coupling of two dynamical equations: one for the resolved scales of motion, which depends upon the Reynolds stresses generated by the sub-grid motions, and one for the sub-grid scales of motion, which can be used to compute the sub-grid Reynolds stresses. The non-locality of interaction at sub-grid scales allows their evolution to be modeled with a linear inhomogeneous equation, where the forcing occurs via the energy cascade from resolved to sub-grid scales. This model was solved using a decomposition of the sub-grid scales on Gabor modes and implemented numerically in 2D with periodic boundary conditions. A particle method (PIC) was used to compute the sub-grid scales. The results were compared with results of direct simulations for several typical flows. The model was also applied to plane parallel flows. An analytical study of the equations allows a description of mean velocity profiles in agreement with experimental results and theoretical results based on the symmetries of the Navier-Stokes equation. Possible applications and improvements of the model are discussed in the conclusion. (author) [fr]

  7. Dynamic Modeling and Control Studies of a Two-Stage Bubbling Fluidized Bed Adsorber-Reactor for Solid-Sorbent CO₂ Capture

    Energy Technology Data Exchange (ETDEWEB)

    Modekurti, Srinivasarao; Bhattacharyya, Debangsu; Zitney, Stephen E.

    2013-07-31

    A one-dimensional, non-isothermal, pressure-driven dynamic model has been developed for a two-stage bubbling fluidized bed (BFB) adsorber-reactor for solid-sorbent carbon dioxide (CO₂) capture using Aspen Custom Modeler® (ACM). The BFB model for the flow of gas through a continuous phase of downward moving solids considers three regions: emulsion, bubble, and cloud-wake. Both the upper and lower reactor stages are of overflow-type configuration, i.e., the solids leave from the top of each stage. In addition, dynamic models have been developed for the downcomer that transfers solids between the stages and the exit hopper that removes solids from the bottom of the bed. The models of all auxiliary equipment such as valves and gas distributor have been integrated with the main model of the two-stage adsorber reactor. Using the developed dynamic model, the transient responses of various process variables such as CO₂ capture rate and flue gas outlet temperatures have been studied by simulating typical disturbances such as change in the temperature, flowrate, and composition of the incoming flue gas from pulverized coal-fired power plants. In control studies, the performance of a proportional-integral-derivative (PID) controller, feedback-augmented feedforward controller, and linear model predictive controller (LMPC) are evaluated for maintaining the overall CO₂ capture rate at a desired level in the face of typical disturbances.
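
    A toy sketch of the control task in the last sentence: a PID loop holding the capture rate at a setpoint while a flue-gas disturbance hits a first-order stand-in for the ACM plant model. Gains and plant constants are hypothetical.

    ```python
    # PID loop holding CO2 capture rate at a setpoint against a flue-gas flow
    # disturbance. The first-order plant and all gains are hypothetical.
    kp, ki, kd = 2.0, 0.8, 0.05        # PID gains (hypothetical)
    tau, gain = 20.0, 1.0              # plant: tau * dy/dt = -y + gain*u + d
    dt, setpoint = 1.0, 0.90           # 90 % capture target

    y, integ, prev_err = 0.70, 0.0, 0.0
    for t in range(300):
        disturbance = -0.05 if 100 <= t < 200 else 0.0   # flue-gas flow step
        err = setpoint - y
        integ += err * dt
        deriv = (err - prev_err) / dt
        u = kp * err + ki * integ + kd * deriv           # manipulated input
        prev_err = err
        y += dt / tau * (-y + gain * u + disturbance)    # plant update
        if t % 50 == 0:
            print(f"t={t:3d}  capture={y:.3f}  input={u:.2f}")
    ```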

  8. Composite likelihood and two-stage estimation in family studies

    DEFF Research Database (Denmark)

    Andersen, Elisabeth Anne Wreford

    2004-01-01

    In this paper register based family studies provide the motivation for linking a two-stage estimation procedure in copula models for multivariate failure time data with a composite likelihood approach. The asymptotic properties of the estimators in both parametric and semi-parametric models are d...

  9. A parallel algorithm for the two-dimensional time fractional diffusion equation with implicit difference method.

    Science.gov (United States)

    Gong, Chunye; Bao, Weimin; Tang, Guojian; Jiang, Yuewen; Liu, Jie

    2014-01-01

    It is very time consuming to solve fractional differential equations. The computational complexity of solving the two-dimensional time fractional diffusion equation (2D-TFDE) with an iterative implicit finite difference method is O(M_x M_y N²). In this paper, we present a parallel algorithm for the 2D-TFDE and give an in-depth discussion of this algorithm. A task distribution model and data layout with a virtual boundary are designed for this parallel algorithm. The experimental results show that the parallel algorithm compares well with the exact solution. The parallel algorithm on a single Intel Xeon X5540 CPU runs 3.16–4.17 times faster than the serial algorithm on a single CPU core. The parallel efficiency of 81 processes is up to 88.24% compared with 9 processes on a distributed memory cluster system. We do think that parallel computing technology will become a very basic method for computationally intensive fractional applications in the near future.
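
    The reported relative efficiency implies a concrete speedup over the 9-process baseline, using the usual definition of efficiency relative to a baseline run:

    ```python
    # Relative parallel efficiency: eff = (p_base * t_base) / (p * t_p),
    # so the implied speedup over the baseline is t_base / t_p = eff * p / p_base.
    p_base, p, eff = 9, 81, 0.8824
    speedup_vs_base = eff * (p / p_base)
    print(f"81-process run is ~{speedup_vs_base:.2f}x faster than the 9-process run")
    # prints ~7.94x, against an ideal of 9x for a 9-fold process increase
    ```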

  10. Two-stage anaerobic digestion of cheese whey

    Energy Technology Data Exchange (ETDEWEB)

    Lo, K V; Liao, P H

    1986-01-01

    A two-stage digestion of cheese whey was studied using two anaerobic rotating biological contact reactors. The second-stage reactor receiving partially treated effluent from the first-stage reactor could be operated at a hydraulic retention time of one day. The results indicated that two-stage digestion is a feasible alternative for treating whey. 6 references.

  11. Optimisation of a parallel ocean general circulation model

    Science.gov (United States)

    Beare, M. I.; Stevens, D. P.

    1997-10-01

    This paper presents the development of a general-purpose parallel ocean circulation model, for use on a wide range of computer platforms, from traditional scalar machines to workstation clusters and massively parallel processors. Parallelism is provided, as a modular option, via high-level message-passing routines, thus hiding the technical intricacies from the user. An initial implementation highlights that the parallel efficiency of the model is adversely affected by a number of factors, for which optimisations are discussed and implemented. The resulting ocean code is portable and, in particular, allows science to be achieved on local workstations that could otherwise only be undertaken on state-of-the-art supercomputers.

  12. Two stage-type railgun accelerator

    International Nuclear Information System (INIS)

    Ogino, Mutsuo; Azuma, Kingo.

    1995-01-01

    The present invention provides a two stage-type railgun accelerator capable of injecting a flying body (an ice pellet formed by solidifying a gaseous hydrogen isotope as a fuel) at higher speed into the central portion of the plasma of a thermonuclear reactor. Namely, the two stage-type railgun accelerator accelerates the flying body, injected from an initial-stage accelerator into the region between the rails, by the Lorentz force generated when electric current is supplied to the two rails by way of a plasma armature. In this case, two sets of solenoids are disposed for compressing the plasma armature in the longitudinal direction of the rails. The first and second sets of solenoid coils are previously supplied with electric current. After passage of the flying body, the armature, formed into plasma by a gas laser disposed behind the flying body, is compressed in the longitudinal direction of the rails by the magnetic force of the first and second sets of solenoid coils to increase the plasma density. The current density is also increased simultaneously. Then, the first solenoid coil current is turned OFF to accelerate the flying body in two stages by the compressed plasma armature. (I.S.)

  13. Non-ideal magnetohydrodynamic simulations of the two-stage fragmentation model for cluster formation

    International Nuclear Information System (INIS)

    Bailey, Nicole D.; Basu, Shantanu

    2014-01-01

    We model molecular cloud fragmentation with thin-disk, non-ideal magnetohydrodynamic simulations that include ambipolar diffusion and partial ionization that transitions from primarily ultraviolet-dominated to cosmic-ray-dominated regimes. These simulations are used to determine the conditions required for star clusters to form through a two-stage fragmentation scenario. Recent linear analyses have shown that the fragmentation length scales and timescales can undergo a dramatic drop across the column density boundary that separates the ultraviolet- and cosmic-ray-dominated ionization regimes. As found in earlier studies, the absence of an ionization drop and regular perturbations leads to a single-stage fragmentation on pc scales in transcritical clouds, so that the nonlinear evolution yields the same fragment sizes as predicted by linear theory. However, we find that a combination of initial transcritical mass-to-flux ratio, evolution through a column density regime in which the ionization drop takes place, and regular small perturbations to the mass-to-flux ratio is sufficient to cause a second stage of fragmentation during the nonlinear evolution. Cores of size ∼0.1 pc are formed within an initial fragment of ∼pc size. Regular perturbations to the mass-to-flux ratio also accelerate the onset of runaway collapse.

  14. Optimisation of a parallel ocean general circulation model

    Directory of Open Access Journals (Sweden)

    M. I. Beare

    1997-10-01

    This paper presents the development of a general-purpose parallel ocean circulation model, for use on a wide range of computer platforms, from traditional scalar machines to workstation clusters and massively parallel processors. Parallelism is provided, as a modular option, via high-level message-passing routines, thus hiding the technical intricacies from the user. An initial implementation highlights that the parallel efficiency of the model is adversely affected by a number of factors, for which optimisations are discussed and implemented. The resulting ocean code is portable and, in particular, allows science to be achieved on local workstations that could otherwise only be undertaken on state-of-the-art supercomputers.

  16. Investigation of Power Losses of Two-Stage Two-Phase Converter with Two-Phase Motor

    Directory of Open Access Journals (Sweden)

    Michal Prazenica

    2011-01-01

    The paper deals with the determination of losses of a two-stage power electronic system with two-phase variable orthogonal output. The simulation is focused on the investigation of losses in the converter during one period in steady-state operation. Modeling and simulation of two matrix converters with an R-L load are shown in the paper. The simulation results confirm a very good time-waveform of the phase current, and the system seems to be suitable for low-cost applications in the automotive/aerospace industries and in applications with high-frequency voltage sources.

  17. Two-stage implant systems.

    Science.gov (United States)

    Fritz, M E

    1999-06-01

    Since the advent of osseointegration approximately 20 years ago, there has been a great deal of scientific data developed on two-stage integrated implant systems. Although these implants were originally designed primarily for fixed prostheses in the mandibular arch, they have been used in partially dentate patients, in patients needing overdentures, and in single-tooth restorations. In addition, this implant system has been placed in extraction sites, in bone-grafted areas, and in maxillary sinus elevations. Often, the documentation of these procedures has lagged. In addition, most of the reports use survival criteria to describe results, often providing overly optimistic data. It can be said that the literature describes a true adhesion of the epithelium to the implant similar to adhesion to teeth, that two-stage implants appear to have direct contact somewhere between 50% and 70% of the implant surface, that the microbial flora of the two-stage implant system closely resembles that of the natural tooth, and that the microbiology of periodontitis appears to be closely related to peri-implantitis. In evaluations of the data from implant placement in all of the above-noted situations by means of meta-analysis, it appears that there is a strong case that two-stage dental implants are successful, usually showing a confidence interval of over 90%. It also appears that the mandibular implants are more successful than maxillary implants. Studies also show that overdenture therapy is valid, and that single-tooth implants and implants placed in partially dentate mouths have a success rate that is quite good, although not quite as high as in the fully edentulous dentition. It would also appear that the potential causes of failure in the two-stage dental implant systems are peri-implantitis, placement of implants in poor-quality bone, and improper loading of implants. There are now data addressing modifications of the implant surface to alter the percentage of

  18. The Potsdam Parallel Ice Sheet Model (PISM-PIK) – Part 1: Model description

    Directory of Open Access Journals (Sweden)

    R. Winkelmann

    2011-09-01

    We present the Potsdam Parallel Ice Sheet Model (PISM-PIK), developed at the Potsdam Institute for Climate Impact Research to be used for simulations of large-scale ice sheet-shelf systems. It is derived from the Parallel Ice Sheet Model (Bueler and Brown, 2009). Velocities are calculated by superposition of two shallow stress balance approximations within the entire ice covered region: the shallow ice approximation (SIA) is dominant in grounded regions and accounts for shear deformation parallel to the geoid. The plug-flow type shallow shelf approximation (SSA) dominates the velocity field in ice shelf regions and serves as a basal sliding velocity in grounded regions. Ice streams can be identified diagnostically as regions with a significant contribution of membrane stresses to the local momentum balance. All lateral boundaries in PISM-PIK are free to evolve, including the grounding line and ice fronts. Ice shelf margins in particular are modeled using Neumann boundary conditions for the SSA equations, reflecting a hydrostatic stress imbalance along the vertical calving face. The ice front position is modeled using a subgrid-scale representation of calving front motion (Albrecht et al., 2011) and a physically-motivated calving law based on horizontal spreading rates. The model is tested in experiments from the Marine Ice Sheet Model Intercomparison Project (MISMIP). A dynamic equilibrium simulation of Antarctica under present-day conditions is presented in Martin et al. (2011).

  19. Development of Parallel Code for the Alaska Tsunami Forecast Model

    Science.gov (United States)

    Bahng, B.; Knight, W. R.; Whitmore, P.

    2014-12-01

    The Alaska Tsunami Forecast Model (ATFM) is a numerical model used to forecast propagation and inundation of tsunamis generated by earthquakes and other means in both the Pacific and Atlantic Oceans. At the U.S. National Tsunami Warning Center (NTWC), the model is mainly used in a pre-computed fashion. That is, results for hundreds of hypothetical events are computed before alerts, and are accessed and calibrated with observations during tsunamis to immediately produce forecasts. ATFM uses the non-linear, depth-averaged, shallow-water equations of motion with multiply nested grids in two-way communications between domains of each parent-child pair as waves get closer to coastal waters. Even with the pre-computation the task becomes non-trivial as sub-grid resolution gets finer. Currently, the finest resolution Digital Elevation Models (DEM) used by ATFM are 1/3 arc-seconds. With a serial code, large or multiple areas of very high resolution can produce run-times that are unrealistic even in a pre-computed approach. One way to increase the model performance is code parallelization used in conjunction with a multi-processor computing environment. NTWC developers have undertaken an ATFM code-parallelization effort to streamline the creation of the pre-computed database of results with the long term aim of tsunami forecasts from source to high resolution shoreline grids in real time. Parallelization will also permit timely regeneration of the forecast model database with new DEMs; and, will make possible future inclusion of new physics such as the non-hydrostatic treatment of tsunami propagation. The purpose of our presentation is to elaborate on the parallelization approach and to show the compute speed increase on various multi-processor systems.

  20. Two-stage revision of septic knee prosthesis with articulating knee spacers yields better infection eradication rate than one-stage or two-stage revision with static spacers.

    Science.gov (United States)

    Romanò, C L; Gala, L; Logoluso, N; Romanò, D; Drago, L

    2012-12-01

    The best method for treating chronic periprosthetic knee infection remains controversial. Randomized, comparative studies on treatment modalities are lacking. This systematic review of the literature compares the infection eradication rate after two-stage versus one-stage revision and static versus articulating spacers in two-stage procedures. We reviewed full-text papers and those with an abstract in English published from 1966 through 2011 that reported the success rate of infection eradication after one-stage or two-stage revision with two different types of spacers. In all, 6 original articles reporting the results after one-stage knee exchange arthoplasty (n = 204) and 38 papers reporting on two-stage revision (n = 1,421) were reviewed. The average success rate in the eradication of infection was 89.8% after a two-stage revision and 81.9% after a one-stage procedure at a mean follow-up of 44.7 and 40.7 months, respectively. The average infection eradication rate after a two-stage procedure was slightly, although significantly, higher when an articulating spacer rather than a static spacer was used (91.2 versus 87%). The methodological limitations of this study and the heterogeneous material in the studies reviewed notwithstanding, this systematic review shows that, on average, a two-stage procedure is associated with a higher rate of eradication of infection than one-stage revision for septic knee prosthesis and that articulating spacers are associated with a lower recurrence of infection than static spacers at a comparable mean duration of follow-up. IV.

  1. A two-stage constitutive model of X12CrMoWVNbN10-1-1 steel during elevated temperature

    Science.gov (United States)

    Zhu, Luobei; He, Jianli; Zhang, Ying

    2018-02-01

    In order to clarify the competition between work hardening (WH) caused by dislocation movement and the dynamic softening resulting from dynamic recovery (DRV) and dynamic recrystallization (DRX), a new two-stage flow stress model of X12CrMoWVNbN10-1-1 (X12) ferrite heat-resistant steel was established to describe the whole hot deformation behavior, with parameters determined from experimental data obtained on a Gleeble-3800 thermo-mechanical simulator. In this constitutive model, a single-internal-variable dislocation density evolution model is used to describe the influence of WH and DRV on the flow stress. The DRX kinetic model, established on the Avrami equation, accurately expresses the contribution of DRX to the decline of the flow stress. Furthermore, the established model was compared with the Fields-Bachofen (F-B) model and with experimental data. The results indicate the new two-stage flow stress model can more accurately represent the hot deformation behavior of X12 ferrite heat-resistant steel; the average error is only 0.0995.
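
    A generic sketch of such a two-stage constitutive model: a Kocks-Mecking-type single-internal-variable law for work hardening and dynamic recovery, with Avrami-type DRX softening switched on beyond a critical strain. All coefficients are invented for illustration and are not the fitted X12 parameters.

    ```python
    # Two-stage flow-stress sketch: Kocks-Mecking dislocation evolution for
    # WH + DRV, then Avrami-type DRX softening beyond a critical strain.
    # All coefficients are illustrative, not the fitted X12 parameters.
    import numpy as np

    k1, k2 = 5e8, 12.0            # storage / recovery coefficients
    A = 3e-6                      # sigma = A * sqrt(rho), MPa for rho in m^-2
    eps_c, eps_p = 0.15, 0.5      # critical / peak strains for DRX
    n_av, k_av = 2.0, 0.693       # Avrami exponent and rate constant
    sigma_ss = 110.0              # DRX steady-state stress, MPa

    rho, d_eps = 1e12, 1e-3
    stress = []
    for i in range(1000):
        eps = (i + 1) * d_eps
        rho += (k1 * np.sqrt(rho) - k2 * rho) * d_eps   # WH storage + DRV
        sigma = A * np.sqrt(rho)
        if eps > eps_c:                                  # DRX stage kicks in
            X = 1.0 - np.exp(-k_av * ((eps - eps_c) / eps_p) ** n_av)
            sigma -= (sigma - sigma_ss) * X              # softening toward sigma_ss
        stress.append(sigma)

    peak = int(np.argmax(stress))
    print(f"peak stress {stress[peak]:.0f} MPa at strain {(peak + 1) * d_eps:.2f}")
    ```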

  2. Parallel and distributed processing in two SGBDS: A case study

    Directory of Open Access Journals (Sweden)

    Francisco Javier Moreno

    2017-04-01

    Context: One of the strategies for managing large volumes of data is distributed and parallel computing. Among the tools that offer these capabilities are some database management systems (DBMSs), such as Oracle, DB2, and SQL Server. Method: In this paper we present a case study evaluating the performance of an SQL query in two of these DBMSs, under various forms of data distribution in a computer network with different degrees of parallelism. Results: The tests of the SQL query evidenced the performance differences between the two DBMSs analyzed; however, more thorough testing and a wider variety of queries are needed. Conclusions: The differences in performance between the two DBMSs analyzed show that, when evaluating this aspect, it is necessary to consider the particularities of each DBMS and the degree of parallelism of the queries.

  3. Comorbidity, age, race and stage at diagnosis in colorectal cancer: a retrospective, parallel analysis of two health systems

    Directory of Open Access Journals (Sweden)

    Rowe Krista L

    2008-11-01

    Background Stage at diagnosis plays a significant role in colorectal cancer (CRC) survival. Understanding which factors contribute to a more advanced stage at diagnosis is vital to improving overall survival. Comorbidity, race, and age are known to impact receipt of cancer therapy and survival, but the relationship of these factors to stage at diagnosis of CRC is less clear. The objective of this study is to investigate how comorbidity, race and age influence stage of CRC diagnosis. Methods Two distinct healthcare populations in the United States (US) were retrospectively studied. Using the Cancer Care Outcomes Research and Surveillance Consortium database, we identified CRC patients treated at 15 Veterans Administration (VA) hospitals from 2003–2007. We assessed metastatic CRC patients treated from 2003–2006 at 10 non-VA, fee-for-service (FFS) practices. Stage at diagnosis was dichotomized (non-metastatic, metastatic). Race was dichotomized (white, non-white). Charlson comorbidity index and age at diagnosis were calculated. Associations between stage, comorbidity, race, and age were determined by logistic regression. Results 342 VA and 340 FFS patients were included. Populations differed by the proportion of patients with metastatic CRC at diagnosis (VA 27% and FFS 77%), reflecting differences in eligibility criteria for inclusion. VA patients were mean (standard deviation; SD) age 67 (11), Charlson index 2.0 (1.0), and were 63% white. FFS patients were mean age 61 (13), Charlson index 1.6 (1.0), and were 73% white. In the VA cohort, higher comorbidity was associated with earlier stage at diagnosis after adjusting for age and race (odds ratio (OR) 0.76, 95% confidence interval (CI) 0.58–1.00; p = 0.045); no such significant relationship was identified in the FFS cohort (OR 1.09, 95% CI 0.82–1.44; p = 0.57). In both cohorts, no association was found between stage at diagnosis and either age or race. Conclusion Higher comorbidity may lead to

  4. Two stages of economic development

    OpenAIRE

    Gong, Gang

    2016-01-01

    This study suggests that the development process of a less-developed country can be divided into two stages, which demonstrate significantly different properties in areas such as structural endowments, production modes, income distribution, and the forces that drive economic growth. The two stages of economic development have been indicated in the growth theory of macroeconomics and in the various "turning point" theories in development economics, including Lewis's dual economy theory, Kuznet...

  5. Implications of the two stage clonal expansion model to radiation risk estimation

    International Nuclear Information System (INIS)

    Curtis, S.B.; Hazelton, W.D.; Luebeck, E.G.; Moolgavkar, S.H.

    2003-01-01

    The Two Stage Clonal Expansion Model of carcinogenesis has been applied to the analysis of several cohorts of persons exposed to chronic exposures of high and low LET radiation. The results of these analyses are: (1) the importance of radiation-induced initiation is small and, if present at all, contributes to cancers only late in life and only if exposure begins early in life, (2) radiation-induced promotion dominates and produces the majority of cancers by accelerating proliferation of already-initiated cells, and (3) radiation-induced malignant conversion is important only during and immediately after exposure ceases and tends to dominate only late in life, acting on already initiated and promoted cells. Two populations, the Colorado Plateau miners (high-LET, radon exposed) and the Canadian radiation workers (low-LET, gamma ray exposed) are used as examples to show the time dependence of the hazard function and the relative importance of the three hypothesized processes (initiation, promotion and malignant conversion) for each radiation quality

  6. Parallel, multi-stage processing of colors, faces and shapes in macaque inferior temporal cortex

    Science.gov (United States)

    Lafer-Sousa, Rosa; Conway, Bevil R.

    2014-01-01

    Visual-object processing culminates in inferior temporal (IT) cortex. To assess the organization of IT, we measured fMRI responses in alert monkey to achromatic images (faces, fruit, bodies, places) and colored gratings. IT contained multiple color-biased regions, which were typically ventral to face patches and, remarkably, yoked to them, spaced regularly at four locations predicted by known anatomy. Color and face selectivity increased for more anterior regions, indicative of a broad hierarchical arrangement. Responses to non-face shapes were found across IT, but were stronger outside color-biased regions and face patches, consistent with multiple parallel streams. IT also contained multiple coarse eccentricity maps: face patches overlapped central representations; color-biased regions spanned mid-peripheral representations; and place-biased regions overlapped peripheral representations. These results suggest that IT comprises parallel, multi-stage processing networks subject to one organizing principle. PMID:24141314

  7. HPC parallel programming model for gyrokinetic MHD simulation

    International Nuclear Information System (INIS)

    Naitou, Hiroshi; Yamada, Yusuke; Tokuda, Shinji; Ishii, Yasutomo; Yagi, Masatoshi

    2011-01-01

    The 3-dimensional gyrokinetic PIC (particle-in-cell) code for MHD simulation, Gpic-MHD, was installed on SR16000 (“Plasma Simulator”), which is a scalar cluster system consisting of 8,192 logical cores. The Gpic-MHD code advances particle and field quantities in time. In order to distribute calculations over a large number of logical cores, the total simulation domain in cylindrical geometry was broken up into N_DDr × N_DDz (number of radial decompositions times number of axial decompositions) small domains including approximately the same number of particles. The axial direction was uniformly decomposed, while the radial direction was non-uniformly decomposed. N_RP replicas (copies) of each decomposed domain were used (“particle decomposition”). The hybrid parallelization model of multi-threads and multi-processes was employed: threads were parallelized by auto-parallelization and N_DDr × N_DDz × N_RP processes were parallelized by MPI (message-passing interface). The parallelization performance of Gpic-MHD was investigated for the medium-size system of N_r × N_θ × N_z = 1025 × 128 × 128 mesh with 4.196 or 8.192 billion particles. The highest speed for the fixed number of logical cores was obtained for two threads, the maximum number of N_DDz, and the optimum combination of N_DDr and N_RP. The observed optimum speeds demonstrated good scaling up to 8,192 logical cores. (author)
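
    Tuning such a run amounts to enumerating decompositions N_DDr × N_DDz × N_RP that fill the available processes, then ranking them by the heuristic reported above (maximize the axial decomposition first). A minimal sketch with illustrative candidate values:

    ```python
    # Enumerate hybrid-decomposition choices N_DDr x N_DDz x N_RP that exactly
    # fill a given number of MPI processes. Process count and candidate values
    # are illustrative, not the Gpic-MHD production configuration.
    processes = 4096
    options = [(ddr, ddz, rp)
               for ddr in (1, 2, 4, 8, 16)
               for ddz in (16, 32, 64, 128)
               for rp in (1, 2, 4, 8, 16, 32)
               if ddr * ddz * rp == processes]
    # prefer the largest axial decomposition, then fewer radial cuts
    for ddr, ddz, rp in sorted(options, key=lambda o: (-o[1], o[0])):
        print(f"N_DDr={ddr:2d}  N_DDz={ddz:3d}  N_RP={rp:2d}")
    ```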

  8. A two-stage flow-based intrusion detection model for next-generation networks.

    Science.gov (United States)

    Umer, Muhammad Fahad; Sher, Muhammad; Bi, Yaxin

    2018-01-01

    The next-generation network provides state-of-the-art access-independent services over converged mobile and fixed networks. Security in the converged network environment is a major challenge. Traditional packet and protocol-based intrusion detection techniques cannot be used in next-generation networks due to slow throughput, low accuracy and their inability to inspect encrypted payload. An alternative solution for protection of next-generation networks is to use network flow records for detection of malicious activity in the network traffic. The network flow records are independent of access networks and user applications. In this paper, we propose a two-stage flow-based intrusion detection system for next-generation networks. The first stage uses an enhanced unsupervised one-class support vector machine which separates malicious flows from normal network traffic. The second stage uses a self-organizing map which automatically groups malicious flows into different alert clusters. We validated the proposed approach on two flow-based datasets and obtained promising results.
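
    A minimal sketch of the two-stage pipeline on synthetic flow features; scikit-learn's OneClassSVM plays stage one, and KMeans is used as a simple stand-in for the paper's self-organizing map in stage two.

    ```python
    # Two-stage flow-based detection sketch: stage 1 flags anomalous flows with
    # a one-class SVM trained on normal traffic; stage 2 groups flagged flows
    # into alert clusters (KMeans as a stand-in for the paper's SOM).
    # All feature values are synthetic.
    import numpy as np
    from sklearn.svm import OneClassSVM
    from sklearn.cluster import KMeans

    rng = np.random.default_rng(7)
    normal = rng.normal(0, 1, (500, 4))        # flow features: bytes, pkts, ...
    attack = rng.normal(4, 1, (40, 4))         # two synthetic attack patterns
    attack[20:] = rng.normal(-4, 1, (20, 4))

    stage1 = OneClassSVM(nu=0.05, kernel="rbf", gamma="scale").fit(normal)
    traffic = np.vstack([normal[:100], attack])
    flagged = traffic[stage1.predict(traffic) == -1]   # -1 means anomalous

    stage2 = KMeans(n_clusters=2, n_init=10, random_state=0).fit(flagged)
    for c in range(2):
        print(f"alert cluster {c}: {np.sum(stage2.labels_ == c)} flows")
    ```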

  9. Parallel computing in enterprise modeling.

    Energy Technology Data Exchange (ETDEWEB)

    Goldsby, Michael E.; Armstrong, Robert C.; Shneider, Max S.; Vanderveen, Keith; Ray, Jaideep; Heath, Zach; Allan, Benjamin A.

    2008-08-01

    This report presents the results of our efforts to apply high-performance computing to entity-based simulations with a multi-use plugin for parallel computing. We use the term 'Entity-based simulation' to describe a class of simulation which includes both discrete event simulation and agent based simulation. What simulations of this class share, and what differs from more traditional models, is that the result sought is emergent from a large number of contributing entities. Logistic, economic and social simulations are members of this class where things or people are organized or self-organize to produce a solution. Entity-based problems never have an a priori ergodic principle that will greatly simplify calculations. Because the results of entity-based simulations can only be realized at scale, scalable computing is de rigueur for large problems. Having said that, the absence of a spatial organizing principal makes the decomposition of the problem onto processors problematic. In addition, practitioners in this domain commonly use the Java programming language which presents its own problems in a high-performance setting. The plugin we have developed, called the Parallel Particle Data Model, overcomes both of these obstacles and is now being used by two Sandia frameworks: the Decision Analysis Center, and the Seldon social simulation facility. While the ability to engage U.S.-sized problems is now available to the Decision Analysis Center, this plugin is central to the success of Seldon. Because Seldon relies on computationally intensive cognitive sub-models, this work is necessary to achieve the scale necessary for realistic results. With the recent upheavals in the financial markets, and the inscrutability of terrorist activity, this simulation domain will likely need a capability with ever greater fidelity. High-performance computing will play an important part in enabling that greater fidelity.

  10. Linear predictions of supercritical flow instability in two parallel channels

    International Nuclear Information System (INIS)

    Shah, M.

    2008-01-01

    A steady state linear code that can predict thermo-hydraulic instability boundaries in a two parallel channel system under supercritical conditions has been developed. Linear and non-linear solutions of the instability boundary in a two parallel channel system are also compared. The effect of gravity on the instability boundary in a two parallel channel system, by changing the orientation of the system flow from horizontal flow to vertical up-flow and vertical down-flow has been analyzed. Vertical up-flow is found to be more unstable than horizontal flow and vertical down flow is found to be the most unstable configuration. The type of instability present in each flow-orientation of a parallel channel system has been checked and the density wave oscillation type is observed in horizontal flow and vertical up-flow, while the static type of instability is observed in a vertical down-flow for the cases studied here. The parameters affecting the instability boundary, such as the heating power, inlet temperature, inlet and outlet K-factors are varied to assess their effects. This study is important for the design of future Generation IV nuclear reactors in which supercritical light water is proposed as the primary coolant. (author)

  11. cellGPU: Massively parallel simulations of dynamic vertex models

    Science.gov (United States)

    Sussman, Daniel M.

    2017-10-01

    Vertex models represent confluent tissue by polygonal or polyhedral tilings of space, with the individual cells interacting via force laws that depend on both the geometry of the cells and the topology of the tessellation. This dependence on the connectivity of the cellular network introduces several complications to performing molecular-dynamics-like simulations of vertex models, and in particular makes parallelizing the simulations difficult. cellGPU addresses this difficulty and lays the foundation for massively parallelized, GPU-based simulations of these models. This article discusses its implementation for a pair of two-dimensional models, and compares the typical performance that can be expected between running cellGPU entirely on the CPU versus its performance when running on a range of commercial and server-grade graphics cards. By implementing the calculation of topological changes and forces on cells in a highly parallelizable fashion, cellGPU enables researchers to simulate time- and length-scales previously inaccessible via existing single-threaded CPU implementations. Program files DOI: http://dx.doi.org/10.17632/6j2cj29t3r.1. Licensing provisions: MIT. Programming language: CUDA/C++. Nature of problem: simulations of off-lattice "vertex models" of cells, in which the interaction forces depend on both the geometry and the topology of the cellular aggregate. Solution method: highly parallelized GPU-accelerated dynamical simulations in which the force calculations and the topological features can be handled on either the CPU or GPU. Additional comments: the code is hosted at https://gitlab.com/dmsussman/cellGPU, with documentation additionally maintained at http://dmsussman.gitlab.io/cellGPUdocumentation

  12. A parallel implementation of 3-d CT image reconstruction on a hypercube multiprocessor

    International Nuclear Information System (INIS)

    Chen, C.M.; Lee, S.Y.; Cho, Z.H.

    1990-01-01

    In this paper, the authors describe how image reconstruction in computerized tomography (CT) can be parallelized on a message-passing multiprocessor. In particular, the results obtained from a parallel implementation of 3-D CT image reconstruction for parallel beam geometries on the Intel hypercube, iPSC/2, are presented. A two-stage pipelining approach is employed for filtering (convolution) and backprojection. The conventional sequential convolution algorithm is modified such that the symmetry of the filter kernel is fully utilized for parallelization. In the backprojection stage, the 3-D incremental algorithm, the authors' recently developed backprojection scheme shown to be faster than the conventional algorithm, is parallelized
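
    A miniature sequential version of the two-stage pipeline (filter, then backproject) on a synthetic 2-D parallel-beam example; the paper pipelines these stages across hypercube nodes, and all sizes here are illustrative.

    ```python
    # Stage 1 filters each projection with a symmetric ramp-like kernel,
    # stage 2 backprojects; tiny 2-D parallel-beam demo with synthetic data.
    import numpy as np

    n, n_angles = 64, 90
    phantom = np.zeros((n, n)); phantom[24:40, 24:40] = 1.0
    angles = np.linspace(0, np.pi, n_angles, endpoint=False)

    # stage 0: synthetic projections (rotate-and-sum forward model)
    xs = np.arange(n) - n / 2
    X, Y = np.meshgrid(xs, xs)
    sino = np.empty((n_angles, n))
    for i, th in enumerate(angles):
        t = X * np.cos(th) + Y * np.sin(th)
        for j, tj in enumerate(xs):
            sino[i, j] = phantom[np.abs(t - tj) < 0.5].sum()

    # stage 1: convolve each projection with a symmetric ramp filter
    freqs = np.fft.fftfreq(n)
    sino_f = np.real(np.fft.ifft(np.fft.fft(sino, axis=1) * np.abs(freqs), axis=1))

    # stage 2: backproject the filtered projections onto the image grid
    recon = np.zeros((n, n))
    for i, th in enumerate(angles):
        t = X * np.cos(th) + Y * np.sin(th)
        idx = np.clip(np.round(t + n / 2).astype(int), 0, n - 1)
        recon += sino_f[i, idx]
    print("value inside square vs outside:", recon[32, 32].round(2), recon[5, 5].round(2))
    ```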

  13. Engineering analysis of the two-stage trifluoride precipitation process

    International Nuclear Information System (INIS)

    Luerkens, D.w.W.

    1984-06-01

    An engineering analysis of two-stage trifluoride precipitation processes is developed. Precipitation kinetics are modeled using consecutive reactions to represent fluoride complexation. Material balances across the precipitators are used to model the time dependent concentration profiles of the main chemical species. The results of the engineering analysis are correlated with previous experimental work on plutonium trifluoride and cerium trifluoride

  14. Comparison of single-stage and temperature-phased two-stage anaerobic digestion of oily food waste

    International Nuclear Information System (INIS)

    Wu, Li-Jie; Kobayashi, Takuro; Li, Yu-You; Xu, Kai-Qin

    2015-01-01

    Highlights: • A single-stage and two two-stage anaerobic systems were synchronously operated. • Similar methane production of 0.44 L/g VS_added from oily food waste was achieved. • The first stage of the two-stage process became inefficient due to a serious pH drop. • Recycle favored hythane production in the two-stage digestion. • The conversion of unsaturated fatty acids was enhanced by recycle introduction. - Abstract: Anaerobic digestion is an effective technology to recover energy from oily food waste. A single-stage system and temperature-phased two-stage systems with and without recycle for anaerobic digestion of oily food waste were constructed to compare the operation performances. The synchronous operation indicated a similar ability to produce methane in the three systems, with a methane yield of 0.44 L/g VS_added. The pH drop to less than 4.0 in the first stage of the two-stage system without recycle resulted in poor hydrolysis, and methane or hydrogen was not produced in this stage. Alkalinity supplement from the second stage of the two-stage system with recycle improved the pH in the first stage to 5.4. Consequently, 35.3% of the particulate COD in the influent was reduced in the first stage of the two-stage system with recycle according to a COD mass balance, and hydrogen was produced at a percentage of 31.7%, accordingly. Similar solids and organic matter were removed in the single-stage system and the two-stage system without recycle. More lipid degradation and conversion of long-chain fatty acids were achieved in the single-stage system. Recycling was proved to be effective in promoting the conversion of unsaturated long-chain fatty acids into saturated fatty acids in the two-stage system.

  15. Linear and nonlinear excitations in two stacks of parallel arrays of long Josephson junctions

    DEFF Research Database (Denmark)

    Carapella, G.; Constabile, Giovanni; Latempa, R.

    2000-01-01

    We investigate a structure consisting of two parallel arrays of long Josephson junctions sharing a common electrode that allows inductive coupling between the arrays. A model for this structure is derived starting from the description of its continuous limit. The excitation of linear cavity modes...... known from continuous and discrete systems as well as the excitation of a new state exhibiting synchronization in two dimensions are inferred from the mathematical model of the system. The stable nonlinear solution of the coupled sine-Gordon equations describing the system is found to consist...

  16. Peformance Tuning and Evaluation of a Parallel Community Climate Model

    Energy Technology Data Exchange (ETDEWEB)

    Drake, J.B.; Worley, P.H.; Hammond, S.

    1999-11-13

    The Parallel Community Climate Model (PCCM) is a message-passing parallelization of version 2.1 of the Community Climate Model (CCM) developed by researchers at Argonne and Oak Ridge National Laboratories and at the National Center for Atmospheric Research in the early to mid 1990s. In preparation for use in the Department of Energy's Parallel Climate Model (PCM), PCCM has recently been updated with new physics routines from version 3.2 of the CCM, improvements to the parallel implementation, and ports to the SGI/Cray Research T3E and Origin 2000. We describe our experience in porting and tuning PCCM on these new platforms, evaluating the performance of different parallel algorithm options and comparing performance between the T3E and Origin 2000.

  17. The global stability of a delayed predator-prey system with two stage-structure

    International Nuclear Information System (INIS)

    Wang Fengyan; Pang Guoping

    2009-01-01

    Based on the classical delayed stage-structured model and the Lotka-Volterra predator-prey model, we introduce and study a delayed predator-prey system, where prey and predator each have two stages, an immature stage and a mature stage. The time delays are the lengths of time between birth and maturity for the prey and predator species. Results on global asymptotic stability of nonnegative equilibria of the delay system are given, which generalize earlier results and suggest that good continuity exists between the predator-prey system and its corresponding stage-structured system.

  18. Short term load forecasting: two stage modelling

    Directory of Open Access Journals (Sweden)

    SOARES, L. J.

    2009-06-01

    Full Text Available This paper studies the hourly electricity load demand in the area covered by a utility situated in Seattle, USA, called Puget Sound Power and Light Company, and our proposal is tested on this company's well-known dataset. We propose a stochastic model which employs ANN (Artificial Neural Networks) to model short-run dynamics and the dependence among adjacent hours. The proposed model treats each hour's load separately as an individual single series. This approach avoids modeling the intricate intra-day pattern (load profile) displayed by the load, which varies throughout days of the week and seasons. The forecasting performance of the model is evaluated in a similar fashion to the TLSAR (Two-Level Seasonal Autoregressive) model proposed by Soares (2003), using the years 1995 and 1996 as the holdout sample. Moreover, we conclude that nonlinearity is present in some series of these data. The model results are analyzed. The experiment shows that our tool can be used to produce load forecasts in tropical-climate places.
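
    A minimal sketch of the "one model per hour" idea follows: 24 independent regressors, each trained on its own hourly series with a few daily lags. MLPRegressor and the chosen lags (1, 2 and 7 days) are illustrative stand-ins for the paper's ANN specification, not its actual architecture.

```python
# One independent model per hour of the day, each fed lagged values of
# that hour's own series. Data and lag choices are toy assumptions.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
days = 200
load = rng.random((days, 24))               # stand-in hourly load matrix

models = {}
for hour in range(24):
    y = load[7:, hour]                      # target: this hour's load
    X = np.column_stack([load[7 - k:days - k, hour] for k in (1, 2, 7)])
    models[hour] = MLPRegressor(hidden_layer_sizes=(8,), max_iter=2000,
                                random_state=0).fit(X, y)

# One-step-ahead forecast for hour 18, using lags of 1, 2 and 7 days:
x_new = load[[-1, -2, -7], 18].reshape(1, -1)
print(models[18].predict(x_new))
```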

  19. Possibilities Of Opening Up the Stage-Gate Model

    Directory of Open Access Journals (Sweden)

    Biljana Stošić

    2014-12-01

    Full Text Available The paper presents the basic elements of the Stage-Gate and Open Innovation models, and a possible connection of these two, resulting in what is frequently called an “Open Stage-Gate” model. This connection is based on opening up the new product development process and integrating the open innovation principles with the Stage-Gate concept, facilitating the import and export of information and technologies. Bearing in mind that the Stage-Gate was originally classified as a third-generation model of innovation, the paper deals with the capabilities for applying the sixth-generation Open Innovation principles in today’s improved and much more flexible phases and gates of the Stage-Gate. Many innovative companies are actually using both models in their NPD practice, looking for the most appropriate means of opening up the well-known closed innovation, especially in the domain of ideation through co-creation.

  20. Structured building model reduction toward parallel simulation

    Energy Technology Data Exchange (ETDEWEB)

    Dobbs, Justin R. [Cornell University]; Hencey, Brondon M. [Cornell University]

    2013-08-26

    Building energy model reduction exchanges accuracy for improved simulation speed by reducing the number of dynamical equations. Parallel computing aims to improve simulation times without loss of accuracy but is poorly utilized by contemporary simulators and is inherently limited by inter-processor communication. This paper bridges these disparate techniques to implement efficient parallel building thermal simulation. We begin with a survey of three structured reduction approaches that compares their performance to a leading unstructured method. We then use structured model reduction to find thermal clusters in the building energy model and allocate processing resources. Experimental results demonstrate faster simulation and low error without any interprocessor communication.

  1. Parallelization of a Quantum-Classic Hybrid Model For Nanoscale Semiconductor Devices

    Directory of Open Access Journals (Sweden)

    Oscar Salas

    2011-07-01

    Full Text Available The expensive reengineering of sequential software and the difficulty of parallel programming are two of the many technical and economic obstacles to the wide use of HPC. We investigate the chance to rapidly improve the performance of a serial numerical code for the simulation of the transport of charged carriers in a Double-Gate MOSFET. We introduce the Drift-Diffusion-Schrödinger-Poisson (DDSP) model and we study a rapid parallelization strategy for the numerical procedure on shared-memory architectures.

  2. Two-Stage Centrifugal Fan

    Science.gov (United States)

    Converse, David

    2011-01-01

    Fan designs are often constrained by envelope, rotational speed, weight, and power. Aerodynamic performance and motor electrical performance are heavily influenced by rotational speed. The fan used in this work is at a practical limit for rotational speed due to motor performance characteristics, and there is no more space available in the packaging for a larger fan, yet the pressure rise requirements keep growing. The ordinary way to accommodate a higher pressure rise (DP) is to spin faster or grow the fan rotor diameter. The invention is to put two radially oriented stages on a single disk. Flow enters the first stage from the center; energy is imparted to the flow in the first-stage blades, the flow is redirected some amount opposite to the direction of rotation in the fixed stators, and more energy is imparted to the flow in the second-stage blades. Without increasing either rotational speed or disk diameter, it is believed that as much as 50 percent more DP can be achieved with this design than with an ordinary, single-stage centrifugal design. This invention is useful primarily for fans having relatively low flow rates with relatively high pressure rise requirements.

  3. Parallel processing and non-uniform grids in global air quality modeling

    NARCIS (Netherlands)

    Berkvens, P.J.F.; Bochev, Mikhail A.

    2002-01-01

    A large-scale global air quality model, running efficiently on a single vector processor, is enhanced to make more realistic and more long-term simulations feasible. Two strategies are combined: non-uniform grids and parallel processing. The communication through the hierarchy of non-uniform grids

  4. Kinetics of two-stage fermentation process for the production of hydrogen

    Energy Technology Data Exchange (ETDEWEB)

    Nath, Kaushik [Department of Chemical Engineering, G.H. Patel College of Engineering and Technology, Vallabh Vidyanagar 388 120, Gujarat (India)]; Muthukumar, Manoj; Kumar, Anish; Das, Debabrata [Fermentation Technology Laboratory, Department of Biotechnology, Indian Institute of Technology, Kharagpur 721302 (India)]

    2008-02-15

    The two-stage process described in the present work is a combination of dark and photofermentation in a sequential batch mode. In the first stage glucose is fermented to acetate, CO2 and H2 in an anaerobic dark fermentation by Enterobacter cloacae DM11. This is followed by a successive second stage where acetate is converted to H2 and CO2 in a photobioreactor by the photosynthetic bacterium Rhodobacter sphaeroides O.U. 001. The yield of hydrogen in the first stage was about 3.31 mol H2/mol glucose (approximately 82% of theoretical) and that in the second stage was about 1.5-1.72 mol H2/mol acetic acid (approximately 37-43% of theoretical). The overall yield of hydrogen in the two-stage process, considering glucose as the preliminary substrate, was found to be higher compared to a single-stage process. The Monod model, with incorporation of a substrate inhibition term, has been used to determine the growth kinetic parameters for the first stage. The values of the maximum specific growth rate (mu_max) and the saturation constant (K_s) were 0.398 h^-1 and 5.509 g L^-1, respectively, using glucose as substrate. The experimental substrate and biomass concentration profiles have good resemblance with those obtained by kinetic model predictions. A model based on the logistic equation has been developed to describe the growth of R. sphaeroides O.U. 001 in the second stage. The modified Gompertz equation was applied to estimate the hydrogen production potential, rate and lag phase time in a batch process for various initial concentrations of glucose, based on the cumulative hydrogen production curves. Both the curve fitting and statistical analysis showed that the equation was suitable to describe the progress of cumulative hydrogen production. (author)
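
    The modified Gompertz curve named in the abstract is commonly written as H(t) = P exp(-exp(Rm e/P (lam - t) + 1)), with P the production potential, Rm the maximum production rate and lam the lag time. The sketch below evaluates it with illustrative parameter values, not the paper's estimates.

```python
# Modified Gompertz model for cumulative hydrogen production.
import numpy as np

def gompertz_h2(t, P, Rm, lam):
    """Cumulative H2 volume at time t (same units as P)."""
    return P * np.exp(-np.exp(Rm * np.e / P * (lam - t) + 1.0))

t = np.linspace(0, 48, 7)                            # hours
print(gompertz_h2(t, P=1200.0, Rm=80.0, lam=6.0))    # mL H2, toy values
```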

  5. One Factor or Two Parallel Processes? Comorbidity and Development of Adolescent Anxiety and Depressive Disorder Symptoms

    Science.gov (United States)

    Hale, William W., III; Raaijmakers, Quinten A. W.; Muris, Peter; van Hoof, Anne; Meeus, Wim H. J.

    2009-01-01

    Background: This study investigates whether anxiety and depressive disorder symptoms of adolescents from the general community are best described by a model that assumes they are indicative of one general factor or by a model that assumes they are two distinct disorders with parallel growth processes. Additional analyses were conducted to explore…

  6. Stratified steady and unsteady two-phase flows between two parallel plates

    International Nuclear Information System (INIS)

    Sim, Woo Gun

    2006-01-01

    To understand the fluid dynamic forces acting on a structure subjected to two-phase flow, it is essential to get detailed information about the characteristics of the two-phase flow. Stratified steady and unsteady two-phase flows between two parallel plates have been studied to investigate the general characteristics of the flow related to flow-induced vibration. Based on the spectral collocation method, a numerical approach has been developed for the unsteady two-phase flow. The method is validated by comparing the numerical result to the analytical one given for a simple harmonic two-phase flow. The flow parameters for the steady two-phase flow, such as void fraction and two-phase frictional multiplier, are evaluated. The dynamic characteristics of the unsteady two-phase flow, including the void fraction effect on the complex unsteady pressure, are illustrated.

  7. Two-stage Bayesian model to evaluate the effect of air pollution on chronic respiratory diseases using drug prescriptions.

    Science.gov (United States)

    Blangiardo, Marta; Finazzi, Francesco; Cameletti, Michela

    2016-08-01

    Exposure to high levels of air pollutant concentration is known to be associated with respiratory problems which can translate into higher morbidity and mortality rates. The link between air pollution and population health has mainly been assessed considering air quality and hospitalisation or mortality data. However, this approach limits the analysis to individuals characterised by severe conditions. In this paper we evaluate the link between air pollution and respiratory diseases using general practice drug prescriptions for chronic respiratory diseases, which allow conclusions to be drawn about the general population. We propose a two-stage statistical approach: in the first stage we specify a space-time model to estimate the monthly NO2 concentration, integrating several data sources characterised by different spatio-temporal resolutions; in the second stage we link the concentration to the β2-agonists prescribed monthly by general practices in England and we model the prescription rates through a small-area approach. Copyright © 2016 Elsevier Ltd. All rights reserved.

  8. Illicit drug use and abuse/dependence: modeling of two-stage variables using the CCC approach.

    Science.gov (United States)

    Agrawal, A; Neale, M C; Jacobson, K C; Prescott, C A; Kendler, K S

    2005-06-01

    Drug use and abuse/dependence are stages of a complex drug habit. Most genetically informative models that are fit to twin data examine drug use and abuse/dependence independently of each other. This poses an interesting question: for a multistage process, how can we partition the factors influencing each stage specifically from the factors that are common to both stages? We used a causal-common-contingent (CCC) model to partition the common and specific influences on drug use and abuse/dependence. Data on use and abuse/dependence of cannabis, cocaine, sedatives, stimulants and any illicit drug were obtained from male and female twin pairs. CCC models were tested individually for each sex and in a sex-equal model. Our results suggest that there is evidence for additive genetic, shared environmental and unique environmental influences that are common to illicit drug use and abuse/dependence. Furthermore, we found substantial evidence for factors that were specific to abuse/dependence. Finally, the sexes could be equated for all illicit drugs. The findings of this study emphasize the need for models that can partition the sources of individual differences into common and stage-specific influences.

  9. Computational Modelling of Large Scale Phage Production Using a Two-Stage Batch Process

    Directory of Open Access Journals (Sweden)

    Konrad Krysiak-Baltyn

    2018-04-01

    Full Text Available Cost effective and scalable methods for phage production are required to meet an increasing demand for phage, as an alternative to antibiotics. Computational models can assist the optimization of such production processes. A model is developed here that can simulate the dynamics of phage population growth and production in a two-stage, self-cycling process. The model incorporates variable infection parameters as a function of bacterial growth rate and employs ordinary differential equations, allowing application to a setup with multiple reactors. The model provides simple cost estimates as a function of key operational parameters including substrate concentration, feed volume and cycling times. For the phage and bacteria pairing examined, costs and productivity varied by three orders of magnitude, with the lowest cost found to be most sensitive to the influent substrate concentration and low level setting in the first vessel. An example case study of phage production is also presented, showing how parameter values affect the production costs and estimating production times. The approach presented is flexible and can be used to optimize phage production at laboratory or factory scale by minimizing costs or maximizing productivity.
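
    A hedged sketch of such a two-stage setup follows: vessel 1 grows the host on substrate (Monod kinetics), and vessel 2 receives the grown host plus phage and models adsorption, infection and lysis with ODEs. The structure and all parameter values are illustrative simplifications, not the published model.

```python
# Toy two-stage phage production model: growth vessel feeding an
# infection/lysis vessel, integrated with scipy's ODE solver.
import numpy as np
from scipy.integrate import solve_ivp

mu_max, Ks, Y = 0.8, 0.2, 5e8            # 1/h, g/L, cells per g substrate
k_ads, burst, k_lys = 1e-10, 100.0, 1.0  # adsorption, burst size, lysis 1/h

def stage1(t, y):                  # y = [substrate g/L, host cells/L]
    S, X = y
    mu = mu_max * S / (Ks + S)     # Monod growth rate
    return [-mu * X / Y, mu * X]

def stage2(t, y):                  # y = [host, infected, free phage]
    X, I, P = y
    ads = k_ads * X * P            # new infections per unit time
    return [-ads, ads - k_lys * I, burst * k_lys * I - ads]

g = solve_ivp(stage1, (0.0, 10.0), [5.0, 1e6])
X_end = g.y[1, -1]                 # host grown in stage 1
h = solve_ivp(stage2, (0.0, 6.0), [X_end, 0.0, 1e4], method="LSODA")
print(f"final phage titre ~ {h.y[2, -1]:.2e} per litre")
```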

  10. Modelling of an air-cooled two-stage Rankine cycle for electricity production

    International Nuclear Information System (INIS)

    Liu, Bo

    2014-01-01

    This work considers a two-stage Rankine cycle architecture slightly different from a standard Rankine cycle for electricity generation. Instead of expanding the steam to extremely low pressure, the vapor leaves the turbine at a higher pressure, where it has a much smaller specific volume; it is thus possible to greatly reduce the size of the steam turbine. The remaining energy is recovered by a bottoming cycle using a working fluid which has a much higher density than the water steam. Thus, the turbines and heat exchangers are more compact and the turbine exhaust velocity loss is lower. This configuration makes it possible to largely reduce the overall size of the steam turbine and facilitates the use of a dry cooling system. The main advantage of such an air-cooled two-stage Rankine cycle is the possibility of choosing the installation site of a large or medium power plant without the need for a large and constantly available water source; in addition, as compared to water-cooled cycles, the risk regarding future operations is reduced (climate conditions may affect water availability or temperature, and imply changes in the water supply regulatory rules). The concept has been investigated by EDF R and D. A 22 MW prototype was developed in the 1970s using ammonia as the working fluid of the bottoming cycle, for its high density and high latent heat; however, this fluid is toxic. In order to find more suitable working fluids for the two-stage Rankine cycle application and to identify the optimal cycle configuration, we have established a working-fluid selection methodology. Some potential candidates have been identified. We have evaluated the performance of the two-stage Rankine cycles operating with different working fluids in both design and off-design conditions. For the most acceptable working fluids, components of the cycle have been sized. The power plant concept can then be evaluated on a life-cycle cost basis. (author)

  11. Two stage approach to dynamic soil structure interaction

    International Nuclear Information System (INIS)

    Nelson, I.

    1981-01-01

    A two-stage approach is used to reduce the effective size of the soil island required to solve dynamic soil-structure interaction problems. The fictitious boundaries of the conventional soil island are chosen sufficiently far from the structure so that the presence of the structure causes only a slight perturbation of the soil response near the boundaries. While the resulting finite element model of the soil-structure system can be solved, it requires a formidable computational effort. Currently, a two-stage approach is used to reduce this effort. The combined soil-structure system has many frequencies and wavelengths. For a stiff structure, the lowest frequencies are those associated with the motion of the structure as a rigid body. In the soil, these modes have the longest wavelengths and attenuate most slowly. The higher-frequency deformational modes of the structure have shorter wavelengths and their effect attenuates more rapidly with distance from the structure. The difference in soil response between a computation with a refined structural model and one with a crude model tends towards zero a very short distance from the structure. In the current work, the 'crude model' is a rigid structure with the same geometry and inertial properties as the refined model. Preliminary calculations indicated that a rigid structure would be a good low-frequency approximation to the actual structure, provided the structure was much stiffer than the native soil. (orig./RW)

  12. Evaluating damping elements for two-stage suspension vehicles

    Directory of Open Access Journals (Sweden)

    Ronald M. Martinod R.

    2012-01-01

    Full Text Available The technical state of the damping elements for a vehicle having two-stage suspension was evaluated by using numerical models based on the multi-body system theory; a set of virtual tests used the eigenproblem mathematical method. A test was developed based on experimental modal analysis (EMA applied to a physical system as the basis for validating the numerical models. The study focused on evaluating vehicle dynamics to determine the influence of the dampers’ technical state in each suspension state.

  13. Memory Retrieval Given Two Independent Cues: Cue Selection or Parallel Access?

    Science.gov (United States)

    Rickard, Timothy C.; Bajic, Daniel

    2004-01-01

    A basic but unresolved issue in the study of memory retrieval is whether multiple independent cues can be used concurrently (i.e., in parallel) to recall a single, common response. A number of empirical results, as well as potentially applicable theories, suggest that retrieval can proceed in parallel, though Rickard (1997) set forth a model that…

  14. A new parallelization algorithm of ocean model with explicit scheme

    Science.gov (United States)

    Fu, X. D.

    2017-08-01

    This paper focuses on the parallelization of an ocean model with an explicit scheme, one of the most commonly used schemes in the discretization of the governing equations of ocean models. The characteristic of the explicit scheme is that the calculation is simple and the value at a given grid point depends only on values at the previous time step, which means that one does not need to solve sparse linear equations in the process of solving the governing equations of the ocean model. Exploiting these characteristics, this paper designs a parallel algorithm named halo cells update, which requires only tiny modifications of the original ocean model and little change to its space step and time step, and which parallelizes the ocean model through a transmission module between sub-domains. The paper takes the GRGO (Global Reduced Gravity Ocean model) as an example and implements its parallelization with halo update. The results demonstrate that higher speedup can be achieved at larger problem sizes.
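
    The sketch below illustrates the halo-update idea with plain numpy: the domain is split into sub-domains padded with one ghost (halo) cell per side, halos are refreshed from the neighbours before each explicit step, and only interior cells are updated. In a real distributed run the copies in exchange() would be messages between processes; the diffusion update is a toy stand-in for the ocean model's explicit scheme.

```python
# Minimal in-process demonstration of halo-cell updates for an
# explicit stencil on a periodic 1-D domain split into two halves.
import numpy as np

nx, steps, c = 16, 5, 0.4            # grid size, time steps, stability factor
u = np.sin(2 * np.pi * np.arange(nx) / nx)

# Split into two sub-domains, each padded with one halo cell per side.
halves = [np.empty(nx // 2 + 2) for _ in range(2)]
halves[0][1:-1], halves[1][1:-1] = u[:nx // 2], u[nx // 2:]

def exchange(a, b):
    # Periodic layout: a's right neighbour is b, b's right neighbour is a.
    a[-1], b[0] = b[1], a[-2]        # interface between a and b
    b[-1], a[0] = a[1], b[-2]        # wrap-around interface

for _ in range(steps):
    exchange(*halves)                # refresh halos, then step interiors
    for h in halves:                 # explicit diffusion update, interior only
        h[1:-1] += c * (h[2:] - 2 * h[1:-1] + h[:-2])

result = np.concatenate([h[1:-1] for h in halves])
print(result.round(3))
```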

  15. Kinetics analysis of two-stage austenitization in supermartensitic stainless steel

    DEFF Research Database (Denmark)

    Nießen, Frank; Villa, Matteo; Hald, John

    2017-01-01

    The martensite-to-austenite transformation in X4CrNiMo16-5-1 supermartensitic stainless steel was followed in-situ during isochronal heating at 2, 6 and 18 K min−1 applying energy-dispersive synchrotron X-ray diffraction at the BESSY II facility. Austenitization occurred in two stages, separated...... that the austenitization kinetics is governed by Ni-diffusion and that slow transformation kinetics separating the two stages is caused by soft impingement in the martensite phase. Increasing the lath width in the kinetics model had a similar effect on the austenitization kinetics as increasing the heating-rate....

  16. Parallel programming practical aspects, models and current limitations

    CERN Document Server

    Tarkov, Mikhail S

    2014-01-01

    Parallel programming is designed for the use of parallel computer systems for solving time-consuming problems that cannot be solved on a sequential computer in a reasonable time. These problems can be divided into two classes: (1) processing large data arrays (including processing images and signals in real time), and (2) simulation of complex physical processes and chemical reactions. For each of these classes, prospective methods are designed for solving problems. For data processing, one of the most promising technologies is the use of artificial neural networks. The particle-in-cell method and cellular automata are very useful for simulation. Problems of scalability of parallel algorithms and the transfer of existing parallel programs to future parallel computers are very acute now. An important task is to optimize the use of the equipment (including the CPU cache) of parallel computers. Along with parallelizing information processing, it is essential to ensure the processing reliability by the relevant organization ...

  17. Parallel education: what is it?

    OpenAIRE

    Amos, Michelle Peta

    2017-01-01

    In the history of education it has long been discussed that single-sex and coeducation are the two models of education present in schools. With the introduction of parallel schools over the last 15 years, there has been very little research into this 'new model'. Many people do not understand what it means for a school to be parallel or they confuse a parallel model with co-education, due to the presence of both boys and girls within the one institution. Therefore, the main obj...

  18. Two-stage simplified swarm optimization for the redundancy allocation problem in a multi-state bridge system

    International Nuclear Information System (INIS)

    Lai, Chyh-Ming; Yeh, Wei-Chang

    2016-01-01

    The redundancy allocation problem involves configuring an optimal system structure with high reliability and low cost, either by replacing elements with more reliable elements and/or by forming them redundantly. The multi-state bridge system is a special redundancy allocation problem and is commonly used in various engineering systems for load balancing and control. Traditional methods for the redundancy allocation problem cannot solve multi-state bridge systems efficiently because it is impossible to transfer and reduce a multi-state bridge system to series and parallel combinations. Hence, a swarm-based approach called two-stage simplified swarm optimization is proposed in this work to effectively and efficiently solve the redundancy allocation problem in a multi-state bridge system. For validating the proposed method, two experiments are implemented. The computational results indicate the advantages of the proposed method in terms of solution quality and computational efficiency. - Highlights: • Propose two-stage SSO (SSO_TS) to deal with RAP in a multi-state bridge system. • A dynamic upper bound enhances the efficiency of searching for near-optimal solutions. • Vector-update stages reduce the problem dimensions. • Statistical results indicate SSO_TS is robust both in solution quality and runtime.

  19. A Two-Stage Queue Model to Optimize Layout of Urban Drainage System considering Extreme Rainstorms

    Directory of Open Access Journals (Sweden)

    Xinhua He

    2017-01-01

    Full Text Available Extreme rainstorms are a main cause of urban floods when the urban drainage system cannot discharge stormwater successfully. This paper investigates the distribution features of rainstorms and the draining process of urban drainage systems and uses a two-stage single-counter queue method, M/M/1→M/D/1, to model the urban drainage system. The model emphasizes the randomness of extreme rainstorms, the fuzziness of the draining process, and the construction and operation cost of the drainage system. Its two objectives are the total cost of construction and operation and the overall sojourn time of stormwater. An improved genetic algorithm is redesigned to solve this complex nondeterministic problem, incorporating the stochastic and fuzzy characteristics of the whole drainage process. A numerical example in Shanghai illustrates how to implement the model, and comparisons with alternative algorithms show its performance in computational flexibility and efficiency. A discussion of the sensitivity of four main parameters, that is, quantity of pump stations, drainage pipe diameter, rainstorm precipitation intensity, and confidence levels, is also presented to provide guidance for designing urban drainage systems.
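
    For intuition on the M/M/1→M/D/1 structure, the mean sojourn time through each stage has a closed form: 1/(mu - lam) for the M/M/1 stage and, by the Pollaczek-Khinchine formula with deterministic service, rho/(2 mu (1 - rho)) + 1/mu for the M/D/1 stage. The sketch below evaluates both with illustrative rates, not the Shanghai case-study values.

```python
# Mean sojourn times for the two queueing stages named in the abstract.

def mm1_sojourn(lam, mu):
    assert lam < mu, "queue must be stable"
    return 1.0 / (mu - lam)                 # classic M/M/1 result

def md1_sojourn(lam, mu):
    assert lam < mu, "queue must be stable"
    rho = lam / mu
    wq = rho / (2.0 * mu * (1.0 - rho))     # Pollaczek-Khinchine, D service
    return wq + 1.0 / mu                    # waiting plus service time

lam, mu1, mu2 = 0.8, 1.2, 1.0               # arrivals/services per hour, toy
total = mm1_sojourn(lam, mu1) + md1_sojourn(lam, mu2)
print(f"expected sojourn through both stages: {total:.2f} h")
```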

  20. A Smoothing Algorithm for a New Two-Stage Stochastic Model of Supply Chain Based on Sample Average Approximation

    Directory of Open Access Journals (Sweden)

    Liu Yang

    2017-01-01

    Full Text Available We construct a new two-stage stochastic model of a supply chain with multiple factories and distributors for perishable product. By introducing a second-order stochastic dominance (SSD) constraint, we can describe the preference consistency of the risk taker while minimizing the expected cost of the company. To solve this problem, we first convert it into an equivalent one-stage stochastic model; then we use the sample average approximation (SAA) method to approximate the expected values of the underlying random functions. A smoothing approach is proposed with which we can get the global solution and avoid introducing new variables and constraints. Meanwhile, we investigate the convergence of the optimal value of the transformed model and show that, with probability approaching one at an exponential rate, the optimal value converges to its counterpart as the sample size increases. Numerical results show the effectiveness of the proposed algorithm and analysis.
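
    A minimal illustration of the SAA step follows: the expected second-stage cost is replaced by a sample average over demand scenarios, and the resulting deterministic objective is minimized. The newsvendor-style recourse cost is an illustrative stand-in for the paper's multi-factory model, and no SSD constraint or smoothing is included.

```python
# Sample average approximation (SAA) for a toy two-stage decision:
# pick a first-stage quantity x; second-stage cost depends on demand.
import numpy as np
from scipy.optimize import minimize_scalar

rng = np.random.default_rng(1)
demand = rng.gamma(shape=4.0, scale=25.0, size=5000)   # scenario sample
c, p, h = 1.0, 4.0, 0.5     # unit cost, shortage penalty, holding cost

def saa_cost(x):
    shortage = np.maximum(demand - x, 0.0)
    excess = np.maximum(x - demand, 0.0)
    return c * x + np.mean(p * shortage + h * excess)  # sample average

res = minimize_scalar(saa_cost, bounds=(0.0, 500.0), method="bounded")
print(f"SAA order quantity ~ {res.x:.1f}, cost ~ {res.fun:.2f}")
```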

  1. An application of analyzing the trajectories of two disorders: A parallel piecewise growth model of substance use and attention-deficit/hyperactivity disorder.

    Science.gov (United States)

    Mamey, Mary Rose; Barbosa-Leiker, Celestina; McPherson, Sterling; Burns, G Leonard; Parks, Craig; Roll, John

    2015-12-01

    Researchers often want to examine 2 comorbid conditions simultaneously. One strategy to do so is through the use of parallel latent growth curve modeling (LGCM). This statistical technique allows for the simultaneous evaluation of 2 disorders to determine the explanations and predictors of change over time. Additionally, a piecewise model can help identify whether there are more than 2 growth processes within each disorder (e.g., during a clinical trial). A parallel piecewise LGCM was applied to self-reported attention-deficit/hyperactivity disorder (ADHD) and self-reported substance use symptoms in 303 adolescents enrolled in cognitive-behavioral therapy treatment for a substance use disorder and receiving either oral-methylphenidate or placebo for ADHD across 16 weeks. Assessing these 2 disorders concurrently allowed us to determine whether elevated levels of 1 disorder predicted elevated levels or increased risk of the other disorder. First, a piecewise growth model measured ADHD and substance use separately. Next, a parallel piecewise LGCM was used to estimate the regressions across disorders to determine whether higher scores at baseline of the disorders (i.e., ADHD or substance use disorder) predicted rates of change in the related disorder. Finally, treatment was added to the model to predict change. While the analyses revealed no significant relationships across disorders, this study explains and applies a parallel piecewise growth model to examine the developmental processes of comorbid conditions over the course of a clinical trial. Strengths of piecewise and parallel LGCMs for other addictions researchers interested in examining dual processes over time are discussed. (PsycINFO Database Record (c) 2015 APA, all rights reserved).

  2. Constraint treatment techniques and parallel algorithms for multibody dynamic analysis. Ph.D. Thesis

    Science.gov (United States)

    Chiou, Jin-Chern

    1990-01-01

    Computational procedures for kinematic and dynamic analysis of three-dimensional multibody dynamic (MBD) systems are developed from the differential-algebraic equations (DAEs) viewpoint. Constraint violations during the time integration process are minimized, and penalty constraint stabilization techniques and partitioning schemes are developed. To treat the governing equations of motion, a two-stage staggered explicit-implicit numerical algorithm is developed that takes advantage of a partitioned solution procedure. A robust and parallelizable integration algorithm is developed. This algorithm uses a two-stage staggered central difference algorithm to integrate the translational coordinates and the angular velocities. The angular orientations of bodies in MBD systems are then obtained by using an implicit algorithm via the kinematic relationship between Euler parameters and angular velocities. It is shown that the combination of the present solution procedures yields a computationally more accurate solution. To speed up the computational procedures, parallel implementation of the present constraint treatment techniques and the two-stage staggered explicit-implicit numerical algorithm was efficiently carried out. The DAEs and the constraint treatment techniques were transformed into arrowhead matrices, from which a Schur complement form was derived. By fully exploiting sparse-matrix structural analysis techniques, a parallel preconditioned conjugate gradient numerical algorithm is used to solve the system equations written in Schur complement form. A software testbed was designed and implemented on both sequential and parallel computers. This testbed was used to demonstrate the robustness and efficiency of the constraint treatment techniques, the accuracy of the two-stage staggered explicit-implicit numerical algorithm, and the speedup of the Schur-complement-based parallel preconditioned conjugate gradient algorithm on a parallel computer.
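
    In the spirit of the staggered scheme described above (a toy single-body sketch, not the thesis' multibody algorithm), the snippet advances translations with an explicit central-difference/leapfrog step and then updates the orientation from the freshly integrated angular velocity, mirroring the explicit/implicit split. The force and torque laws are illustrative assumptions.

```python
# Toy staggered explicit-implicit step: explicit translations and angular
# velocity, orientation updated from the newly computed angular velocity.
import numpy as np

dt, steps = 1e-3, 5000
x = np.zeros(3); v = np.array([1.0, 0.0, 0.0])    # translation state
theta, omega = 0.3, 0.0                           # planar orientation state

def force(x, v):   return -4.0 * x - 0.1 * v      # toy spring-damper
def torque(theta): return -2.0 * np.sin(theta)    # toy restoring torque

v = v + 0.5 * dt * force(x, v)        # half-step start for the staggering
for _ in range(steps):
    x = x + dt * v                    # stage 1: explicit translation update
    omega = omega + dt * torque(theta)            # explicit angular velocity
    theta = theta + dt * omega        # orientation from the *new* omega
                                      # (the semi-implicit kinematic update)
    v = v + dt * force(x, v)          # staggered velocity update
print(x.round(4), round(theta, 4))
```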

  3. A simulation-based interval two-stage stochastic model for agricultural nonpoint source pollution control through land retirement

    International Nuclear Information System (INIS)

    Luo, B.; Li, J.B.; Huang, G.H.; Li, H.L.

    2006-01-01

    This study presents a simulation-based interval two-stage stochastic programming (SITSP) model for agricultural nonpoint source (NPS) pollution control through land retirement under uncertain conditions. The modeling framework was established through the development of an interval two-stage stochastic program, with its random parameters provided by statistical analysis of the simulation outcomes of a distributed water quality approach. The developed model can deal with the tradeoff between agricultural revenue and 'off-site' water quality concerns under random effluent discharge for a land retirement scheme, by minimizing the expected value of the long-term total economic and environmental cost. In addition, the uncertainties, presented as interval numbers in the agriculture-water system, can be effectively quantified with interval programming. By subdividing the whole agricultural watershed into different zones, the most pollution-sensitive cropland can be identified and an optimal land retirement scheme can be obtained through the modeling approach. The developed method was applied to the Swift Current Creek watershed in Canada for soil erosion control through land retirement. The Hydrological Simulation Program-FORTRAN (HSPF) was used to simulate the sediment information for this case study. Obtained results indicate that the total economic and environmental cost of the entire agriculture-water system can be limited within an interval value for the optimal land retirement schemes. Meanwhile, best and worst land retirement schemes were obtained for the study watershed under various uncertainties.

  4. A cooperation model based on CVaR measure for a two-stage supply chain

    Science.gov (United States)

    Xu, Xinsheng; Meng, Zhiqing; Shen, Rui

    2015-07-01

    In this paper, we introduce a cooperation model (CM) for the two-stage supply chain consisting of a manufacturer and a retailer. In this model, it is supposed that the objective of the manufacturer is to maximise his/her profit, while the objective of the retailer is to minimise his/her CVaR so as to control the risk originating from fluctuations in market demand. In reality, the manufacturer and the retailer would each like to choose their own wholesale price and order quantity to optimise their own objectives, with the result that the expected decision of the manufacturer and that of the retailer may conflict with each other. Then, to achieve cooperation, the manufacturer and the retailer both need to make some concessions. The proposed model aims to coordinate the decisions of the manufacturer and the retailer, and to balance the concessions of the two in their cooperation. We introduce an s*-optimal equilibrium solution in this model, which can determine the minimum concession that the manufacturer and the retailer need to make for their cooperation, and prove that the s*-optimal equilibrium solution can be obtained by solving a goal programming problem. Further, the case of different concessions made by the manufacturer and the retailer is also discussed. Numerical results show that the CM is efficient in dealing with cooperation between the manufacturer and the retailer.
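
    For concreteness, the retailer's risk measure can be evaluated empirically in the Rockafellar-Uryasev form CVaR_a(L) = min_t { t + E[(L - t)+] / (1 - a) }. The loss model below (a fixed order quantity against random demand) is an illustrative assumption, not the paper's formulation.

```python
# Empirical CVaR of a retailer's loss under random demand.
import numpy as np

rng = np.random.default_rng(7)
demand = rng.normal(100.0, 30.0, 10_000).clip(min=0.0)
w, r, q = 5.0, 9.0, 110.0          # wholesale price, retail price, order qty

loss = w * q - r * np.minimum(demand, q)       # negative of retailer profit

def cvar(losses, alpha=0.95):
    t = np.quantile(losses, alpha)              # VaR attains the minimum
    return t + np.mean(np.maximum(losses - t, 0.0)) / (1.0 - alpha)

print(f"VaR95 = {np.quantile(loss, 0.95):.1f}, CVaR95 = {cvar(loss):.1f}")
```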

  5. Large-scale parallel configuration interaction. II. Two- and four-component double-group general active space implementation with application to BiH

    DEFF Research Database (Denmark)

    Knecht, Stefan; Jensen, Hans Jørgen Aagaard; Fleig, Timo

    2010-01-01

    We present a parallel implementation of a large-scale relativistic double-group configuration interaction CIprogram. It is applicable with a large variety of two- and four-component Hamiltonians. The parallel algorithm is based on a distributed data model in combination with a static load balanci...

  6. Parallel Optimization of 3D Cardiac Electrophysiological Model Using GPU

    Directory of Open Access Journals (Sweden)

    Yong Xia

    2015-01-01

    Full Text Available Large-scale 3D virtual heart model simulations are highly demanding in computational resources. This imposes a big challenge on traditional CPU-based computing resources, which either cannot meet the whole computational demand or are not easily available due to expensive costs. GPUs as a parallel computing environment therefore provide an alternative for solving the large-scale computational problems of whole-heart modeling. In this study, using a 3D sheep atrial model as a test bed, we developed a GPU-based simulation algorithm to simulate the conduction of electrical excitation waves in the 3D atria. In the GPU algorithm, the multicellular tissue model was split into two components: one is the single-cell model (ordinary differential equations) and the other is the diffusion term of the monodomain model (partial differential equation). Such a decoupling enabled realization of the GPU parallel algorithm. Furthermore, several optimization strategies were proposed based on the features of the virtual heart model, which enabled a 200-fold speedup as compared to a CPU implementation. In conclusion, an optimized GPU algorithm has been developed that provides an economic and powerful platform for 3D whole-heart simulations.
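
    The ODE/PDE decoupling is easy to see in serial form: each time step first advances every cell's local ODEs (the pointwise loop that maps naturally onto one GPU thread per cell) and then applies the monodomain diffusion term. A FitzHugh-Nagumo-style cell model stands in here for the detailed atrial cell model; it is an illustrative assumption, not the study's equations.

```python
# Operator-split step on a 1-D fibre: pointwise cell ODEs, then diffusion.
import numpy as np

n, dt, D = 128, 0.05, 0.1
v = np.zeros(n); v[:8] = 1.0         # voltage, stimulated at one end
w = np.zeros(n)                      # recovery variable

for _ in range(2000):
    # split step 1: pointwise cell ODEs (the GPU-kernel analogue)
    dv = v * (v - 0.1) * (1.0 - v) - w
    dw = 0.01 * (0.5 * v - w)
    v += dt * dv
    w += dt * dw
    # split step 2: diffusion with no-flux boundaries
    vp = np.pad(v, 1, mode="edge")
    v += dt * D * (vp[2:] - 2 * v + vp[:-2])

print(f"wavefront position ~ {int(np.argmax(v))}")
```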

  7. Parallel thermal radiation transport in two dimensions

    International Nuclear Information System (INIS)

    Smedley-Stevenson, R.P.; Ball, S.R.

    2003-01-01

    This paper describes the distributed-memory parallel implementation of a deterministic thermal radiation transport algorithm in a 2-dimensional ALE hydrodynamics code. The parallel algorithm consists of a variety of components which are combined to produce a state-of-the-art computational capability, capable of solving large thermal radiation transport problems using Blue-Oak, the 3 teraflop MPP (massively parallel processing) computing facility at AWE (United Kingdom). Particular aspects of the parallel algorithm are described, together with examples of the performance on some challenging applications. (author)

  9. Parallel community climate model: Description and user's guide

    Energy Technology Data Exchange (ETDEWEB)

    Drake, J.B.; Flanery, R.E.; Semeraro, B.D.; Worley, P.H. [and others]

    1996-07-15

    This report gives an overview of a parallel version of the NCAR Community Climate Model, CCM2, implemented for MIMD massively parallel computers using a message-passing programming paradigm. The parallel implementation was developed on an Intel iPSC/860 with 128 processors and on the Intel Delta with 512 processors, and the initial target platform for the production version of the code is the Intel Paragon with 2048 processors. Because the implementation uses standard, portable message-passing libraries, the code has been easily ported to other multiprocessors supporting a message-passing programming paradigm. The parallelization strategy used is to decompose the problem domain into geographical patches and assign each processor the computation associated with a distinct subset of the patches. With this decomposition, the physics calculations involve only grid points and data local to a processor and are performed in parallel. Using parallel algorithms developed for the semi-Lagrangian transport, the fast Fourier transform and the Legendre transform, both physics and dynamics are computed in parallel with minimal data movement and modest change to the original CCM2 source code. Sequential or parallel history tapes are written, and input files (in history tape format) are read sequentially by the parallel code to promote compatibility with production use of the model on other computer systems. A validation exercise has been performed with the parallel code and is detailed, along with some performance numbers on the Intel Paragon and the IBM SP2. A discussion of reproducibility of results is included. A user's guide for the PCCM2 version 2.1 on the various parallel machines completes the report. Procedures for compilation, setup and execution are given. A discussion of code internals is included for those who may wish to modify and use the program in their own research.

  10. Measuring effectiveness of a university by a parallel network DEA model

    Science.gov (United States)

    Kashim, Rosmaini; Kasim, Maznah Mat; Rahman, Rosshairy Abd

    2017-11-01

    Universities contribute significantly to the development of human capital and the socio-economic improvement of a country. Because of that, Malaysian universities have carried out various initiatives to improve their performance. Most studies have used the Data Envelopment Analysis (DEA) model to measure efficiency rather than effectiveness, even though the measurement of effectiveness is important for understanding how effective a university is in achieving its ultimate goals. A university system has two major functions, namely teaching and research, and every function has different resources based on its emphasis. Therefore, a university is actually structured as a parallel production system, with its overall effectiveness being the aggregated effectiveness of teaching and research. Hence, this paper proposes a parallel network DEA model to measure the effectiveness of a university. This model takes the internal operations of both teaching and research functions into account in computing the effectiveness of a university system. In the literature, graduates and the number of programs offered are defined as the outputs of teaching, while the employed graduates and the number of programs accredited by professional bodies are considered the outcomes for measuring teaching effectiveness. The amount of grants is regarded as the output of research, while publications of different quality are considered the outcomes of research. A system is considered effective only if all functions are effective. This model has been tested using a hypothetical set of data consisting of 14 faculties at a public university in Malaysia. The results show that none of the faculties is relatively effective in overall performance. Three faculties are effective in teaching and two faculties are effective in research. The potential applications of the parallel network DEA model allow the top management of a university to identify weaknesses in any function in their university and take rational steps for improvement.
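
    As a sketch of the building block behind such a model, the snippet below computes input-oriented CCR DEA efficiency scores by linear programming; the parallel-network extension (separate teaching and research sub-processes) is omitted, and the faculty data are hypothetical.

```python
# Input-oriented CCR DEA: for each unit j0, minimise theta subject to
# sum_j lambda_j x_j <= theta * x_{j0} and sum_j lambda_j y_j >= y_{j0}.
import numpy as np
from scipy.optimize import linprog

X = np.array([[20, 5.0], [30, 8.0], [25, 6.0], [40, 7.0]]).T  # inputs  (m,n)
Y = np.array([[12.0], [15.0], [18.0], [16.0]]).T               # outputs (s,n)
m, n = X.shape
s = Y.shape[0]

def ccr_efficiency(j0):
    # decision vector z = [theta, lambda_1..lambda_n]; minimise theta
    c = np.r_[1.0, np.zeros(n)]
    A_ub = np.vstack([
        np.hstack([-X[:, [j0]], X]),        # inputs:  sum λx - θ x0 <= 0
        np.hstack([np.zeros((s, 1)), -Y]),  # outputs: -sum λy <= -y0
    ])
    b_ub = np.r_[np.zeros(m), -Y[:, j0]]
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None)] * (1 + n))
    return res.fun

for j in range(n):
    print(f"faculty {j}: efficiency = {ccr_efficiency(j):.3f}")
```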

  11. The protection motivation theory within the stages of the transtheoretical model - stage-specific interplay of variables and prediction of exercise stage transitions.

    Science.gov (United States)

    Lippke, Sonia; Plotnikoff, Ronald C

    2009-05-01

    Two different theories of health behaviour have been chosen with the aim of theory integration: a continuous theory (protection motivation theory, PMT) and a stage model (transtheoretical model, TTM). This is the first study to test whether the stages of the TTM moderate the interrelation of PMT variables and the mediation of motivation, as well as the PMT variables' interactions in predicting stage transitions. Hypotheses were tested regarding (1) mean patterns, stage pair-comparisons and nonlinear trends using ANOVAs; (2) prediction patterns for the different stage groups employing multi-group structural equation modelling (MSEM) and nested model analyses; and (3) stage transitions using binary logistic regression analyses. Adults (N=1,602) were assessed over a 6-month period on their physical activity stages, PMT variables and subsequent behaviour. (1) Particular mean differences and nonlinear trends in all test variables were found. (2) The PMT adequately fitted the five stage groups. The MSEM revealed that covariances within threat appraisal and coping appraisal were invariant and all other constraints were stage-specific, i.e. stage was a moderator. Except for self-efficacy, motivation fully mediated the relationship between the social-cognitive variables and behaviour. (3) Predicting stage transitions with the PMT variables underscored the importance of self-efficacy. Only when threat appraisal and coping appraisal were both high was stage movement more likely in the preparation stage. The results emphasize stage-specific differences in the PMT mechanisms and hence support the stage construct. The findings may guide further theory building and research integrating different theoretical approaches.

  12. Improvements in fast-response flood modeling: desktop parallel computing and domain tracking

    Energy Technology Data Exchange (ETDEWEB)

    Judi, David R [Los Alamos National Laboratory]; Mcpherson, Timothy N [Los Alamos National Laboratory]; Burian, Steven J [UNIV. OF UTAH]

    2009-01-01

    It is becoming increasingly important to have the ability to accurately forecast flooding, as flooding accounts for the most losses due to natural disasters in the world and the United States. Flood inundation modeling has been dominated by one-dimensional approaches. These models are computationally efficient and are considered by many engineers to produce reasonably accurate water surface profiles. However, because the profiles estimated in these models must be superimposed on digital elevation data to create a two-dimensional map, the result may be sensitive to the ability of the elevation data to capture relevant features (e.g., dikes/levees, roads, walls, etc.). Moreover, one-dimensional models do not explicitly represent the complex flow processes present in floodplains and urban environments, and because two-dimensional models based on the shallow water equations have significantly greater ability to determine flow velocity and direction, the National Research Council (NRC) has recommended that two-dimensional models be used over one-dimensional models for flood inundation studies. This paper has shown that two-dimensional flood modeling computational time can be greatly reduced through the use of Java multithreading on multi-core computers, which effectively provides a means for parallel computing on a desktop computer. In addition, this paper has shown that when desktop parallel computing is coupled with a domain tracking algorithm, significant computation time can be eliminated when computations are completed only on inundated cells. The drastic reduction in computational time shown here enhances the ability of two-dimensional flood inundation models to be used as a near-real-time flood forecasting tool, engineering design tool, or planning tool. Perhaps even of greater significance, the reduction in computation time makes the incorporation of risk and uncertainty/ensemble forecasting more feasible for flood inundation modeling (NRC 2000; Sayers et al

  13. Domain decomposition parallel computing for transient two-phase flow of nuclear reactors

    Energy Technology Data Exchange (ETDEWEB)

    Lee, Jae Ryong; Yoon, Han Young [KAERI, Daejeon (Korea, Republic of)]; Choi, Hyoung Gwon [Seoul National University, Seoul (Korea, Republic of)]

    2016-05-15

    KAERI (Korea Atomic Energy Research Institute) has been developing a multi-dimensional two-phase flow code named CUPID for multi-physics and multi-scale thermal hydraulics analysis of light water reactors (LWRs). The CUPID code has been validated against a set of conceptual problems and experimental data. In this work, the CUPID code has been parallelized based on the domain decomposition method with the Message Passing Interface (MPI) library. For domain decomposition, the CUPID code provides both manual and automatic methods, the latter with the METIS library. For effective memory management, the compressed sparse row (CSR) format is adopted, which is one of the methods for representing a sparse asymmetric matrix; the CSR format saves only the non-zero values and their positions (row and column). By performing verification on a fundamental problem set, the parallelization of CUPID has been successfully confirmed. Since the scalability of a parallel simulation is generally known to be better for fine mesh systems, three different scales of mesh system are considered: 40000 meshes for the coarse mesh system, 320000 meshes for the mid-size mesh system, and 2560000 meshes for the fine mesh system. In the given geometry, both single- and two-phase calculations were conducted. In addition, two types of preconditioners for the matrix solver were compared: a diagonal and an incomplete LU preconditioner. To further enhance parallel performance, OpenMP and MPI hybrid parallel computing for the pressure solver was examined. It is revealed that the scalability of the hybrid calculation was enhanced for multi-core parallel computation.
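
    The CSR layout mentioned above stores, for each row, only the non-zero values together with their column indices and a row-offset array, as the tiny example below shows.

```python
# What the CSR format actually stores, on a small asymmetric matrix.
import numpy as np
from scipy.sparse import csr_matrix

A = np.array([[4.0, 0.0, 0.0, 1.0],
              [0.0, 3.0, 0.0, 0.0],
              [2.0, 0.0, 5.0, 0.0]])
S = csr_matrix(A)
print(S.data)      # [4. 1. 3. 2. 5.]  non-zero values, row by row
print(S.indices)   # [0 3 1 0 2]       column index of each stored value
print(S.indptr)    # [0 2 3 5]         row i spans data[indptr[i]:indptr[i+1]]
```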

  14. Implementation of a parallel version of a regional climate model

    Energy Technology Data Exchange (ETDEWEB)

    Gerstengarbe, F.W. [ed.]; Kuecken, M. [Potsdam-Institut fuer Klimafolgenforschung (PIK), Potsdam (Germany)]; Schaettler, U. [Deutscher Wetterdienst, Offenbach am Main (Germany). Geschaeftsbereich Forschung und Entwicklung]

    1997-10-01

    A regional climate model developed by the Max Planck Institute for Meteorology and the German Climate Computing Centre in Hamburg, based on the 'Europa' and 'Deutschland' models of the German Weather Service, has been parallelized and implemented on the IBM RS/6000 SP computer system of the Potsdam Institute for Climate Impact Research, including parallel input/output processing, the explicit Eulerian time-step, the semi-implicit corrections, the normal-mode initialization and the physical parameterizations of the German Weather Service. The implementation utilizes Fortran 90 and the Message Passing Interface. The parallelization strategy used is a 2D domain decomposition. This report describes the parallelization strategy, the parallel I/O organization, the influence of different domain decomposition approaches on static and dynamic load imbalances, and first numerical results. (orig.)

  15. CFD simulations of compressed air two stage rotary Wankel expander – Parametric analysis

    International Nuclear Information System (INIS)

    Sadiq, Ghada A.; Tozer, Gavin; Al-Dadah, Raya; Mahmoud, Saad

    2017-01-01

    Highlights: • A CFD ANSYS-Fluent 3D simulation of a Wankel expander is developed. • Single and two-stage expanders’ performance is compared. • Inlet and outlet port shapes and configurations are investigated. • An isentropic efficiency of 91% is achieved for the two-stage Wankel expander. - Abstract: A small-scale volumetric Wankel expander is a powerful device for small-scale power generation in compressed air energy storage (CAES) systems and organic Rankine cycles powered by different heat sources such as biomass, low-temperature geothermal, solar and waste heat, leading to significant reductions in CO_2 emissions. Wankel expanders outperform other types of expander due to their ability to produce two power pulses per revolution per chamber, in addition to higher compactness, lower noise and vibration and lower cost. In this paper, a computational fluid dynamics (CFD) model was developed using ANSYS 16.2 to simulate the flow dynamics for single and two-stage Wankel expanders and to investigate the effect of port configurations, including size and spacing, on the expander’s power output and isentropic efficiency. Also, single-stage and two-stage expanders were analysed under different operating conditions. Single-stage 3D CFD results were compared to published work showing close agreement. The CFD modelling was used to investigate the performance of the rotary device using air as an ideal gas with various port diameters ranging from 15 mm to 50 mm; port spacing varying from 28 mm to 66 mm; different Wankel expander sizes (r = 48, e = 6.6, b = 32) mm and (r = 58, e = 8, b = 40) mm, both as single-stage and as two-stage expanders with different configurations and various operating conditions. Results showed that the best Wankel expander design for a single stage was (r = 48, e = 6.6, b = 32) mm, with port diameters of 20 mm and port spacing equal to 50 mm. Moreover, combining two Wankel expanders horizontally, with the larger one at the front, produced 8.52 kW compared

  16. Construction of a digital elevation model: methods and parallelization

    International Nuclear Information System (INIS)

    Mazzoni, Christophe

    1995-01-01

    The aim of this work is to reduce the computation time needed to produce Digital Elevation Models (DEM) by using a parallel machine. It was carried out in collaboration between the French 'Institut Geographique National' (IGN) and the Laboratoire d'Electronique de Technologie et d'Instrumentation (LETI) of the French Atomic Energy Commission (CEA). The IGN has developed a system which provides DEMs used to produce topographic maps. The kernel of this system is the correlator, a piece of software which automatically matches pairs of homologous points in a stereo-pair of photographs. Nevertheless, the correlator is expensive in computing time. In order to reduce computation time and to produce DEMs with the same accuracy as the existing system, we have parallelized the IGN's correlator on the OPENVISION system. This hardware solution uses SYMPATI-2, a SIMD (Single Instruction, Multiple Data) parallel machine developed by the LETI, which is involved in parallel architecture and image processing. Our analysis of the implementation has demonstrated the difficulty of efficient coupling between scalar and parallel structures, so we propose solutions to reinforce this coupling. To further accelerate processing, we evaluate SYMPHONIE, a SIMD computer and successor of SYMPATI-2. We also developed a multi-agent approach for which a MIMD (Multiple Instruction, Multiple Data) architecture is suitable. Finally, we describe a multi-SIMD architecture that reconciles our two approaches. This architecture can efficiently handle multi-level image processing; it is flexible through its modularity, and its communication network supplies the reliability required by sensitive systems. (author) [fr]

  17. Modelling and parallel calculation of a kinetic boundary layer

    International Nuclear Information System (INIS)

    Perlat, Jean Philippe

    1998-01-01

    This research thesis aims at addressing reliability and cost issues in the numerical simulation of flows in the transition regime. The first step has been to reduce the calculation cost and memory space of the Monte Carlo method, which is known to provide performance and reliability in rarefied regimes. Vector and parallel computers allow this objective to be reached. Here, a MIMD (multiple instructions, multiple data) machine has been used, which implements parallel calculation at different levels of parallelization. Parallelization procedures have been adapted, and results showed that parallelization by calculation domain decomposition was far more efficient. Due to reliability issues related to the statistical nature of Monte Carlo methods, a new deterministic model was necessary to simulate gas molecules in the transition regime. New models and hyperbolic systems have therefore been studied. One is chosen which allows the thermodynamic values (density, average velocity, temperature, deformation tensor, heat flow) present in the Navier-Stokes equations to be determined, and the equations of evolution of the thermodynamic values are described for the mono-atomic case. Their numerical resolution is reported. A kinetic scheme is developed which complies with the structure of all such systems and which naturally expresses boundary conditions. The validation of the obtained 14-moment model is performed on shock problems and on Couette flows [fr]

  18. Parallel and distributed processing in two SGBDS: A case study

    OpenAIRE

    Francisco Javier Moreno; Nataly Castrillón Charari; Camilo Taborda Zuluaga

    2017-01-01

    Context: One of the strategies for managing large volumes of data is distributed and parallel computing. Among the tools that allow applying these characteristics are some Data Base Management Systems (DBMS), such as Oracle, DB2, and SQL Server. Method: In this paper we present a case study where we evaluate the performance of an SQL query in two of these DBMS. The evaluation is done through various forms of data distribution in a computer network with different degrees of parallelism. ...

  19. Two-stage free electron laser research

    Science.gov (United States)

    Segall, S. B.

    1984-10-01

    KMS Fusion, Inc. began studying the feasibility of two-stage free electron lasers for the Office of Naval Research in June 1980. At that time, the two-stage FEL was only a concept that had been proposed by Luis Elias. The range of parameters over which such a laser could be successfully operated, the attainable power output, and the constraints on laser operation were not known. The primary reason for supporting this research at that time was that it had the potential for producing short-wavelength radiation using a relatively low-voltage electron beam. One advantage of a low-voltage two-stage FEL would be that shielding requirements would be greatly reduced compared with single-stage short-wavelength FELs. If the electron energy were kept below about 10 MeV, X-rays generated by electrons striking the beam line wall would not excite neutron resonances in atomic nuclei. These resonances cause the emission of neutrons with subsequent induced radioactivity. Therefore, above about 10 MeV, a meter or more of concrete shielding is required for the system, whereas below 10 MeV, a few millimeters of lead would be adequate.

  20. Stage wise modeling of liquid-extraction column (RDC)

    International Nuclear Information System (INIS)

    Bastani, B.

    2004-01-01

    A stagewise forward mixing model considering coalescence and redispersion of drops was used to predict the performance of Rotating Disk Contactors (RDC) for liquid extraction. Experimental data previously obtained in two RDC columns, 7.62 cm diameter by 73.6 cm height and 21.9 cm diameter by 150 cm height, were used to evaluate the model predictions. Drop-side mass transfer coefficients were predicted applying the Handlos-Baron drop model, and Olney's model was used to predict velocities. According to the results obtained, the following can be concluded: (1) if the height of coalescence and redispersion, i.e. h = h_p Q_p / Q, can be estimated, the stagewise forward mixing model with coalescence and redispersion will predict the column height and efficiency with acceptable accuracy; (2) the stagewise modeling predictions are highly dependent on the number of stages used when the number of stages is less than 10; and (3) application of continuous-phase mass transfer and axial dispersion coefficients (k_c and E_c) obtained from the solute concentration profile along the column height will predict the column performance more accurately than the Calderbank and Moo-Young (for k_c) and Kumar-Hartland (for E_c) correlations

  1. Evaluation of a modified two-stage inferior alveolar nerve block technique: A preliminary investigation

    Directory of Open Access Journals (Sweden)

    Ashwin Rao

    2017-01-01

    Full Text Available Introduction: The two-stage technique of inferior alveolar nerve block (IANB) administration does not address the pain associated with "needle insertion" and "local anesthetic solution deposition" in the "first stage" of the injection. This study evaluated a "modified two-stage technique" with respect to the reaction of children during "needle insertion" and "local anesthetic solution deposition" during the "first stage", and compared it to the "first phase" of the IANB administered with the standard one-stage technique. Materials and Methods: This was a parallel, single-blinded comparative study. A total of 34 children (between 6 and 10 years of age) were randomly divided into two groups to receive an IANB either through the modified two-stage technique (MTST) (Group A; 15 children) or the standard one-stage technique (SOST) (Group B; 19 children). The evaluation was done using the Face Legs Activity Cry Consolability (FLACC) scale, an objective scale based on the expressions of the child. The obtained data were analyzed using Fisher's exact test with the P value set at <0.05 as the level of significance. Results: 73.7% of children in Group B indicated moderate pain during the "first phase" of SOST, whereas no children indicated such in the "first stage" of Group A. Group A had 33.3% of children who scored "0", indicating relaxed/comfortable children, compared to 0% in Group B. In Group A, 66.7% of children scored between 1-3, indicating mild discomfort, compared to 26.3% in Group B. The difference in the scores between the two groups in each category (relaxed/comfortable, mild discomfort, moderate pain) was highly significant (P < 0.001). Conclusion: The reaction of children in Group A during "needle insertion" and "local anesthetic solution deposition" in the "first stage" of MTST was significantly lower than that of Group B during the "first phase" of the SOST.

  2. Algorithms for a parallel implementation of Hidden Markov Models with a small state space

    DEFF Research Database (Denmark)

    Nielsen, Jesper; Sand, Andreas

    2011-01-01

    Two of the most important algorithms for Hidden Markov Models are the forward and the Viterbi algorithms. We show how formulating these using linear algebra naturally lends itself to parallelization. Although the obtained algorithms are slow for Hidden Markov Models with large state spaces...
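
    The matrix formulation the abstract refers to can be made concrete. Below is a minimal sketch (not the authors' code) of the scaled forward algorithm, in which each time step reduces to a matrix-vector product that a parallel BLAS or SIMD backend can accelerate; the toy transition/emission matrices are illustrative:

        import numpy as np

        def forward_loglik(A, B, pi, obs):
            """Scaled forward algorithm. A: (n, n) transitions, B: (n, m) emissions,
            pi: (n,) initial distribution, obs: sequence of observed symbol indices."""
            alpha = pi * B[:, obs[0]]
            c = alpha.sum(); alpha /= c
            log_lik = np.log(c)
            for o in obs[1:]:
                alpha = (A.T @ alpha) * B[:, o]   # one step = matrix-vector product
                c = alpha.sum(); alpha /= c       # rescale to avoid underflow
                log_lik += np.log(c)
            return log_lik

        A = np.array([[0.9, 0.1], [0.2, 0.8]])
        B = np.array([[0.7, 0.3], [0.1, 0.9]])
        print(forward_loglik(A, B, np.array([0.5, 0.5]), [0, 1, 1, 0]))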

  3. Provably optimal parallel transport sweeps on regular grids

    International Nuclear Information System (INIS)

    Adams, M. P.; Adams, M. L.; Hawkins, W. D.; Smith, T.; Rauchwerger, L.; Amato, N. M.; Bailey, T. S.; Falgout, R. D.

    2013-01-01

    We have found provably optimal algorithms for full-domain discrete-ordinate transport sweeps on regular grids in 3D Cartesian geometry. We describe these algorithms and sketch a proof that they always execute the full eight-octant sweep in the minimum possible number of stages for a given P_x × P_y × P_z partitioning. Computational results demonstrate that our optimal scheduling algorithms execute sweeps in the minimum possible stage count. Observed parallel efficiencies agree well with our performance model. An older version of our PDT transport code achieves almost 80% parallel efficiency on 131,072 cores, on a weak-scaling problem with only one energy group, 80 directions, and 4096 cells/core. A newer version is less efficient at present - we are still improving its implementation - but achieves almost 60% parallel efficiency on 393,216 cores. These results conclusively demonstrate that sweeps can perform with high efficiency on core counts approaching 10^6. (authors)

  4. Provably optimal parallel transport sweeps on regular grids

    Energy Technology Data Exchange (ETDEWEB)

    Adams, M. P.; Adams, M. L.; Hawkins, W. D. [Dept. of Nuclear Engineering, Texas A and M University, 3133 TAMU, College Station, TX 77843-3133 (United States); Smith, T.; Rauchwerger, L.; Amato, N. M. [Dept. of Computer Science and Engineering, Texas A and M University, 3133 TAMU, College Station, TX 77843-3133 (United States); Bailey, T. S.; Falgout, R. D. [Lawrence Livermore National Laboratory (United States)

    2013-07-01

    We have found provably optimal algorithms for full-domain discrete-ordinate transport sweeps on regular grids in 3D Cartesian geometry. We describe these algorithms and sketch a proof that they always execute the full eight-octant sweep in the minimum possible number of stages for a given P_x × P_y × P_z partitioning. Computational results demonstrate that our optimal scheduling algorithms execute sweeps in the minimum possible stage count. Observed parallel efficiencies agree well with our performance model. An older version of our PDT transport code achieves almost 80% parallel efficiency on 131,072 cores, on a weak-scaling problem with only one energy group, 80 directions, and 4096 cells/core. A newer version is less efficient at present - we are still improving its implementation - but achieves almost 60% parallel efficiency on 393,216 cores. These results conclusively demonstrate that sweeps can perform with high efficiency on core counts approaching 10^6. (authors)

  5. Electromagnetic Physics Models for Parallel Computing Architectures

    Science.gov (United States)

    Amadio, G.; Ananya, A.; Apostolakis, J.; Aurora, A.; Bandieramonte, M.; Bhattacharyya, A.; Bianchini, C.; Brun, R.; Canal, P.; Carminati, F.; Duhem, L.; Elvira, D.; Gheata, A.; Gheata, M.; Goulas, I.; Iope, R.; Jun, S. Y.; Lima, G.; Mohanty, A.; Nikitina, T.; Novak, M.; Pokorski, W.; Ribon, A.; Seghal, R.; Shadura, O.; Vallecorsa, S.; Wenzel, S.; Zhang, Y.

    2016-10-01

    The recent emergence of hardware architectures characterized by many-core or accelerated processors has opened new opportunities for concurrent programming models taking advantage of both SIMD and SIMT architectures. GeantV, a next generation detector simulation, has been designed to exploit both the vector capability of mainstream CPUs and multi-threading capabilities of coprocessors including NVidia GPUs and Intel Xeon Phi. The characteristics of these architectures are very different in terms of the vectorization depth and type of parallelization needed to achieve optimal performance. In this paper we describe implementation of electromagnetic physics models developed for parallel computing architectures as a part of the GeantV project. Results of preliminary performance evaluation and physics validation are presented as well.

  6. A hybrid parallel framework for the cellular Potts model simulations

    Energy Technology Data Exchange (ETDEWEB)

    Jiang, Yi [Los Alamos National Laboratory; He, Kejing [SOUTH CHINA UNIV; Dong, Shoubin [SOUTH CHINA UNIV

    2009-01-01

    The Cellular Potts Model (CPM) has been widely used for biological simulations. However, most current implementations are either sequential or approximated, and cannot be used for large-scale, complex 3D simulation. In this paper we present a hybrid parallel framework for CPM simulations. The time-consuming PDE solving, cell division, and cell reaction operations are distributed to clusters using the Message Passing Interface (MPI). The Monte Carlo lattice update is parallelized on a shared-memory SMP system using OpenMP. Because the Monte Carlo lattice update is much faster than the PDE solving, and SMP systems are increasingly common, this hybrid approach achieves good performance and high accuracy at the same time. Based on the parallel Cellular Potts Model, we studied avascular tumor growth using a multiscale model. The application and performance analysis show that the hybrid parallel framework is quite efficient. The hybrid parallel CPM can be used for large-scale simulation (~10^8 sites) of the complex collective behavior of numerous cells (~10^6).
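
    The CPM update itself is more involved, but the reason a Monte Carlo lattice update parallelizes well on shared memory can be shown with a checkerboard-decomposed Metropolis sweep. This is an Ising-style toy, not the paper's code: sites of one colour share no neighbours, so each half-sweep is data-parallel.

        import numpy as np

        def checkerboard_sweep(spins, beta, rng):
            """One Metropolis sweep over a 2D Ising lattice, one colour at a time.
            Same-colour sites do not interact, so each half-sweep is data-parallel."""
            ii, jj = np.indices(spins.shape)
            for color in (0, 1):
                mask = (ii + jj) % 2 == color
                nb = (np.roll(spins, 1, 0) + np.roll(spins, -1, 0) +
                      np.roll(spins, 1, 1) + np.roll(spins, -1, 1))
                dE = 2.0 * spins * nb                       # cost of flipping each site
                accept = rng.random(spins.shape) < np.exp(-beta * dE)
                spins[mask & accept] *= -1                  # flip one colour in bulk
            return spins

        rng = np.random.default_rng(0)
        spins = rng.choice([-1, 1], size=(64, 64))
        spins = checkerboard_sweep(spins, beta=0.4, rng=rng)
        print(spins.sum())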

  7. System-Enforced Deterministic Streaming for Efficient Pipeline Parallelism

    Institute of Scientific and Technical Information of China (English)

    张昱; 李兆鹏; 曹慧芳

    2015-01-01

    Pipeline parallelism is a popular parallel programming pattern for emerging applications. However, programming pipelines directly on conventional multithreaded shared memory is difficult and error-prone. We present DStream, a C library that provides high-level abstractions of deterministic threads and streams for simply representing pipeline stage workers and their communications. The deterministic stream is established atop our proposed single-producer/multi-consumer (SPMC) virtual memory, which integrates synchronization with the virtual memory model to enforce determinism on shared memory accesses. We investigate various strategies on how to efficiently implement DStream atop the SPMC memory, so that an infinite sequence of data items can be asynchronously published (fixed) and asynchronously consumed in order among adjacent stage workers. We have successfully transformed two representative pipeline applications - ferret and dedup - using DStream, and summarize the conversion rules. An empirical evaluation shows that the converted ferret performed on par with its Pthreads and TBB counterparts in terms of running time, while the converted dedup is close to 2.56X and 7.05X faster than the Pthreads counterpart and 1.06X and 3.9X faster than the TBB counterpart on 16 and 32 CPUs, respectively.
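
    DStream's deterministic-stream abstraction is system level, but the basic pipeline shape it supports is easy to picture with ordinary threads and queues. The following toy sketch (not DStream's API) shows two chained stage workers, each consuming items in order and publishing results downstream:

        import threading, queue

        def stage(inbox, outbox, work):
            """A pipeline stage: consume items in order, publish results downstream."""
            for item in iter(inbox.get, None):   # None is the end-of-stream marker
                outbox.put(work(item))
            outbox.put(None)

        q1, q2, q3 = queue.Queue(), queue.Queue(), queue.Queue()
        threading.Thread(target=stage, args=(q1, q2, lambda x: x * x)).start()
        threading.Thread(target=stage, args=(q2, q3, lambda x: x + 1)).start()

        for i in range(5):
            q1.put(i)
        q1.put(None)
        print(list(iter(q3.get, None)))   # [1, 2, 5, 10, 17]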

  8. A stage is a stage is a stage: a direct comparison of two scoring systems.

    Science.gov (United States)

    Dawson, Theo L

    2003-09-01

    L. Kohlberg (1969) argued that his moral stages captured a developmental sequence specific to the moral domain. To explore that contention, the author compared stage assignments obtained with the Standard Issue Scoring System (A. Colby & L. Kohlberg, 1987a, 1987b) and those obtained with a generalized content-independent stage-scoring system called the Hierarchical Complexity Scoring System (T. L. Dawson, 2002a), on 637 moral judgment interviews (participants' ages ranged from 5 to 86 years). The correlation between stage scores produced with the 2 systems was .88. Although standard issue scoring and hierarchical complexity scoring often awarded different scores up to Kohlberg's Moral Stage 2/3, from his Moral Stage 3 onward, scores awarded with the two systems predominantly agreed. The author explores the implications for developmental research.

  9. Two-stage bargaining with coverage extension in a dual labour market

    DEFF Research Database (Denmark)

    Roberts, Mark A.; Stæhr, Karsten; Tranæs, Torben

    2000-01-01

    This paper studies coverage extension in a simple general equilibrium model with a dual labour market. The union sector is characterized by two-stage bargaining whereas the firms set wages in the non-union sector. In this model firms and unions of the union sector have a commonality of interest...

  10. Heating limits of boiling downward two-phase flow in parallel channels

    International Nuclear Information System (INIS)

    Fukuda, Kenji; Kondoh, Tetsuya; Hasegawa, Shu; Sakai, Takaaki.

    1989-01-01

    Flow characteristics and heating limits of downward two-phase flow in single and parallel multi-channels are investigated experimentally and analytically. The heated section is a glass tube with a heater tube inserted in it, so that the flow regime inside can be observed. In the single-channel experiments at low flow rates, it is found that, initially, the gas phase flowing upward against the downward liquid flow condenses and diminishes as it rises, being cooled by the inflowing liquid. However, as the heating power is increased, some portion of the gas phase reaches the top and accumulates to form a liquid level, which eventually causes dryout. At high flow rates, on the other hand, flooding at the bottom of the heated section is the cause of dryout. In the parallel multi-channel experiments, reversed (upward) flow leading to dryout is observed in some of the channels at low flow rates, while the behavior is the same as in the single-channel case at high flow rates. Analyses are carried out to predict the onset of dryout in a single channel using the drift flux model as well as the Wallis flooding correlation. The two types of dryout mentioned above and the boundary between them are predicted, in good agreement with the experimental results. (author)
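
    As a rough illustration of the flooding criterion used in such analyses, the Wallis correlation sqrt(jg*) + m*sqrt(jf*) = C can be inverted for the gas flux at flooding. The constants m and C and the fluid properties below are typical illustrative values, not the authors' fitted parameters:

        import numpy as np

        def wallis_jstar(j, rho_k, rho_f, rho_g, D, g=9.81):
            """Dimensionless superficial velocity j* for a phase of density rho_k."""
            return j * np.sqrt(rho_k / (g * D * (rho_f - rho_g)))

        def flooding_gas_velocity(jf, rho_f, rho_g, D, m=1.0, C=0.725, g=9.81):
            """Gas superficial velocity at flooding for a given liquid superficial velocity."""
            jf_star = wallis_jstar(jf, rho_f, rho_f, rho_g, D, g)
            jg_star = (C - m * np.sqrt(jf_star)) ** 2
            return jg_star / np.sqrt(rho_g / (g * D * (rho_f - rho_g)))

        # Example: air-water in a 20 mm tube -> roughly 2 m/s
        print(flooding_gas_velocity(jf=0.05, rho_f=998.0, rho_g=1.2, D=0.02))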

  11. Two-stage thermal/nonthermal waste treatment process

    International Nuclear Information System (INIS)

    Rosocha, L.A.; Anderson, G.K.; Coogan, J.J.; Kang, M.; Tennant, R.A.; Wantuck, P.J.

    1993-01-01

    An innovative waste treatment technology is being developed in Los Alamos to address the destruction of hazardous organic wastes. The technology described in this report uses two stages: a packed bed reactor (PBR) in the first stage to volatilize and/or combust liquid organics and a silent discharge plasma (SDP) reactor to remove entrained hazardous compounds in the off-gas to even lower levels. We have constructed pre-pilot-scale PBR-SDP apparatus and tested the two stages separately and in combined modes. These tests are described in the report

  12. Parallel Implicit Runge-Kutta Methods Applied to Coupled Orbit/Attitude Propagation

    Science.gov (United States)

    Hatten, Noble; Russell, Ryan P.

    2017-12-01

    A variable-step Gauss-Legendre implicit Runge-Kutta (GLIRK) propagator is applied to coupled orbit/attitude propagation. Concepts previously shown to improve efficiency in 3DOF propagation are modified and extended to the 6DOF problem, including the use of variable-fidelity dynamics models. The impact of computing the stage dynamics of a single step in parallel is examined using up to 23 threads and 22 associated GLIRK stages; one thread is reserved for an extra dynamics function evaluation used in the estimation of the local truncation error. Efficiency is found to peak for typical examples when using approximately 8 to 12 stages for both serial and parallel implementations. Accuracy and efficiency compare favorably to explicit Runge-Kutta and linear-multistep solvers for representative scenarios. However, linear-multistep methods are found to be more efficient for some applications, particularly in a serial computing environment, or when parallelism can be applied across multiple trajectories.
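
    The parallelism exploited here comes from the structure of an implicit Runge-Kutta step: within each iteration of the stage solve, the stage derivatives are independent function evaluations. A minimal fixed-point sketch for the 2-stage Gauss-Legendre method follows (the paper uses many more stages, Newton-type solvers, and a truncation-error estimate; this is only the shape of the computation):

        import numpy as np

        # 2-stage Gauss-Legendre coefficients (order 4)
        S3 = np.sqrt(3.0)
        A = np.array([[0.25, 0.25 - S3 / 6.0],
                      [0.25 + S3 / 6.0, 0.25]])
        b = np.array([0.5, 0.5])
        c = np.array([0.5 - S3 / 6.0, 0.5 + S3 / 6.0])

        def glirk_step(f, t, y, h, iters=10):
            K = np.array([f(t, y), f(t, y)])   # initial guess for the stage slopes
            for _ in range(iters):
                # the two evaluations below are independent -> parallelizable
                K = np.array([f(t + c[i] * h, y + h * (A[i] @ K)) for i in range(2)])
            return y + h * (b @ K)

        # Example: y' = -y, one step from y(0) = 1; result ~ exp(-0.1)
        print(glirk_step(lambda t, y: -y, 0.0, np.array([1.0]), 0.1))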

  13. Type Synthesis of Parallel Mechanisms with the First Class GF Sets and Two-Dimensional Rotations

    Directory of Open Access Journals (Sweden)

    Jialun Yang

    2012-09-01

    Full Text Available The novel design of parallel mechanisms plays a key role in their potential applications. In this paper, the type synthesis of parallel mechanisms with the first class GF sets and two-dimensional rotations is studied. The rule of two-dimensional rotations is given, which lays the theoretical foundation for the intersection operations of specific GF sets. Next, kinematic limbs with specific characteristics are designed according to the 2-D and 3-D axis movement theorems. Finally, several synthesized parallel mechanisms with the first class GF sets and two-dimensional rotations are illustrated to show the effectiveness of the proposed methodology.

  14. Improved Path Loss Simulation Incorporating Three-Dimensional Terrain Model Using Parallel Coprocessors

    Directory of Open Access Journals (Sweden)

    Zhang Bin Loo

    2017-01-01

    Full Text Available Current network simulators abstract out wireless propagation models due to the high computation requirements of realistic modeling. As such, there is still a large gap between the results obtained from simulators and real-world scenarios. In this paper, we present a framework for improved path loss simulation built on top of an existing network simulator, NS-3. Unlike the conventional disk model, the proposed simulation also considers the diffraction loss computed using Epstein and Peterson's model, through the use of actual terrain elevation data, to give an accurate estimate of the path loss between a transmitter and a receiver. The drawback of high computation requirements is relaxed by offloading the computationally intensive components onto an inexpensive off-the-shelf parallel coprocessor, an NVIDIA GPU. Experiments are performed using actual terrain elevation data provided by the United States Geological Survey. Compared to the conventional CPU architecture, the experimental results show that a speedup of 20x to 42x is achieved by exploiting the parallel processing of the GPU to compute the path loss between two nodes using terrain elevation data. The results show that the path loss between two nodes is greatly affected by the terrain profile between them. Besides this, the results also suggest that the common strategy of placing the transmitter in the highest position may not always work.
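
    The diffraction component can be sketched concretely. In the Epstein-Peterson construction, each terrain obstacle is treated as a single knife edge between its two neighbours and the per-edge losses are summed in dB; the single-edge loss below uses the common ITU-R P.526 approximation, and the geometry values are illustrative, not from the paper:

        import numpy as np

        def knife_edge_loss_db(h, d1, d2, wavelength):
            """Single knife-edge loss; h = edge height above the local line of sight."""
            v = h * np.sqrt(2.0 * (d1 + d2) / (wavelength * d1 * d2))
            if v <= -0.78:
                return 0.0
            return 6.9 + 20.0 * np.log10(np.sqrt((v - 0.1) ** 2 + 1.0) + v - 0.1)

        def epstein_peterson_db(distances, heights, wavelength):
            """distances: positions of Tx, edges..., Rx along the path (m);
            heights: edge heights above each local sightline (assumed precomputed
            from the terrain model)."""
            loss = 0.0
            for i, h in enumerate(heights):
                d1 = distances[i + 1] - distances[i]
                d2 = distances[i + 2] - distances[i + 1]
                loss += knife_edge_loss_db(h, d1, d2, wavelength)
            return loss

        # 2.4 GHz (wavelength ~0.125 m) example with two edges
        print(epstein_peterson_db([0, 300, 700, 1000], [12.0, 8.0], 0.125))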

  15. Electromagnetic Physics Models for Parallel Computing Architectures

    International Nuclear Information System (INIS)

    Amadio, G; Bianchini, C; Iope, R; Ananya, A; Apostolakis, J; Aurora, A; Bandieramonte, M; Brun, R; Carminati, F; Gheata, A; Gheata, M; Goulas, I; Nikitina, T; Bhattacharyya, A; Mohanty, A; Canal, P; Elvira, D; Jun, S Y; Lima, G; Duhem, L

    2016-01-01

    The recent emergence of hardware architectures characterized by many-core or accelerated processors has opened new opportunities for concurrent programming models taking advantage of both SIMD and SIMT architectures. GeantV, a next generation detector simulation, has been designed to exploit both the vector capability of mainstream CPUs and multi-threading capabilities of coprocessors including NVidia GPUs and Intel Xeon Phi. The characteristics of these architectures are very different in terms of the vectorization depth and type of parallelization needed to achieve optimal performance. In this paper we describe implementation of electromagnetic physics models developed for parallel computing architectures as a part of the GeantV project. Results of preliminary performance evaluation and physics validation are presented as well. (paper)

  16. Comparison of One-Stage Free Gracilis Muscle Flap With Two-Stage Method in Chronic Facial Palsy

    Directory of Open Access Journals (Sweden)

    J Ghaffari

    2007-08-01

    Full Text Available Background: Rehabilitation of facial paralysis is one of the greatest challenges faced by reconstructive surgeons today. The traditional method for treatment of patients with facial palsy is the two-stage free gracilis flap, which has a long latency period between the two stages of surgery. Methods: In this paper, we prospectively compared the results of the one-stage gracilis flap method with the two-stage technique. Results: Out of 41 patients with facial palsy referred to Hazrat-e-Fatemeh Hospital, 31 were selected, of whom 22 underwent the two-stage and 9 the one-stage method. The two groups were identical in terms of age, sex, intensity of illness, duration, and chronicity of illness. Mean duration of follow-up was 37 months. There was no significant difference between the two groups regarding the symmetry of the face in repose, smiling, whistling, and nasolabial folds. The frequency of complications was equal in both groups, as was postoperative surgeon and patient satisfaction. There was no significant difference between the mean excursion of the muscle flap in the one-stage (9.8 mm) and two-stage (8.9 mm) groups. The ratio of contraction of the affected side compared to the normal side was similar in both groups. The mean time to initial contraction of the muscle flap in the one-stage group (5.5 months) differed significantly (P=0.001) from that in the two-stage group (6.5 months). The study revealed a highly significant difference (P=0.0001) between the mean waiting periods from the first operation to the beginning of muscle contraction in the one-stage (5.5 months) and two-stage (17.1 months) groups. Conclusion: It seems that the results and complications of the two methods are the same, but the one-stage method requires less time for facial reanimation and is cost-effective, because it saves time and decreases hospitalization costs.

  17. A low-voltage Op Amp with rail-to-rail constant-gm input stage and a class AB rail-to-rail output stage

    NARCIS (Netherlands)

    Botma, J.H.; Wassenaar, R.F.; Wiegerink, Remco J.

    1993-01-01

    In this paper a low-voltage two-stage Op Amp is presented. The Op Amp features rail-to-rail operation and has an input stage with a constant transconductance (g_m) over the entire common-mode input range. The input stage consists of an NMOS and a PMOS differential pair connected in parallel. The constant

  18. 3D printed soft parallel actuator

    Science.gov (United States)

    Zolfagharian, Ali; Kouzani, Abbas Z.; Khoo, Sui Yang; Noshadi, Amin; Kaynak, Akif

    2018-04-01

    This paper presents a 3-dimensional (3D) printed soft parallel contactless actuator for the first time. The actuator involves an electro-responsive parallel mechanism made of two segments, namely an active chain and a passive chain, both 3D printed. The active chain is attached to the ground at one end and comprises two actuator links made of responsive hydrogel. The passive chain, on the other hand, is attached to the active chain at one end and consists of two rigid links made of polymer. The actuator links are printed using an extrusion-based 3D-Bioplotter with polyelectrolyte hydrogel as printer ink. The rigid links are printed by a 3D fused deposition modelling (FDM) printer with acrylonitrile butadiene styrene (ABS) as print material. The kinematics model of the soft parallel actuator is derived via transformation-matrix notation to simulate and determine the workspace of the actuator. The printed soft parallel actuator is then immersed in NaOH solution with a specific voltage applied to it via two contactless electrodes. Experimental data are then collected and used to develop a parametric model to estimate the end-effector position and adjust the kinematics model in response to a specific input voltage over time. It is observed that the electroactive actuator behaves as expected according to the simulation of its kinematics model. The use of 3D printing for the fabrication of parallel soft actuators opens a new chapter in manufacturing sophisticated soft actuators with high dexterity and mechanical robustness for biomedical applications such as cell manipulation and drug release.
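
    The transformation-matrix formalism mentioned for the kinematics model can be illustrated with a planar two-link chain; the link lengths and joint angles below are illustrative, not the actuator's measured geometry:

        import numpy as np

        def T(theta, length):
            """Planar homogeneous transform: rotate by theta, then translate along
            the rotated link axis by the link length."""
            c, s = np.cos(theta), np.sin(theta)
            return np.array([[c, -s, length * c],
                             [s,  c, length * s],
                             [0,  0, 1.0]])

        def end_effector(thetas, lengths):
            """Chain the per-link transforms and read off the tip position."""
            M = np.eye(3)
            for th, L in zip(thetas, lengths):
                M = M @ T(th, L)
            return M[:2, 2]   # (x, y) of the chain tip

        print(end_effector([np.pi / 6, -np.pi / 4], [0.05, 0.04]))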

  19. A model for optimizing file access patterns using spatio-temporal parallelism

    Energy Technology Data Exchange (ETDEWEB)

    Boonthanome, Nouanesengsy [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Patchett, John [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Geveci, Berk [Kitware Inc., Clifton Park, NY (United States); Ahrens, James [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Bauer, Andy [Kitware Inc., Clifton Park, NY (United States); Chaudhary, Aashish [Kitware Inc., Clifton Park, NY (United States); Miller, Ross G. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Shipman, Galen M. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Williams, Dean N. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)

    2013-01-01

    For many years now, I/O read time has been recognized as the primary bottleneck for parallel visualization and analysis of large-scale data. In this paper, we introduce a model that can estimate the read time for a file stored in a parallel filesystem when given the file access pattern. Read times ultimately depend on how the file is stored and the access pattern used to read the file. The file access pattern will be dictated by the type of parallel decomposition used. We employ spatio-temporal parallelism, which combines both spatial and temporal parallelism, to provide greater flexibility to possible file access patterns. Using our model, we were able to configure the spatio-temporal parallelism to design optimized read access patterns that resulted in a speedup factor of approximately 400 over traditional file access patterns.
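
    A toy version of such a read-time model makes the trade-off visible: each process pays a per-request latency plus a transfer time at its share of the filesystem bandwidth, so a pattern with fewer, larger requests reads faster. All parameters below are illustrative, not the paper's fitted values:

        def read_time(requests_per_proc, bytes_per_request,
                      latency_s, bandwidth_Bps, nprocs_sharing):
            """Estimated per-process read time for a given access pattern."""
            transfer = bytes_per_request / (bandwidth_Bps / nprocs_sharing)
            return requests_per_proc * (latency_s + transfer)

        # Contiguous pattern: few large requests; strided pattern: many small ones.
        print(read_time(4, 64 * 2**20, 1e-3, 2 * 2**30, 8))     # ~1.0 s
        print(read_time(4096, 64 * 2**10, 1e-3, 2 * 2**30, 8))  # ~5.1 s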

  20. A fractional model with parallel fractional Maxwell elements for amorphous thermoplastics

    Science.gov (United States)

    Lei, Dong; Liang, Yingjie; Xiao, Rui

    2018-01-01

    We develop a fractional model to describe the thermomechanical behavior of amorphous thermoplastics. The fractional model is composed of two parallel fractional Maxwell elements. The first fractional Maxwell model is used to describe the glass transition, while the second component is aimed at describing the viscous flow. We further derive the analytical solutions for the stress relaxation modulus and complex modulus through Laplace transform. We then demonstrate the model is able to describe the master curves of the stress relaxation modulus, storage modulus and loss modulus, which all show two distinct transition regions. The obtained parameters show that the modulus of the two fractional Maxwell elements differs in 2-3 orders of magnitude, while the relaxation time differs in 7-9 orders of magnitude. Finally, we apply the model to describe the stress response of constant strain rate tests. The model, together with the parameters obtained from fitting the master curve of stress relaxation modulus, can accurately predict the temperature and strain rate dependent stress response.

  1. A New Two-Stage Approach to Short Term Electrical Load Forecasting

    Directory of Open Access Journals (Sweden)

    Dragan Tasić

    2013-04-01

    Full Text Available In the deregulated energy market, the accuracy of load forecasting has a significant effect on the planning and operational decision making of utility companies. Electric load is a random non-stationary process influenced by a number of factors, which makes it difficult to model. To achieve better forecasting accuracy, a wide variety of models have been proposed. These models are based on different mathematical methods and offer different features. This paper presents a new two-stage approach for short-term electrical load forecasting based on least-squares support vector machines. With the aim of improving forecasting accuracy, one more feature was added to the model feature set: the next-day average load demand. As this feature is unknown one day ahead, in the first stage the next-day average load demand is forecast, and in the second stage it is used in the model for next-day hourly load forecasting. The effectiveness of the presented model is shown on real data from the ISO New England electricity market. The obtained results confirm the validity and advantage of the proposed approach.
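
    A sketch of the two-stage idea on synthetic data, substituting scikit-learn's SVR for the paper's least-squares SVM; the data, features, and lack of a train/test split are all simplifications for illustration:

        import numpy as np
        from sklearn.svm import SVR

        rng = np.random.default_rng(0)
        days, hours = 200, 24
        load = (100 + 20 * np.sin(2 * np.pi * np.arange(days * hours) / hours)
                + rng.normal(0, 2, days * hours))
        daily = load.reshape(days, hours)
        avg = daily.mean(axis=1)

        # Stage 1: predict the next day's average load from the previous 7 averages.
        X1 = np.array([avg[i:i + 7] for i in range(days - 8)])
        y1 = avg[7:days - 1]
        avg_hat = SVR().fit(X1, y1).predict(X1)

        # Stage 2: predict each hour from yesterday's profile plus the stage-1 output.
        X2 = np.hstack([daily[7:days - 1], avg_hat[:, None]])
        y2 = daily[8:]
        preds = np.array([SVR().fit(X2, y2[:, h]).predict(X2)
                          for h in range(hours)]).T
        print(preds.shape)   # (192, 24) hourly forecasts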

  2. Algorithm comparison and benchmarking using a parallel spectra transform shallow water model

    Energy Technology Data Exchange (ETDEWEB)

    Worley, P.H. [Oak Ridge National Lab., TN (United States); Foster, I.T.; Toonen, B. [Argonne National Lab., IL (United States)

    1995-04-01

    In recent years, a number of computer vendors have produced supercomputers based on a massively parallel processing (MPP) architecture. These computers have been shown to be competitive in performance with conventional vector supercomputers for some applications. As spectral weather and climate models are heavy users of vector supercomputers, it is interesting to determine how these models perform on MPPs, and which MPPs are best suited to the execution of spectral models. The benchmarking of MPPs is complicated by the fact that different algorithms may be more efficient on different architectures. Hence, a comprehensive benchmarking effort must answer two related questions: which algorithm is most efficient on each computer, and how do the most efficient algorithms compare on different computers. In general, these are difficult questions to answer because of the high cost associated with implementing and evaluating a range of different parallel algorithms on each MPP platform.

  3. Two stage treatment of dairy effluent using immobilized Chlorella pyrenoidosa

    Science.gov (United States)

    2013-01-01

    Background Dairy effluents contains high organic load and unscrupulous discharge of these effluents into aquatic bodies is a matter of serious concern besides deteriorating their water quality. Whilst physico-chemical treatment is the common mode of treatment, immobilized microalgae can be potentially employed to treat high organic content which offer numerous benefits along with waste water treatment. Methods A novel low cost two stage treatment was employed for the complete treatment of dairy effluent. The first stage consists of treating the diary effluent in a photobioreactor (1 L) using immobilized Chlorella pyrenoidosa while the second stage involves a two column sand bed filtration technique. Results Whilst NH4+-N was completely removed, a 98% removal of PO43--P was achieved within 96 h of two stage purification processes. The filtrate was tested for toxicity and no mortality was observed in the zebra fish which was used as a model at the end of 96 h bioassay. Moreover, a significant decrease in biological oxygen demand and chemical oxygen demand was achieved by this novel method. Also the biomass separated was tested as a biofertilizer to the rice seeds and a 30% increase in terms of length of root and shoot was observed after the addition of biomass to the rice plants. Conclusions We conclude that the two stage treatment of dairy effluent is highly effective in removal of BOD and COD besides nutrients like nitrates and phosphates. The treatment also helps in discharging treated waste water safely into the receiving water bodies since it is non toxic for aquatic life. Further, the algal biomass separated after first stage of treatment was highly capable of increasing the growth of rice plants because of nitrogen fixation ability of the green alga and offers a great potential as a biofertilizer. PMID:24355316

  4. A two-level parallel direct search implementation for arbitrarily sized objective functions

    Energy Technology Data Exchange (ETDEWEB)

    Hutchinson, S.A.; Shadid, N.; Moffat, H.K. [Sandia National Labs., Albuquerque, NM (United States)] [and others]

    1994-12-31

    In the past, many optimization schemes for massively parallel computers have attempted to achieve parallel efficiency using one of two methods. In the case of large and expensive objective function calculations, the optimization itself may be run in serial and the objective function calculations parallelized. In contrast, if the objective function calculations are relatively inexpensive and can be performed on a single processor, then the actual optimization routine itself may be parallelized. In this paper, a scheme based upon the Parallel Direct Search (PDS) technique is presented which allows the objective function calculations to be done on an arbitrarily large number (p_2) of processors. If p, the number of processors available, is greater than or equal to 2p_2, then the optimization may be parallelized as well. This allows for efficient use of computational resources, since the objective function calculations can be performed on the number of processors that allows for peak parallel efficiency, and further speedup may then be achieved by parallelizing the optimization. Results are presented for an optimization problem which involves the solution of a PDE using a finite-element algorithm as part of the objective function calculation. The optimum number of processors for the finite-element calculations is less than p/2; thus, the PDS method is also parallelized. Performance comparisons are given for an nCUBE 2 implementation.
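
    The outer loop of a direct search parallelizes naturally because the candidate evaluations are independent. A compact sketch (a compass-search variant, not the PDS implementation from the paper) that farms the objective calls out to a process pool:

        import numpy as np
        from concurrent.futures import ProcessPoolExecutor

        def rosenbrock(x):
            """Toy objective standing in for an expensive simulation."""
            return float((1 - x[0])**2 + 100 * (x[1] - x[0]**2)**2)

        def pds_minimize(f, x0, step=0.5, iters=50):
            x, fx = np.asarray(x0, float), f(x0)
            dirs = np.vstack([np.eye(len(x0)), -np.eye(len(x0))])
            with ProcessPoolExecutor() as pool:
                for _ in range(iters):
                    cands = [x + step * d for d in dirs]
                    vals = list(pool.map(f, cands))   # parallel objective calls
                    i = int(np.argmin(vals))
                    if vals[i] < fx:
                        x, fx = cands[i], vals[i]     # move to the best candidate
                    else:
                        step *= 0.5                   # contract the pattern
            return x, fx

        if __name__ == "__main__":
            print(pds_minimize(rosenbrock, [-1.2, 1.0]))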

  5. Multiobjective Two-Stage Stochastic Programming Problems with Interval Discrete Random Variables

    Directory of Open Access Journals (Sweden)

    S. K. Barik

    2012-01-01

    Full Text Available Most real-life decision-making problems have more than one conflicting and incommensurable objective function. In this paper, we present a multiobjective two-stage stochastic linear programming problem, considering some parameters of the linear constraints as interval-type discrete random variables with known probability distributions. Randomness of the discrete intervals is considered for the model parameters. Further, the concepts of best optimum and worst optimum solutions are analyzed in two-stage stochastic programming. To solve the stated problem, we first remove the randomness of the problem and formulate an equivalent deterministic linear programming model with multiobjective interval coefficients. The deterministic multiobjective model is then solved using the weighting method, where we apply the solution procedure of interval linear programming. We obtain the upper and lower bounds of the objective function as the best and the worst values, respectively. This highlights the possible risk involved in the decision-making tool. A numerical example is presented to demonstrate the proposed solution procedure.
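
    Stripped of the interval coefficients and the multiple objectives, the deterministic equivalent of a small two-stage stochastic LP is easy to write down and solve. A toy capacity/recourse example with illustrative numbers only: choose capacity x now, then buy recourse y_s per scenario s to cover demand d_s.

        import numpy as np
        from scipy.optimize import linprog

        p = np.array([0.3, 0.5, 0.2])        # scenario probabilities
        d = np.array([80.0, 100.0, 130.0])   # scenario demands
        c_first, c_recourse = 1.0, 1.5       # recourse is dearer than capacity

        # Decision vector z = [x, y1, y2, y3]; minimize x-cost + expected recourse.
        c = np.concatenate([[c_first], c_recourse * p])
        A_ub = np.hstack([-np.ones((3, 1)), -np.eye(3)])   # -x - y_s <= -d_s
        res = linprog(c, A_ub=A_ub, b_ub=-d, bounds=[(0, None)] * 4)
        print(res.x)   # optimal x = 100, recourse only in the high-demand scenario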

  6. An experimentally validated simulation model for a four-stage spray dryer

    DEFF Research Database (Denmark)

    Petersen, Lars Norbert; Poulsen, Niels Kjølstad; Niemann, Hans Henrik

    2017-01-01

    mathematical model is an index-1 differential algebraic equation (DAE) model with 12 states, 9 inputs, 8 disturbances, and 30 parameters. The parameters in the model are identified from well-excited experimental data obtained from the industrial-type spray dryer. The simulated outputs of the model are validated...... is divided into four consecutive stages: a primary spray drying stage, two heated fluid bed stages, and a cooling fluid bed stage. Each of these stages in the model is assumed ideally mixed and the dynamics are described by mass and energy balances. These balance equations are coupled with constitutive...... equations such as a thermodynamic model, the water evaporation rate, the heat transfer rates, and an equation for the stickiness of the powder (glass transition temperature). Laboratory data is used to model the equilibrium moisture content and the glass transition temperature of the powder. The resulting...

  7. Bistatic scattering from a three-dimensional object above a two-dimensional randomly rough surface modeled with the parallel FDTD approach.

    Science.gov (United States)

    Guo, L-X; Li, J; Zeng, H

    2009-11-01

    We present an investigation of the electromagnetic scattering from a three-dimensional (3-D) object above a two-dimensional (2-D) randomly rough surface. A Message Passing Interface-based parallel finite-difference time-domain (FDTD) approach is used, and the uniaxial perfectly matched layer (UPML) medium is adopted for truncation of the FDTD lattices, in which the finite-difference equations can be used for the total computation domain by properly choosing the uniaxial parameters. This makes the parallel FDTD algorithm easier to implement. The parallel performance with different number of processors is illustrated for one rough surface realization and shows that the computation time of our parallel FDTD algorithm is dramatically reduced relative to a single-processor implementation. Finally, the composite scattering coefficients versus scattered and azimuthal angle are presented and analyzed for different conditions, including the surface roughness, the dielectric constants, the polarization, and the size of the 3-D object.

  8. Fast parallel algorithm for three-dimensional distance-driven model in iterative computed tomography reconstruction

    International Nuclear Information System (INIS)

    Chen Jian-Lin; Li Lei; Wang Lin-Yuan; Cai Ai-Long; Xi Xiao-Qi; Zhang Han-Ming; Li Jian-Xin; Yan Bin

    2015-01-01

    The projection matrix model is used to describe the physical relationship between the reconstructed object and the projection. Such a model has a strong influence on projection and backprojection, two vital operations in iterative computed tomographic reconstruction. The distance-driven model (DDM) is a state-of-the-art technique for simulating forward and back projections. This model has low computational complexity and relatively high spatial resolution; however, few methods run it as a parallel operation with a matched model scheme. This study introduces a fast and parallelizable algorithm to improve the traditional DDM for computing the parallel projection and backprojection operations. Our proposed model has been implemented on a GPU (graphics processing unit) platform and has achieved satisfactory computational efficiency with no approximation. The runtime for the projection and backprojection operations with our model is approximately 4.5 s and 10.5 s per loop, respectively, for an image size of 256×256×256 and 360 projections of size 512×512. We compare several general algorithms that have been proposed for maximizing GPU efficiency by using unmatched projection/backprojection models in a parallel computation. The imaging resolution is not sacrificed and remains accurate during computed tomographic reconstruction. (paper)

  9. Distance-two interpolation for parallel algebraic multigrid

    International Nuclear Information System (INIS)

    Sterck, H de; Falgout, R D; Nolting, J W; Yang, U M

    2007-01-01

    In this paper we study the use of long distance interpolation methods with the low complexity coarsening algorithm PMIS. AMG performance and scalability is compared for classical as well as long distance interpolation methods on parallel computers. It is shown that the increased interpolation accuracy largely restores the scalability of AMG convergence factors for PMIS-coarsened grids, and in combination with complexity reducing methods, such as interpolation truncation, one obtains a class of parallel AMG methods that enjoy excellent scalability properties on large parallel computers

  10. Two-stage energy storage equalization system for lithium-ion battery pack

    Science.gov (United States)

    Chen, W.; Yang, Z. X.; Dong, G. Q.; Li, Y. B.; He, Q. Y.

    2017-11-01

    How to raise the efficiency of energy storage and maximize storage capacity is a core problem in current energy storage management. To that end, a two-stage energy storage equalization system, comprising a two-stage equalization topology and a control strategy based on a symmetric multi-winding transformer and a DC-DC (direct current-direct current) converter, is proposed using bidirectional active equalization theory, in order to achieve consistent voltages across lithium-ion battery packs and across the cells inside each pack, using the range method. Modeling analysis demonstrates that the voltage dispersion of lithium-ion battery packs and of the cells inside packs can be kept within 2 percent during charging and discharging. The equalization time was 0.5 ms, a reduction of 33.3 percent compared with the DC-DC converter alone. Therefore, the proposed two-stage lithium-ion battery equalization system can achieve maximum storage capacity between lithium-ion battery packs and cells inside packs, while the efficiency of energy storage is significantly improved.

  11. One-stage versus two-stage exchange arthroplasty for infected total knee arthroplasty: a systematic review.

    Science.gov (United States)

    Nagra, Navraj S; Hamilton, Thomas W; Ganatra, Sameer; Murray, David W; Pandit, Hemant

    2016-10-01

    Infection complicating total knee arthroplasty (TKA) has serious implications. Traditionally the debate on whether one- or two-stage exchange arthroplasty is the optimum management of infected TKA has favoured two-stage procedures; however, a paradigm shift in opinion is emerging. This study aimed to establish whether current evidence supports one-stage revision for managing infected TKA based on reinfection rates and functional outcomes post-surgery. MEDLINE/PubMed and CENTRAL databases were reviewed for studies that compared one- and two-stage exchange arthroplasty TKA in more than ten patients with a minimum 2-year follow-up. From an initial sample of 796, five cohort studies with a total of 231 patients (46 single-stage/185 two-stage; median patient age 66 years, range 61-71 years) met inclusion criteria. Overall, there were no significant differences in risk of reinfection following one- or two-stage exchange arthroplasty (OR -0.06, 95 % confidence interval -0.13, 0.01). Subgroup analysis revealed that in studies published since 2000, one-stage procedures have a significantly lower reinfection rate. One study investigated functional outcomes and reported that one-stage surgery was associated with superior functional outcomes. Scarcity of data, inconsistent study designs, surgical technique and antibiotic regime disparities limit recommendations that can be made. Recent studies suggest one-stage exchange arthroplasty may provide superior outcomes, including lower reinfection rates and superior function, in select patients. Clinically, for some patients, one-stage exchange arthroplasty may represent optimum treatment; however, patient selection criteria and key components of surgical and post-operative anti-microbial management remain to be defined. III.

  12. VALIDATION OF CRACK INTERACTION LIMIT MODEL FOR PARALLEL EDGE CRACKS USING TWO-DIMENSIONAL FINITE ELEMENT ANALYSIS

    Directory of Open Access Journals (Sweden)

    R. Daud

    2013-06-01

    Full Text Available Shielding interaction effects of two parallel edge cracks in finite-thickness plates subjected to remote tension load are analyzed using a developed finite element analysis program. In the present study, the crack interaction limit is evaluated based on the fitness-for-service (FFS) code, and focus is given to the weak crack interaction region where the crack interval exceeds the crack length (b > a). Crack interaction factors are evaluated based on stress intensity factors (SIFs) for Mode I, computed using a displacement extrapolation technique. Parametric studies involved a wide range of crack-to-width ratios (0.05 ≤ a/W ≤ 0.5) and crack interval ratios (b/a > 1). For validation, crack interaction factors are compared with single edge crack SIFs as a state of zero interaction. Within the considered range of parameters, the proposed numerical evaluation used to predict the crack interaction factor reduces the error of the existing analytical solution from 1.92% to 0.97% at higher a/W. In reference to FFS codes, the small discrepancy in the prediction of the crack interaction factor validates the reliability of the numerical model to predict crack interaction limits under shielding interaction effects. In conclusion, the numerical model successfully predicts the crack interaction limit, which can be used as a reference for the shielding orientation of other cracks.
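
    The zero-interaction reference used for validation can be sketched with the standard handbook solution for a single edge crack in a finite-width plate under remote tension; the geometry-factor polynomial below is the commonly tabulated one and should be treated as an assumption, not the paper's exact expression:

        import numpy as np

        def sif_single_edge_crack(sigma, a, W):
            """Mode I SIF K_I = Y(a/W) * sigma * sqrt(pi * a), valid for a/W <~ 0.6."""
            r = a / W
            Y = 1.12 - 0.231 * r + 10.55 * r**2 - 21.72 * r**3 + 30.39 * r**4
            return Y * sigma * np.sqrt(np.pi * a)

        # 100 MPa remote tension, a = 5 mm crack in a 50 mm wide plate
        K_single = sif_single_edge_crack(sigma=100e6, a=0.005, W=0.05)
        # The crack interaction factor is then K_parallel / K_single,
        # with K_parallel taken from the finite element analysis.
        print(K_single / 1e6, "MPa*sqrt(m)")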

  13. Overview of the Force Scientific Parallel Language

    Directory of Open Access Journals (Sweden)

    Gita Alaghband

    1994-01-01

    Full Text Available The Force parallel programming language designed for large-scale shared-memory multiprocessors is presented. The language provides a number of parallel constructs as extensions to the ordinary Fortran language and is implemented as a two-level macro preprocessor to support portability across shared memory multiprocessors. The global parallelism model on which the Force is based provides a powerful parallel language. The parallel constructs, generic synchronization, and freedom from process management supported by the Force has resulted in structured parallel programs that are ported to the many multiprocessors on which the Force is implemented. Two new parallel constructs for looping and functional decomposition are discussed. Several programming examples to illustrate some parallel programming approaches using the Force are also presented.

  14. Two-stage image denoising considering interscale and intrascale dependencies

    Science.gov (United States)

    Shahdoosti, Hamid Reza

    2017-11-01

    A solution to the problem of reducing the noise of grayscale images is presented. To consider the intrascale and interscale dependencies, this study makes use of a model. It is shown that the dependency between a wavelet coefficient and its predecessors can be modeled by the first-order Markov chain, which means that the parent conveys all of the information necessary for efficient estimation. Using this fact, the proposed method employs the Kalman filter in the wavelet domain for image denoising. The proposed method has two stages. The first stage employs a simple denoising algorithm to provide the noise-free image, by which the parameters of the model such as state transition matrix, variance of the process noise, the observation model, and the covariance of the observation noise are estimated. In the second stage, the Kalman filter is applied to the wavelet coefficients of the noisy image to estimate the noise-free coefficients. In fact, the Kalman filter is used to estimate the coefficients of high-frequency subbands from the coefficients of coarser scales and noisy observations of neighboring coefficients. In this way, both the interscale and intrascale dependencies are taken into account. Results are presented and discussed on a set of standard 8-bit grayscale images. The experimental results demonstrate that the proposed method achieves performances competitive with the state-of-the-art denoising methods in terms of both peak-signal-to-noise ratio and subjective visual quality.
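
    A drastically simplified scalar analogue of the second stage shows the predict/update structure of a Kalman filter applied to noisy coefficients; the model parameters F, Q, H, R are assumed already estimated in the first stage, and all values below are illustrative:

        import numpy as np

        def kalman_1d(z, F=0.9, Q=0.1, H=1.0, R=0.5, x0=0.0, P0=1.0):
            """Scalar Kalman filter: estimate noise-free x_t from observations z_t."""
            x, P, out = x0, P0, []
            for zt in z:
                x, P = F * x, F * P * F + Q                    # predict
                K = P * H / (H * P * H + R)                    # Kalman gain
                x, P = x + K * (zt - H * x), (1 - K * H) * P   # update
                out.append(x)
            return np.array(out)

        rng = np.random.default_rng(1)
        clean = np.sin(np.linspace(0, 4 * np.pi, 100))
        print(kalman_1d(clean + rng.normal(0, 0.7, 100))[:5])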

  15. Parallelization of the model-based iterative reconstruction algorithm DIRA

    International Nuclear Information System (INIS)

    Oertenberg, A.; Sandborg, M.; Alm Carlsson, G.; Malusek, A.; Magnusson, M.

    2016-01-01

    New paradigms for parallel programming have been devised to simplify software development on multi-core processors and many-core graphical processing units (GPU). Despite their obvious benefits, the parallelization of existing computer programs is not an easy task. In this work, the use of the Open Multiprocessing (OpenMP) and Open Computing Language (OpenCL) frameworks is considered for the parallelization of the model-based iterative reconstruction algorithm DIRA with the aim to significantly shorten the code's execution time. Selected routines were parallelized using OpenMP and OpenCL libraries; some routines were converted from MATLAB to C and optimised. Parallelization of the code with the OpenMP was easy and resulted in an overall speedup of 15 on a 16-core computer. Parallelization with OpenCL was more difficult owing to differences between the central processing unit and GPU architectures. The resulting speedup was substantially lower than the theoretical peak performance of the GPU; the cause was explained. (authors)

  16. Effect of ammoniacal nitrogen on one-stage and two-stage anaerobic digestion of food waste

    International Nuclear Information System (INIS)

    Ariunbaatar, Javkhlan; Scotto Di Perta, Ester; Panico, Antonio; Frunzo, Luigi; Esposito, Giovanni; Lens, Piet N.L.; Pirozzi, Francesco

    2015-01-01

    Highlights: • Almost 100% of the biomethane potential of food waste was recovered during AD in a two-stage CSTR. • Recirculation of the liquid fraction of the digestate provided the necessary buffer in the AD reactors. • A higher OLR (0.9 gVS/L·d) led to higher accumulation of TAN, which caused more toxicity. • A two-stage reactor is more sensitive to elevated concentrations of ammonia. • The IC50 of TAN for the AD of food waste amounts to 3.8 g/L. - Abstract: This research compares the operation of one-stage and two-stage anaerobic continuously stirred tank reactor (CSTR) systems fed semi-continuously with food waste. The main purpose was to investigate the effects of ammoniacal nitrogen on the anaerobic digestion process. The two-stage system gave more reliable operation compared to one-stage due to: (i) a better pH self-adjusting capacity; (ii) a higher resistance to organic loading shocks; and (iii) a higher conversion rate of organic substrate to biomethane. Also a small amount of biohydrogen was detected from the first stage of the two-stage reactor making this system attractive for biohythane production. As the digestate contains ammoniacal nitrogen, re-circulating it provided the necessary alkalinity in the systems, thus preventing an eventual failure by volatile fatty acids (VFA) accumulation. However, re-circulation also resulted in an ammonium accumulation, yielding a lower biomethane production. Based on the batch experimental results the 50% inhibitory concentration of total ammoniacal nitrogen on the methanogenic activities was calculated as 3.8 g/L, corresponding to 146 mg/L free ammonia for the inoculum used for this research. The two-stage system was affected by the inhibition more than the one-stage system, as it requires less alkalinity and the physically separated methanogens are more sensitive to inhibitory factors, such as ammonium and propionic acid
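
    The TAN/free-ammonia relation quoted at the end can be checked with the common Anthonisen-type equilibrium equation; the pH and temperature below are assumed typical mesophilic values, not parameters reported by the authors:

        import numpy as np

        def free_ammonia(tan_mg_l, pH, temp_c):
            """Free NH3 (mg/L) from total ammoniacal nitrogen (mg/L),
            using the Anthonisen-type temperature-dependent equilibrium."""
            frac = 10**pH / (np.exp(6344.0 / (273.15 + temp_c)) + 10**pH)
            return tan_mg_l * frac

        # ~133 mg/L at pH 7.5 and 35 C -- the same order as the reported 146 mg/L
        print(free_ammonia(3800.0, pH=7.5, temp_c=35.0))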

  17. Staging of gastric adenocarcinoma using two-phase spiral CT: correlation with pathologic staging

    International Nuclear Information System (INIS)

    Seo, Tae Seok; Lee, Dong Ho; Ko, Young Tae; Lim, Joo Won

    1998-01-01

    To correlate the preoperative staging of gastric adenocarcinoma using two-phase spiral CT with pathologic staging. One hundred and eighty patients with gastric cancers confirmed during surgery underwent two-phase spiral CT and were evaluated retrospectively. CT scans were obtained in the prone position after ingestion of water. Scans were performed 35 and 80 seconds after the start of infusion of 120 mL of non-ionic contrast material at a rate of 3 mL/sec. A 5 mm collimation, 7 mm/sec table feed, and 5 mm reconstruction interval were used. T- and N-stage were determined from the spiral CT images without knowledge of the pathologic results. Pathologic staging was later compared with CT staging. Pathologic T-stage was T1 in 70 cases (38.9%), T2 in 33 (18.3%), T3 in 73 (40.6%), and T4 in 4 (2.2%). Type I or IIa elevated lesions accounted for 10 of 70 T1 cases (14.3%) and flat or depressed lesions (type IIb, IIc, or III) for 60 (85.7%). Pathologic N-stage was N0 in 85 cases (47.2%), N1 in 42 (23.3%), N2 in 31 (17.2%), and N3 in 22 (12.2%). The detection rate of early gastric cancer using two-phase spiral CT was 100.0% (10 of 10 cases) among elevated lesions and 78.3% (47 of 60 cases) among flat or depressed lesions. With regard to T-stage, there was good correlation between CT imaging and pathology in 86 of 180 cases (47.8%). Overstaging occurred in 23.3% (42 of 180 cases) and understaging in 28.9% (52 of 180 cases). With regard to N-stage, good correlation between CT imaging and pathology was noted in 94 of 180 cases (52.2%). The rate of understaging (31.7%, 57 of 180 cases) was higher than that of overstaging (16.1%, 29 of 180 cases) (p<0.001). The detection rate of early gastric cancer using two-phase spiral CT was 81.4%, and there was no significant difference in detectability between elevated and depressed lesions. Two-phase spiral CT for determining the T- and N-stage of gastric cancer was not highly effective; it was accurate in about 50% of cases, and understaging tended to occur.

  18. A simple image-reject mixer based on two parallel phase modulators

    Science.gov (United States)

    Hu, Dapeng; Zhao, Shanghong; Zhu, Zihang; Li, Xuan; Qu, Kun; Lin, Tao; Zhang, Kun

    2018-02-01

    A simple photonic microwave image-reject mixer (IRM) using two parallel phase modulators is proposed. First, a photonic microwave mixer with phase-shifting ability is realized using two parallel phase modulators (PMs), an optical bandpass filter, three polarization controllers, three polarization beam splitters, and two balanced photodetectors. At the output of the mixer, two frequency-downconverted signals with a tunable phase difference can be obtained. By setting the phase difference to 90° and utilizing an electrical 90° hybrid, the unwanted components can be eliminated and image-reject operation is realized. The key advantage of the proposed scheme is the use of PMs, which avoids the DC bias drift problem and makes the system simple and stable. A simulation was performed to verify the proposed scheme: a relative -90° or 90° phase shift can be obtained between the two output ports of the photonic microwave mixer, and at the output of the IRM a 60 dB image-reject ratio is obtained.

  19. Distributed parallel computing in stochastic modeling of groundwater systems.

    Science.gov (United States)

    Dong, Yanhui; Li, Guomin; Xu, Haizhen

    2013-03-01

    Stochastic modeling is a rapidly evolving, popular approach to the study of the uncertainty and heterogeneity of groundwater systems. However, the use of Monte Carlo-type simulations to solve practical groundwater problems often encounters computational bottlenecks that hinder the acquisition of meaningful results. To improve the computational efficiency, a system that combines stochastic model generation with MODFLOW-related programs and distributed parallel processing is investigated. The distributed computing framework, called the Java Parallel Processing Framework, is integrated into the system to allow the batch processing of stochastic models in distributed and parallel systems. As an example, the system is applied to the stochastic delineation of well capture zones in the Pinggu Basin in Beijing. Through the use of 50 processing threads on a cluster with 10 multicore nodes, the execution times of 500 realizations are reduced to 3% compared with those of a serial execution. Through this application, the system demonstrates its potential in solving difficult computational problems in practical stochastic modeling.

  20. Device-independent parallel self-testing of two singlets

    Science.gov (United States)

    Wu, Xingyao; Bancal, Jean-Daniel; McKague, Matthew; Scarani, Valerio

    2016-06-01

    Device-independent self-testing offers the possibility of certifying the quantum state and measurements, up to local isometries, using only the statistics observed by querying uncharacterized local devices. In this paper we study parallel self-testing of two maximally entangled pairs of qubits; in particular, the local tensor product structure is not assumed but derived. We prove two criteria that achieve the desired result: a double use of the Clauser-Horne-Shimony-Holt inequality and the 3 × 3 magic square game. This demonstrates that the magic square game can only be perfectly won by measuring a two-singlet state. The tolerance to noise is well within reach of state-of-the-art experiments.

  1. Numerical modelling of the HAB Energy Buoy: Stage 1

    DEFF Research Database (Denmark)

    Kurniawan, Adi

    This report presents the results of the first stage of the project "Numerical modelling of the HAB Energy Buoy". The objectives of this stage are to develop a numerical model of the HAB Energy Buoy, a self-reacting wave energy device consisting of two heaving bodies, and to investigate a number...... and a summary of the main findings is presented. A numerical model of the HAB Energy Buoy has been developed in the frequency domain using two alternative formulations of the equations of motion. The model is capable of predicting the power capture, motion response, and power take-off loads of the device...... configuration are imposed to give a more realistic prediction of the power capture and help ensure a fair comparison. Recommendations with regard to the HAB design are finally suggested....

  2. A two-stage method for microcalcification cluster segmentation in mammography by deformable models

    International Nuclear Information System (INIS)

    Arikidis, N.; Kazantzi, A.; Skiadopoulos, S.; Karahaliou, A.; Costaridou, L.; Vassiou, K.

    2015-01-01

    Purpose: Segmentation of microcalcification (MC) clusters in x-ray mammography is a difficult task for radiologists. Accurate segmentation is a prerequisite for quantitative image analysis of MC clusters and subsequent feature extraction and classification in computer-aided diagnosis schemes. Methods: In this study, a two-stage semiautomated segmentation method of MC clusters is investigated. The first stage is targeted to accurate and time-efficient segmentation of the majority of the particles of a MC cluster, by means of a level set method. The second stage is targeted to shape refinement of selected individual MCs, by means of an active contour model. Both methods are applied in the framework of a rich scale-space representation, provided by the wavelet transform at integer scales. Segmentation reliability of the proposed method in terms of inter- and intraobserver agreement was evaluated in a case sample of 80 MC clusters originating from the digital database for screening mammography, corresponding to 4 morphology types (punctate: 22, fine linear branching: 16, pleomorphic: 18, and amorphous: 24) of MC clusters, assessing radiologists' segmentations quantitatively by two distance metrics (Hausdorff distance, HDISTcluster; average of minimum distance, AMINDISTcluster) and the area overlap measure (AOMcluster). The effect of the proposed segmentation method on MC cluster characterization accuracy was evaluated in a case sample of 162 pleomorphic MC clusters (72 malignant and 90 benign). Ten MC cluster features, targeted to capture morphologic properties of individual MCs in a cluster (area, major length, perimeter, compactness, and spread), were extracted and a correlation-based feature selection method yielded a feature subset to feed into a support vector machine classifier. Classification performance of the MC cluster features was estimated by means of the area under the receiver operating characteristic curve (Az ± Standard Error) utilizing tenfold cross

  3. Badlands: A parallel basin and landscape dynamics model

    Directory of Open Access Journals (Sweden)

    T. Salles

    2016-01-01

    Full Text Available Over more than three decades, a number of numerical landscape evolution models (LEMs have been developed to study the combined effects of climate, sea-level, tectonics and sediments on Earth surface dynamics. Most of them are written in efficient programming languages, but often cannot be used on parallel architectures. Here, I present a LEM which ports a common core of accepted physical principles governing landscape evolution into a distributed memory parallel environment. Badlands (acronym for BAsin anD LANdscape DynamicS is an open-source, flexible, TIN-based landscape evolution model, built to simulate topography development at various space and time scales.

  4. Coupled Model of channels in parallel and neutron kinetics in two dimensions; Modelo acoplado de canales en paralelo y cinetica neutronica en dos dimensiones

    Energy Technology Data Exchange (ETDEWEB)

    Cecenas F, M.; Campos G, R.M. [Instituto de Investigaciones Electricas, Av. Reforma 113, Col. Palmira, 62490 Cuernavaca, Morelos (Mexico)]. E-mail: mcf@iie.org.mx; Valle G, E. del [IPN, ESFM, 07738 Mexico D.F. (Mexico)

    2004-07-01

    In this work an arrangement of thermohydraulic channels representing the four quadrants of a BWR reactor core is presented. The channels are coupled to a two-dimensional neutronics model that generates the radial power profile of the reactor. Although the neutronics model is two-dimensional, it is supplemented with additional axial information by considering the axial power profiles for each thermohydraulic channel. The steady state is obtained by imposing the same pressure drop across all channels as a boundary condition; this condition is satisfied by iterating on the coolant flow in each channel until the pressure drops are equalized. This steady state is subsequently perturbed by modifying the effective cross-section values corresponding to an assembly. The parallel computation of the neutronics and thermohydraulics is carried out with PVM (Parallel Virtual Machine) by means of a master-slave scheme on a local network of computers. (Author)
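
    The equal-pressure-drop condition described above can be illustrated with a small sketch. It assumes a hypothetical quadratic channel characteristic dp_i = k_i * w_i**2 in place of a full thermohydraulic channel calculation, and finds the common pressure drop by bisection on the total flow balance.

```python
# Hedged sketch: split a total coolant flow among parallel channels so
# that all channels see the same pressure drop. The quadratic law
# dp = k * w**2 is an illustrative stand-in for a channel model.
import numpy as np

def split_flow(k, w_total, tol=1e-10):
    # For a common pressure drop dp, channel i draws w_i = sqrt(dp / k_i),
    # and the total drawn flow grows monotonically with dp, so we can
    # bracket and bisect on dp until the flows sum to w_total.
    lo, hi = 0.0, 1.0
    while np.sum(np.sqrt(hi / k)) < w_total:
        hi *= 2.0
    while hi - lo > tol * hi:
        dp = 0.5 * (lo + hi)
        if np.sum(np.sqrt(dp / k)) > w_total:
            hi = dp
        else:
            lo = dp
    return np.sqrt(0.5 * (lo + hi) / k)

k = np.array([1.0, 1.2, 0.9, 1.1])  # loss coefficients, one per quadrant channel
w = split_flow(k, w_total=4.0)
print(w, k * w**2)                  # pressure drops k*w**2 come out equal
```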

  5. A kinetic model for the first stage of pygas upgrading

    Directory of Open Access Journals (Sweden)

    J. L. de Medeiros

    2007-03-01

    Full Text Available Pyrolysis gasoline - PYGAS - is an intermediate boiling product of naphtha steam cracking with a high octane number and high aromatic/unsaturated contents. Due to stabilization concerns, PYGAS must be hydrotreated in two stages. The first stage uses mild trickle-bed conversion for removing extremely reactive species (styrene, dienes and olefins) prior to the more severe second stage, where sulfur compounds and remaining olefins are hydrogenated in the gas phase. This work addresses the reaction network and a two-phase kinetic model for the first stage of PYGAS upgrading. Nonlinear estimation was used for model tuning with kinetic data obtained in bench-scale trickle-bed hydrogenation with a commercial Pd/Al2O3 catalyst. On-line sampling experiments were designed to study the influence of the variables temperature and space velocity on the conversion of styrene, dienes and olefins.

  6. A Risk-Based Interval Two-Stage Programming Model for Agricultural System Management under Uncertainty

    Directory of Open Access Journals (Sweden)

    Ye Xu

    2016-01-01

    Full Text Available Nonpoint source (NPS) pollution caused by agricultural activities is the main reason that water quality in a watershed worsens, even to the point of deterioration. Moreover, pollution control is accompanied by a fall in revenue for the agricultural system. How to design and generate a cost-effective and environmentally friendly agricultural production pattern is a critical issue for local managers. In this study, a risk-based interval two-stage programming model (RBITSP) was developed. Compared to the general ITSP model, the significant contribution of the RBITSP model is that it emphasizes the importance of financial risk under various probabilistic levels, rather than concentrating only on expected economic benefit, where risk is expressed as the probability of not meeting a target profit under each individual scenario realization. This effectively avoids the inaccuracy of solutions caused by a traditional expected-value objective function and generates a variety of solutions through adjustment of weight coefficients, reflecting the trade-off between system economy and reliability. A case study of agricultural production management in the Tai Lake watershed was used to demonstrate the superiority of the proposed model. The obtained results could serve as a basis for designing land-structure adjustment patterns and farmland retirement schemes, and for balancing system benefit, system-failure risk, and water-body protection.

  7. PPOOLEX experiments with two parallel blowdown pipes

    Energy Technology Data Exchange (ETDEWEB)

    Laine, J.; Puustinen, M.; Raesaenen, A. (Lappeenranta Univ. of Technology, Nuclear Safety Research Unit (Finland))

    2011-01-15

    This report summarizes the results of the experiments with two transparent blowdown pipes carried out with the scaled-down PPOOLEX test facility designed and constructed at Lappeenranta University of Technology. Steam was blown into the dry well compartment and from there through either one or two vertical transparent blowdown pipes to the condensation pool. Five experiments with one pipe and six with two parallel pipes were carried out. The main purpose of the experiments was to study loads caused by chugging (rapid condensation) while steam is discharged into the condensation pool filled with sub-cooled water. The PPOOLEX test facility is a closed stainless steel vessel divided into two compartments, dry well and wet well. In the experiments the initial temperature of the condensation pool water varied from 12 deg. C to 55 deg. C, the steam flow rate from 40 g/s to 1 300 g/s and the temperature of incoming steam from 120 deg. C to 185 deg. C. In the experiments with only one transparent blowdown pipe, the chugging phenomenon did not occur as intensely as in the preceding experiments carried out with a DN200 stainless steel pipe. With the steel blowdown pipe, up to 10 times higher pressure pulses were registered inside the pipe. Meanwhile, loads registered in the pool did not indicate significant differences between the steel and polycarbonate pipe experiments. In the experiments with two transparent blowdown pipes, the steam-water interface moved almost synchronously up and down inside both pipes. Chugging was stronger than in the one-pipe experiments and up to two times higher loads were measured inside the pipes. The loads at the blowdown pipe outlet were approximately the same as in the one-pipe cases. Other registered loads around the pool were about 50-100% higher than with one pipe. The experiments with two parallel blowdown pipes gave contradictory results compared to the earlier studies dealing with chugging loads in the case of multiple pipes. Contributing

  8. When fast logic meets slow belief: Evidence for a parallel-processing model of belief bias

    OpenAIRE

    Trippas, Dries; Thompson, Valerie A.; Handley, Simon J.

    2016-01-01

    Two experiments pitted the default-interventionist account of belief bias against a parallel-processing model. According to the former, belief bias occurs because a fast, belief-based evaluation of the conclusion pre-empts a working-memory demanding logical analysis. In contrast, according to the latter both belief-based and logic-based responding occur in parallel. Participants were given deductive reasoning problems of variable complexity and instructed to decide whether the conclusion was ...

  9. Prognostic meta-signature of breast cancer developed by two-stage mixture modeling of microarray data

    Directory of Open Access Journals (Sweden)

    Ghosh Debashis

    2004-12-01

    Full Text Available Abstract Background An increasing number of studies have profiled tumor specimens using distinct microarray platforms and analysis techniques. With the accumulating amount of microarray data, one of the most intriguing yet challenging tasks is to develop robust statistical models to integrate the findings. Results By applying a two-stage Bayesian mixture modeling strategy, we were able to assimilate and analyze four independent microarray studies to derive an inter-study validated "meta-signature" associated with breast cancer prognosis. Combining multiple studies (n = 305 samples) on a common probability scale, we developed a 90-gene meta-signature, which was strongly associated with survival in breast cancer patients. Given the set of independent studies using different microarray platforms which included spotted cDNAs, Affymetrix GeneChip, and inkjet oligonucleotides, the individually identified classifiers yielded gene sets predictive of survival in each study cohort. The study-specific gene signatures, however, had minimal overlap with each other, and performed poorly in pairwise cross-validation. The meta-signature, on the other hand, accommodated such heterogeneity and achieved comparable or better prognostic performance when compared with the individual signatures. Further, by comparing with a global standardization method, the mixture-model-based data transformation demonstrated superior properties for data integration and provided a solid basis for building classifiers at the second stage. Functional annotation revealed that genes involved in cell cycle and signal transduction activities were over-represented in the meta-signature. Conclusion The mixture modeling approach unifies disparate gene expression data on a common probability scale allowing for robust, inter-study validated prognostic signatures to be obtained. With the emerging utility of microarrays for cancer prognosis, it will be important to establish paradigms to meta

  10. Wide-bandwidth bilateral control using two-stage actuator system

    International Nuclear Information System (INIS)

    Kokuryu, Saori; Izutsu, Masaki; Kamamichi, Norihiro; Ishikawa, Jun

    2015-01-01

    This paper proposes a two-stage actuator system that consists of a coarse actuator driven by a ball screw with an AC motor (the first stage) and a fine actuator driven by a voice coil motor (the second stage). The proposed two-stage actuator system is applied to make a wide-bandwidth bilateral control system without needing expensive high-performance actuators. In the proposed system, the first stage has a wide moving range with a narrow control bandwidth, and the second stage has a narrow moving range with a wide control bandwidth. By consolidating these two inexpensive actuators with different control bandwidths in a complementary manner, a wide-bandwidth bilateral control system can be constructed based on mechanical impedance control. To show the validity of the proposed method, a prototype of the two-stage actuator system has been developed and its basic performance was evaluated by experiment. The experimental results showed that a light mechanical impedance with a mass of 10 g and a damping coefficient of 2.5 N/(m/s), which is an important factor in establishing good transparency in bilateral control, has been successfully achieved, and also showed that better force and position responses between master and slave are achieved by using the proposed two-stage actuator system compared with a narrow-bandwidth case using a single ball screw system. (author)

  11. Comparison of two different modelling tools

    DEFF Research Database (Denmark)

    Brix, Wiebke; Elmegaard, Brian

    2009-01-01

    In this paper a test case is solved using two different modelling tools, Engineering Equation Solver (EES) and WinDali, in order to compare the tools. The system of equations solved is a static model of an evaporator used for refrigeration. The evaporator consists of two parallel channels......, and it is investigated how a non-uniform airflow influences the refrigerant mass flow rate distribution and the total cooling capacity of the heat exchanger. It is shown that the cooling capacity decreases significantly with increasing maldistribution of the airflow. Comparing the two simulation tools it is found...

  12. Measuring demand for flat water recreation using a two-stage/disequilibrium travel cost model with adjustment for overdispersion and self-selection

    Science.gov (United States)

    McKean, John R.; Johnson, Donn; Taylor, R. Garth

    2003-04-01

    An alternate travel cost model is applied to an on-site sample to estimate the value of flat water recreation on the impounded lower Snake River. Four contiguous reservoirs would be eliminated if the dams are breached to protect endangered Pacific salmon and steelhead trout. The empirical method applies truncated negative binomial regression with adjustment for endogenous stratification. The two-stage decision model assumes that recreationists allocate their time among work and leisure prior to deciding among consumer goods. The allocation of time and money among goods in the second stage is conditional on the predetermined work time and income. The second stage is a disequilibrium labor market which also applies if employers set work hours or if recreationists are not in the labor force. When work time is either predetermined, fixed by contract, or nonexistent, recreationists must consider separate prices and budgets for time and money.

  13. Pthreads vs MPI Parallel Performance of Angular-Domain Decomposed Sn

    International Nuclear Information System (INIS)

    Azmy, Y.Y.; Barnett, D.A.

    2000-01-01

    Two programming models for parallelizing the Angular Domain Decomposition (ADD) of the discrete ordinates (Sn) approximation of the neutron transport equation are examined. These are the shared memory model based on the POSIX threads (Pthreads) standard, and the message passing model based on the Message Passing Interface (MPI) standard. These standard libraries are available on most multiprocessor platforms, thus making the resulting parallel codes widely portable. The question is: on a fixed platform, and for a particular code solving a given test problem, which of the two programming models delivers better parallel performance? Such a comparison is possible on Symmetric Multi-Processor (SMP) architectures, in which several CPUs physically share a common memory and in addition are capable of emulating message passing functionality. Implementation of the two-dimensional (Sn) Arbitrarily High Order Transport (AHOT) code for solving neutron transport problems using these two parallelization models is described. Measured parallel performance of each model on the COMPAQ AlphaServer 8400 and the SGI Origin 2000 platforms is described, and a comparison of the observed speedup for the two programming models is reported. For the case presented in this paper it appears that the MPI implementation scales better than the Pthreads implementation on both platforms
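
    A minimal sketch of the message-passing variant of ADD, assuming a hypothetical per-angle sweep routine and uniform quadrature weights; the only required communication is the reduction of per-rank partial scalar fluxes.

```python
# Hedged sketch of Angular Domain Decomposition with MPI (mpi4py):
# each rank sweeps its own block of discrete ordinates, then the
# partial quadrature sums are combined with an all-reduce.
from mpi4py import MPI
import numpy as np

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()

n_angles, n_cells = 16, 1000
weights = np.full(n_angles, 1.0 / n_angles)     # quadrature weights (illustrative)

def sweep(angle):
    # Stand-in for a transport sweep along one discrete ordinate.
    return np.full(n_cells, float(angle + 1))

# Contiguous block of ordinates owned by this rank.
lo = rank * n_angles // size
hi = (rank + 1) * n_angles // size
partial = np.zeros(n_cells)
for m in range(lo, hi):
    partial += weights[m] * sweep(m)

# Scalar flux = quadrature sum over all angles, reduced across ranks.
scalar_flux = np.zeros(n_cells)
comm.Allreduce(partial, scalar_flux, op=MPI.SUM)
```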

  14. High-Efficient Parallel CAVLC Encoders on Heterogeneous Multicore Architectures

    Directory of Open Access Journals (Sweden)

    H. Y. Su

    2012-04-01

    Full Text Available This article presents two highly efficient parallel realizations of context-based adaptive variable length coding (CAVLC) based on heterogeneous multicore processors. By optimizing the architecture of the CAVLC encoder, three kinds of dependences are eliminated or weakened, including the context-based data dependence, the memory-access dependence and the control dependence. The CAVLC pipeline is divided into three stages: two scans, coding, and lag packing, and is implemented on two typical heterogeneous multicore architectures. One is a block-based SIMD parallel CAVLC encoder on the multicore stream processor STORM. The other is a component-oriented SIMT parallel encoder on the massively parallel architecture GPU. Both of them exploit rich data-level parallelism. Experimental results show that, compared with the CPU version, a speedup of more than 70 times can be obtained for STORM and over 50 times for GPU. The implementation of the encoder on STORM achieves real-time processing for 1080p @ 30 fps and the GPU-based version can satisfy the requirements for 720p real-time encoding. The throughput of the presented CAVLC encoders is more than 10 times higher than that of published software encoders on DSP and multicore platforms.

  15. Two-stage precipitation of neptunium (IV) oxalate

    International Nuclear Information System (INIS)

    Luerkens, D.W.

    1983-07-01

    Neptunium (IV) oxalate was precipitated using a two-stage precipitation system. A series of precipitation experiments was used to identify the significant process variables affecting precipitate characteristics. Process variables tested were input concentrations, solubility conditions in the first stage precipitator, precipitation temperatures, and residence time in the first stage precipitator. A procedure has been demonstrated that produces neptunium (IV) oxalate particles that filter well and readily calcine to the oxide

  16. A maintenance policy for two-unit parallel systems based on imperfect monitoring information

    Energy Technology Data Exchange (ETDEWEB)

    Barros, Anne [Department Genie des Systems Industiels (GSI), Universite de technologie de Troyes, 12 rue Marie Curie, BP 2060, 10010 Troyes, Cedex (France)]. E-mail: anne.barros@utt.fr; Berenguer, Christophe [Department Genie des Systems Industiels (GSI), Universite de technologie de Troyes, 12 rue Marie Curie, BP 2060, 10010 Troyes, Cedex (France); Grall, Antoine [Department Genie des Systems Industiels (GSI), Universite de technologie de Troyes, 12 rue Marie Curie, BP 2060, 10010 Troyes, Cedex (France)

    2006-02-01

    In this paper a maintenance policy is optimised for a two-unit system with a parallel structure and stochastic dependences. Monitoring problems are taken into account in the optimisation scheme: the failure of each unit may, with a given probability, go undetected. Conditions on the system parameters (unit failure rates) and on the non-detection probabilities must be verified to make the optimisation scheme valid. These conditions are clearly identified. Numerical experiments show the relevance of taking monitoring problems into account in the maintenance model.

  17. A maintenance policy for two-unit parallel systems based on imperfect monitoring information

    International Nuclear Information System (INIS)

    Barros, Anne; Berenguer, Christophe; Grall, Antoine

    2006-01-01

    In this paper a maintenance policy is optimised for a two-unit system with a parallel structure and stochastic dependences. Monitoring problems are taken into account in the optimisation scheme: the failure of each unit may, with a given probability, go undetected. Conditions on the system parameters (unit failure rates) and on the non-detection probabilities must be verified to make the optimisation scheme valid. These conditions are clearly identified. Numerical experiments show the relevance of taking monitoring problems into account in the maintenance model.

  18. Comparison of two ordinal prediction models

    DEFF Research Database (Denmark)

    Kattan, Michael W; Gerds, Thomas A

    2015-01-01

    system (i.e. old or new), such as the level of evidence for one or more factors included in the system or the general opinions of expert clinicians. However, given the major objective of estimating prognosis on an ordinal scale, we argue that the rival staging system candidates should be compared...... on their ability to predict outcome. We sought to outline an algorithm that would compare two rival ordinal systems on their predictive ability. RESULTS: We devised an algorithm based largely on the concordance index, which is appropriate for comparing two models in their ability to rank observations. We...... demonstrate our algorithm with a prostate cancer staging system example. CONCLUSION: We have provided an algorithm for selecting the preferred staging system based on prognostic accuracy. It appears to be useful for the purpose of selecting between two ordinal prediction models....
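
    The heart of such an algorithm can be sketched directly: Harrell's concordance index computed for two rival ordinal staging systems against the same outcome. Censoring is ignored here for brevity, and the data are purely illustrative.

```python
# Hedged sketch: compare two ordinal staging systems by their
# concordance index against an observed (uncensored) outcome.
import numpy as np

def concordance_index(stage, outcome):
    # Fraction of outcome-discordant pairs in which the patient with
    # the worse outcome also carries the higher stage; stage ties
    # count as half-concordant.
    conc = ties = usable = 0
    n = len(stage)
    for i in range(n):
        for j in range(i + 1, n):
            if outcome[i] == outcome[j]:
                continue
            usable += 1
            worse, other = (i, j) if outcome[i] > outcome[j] else (j, i)
            if stage[worse] > stage[other]:
                conc += 1
            elif stage[worse] == stage[other]:
                ties += 1
    return (conc + 0.5 * ties) / usable

outcome = np.array([0, 1, 1, 0, 1, 0])    # e.g. recurrence within 5 years
old_stage = np.array([1, 2, 3, 1, 2, 2])  # rival system A (illustrative)
new_stage = np.array([1, 3, 3, 1, 2, 1])  # rival system B (illustrative)
print(concordance_index(old_stage, outcome),
      concordance_index(new_stage, outcome))   # prefer the higher index
```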

  19. Effect of ammoniacal nitrogen on one-stage and two-stage anaerobic digestion of food waste

    Energy Technology Data Exchange (ETDEWEB)

    Ariunbaatar, Javkhlan, E-mail: jaka@unicas.it [Department of Civil and Mechanical Engineering, University of Cassino and Southern Lazio, Via Di Biasio 43, 03043 Cassino, FR (Italy); UNESCO-IHE Institute for Water Education, Westvest 7, 2611 AX Delft (Netherlands); Scotto Di Perta, Ester [Department of Civil, Architectural and Environmental Engineering, University of Naples Federico II, Via Claudio 21, 80125 Naples (Italy); Panico, Antonio [Telematic University PEGASO, Piazza Trieste e Trento, 48, 80132 Naples (Italy); Frunzo, Luigi [Department of Mathematics and Applications Renato Caccioppoli, University of Naples Federico II, Via Claudio, 21, 80125 Naples (Italy); Esposito, Giovanni [Department of Civil and Mechanical Engineering, University of Cassino and Southern Lazio, Via Di Biasio 43, 03043 Cassino, FR (Italy); Lens, Piet N.L. [UNESCO-IHE Institute for Water Education, Westvest 7, 2611 AX Delft (Netherlands); Pirozzi, Francesco [Department of Civil, Architectural and Environmental Engineering, University of Naples Federico II, Via Claudio 21, 80125 Naples (Italy)

    2015-04-15

    Highlights: • Almost 100% of the biomethane potential of food waste was recovered during AD in a two-stage CSTR. • Recirculation of the liquid fraction of the digestate provided the necessary buffer in the AD reactors. • A higher OLR (0.9 gVS/L·d) led to higher accumulation of TAN, which caused more toxicity. • A two-stage reactor is more sensitive to elevated concentrations of ammonia. • The IC50 of TAN for the AD of food waste amounts to 3.8 g/L. - Abstract: This research compares the operation of one-stage and two-stage anaerobic continuously stirred tank reactor (CSTR) systems fed semi-continuously with food waste. The main purpose was to investigate the effects of ammoniacal nitrogen on the anaerobic digestion process. The two-stage system gave more reliable operation compared to one-stage due to: (i) a better pH self-adjusting capacity; (ii) a higher resistance to organic loading shocks; and (iii) a higher conversion rate of organic substrate to biomethane. Also a small amount of biohydrogen was detected from the first stage of the two-stage reactor making this system attractive for biohythane production. As the digestate contains ammoniacal nitrogen, re-circulating it provided the necessary alkalinity in the systems, thus preventing an eventual failure by volatile fatty acids (VFA) accumulation. However, re-circulation also resulted in an ammonium accumulation, yielding a lower biomethane production. Based on the batch experimental results the 50% inhibitory concentration of total ammoniacal nitrogen on the methanogenic activities was calculated as 3.8 g/L, corresponding to 146 mg/L free ammonia for the inoculum used for this research. The two-stage system was affected by the inhibition more than the one-stage system, as it requires less alkalinity and the physically separated methanogens are more sensitive to inhibitory factors, such as ammonium and propionic acid.

  20. Description of Adults Seeking Hearing Help for the First Time According to Two Health Behavior Change Approaches: Transtheoretical Model (Stages of Change) and Health Belief Model.

    Science.gov (United States)

    Saunders, Gabrielle H; Frederick, Melissa T; Silverman, ShienPei C; Nielsen, Claus; Laplante-Lévesque, Ariane

    2016-01-01

    Several models of health behavior change are commonly used in health psychology. This study applied the constructs delineated by two models-the transtheoretical model (in which readiness for health behavior change can be described with the stages of precontemplation, contemplation and action) and the health belief model (in which susceptibility, severity, benefits, barriers, self-efficacy, and cues to action are thought to determine likelihood of health behavior change)-to adults seeking hearing help for the first time. One hundred eighty-two participants (mean age: 69.5 years) were recruited following an initial hearing assessment by an audiologist. Participants' mean four-frequency pure-tone average was 35.4 dB HL, with 25.8% having no hearing impairment, 50.5% having a slight impairment, and 23.1% having a moderate or severe impairment using the World Health Organization definition of hearing loss. Participants' hearing-related attitudes and beliefs toward hearing health behaviors were examined using the University of Rhode Island Change Assessment (URICA) and the health beliefs questionnaire (HBQ), which assess the constructs of the transtheoretical model and the health belief model, respectively. Participants also provided demographic information, and completed the hearing handicap inventory (HHI) to assess participation restrictions, and the psychosocial impact of hearing loss (PIHL) to assess the extent to which hearing impacts competence, self-esteem, and adaptability. Degree of hearing impairment was associated with participation restrictions, perceived competence, self-esteem and adaptability, and attitudes and beliefs measured by the URICA and the HBQ. As degree of impairment increased, participation restrictions measured by the HHI, and impacts of hearing loss, as measured by the PIHL, increased. The majority of first-time help seekers in this study were in the action stage of change. Furthermore, relative to individuals with less hearing impairment

  1. Two-stage dental implants inserted in a one-stage procedure : a prospective comparative clinical study

    NARCIS (Netherlands)

    Heijdenrijk, Kees

    2002-01-01

    The results of this study indicate that dental implants designed for a submerged implantation procedure can be used in a single-stage procedure and may be as predictable as one-stage implants. Although one-stage implant systems and two-stage.

  2. A Two-Stage Algorithm for the Closed-Loop Location-Inventory Problem Model Considering Returns in E-Commerce

    Directory of Open Access Journals (Sweden)

    Yanhui Li

    2014-01-01

    Full Text Available Facility location and inventory control are critical and highly related problems in the design of a logistics system for e-commerce. Meanwhile, the return ratio in Internet sales is significantly higher than in traditional business. Focusing on this problem in e-commerce logistics systems, we formulate a closed-loop location-inventory problem model considering returned merchandise, to minimize the total cost produced in both the forward and reverse logistics networks. To solve this nonlinear mixed programming model, an effective two-stage heuristic algorithm named LRCAC is designed by combining Lagrangian relaxation with the ant colony algorithm (AC). Results of numerical examples show that LRCAC outperforms the ant colony algorithm (AC) in terms of solution quality and computational stability. The proposed model is able to help managers make the right decisions in an e-commerce environment.

  3. Chronic infections in hip arthroplasties: comparing risk of reinfection following one-stage and two-stage revision: a systematic review and meta-analysis

    Directory of Open Access Journals (Sweden)

    Lange J

    2012-03-01

    Full Text Available Jeppe Lange1,2, Anders Troelsen3, Reimar W Thomsen4, Kjeld Søballe1,51Lundbeck Foundation Centre for Fast-Track Hip and Knee Surgery, Aarhus C, 2Center for Planned Surgery, Silkeborg Regional Hospital, Silkeborg, 3Department of Orthopaedics, Hvidovre Hospital, Hvidovre, 4Department of Clinical Epidemiology, Aarhus University Hospital, Aalborg, 5Department of Orthopaedics, Aarhus University Hospital, Aarhus C, DenmarkBackground: Two-stage revision is regarded by many as the best treatment of chronic infection in hip arthroplasties. Some international reports, however, have advocated one-stage revision. No systematic review or meta-analysis has ever compared the risk of reinfection following one-stage and two-stage revisions for chronic infection in hip arthroplasties.Methods: The review was performed in accordance with the Preferred Reporting Items for Systematic Reviews and Meta-Analysis. Relevant studies were identified using PubMed and Embase. We assessed studies that included patients with a chronic infection of a hip arthroplasty treated with either one-stage or two-stage revision and with available data on occurrence of reinfections. We performed a meta-analysis estimating absolute risk of reinfection using a random-effects model.Results: We identified 36 studies eligible for inclusion. None were randomized controlled trials or comparative studies. The patients in these studies had received either one-stage revision (n = 375 or two-stage revision (n = 929. Reinfection occurred with an estimated absolute risk of 13.1% (95% confidence interval: 10.0%–17.1% in the one-stage cohort and 10.4% (95% confidence interval: 8.5%–12.7% in the two-stage cohort. The methodological quality of most included studies was considered low, with insufficient data to evaluate confounding factors.Conclusions: Our results may indicate three additional reinfections per 100 reimplanted patients when performing a one-stage versus two-stage revision. However, the

  4. Two-Stage Hidden Markov Model in Gesture Recognition for Human Robot Interaction

    Directory of Open Access Journals (Sweden)

    Nhan Nguyen-Duc-Thanh

    2012-07-01

    Full Text Available The Hidden Markov Model (HMM) is very rich in mathematical structure and hence can form the theoretical basis for use in a wide range of applications, including gesture representation. Most research in this field, however, uses HMMs only for recognizing simple gestures, while HMMs can definitely be applied to whole-gesture-meaning recognition. This is very effectively applicable in Human-Robot Interaction (HRI). In this paper, we introduce an approach for HRI in which not only can the human naturally control the robot by hand gesture, but the robot can also recognize what kind of task it is executing. The main idea behind this method is the two-stage Hidden Markov Model. The first-stage HMM recognizes primitive, command-like gestures. Based on the sequence of primitive gestures recognized in the first stage, which represents the whole action, the second-stage HMM performs task recognition. Another contribution of this paper is that we use Gaussian mixture output distributions in the HMM to improve the recognition rate. In the experiments, we also compare different numbers of hidden states and mixture components to obtain the optimal configuration, and compare with other methods to evaluate the performance.
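
    A minimal sketch of the first recognition stage, assuming the hmmlearn package and synthetic feature sequences; the second stage would apply the same train-and-argmax pattern to the label sequences produced by this one.

```python
# Hedged sketch: one Gaussian-output HMM per primitive gesture,
# classification by maximum log-likelihood (hmmlearn, synthetic data).
import numpy as np
from hmmlearn import hmm

rng = np.random.default_rng(0)

def make_sequences(offset, n=20, length=30, dim=3):
    # Synthetic stand-in for hand-trajectory feature sequences.
    return [offset + rng.normal(size=(length, dim)) for _ in range(n)]

training = {"wave": make_sequences(0.0), "point": make_sequences(2.0)}

models = {}
for label, seqs in training.items():
    X = np.concatenate(seqs)
    lengths = [len(s) for s in seqs]
    m = hmm.GaussianHMM(n_components=4, covariance_type="diag", n_iter=50)
    m.fit(X, lengths)
    models[label] = m

def classify(seq):
    # First-stage decision: the gesture whose HMM scores highest.
    return max(models, key=lambda label: models[label].score(seq))

print(classify(make_sequences(2.0, n=1)[0]))   # expected: "point"
```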

  5. Thermal properties Forsmark. Modelling stage 2.3 Complementary analysis and verification of the thermal bedrock model, stage 2.

    Energy Technology Data Exchange (ETDEWEB)

    Sundberg, Jan; Wrafter, John; Laendell, Maerta (Geo Innova AB (Sweden)); Back, Paer-Erik; Rosen, Lars (Sweco AB (Sweden))

    2008-11-15

    This report presents the results of thermal modelling work for the Forsmark area carried out during modelling stage 2.3. The work complements the main modelling efforts carried out during modelling stage 2.2. A revised spatial statistical description of the rock mass thermal conductivity for rock domain RFM045 is the main result of this work. Thermal modelling of domain RFM045 in Forsmark model stage 2.2 gave lower tail percentiles of thermal conductivity that were considered to be conservatively low due to the way amphibolite, the rock type with the lowest thermal conductivity, was modelled. New and previously available borehole data are used as the basis for revised stochastic geological simulations of domain RFM045. By defining two distinct thermal subdomains, these simulations have succeeded in capturing more of the lithological heterogeneity present. The resulting thermal model for rock domain RFM045 is, therefore, considered to be more realistic and reliable than that presented in model stage 2.2. The main conclusions of modelling efforts in model stage 2.3 are: - Thermal modelling indicates a mean thermal conductivity for domain RFM045 at the 5 m scale of 3.56 W/(mK). This is slightly higher than the value of 3.49 W/(mK) derived in model stage 2.2. - The variance decreases and the lower tail percentiles increase as the scale of observation increases from 1 to 5 m. Best estimates of the 0.1 percentile of thermal conductivity for domain RFM045 are 2.24 W/(mK) for the 1 m scale and 2.36 W/(mK) for the 5 m scale. This can be compared with corresponding values for domain RFM029 of 2.30 W/(mK) for the 1 m scale and 2.87 W/(mK) for the 5 m scale. - The reason for the pronounced lower tail in the thermal conductivity distribution for domain RFM045 is the presence of large bodies of the low-conductive amphibolite. - The modelling results for domain RFM029 presented in model stage 2.2 are still applicable. - As temperature increases, the thermal conductivity decreases

  6. A novel two-stage stochastic programming model for uncertainty characterization in short-term optimal strategy for a distribution company

    International Nuclear Information System (INIS)

    Ahmadi, Abdollah; Charwand, Mansour; Siano, Pierluigi; Nezhad, Ali Esmaeel; Sarno, Debora; Gitizadeh, Mohsen; Raeisi, Fatima

    2016-01-01

    In order to supply the demands of end users in a competitive market, a distribution company purchases energy from the wholesale market, while other options are available if it possesses distributed generation units and interruptible loads. In this regard, this study presents a two-stage stochastic programming model for a distribution company's energy acquisition, to manage the involvement of different electric energy resources, characterized by uncertainties, at minimum cost. In particular, the distribution company's operations planning over a day-ahead horizon is modeled as a stochastic mathematical optimization with the objective of minimizing costs. By this, the distribution company's decisions on grid purchases, owned distributed generation units and interruptible load scheduling are determined. These decisions are then considered as boundary constraints in a second step, which deals with the distribution company's operations in the hour-ahead market with the objective of minimizing the short-term cost. The uncertainties in spot market prices and wind speed are modeled by means of probability distribution functions of their forecast errors, and the roulette wheel mechanism and lattice Monte Carlo simulation are used to generate scenarios. Numerical results show the capability of the proposed method. - Highlights: • Proposing a new stochastic two-stage operations framework in competitive retail markets. • Proposing a mixed-integer non-linear stochastic programming model. • Employing the roulette wheel mechanism and lattice Monte Carlo simulation.
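
    The scenario-generation step can be sketched compactly; the error levels and probabilities below are illustrative stand-ins for the discretized forecast-error distribution functions used in the study.

```python
# Hedged sketch of roulette-wheel scenario generation for a price
# forecast error; levels and probabilities are illustrative only.
import numpy as np

rng = np.random.default_rng(1)

forecast = 50.0                                   # point forecast (e.g. EUR/MWh)
error_levels = np.array([-6.0, -3.0, 0.0, 3.0, 6.0])
probs = np.array([0.05, 0.25, 0.40, 0.25, 0.05])  # discretized error PDF

def roulette_wheel(levels, probs, n_scenarios):
    # A uniform draw falls into one slot of the cumulative wheel.
    cum = np.cumsum(probs)
    draws = rng.uniform(size=n_scenarios)
    return levels[np.searchsorted(cum, draws)]

scenarios = forecast + roulette_wheel(error_levels, probs, 10)
print(scenarios)    # ten price scenarios to feed the two-stage model
```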

  7. Fleet Planning Decision-Making: Two-Stage Optimization with Slot Purchase

    Directory of Open Access Journals (Sweden)

    Lay Eng Teoh

    2016-01-01

    Full Text Available Essentially, strategic fleet planning is vital for airlines to yield a higher profit margin while providing a desired service frequency to meet stochastic demand. In contrast to most studies, which did not consider slot purchase even though it affects the service frequency determination of airlines, this paper proposes a novel approach to solve the fleet planning problem subject to various operational constraints. A two-stage fleet planning model is formulated in which the first stage selects the individual operating routes that require slot purchase for network expansion, while the second stage, in the form of a probabilistic dynamic programming model, determines the quantity and type of aircraft (with the corresponding service frequency) to meet the demand profitably. By analyzing an illustrative case study (with 38 international routes), the results show that the incorporation of slot purchase in fleet planning is beneficial to airlines in achieving economic and social sustainability. The developed model is practically viable for airlines not only to provide a better service quality (via a higher service frequency) to meet more demand but also to obtain a higher revenue and profit margin, by making optimal slot purchase and fleet planning decisions throughout the long-term planning horizon.

  8. Performance of an iterative two-stage bayesian technique for population pharmacokinetic analysis of rich data sets

    NARCIS (Netherlands)

    Proost, Johannes H.; Eleveld, Douglas J.

    2006-01-01

    Purpose. To test the suitability of an Iterative Two-Stage Bayesian (ITSB) technique for population pharmacokinetic analysis of rich data sets, and to compare ITSB with Standard Two-Stage (STS) analysis and nonlinear Mixed Effect Modeling (MEM). Materials and Methods. Data from a clinical study with

  9. Comparative assessment of single-stage and two-stage anaerobic digestion for the treatment of thin stillage.

    Science.gov (United States)

    Nasr, Noha; Elbeshbishy, Elsayed; Hafez, Hisham; Nakhla, George; El Naggar, M Hesham

    2012-05-01

    A comparative evaluation of single-stage and two-stage anaerobic digestion processes for biomethane and biohydrogen production using thin stillage was performed to assess the impact of separating the acidogenic and methanogenic stages on anaerobic digestion. Thin stillage, the main by-product from ethanol production, was characterized by high total chemical oxygen demand (TCOD) of 122 g/L and total volatile fatty acids (TVFAs) of 12 g/L. A maximum methane yield of 0.33 L CH(4)/gCOD(added) (STP) was achieved in the two-stage process while a single-stage process achieved a maximum yield of only 0.26 L CH(4)/gCOD(added) (STP). The separation of acidification stage increased the TVFAs to TCOD ratio from 10% in the raw thin stillage to 54% due to the conversion of carbohydrates into hydrogen and VFAs. Comparison of the two processes based on energy outcome revealed that an increase of 18.5% in the total energy yield was achieved using two-stage anaerobic digestion. Copyright © 2012 Elsevier Ltd. All rights reserved.

  10. Parallel consensual neural networks.

    Science.gov (United States)

    Benediktsson, J A; Sveinsson, J R; Ersoy, O K; Swain, P H

    1997-01-01

    A new type of neural-network architecture, the parallel consensual neural network (PCNN), is introduced and applied to classification/data fusion of multisource remote sensing and geographic data. The PCNN architecture is based on statistical consensus theory and involves using stage neural networks with transformed input data. The input data are transformed several times and the different transformed data are used as if they were independent inputs. The independent inputs are first classified using the stage neural networks. The output responses from the stage networks are then weighted and combined to make a consensual decision. In this paper, optimization methods are used in order to weight the outputs from the stage networks. Two approaches are proposed to compute the data transforms for the PCNN, one for binary data and another for analog data. The analog approach uses wavelet packets. The experimental results obtained with the proposed approach show that the PCNN outperforms both a conjugate-gradient backpropagation neural network and conventional statistical methods in terms of overall classification accuracy of test data.
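
    A minimal sketch of the consensual architecture, with scikit-learn stage networks and simple input transforms standing in for the paper's consensus-theoretic transforms and optimized weights.

```python
# Hedged sketch: several stage networks trained on different transforms
# of the same input, combined by accuracy-weighted soft voting.
import numpy as np
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

X, y = load_digits(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

transforms = [lambda A: A, lambda A: np.sqrt(A + 1.0), lambda A: np.log1p(A)]

stages, weights = [], []
for t in transforms:
    net = MLPClassifier(hidden_layer_sizes=(32,), max_iter=500, random_state=0)
    net.fit(t(X_tr), y_tr)
    stages.append(net)
    weights.append(net.score(t(X_tr), y_tr))   # simple accuracy-based weight

weights = np.array(weights) / np.sum(weights)

# Consensual decision: weighted average of the stage posteriors.
proba = sum(w * net.predict_proba(t(X_te))
            for w, net, t in zip(weights, stages, transforms))
print("consensus accuracy:", np.mean(proba.argmax(axis=1) == y_te))
```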

  11. History Matching in Parallel Computational Environments

    Energy Technology Data Exchange (ETDEWEB)

    Steven Bryant; Sanjay Srinivasan; Alvaro Barrera; Sharad Yadav

    2004-08-31

    In the probabilistic approach to history matching, the information from the dynamic data is merged with the prior geologic information in order to generate permeability models consistent with the observed dynamic data as well as the prior geology. The relationship between dynamic response data and reservoir attributes may vary in different regions of the reservoir due to spatial variations in reservoir attributes, fluid properties, well configuration, flow constraints on wells, etc. This implies that the probabilistic approach should update different regions of the reservoir in different ways. This necessitates delineation of multiple reservoir domains in order to increase the accuracy of the approach. The research focuses on a probabilistic approach to integrate dynamic data that ensures consistency between reservoir models developed from one stage to the next. The algorithm relies on efficient parameterization of the dynamic data integration problem and permits rapid assessment of the updated reservoir model at each stage. The report also outlines various domain decomposition schemes from the perspective of increasing the accuracy of the probabilistic approach to history matching. Research progress in three important areas of the project is discussed: • Validation and testing of the probabilistic approach to incorporating production data in reservoir models. • Development of a robust scheme for identifying reservoir regions that will result in a more robust parameterization of the history matching process. • Testing commercial simulators for parallel capability and development of a parallel algorithm for history matching.

  12. Condensate from a two-stage gasifier

    DEFF Research Database (Denmark)

    Bentzen, Jens Dall; Henriksen, Ulrik Birk; Hindsgaul, Claus

    2000-01-01

    Condensate, produced when gas from downdraft biomass gasifier is cooled, contains organic compounds that inhibit nitrifiers. Treatment with activated carbon removes most of the organics and makes the condensate far less inhibitory. The condensate from an optimised two-stage gasifier is so clean...... that the organic compounds and the inhibition effect are very low even before treatment with activated carbon. The moderate inhibition effect relates to a high content of ammonia in the condensate. The nitrifiers become tolerant to the condensate after a few weeks of exposure. The level of organic compounds...... and the level of inhibition are so low that condensate from the optimised two-stage gasifier can be led to the public sewer....

  13. A novel hybrid actuation mechanism based XY nanopositioning stage with totally decoupled kinematics

    Science.gov (United States)

    Zhu, Wu-Le; Zhu, Zhiwei; Guo, Ping; Ju, Bing-Feng

    2018-01-01

    This paper reports the design, analysis and testing of a parallel two degree-of-freedom piezo-actuated compliant stage for XY nanopositioning by introducing an innovative hybrid actuation mechanism. It mainly features the combination of two Scott-Russell mechanisms and a half-bridge mechanism for double-stage displacement amplification as well as moving direction modulation. By adopting the leaf-type double parallelogram (LTDP) structures at both input and output ends of the hybrid mechanism, the lateral stiffness and dynamic characteristics are significantly improved while the parasitic motions are largely eliminated. The XY nanopositioning stage is constructed with two orthogonally configured hybrid mechanisms along with the LTDP mechanisms for totally decoupled kinematics at both input and output ends. An analytical model was established to describe the complete elastic deformation behavior of the stage, with further verification through finite element simulation. Finally, experiments were implemented to comprehensively evaluate both the static and dynamic performances of the proposed stage. Closed-loop control of the piezoelectric actuators (PEA) by integrating strain gauges was also conducted to effectively eliminate the nonlinear hysteresis of the stage.

  14. Evidence of two-stage melting of Wigner solids

    Science.gov (United States)

    Knighton, Talbot; Wu, Zhe; Huang, Jian; Serafin, Alessandro; Xia, J. S.; Pfeiffer, L. N.; West, K. W.

    2018-02-01

    Ultralow carrier concentrations of two-dimensional holes down to p = 1 × 10^9 cm^-2 are realized. Remarkable insulating states are found below a critical density of p_c = 4 × 10^9 cm^-2 or r_s ≈ 40. Sensitive dc V-I measurement as a function of temperature and electric field reveals a two-stage phase transition supporting the melting of a Wigner solid as a two-stage first-order transition.

  15. Eye Movement Deficits Are Consistent with a Staging Model of pTDP-43 Pathology in Amyotrophic Lateral Sclerosis.

    Directory of Open Access Journals (Sweden)

    Martin Gorges

    Full Text Available The neuropathological process underlying amyotrophic lateral sclerosis (ALS) can be traced as a four-stage progression scheme of sequential corticofugal axonal spread. The examination of eye movement control gives deep insights into brain network pathology and provides the opportunity to detect both disturbance of the brainstem oculomotor circuitry and executive deficits of oculomotor function associated with higher brain networks. To study systematically the oculomotor characteristics in ALS and their underlying network pathology, in order to determine whether eye movement deterioration can be categorized within a staging system of oculomotor decline that corresponds to the neuropathological model. Sixty-eight ALS patients and 31 controls underwent video-oculographic, clinical and neuropsychological assessments. Oculomotor examinations revealed increased antisaccade and delayed-saccade errors, gaze palsy and a cerebellar type of smooth pursuit disturbance. The oculomotor disturbances occurred in a sequential manner: in Stage 1, only executive control of eye movements was affected; Stage 2 indicates disturbed executive control plus 'genuine' oculomotor dysfunctions such as gaze palsy. We found high correlations (p < 0.001) between the oculomotor stages and both the clinical presentation, as assessed by the ALS Functional Rating Scale (ALSFRS) score, and cognitive scores from the Edinburgh Cognitive and Behavioral ALS Screen (ECAS). Dysfunction of eye movement control in ALS can be characterized by a two-stage sequential pattern comprising executive deficits in Stage 1 and additionally impaired infratentorial oculomotor control pathways in Stage 2. This pattern parallels the neuropathological staging of ALS and may serve as a technical marker of the neuropathological spreading.

  16. Efficient parallel implementation of active appearance model fitting algorithm on GPU.

    Science.gov (United States)

    Wang, Jinwei; Ma, Xirong; Zhu, Yuanping; Sun, Jizhou

    2014-01-01

    The active appearance model (AAM) is one of the most powerful model-based object detecting and tracking methods and has been widely used in various situations. However, the high-dimensional texture representation causes very time-consuming computations, which makes the AAM difficult to apply to real-time systems. The emergence of modern graphics processing units (GPUs) that feature a many-core, fine-grained parallel architecture provides new and promising solutions to overcome the computational challenge. In this paper, we propose an efficient parallel implementation of the AAM fitting algorithm on GPUs. Our design idea is fine-grained parallelism, in which we distribute the texture data of the AAM, in pixels, to thousands of parallel GPU threads for processing, which makes the algorithm fit better into the GPU architecture. We implement our algorithm using the compute unified device architecture (CUDA) on the Nvidia's GTX 650 GPU, which has the latest Kepler architecture. To compare the performance of our algorithm with different data sizes, we built sixteen face AAM models of different dimensional textures. The experimental results show that our parallel AAM fitting algorithm can achieve real-time performance for videos even on very high-dimensional textures.
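
    The fine-grained design idea, one GPU thread per texture pixel, can be sketched with a Numba CUDA kernel (assuming a CUDA-capable GPU; the per-pixel error computation is a simplified stand-in for one step of AAM fitting).

```python
# Hedged sketch: distribute the texture vector, pixel by pixel, over
# GPU threads; requires Numba and a CUDA-capable device.
from numba import cuda
import numpy as np

@cuda.jit
def texture_error(model_tex, image_tex, out):
    i = cuda.grid(1)          # one thread per texture pixel
    if i < out.size:
        out[i] = image_tex[i] - model_tex[i]

n = 1 << 16                   # texture dimensionality (illustrative)
model = np.random.rand(n).astype(np.float32)
image = np.random.rand(n).astype(np.float32)
error = np.empty_like(model)

threads = 256
blocks = (n + threads - 1) // threads
texture_error[blocks, threads](model, image, error)  # arrays copied to/from GPU
print(float(np.abs(error).max()))
```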

  17. Parallelization of simulation code for liquid-gas model of lattice-gas fluid

    International Nuclear Information System (INIS)

    Kawai, Wataru; Ebihara, Kenichi; Kume, Etsuo; Watanabe, Tadashi

    2000-03-01

    A simulation code for hydrodynamical phenomena based on the liquid-gas model of lattice-gas fluid is parallelized by using the MPI (Message Passing Interface) library. The parallelized code can be applied to larger simulations than the non-parallelized code. The calculation times of the parallelized code on VPP500 (a vector-parallel supercomputer with distributed memory units), AP3000 (a scalar-parallel server with distributed memory units), and a workstation cluster decreased in inverse proportion to the number of processors. (author)

  18. Two-stage liquefaction of a Spanish subbituminous coal

    Energy Technology Data Exchange (ETDEWEB)

    Martinez, M.T.; Fernandez, I.; Benito, A.M.; Cebolla, V.; Miranda, J.L.; Oelert, H.H. (Instituto de Carboquimica, Zaragoza (Spain))

    1993-05-01

    A Spanish subbituminous coal has been processed in two-stage liquefaction in a non-integrated process. The first-stage coal liquefaction has been carried out in a continuous pilot plant in Germany at Clausthal Technical University at 400°C, 20 MPa hydrogen pressure and anthracene oil as solvent. The second-stage coal liquefaction has been performed in continuous operation in a hydroprocessing unit at the Instituto de Carboquimica at 450°C and 10 MPa hydrogen pressure, with two commercial catalysts: Harshaw HT-400E (Co-Mo/Al2O3) and HT-500E (Ni-Mo/Al2O3). The total conversion for the first-stage coal liquefaction was 75.41 wt% (coal d.a.f.), being 3.79 wt% gases, 2.58 wt% primary condensate and 69.04 wt% heavy liquids. The heteroatom removal for the second-stage liquefaction was 97-99 wt% of S, 85-87 wt% of N and 93-100 wt% of O. The hydroprocessed liquids have about 70% of compounds with boiling point below 350°C, and meet the sulphur and nitrogen specifications for refinery feedstocks. Liquids from two-stage coal liquefaction have been distilled, and the naphtha, kerosene and diesel fractions obtained have been characterized. 39 refs., 3 figs., 8 tabs.

  19. New Grapheme Generation Rules for Two-Stage Modelbased Grapheme-to-Phoneme Conversion

    Directory of Open Access Journals (Sweden)

    Seng Kheang

    2015-01-01

    Full Text Available The precise conversion of arbitrary text into its corresponding phoneme sequence (grapheme-to-phoneme or G2P conversion) is implemented in speech synthesis and recognition, pronunciation learning software, spoken term detection and spoken document retrieval systems. Because the quality of this module plays an important role in the performance of such systems, and many problems regarding G2P conversion have been reported, we propose a novel two-stage model-based approach, implemented using an existing weighted finite-state transducer-based G2P conversion framework, to improve the performance of the G2P conversion model. The first-stage model is built for automatic conversion of words to phonemes, while the second-stage model utilizes the input graphemes and output phonemes obtained from the first stage to determine the best final output phoneme sequence. Additionally, we designed new grapheme generation rules, which enable extra detail for the vowel and consonant graphemes appearing within a word. When compared with previous approaches, the evaluation results indicate that our approach using rules focusing on the vowel graphemes slightly improved the accuracy on the out-of-vocabulary dataset and consistently increased the accuracy on the in-vocabulary dataset.

  20. Experimental studies of two-stage centrifugal dust concentrator

    Science.gov (United States)

    Vechkanova, M. V.; Fadin, Yu M.; Ovsyannikov, Yu G.

    2018-03-01

    The article presents experimental results for a two-stage centrifugal dust concentrator, describes its design, and covers the development of an engineering calculation method and laboratory investigations. For the experiments, the authors used quartz, ceramic dust and slag. A dispersion analysis of the dust particles was obtained by the sedimentation method. To build a mathematical model of the dust-collection process, a central composite rotatable design for a four-factor experiment was used. The sequence of experiments was conducted in accordance with a table of random numbers. Conclusions were drawn.

  1. Parallel Execution of Functional Mock-up Units in Buildings Modeling

    Energy Technology Data Exchange (ETDEWEB)

    Ozmen, Ozgur [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Nutaro, James J. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); New, Joshua Ryan [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States)

    2016-06-30

    A Functional Mock-up Interface (FMI) defines a standardized interface to be used in computer simulations to develop complex cyber-physical systems. FMI implementation by a software modeling tool enables the creation of a simulation model that can be interconnected, or the creation of a software library called a Functional Mock-up Unit (FMU). This report describes an FMU wrapper implementation that imports FMUs into a C++ environment and uses an Euler solver that executes FMUs in parallel using Open Multi-Processing (OpenMP). The purpose of this report is to elucidate the runtime performance of the solver when a multi-component system is imported as a single FMU (for the whole system) or as multiple FMUs (for different groups of components as sub-systems). This performance comparison is conducted using two test cases: (1) a simple, multi-tank problem; and (2) a more realistic use case based on the Modelica Buildings Library. In both test cases, the performance gains are promising when each FMU consists of a large number of states and state events that are wrapped in a single FMU. Load balancing is demonstrated to be a critical factor in speeding up parallel execution of multiple FMUs.
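
    The report's solver is written in C++ with OpenMP; purely as a sketch of the scheduling idea, the following Python analogue advances several FMU-like sub-models with a fixed-step Euler method in parallel and synchronizes them once per macro step. The sub-model dynamics and coupling inputs are placeholders, and unbalanced sub-model sizes would leave workers idle, which is the load-balancing issue the report highlights.

```python
# Python analogue of stepping FMU-like sub-models in parallel with Euler.
from concurrent.futures import ProcessPoolExecutor

def euler_step(args):
    """Advance one sub-model (here: a linear tank dx/dt = -a*x + u) by dt."""
    x, a, u, dt = args
    return x + dt * (-a * x + u)

def step_all(states, params, inputs, dt, pool):
    # One synchronized macro step: all sub-models advance independently,
    # then coupling variables would be exchanged between them.
    jobs = [(x, a, u, dt) for x, a, u in zip(states, params, inputs)]
    return list(pool.map(euler_step, jobs))

if __name__ == "__main__":
    states, params = [1.0, 0.5, 2.0], [0.3, 0.7, 0.2]
    with ProcessPoolExecutor() as pool:
        for _ in range(100):
            inputs = [0.1] * len(states)        # coupling inputs (placeholder)
            states = step_all(states, params, inputs, 0.01, pool)
    print(states)
```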

  2. Stage-specific predictive models for breast cancer survivability.

    Science.gov (United States)

    Kate, Rohit J; Nadig, Ramya

    2017-01-01

    Survivability rates vary widely among the various stages of breast cancer. Although machine learning models built in the past to predict breast cancer survivability were given stage as one of the features, they were not trained or evaluated separately for each stage. Our objective was to investigate whether there are differences in the performance of machine learning models trained and evaluated on different stages for predicting breast cancer survivability. Using three different machine learning methods, we built models to predict breast cancer survivability separately for each stage and compared them with the traditional joint models built for all the stages. We also evaluated the models separately for each stage and together for all the stages. Our results show that the most suitable model to predict survivability for a specific stage is the model trained for that particular stage. In our experiments, using additional examples of other stages during training did not help; in fact, it made performance worse in some cases. The most important features for predicting survivability were also found to be different for different stages. By evaluating the models separately on different stages we found that the performance varied widely across them. We also demonstrate that evaluating predictive models for survivability on all the stages together, as was done in the past, is misleading because it overestimates performance. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
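
    The paper's comparison can be reproduced in miniature with any standard ML toolkit: train one model per stage and compare against a single joint model that receives stage as a feature. The sketch below uses synthetic data and logistic regression, both assumptions rather than the study's actual dataset and methods.

```python
# Stage-specific models versus one joint model with stage as a feature.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n = 1000
stage = rng.integers(1, 5, n)                    # stages 1-4
X = rng.normal(size=(n, 5))
y = (X[:, 0] + 0.5 * stage + rng.normal(size=n) > 2).astype(int)  # outcome

# Joint model: stage is just another feature.
joint = cross_val_score(LogisticRegression(max_iter=1000),
                        np.column_stack([X, stage]), y, cv=5).mean()
print("joint accuracy:", round(joint, 3))

# Stage-specific models: train and evaluate within each stage separately.
for s in range(1, 5):
    m = stage == s
    if y[m].min() == y[m].max():                 # degenerate stage, skip
        continue
    acc = cross_val_score(LogisticRegression(max_iter=1000),
                          X[m], y[m], cv=5).mean()
    print(f"stage {s} accuracy:", round(acc, 3))
```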

  3. One-stage and two-stage penile buccal mucosa urethroplasty

    African Journals Online (AJOL)

    G. Barbagli

    2015-12-02

    Dec 2, 2015 ... there also seems to be a trend of decreasing urethritis and an increase of instrumentation and catheter related strictures in these countries as well [4–6]. The repair of penile urethral strictures may require one- or two- stage urethroplasty [7–10]. Certainly, sexual function can be placed at risk by any surgery ...

  4. Modeling condom-use stage of change in low-income, single, urban women.

    Science.gov (United States)

    Morrison-Beedy, Dianne; Carey, Michael P; Lewis, Brian P

    2002-04-01

    This study was undertaken to identify and test a model of the cognitive antecedents to condom use stage of change in low-income, single, urban women. A convenience sample of 537 women (M=30 years old) attending two urban primary health care settings in western New York State anonymously completed questionnaires based primarily on two leading social-cognitive models, the transtheoretical model and the information-motivation-behavioral skills model. We used structural equation modeling to examine the direct and indirect effects of HIV-related knowledge, social norms of discussing HIV risk and prevention, familiarity with HIV-infected persons, general readiness to change sexual behaviors, perceived vulnerability to HIV, and pros and cons of condom use on condom-use stage of change. The results indicated two models that differ by partner type. Condom-use stage of change in women with steady main partners was influenced most by social norms and the pros of condom use. Condom-use stage of change in women with "other" types (multiple, casual, or new) of sexual partners was influenced by HIV-related knowledge, general readiness to change sexual behaviors, and the pros of condom use. These findings suggest implications for developing gender-relevant HIV-prevention interventions. Copyright 2002 Wiley Periodicals, Inc.

  5. Two-Stage Variable Sample-Rate Conversion System

    Science.gov (United States)

    Tkacenko, Andre

    2009-01-01

    A two-stage variable sample-rate conversion (SRC) system has been proposed as part of a digital signal-processing system in a digital communication radio receiver that utilizes a variety of data rates. The proposed system would be used as an interface between (1) an analog-to-digital converter used in the front end of the receiver to sample an intermediate-frequency signal at a fixed input rate and (2) digitally implemented tracking loops in subsequent stages that operate at various sample rates that are generally lower than the input sample rate. This two-stage system would be capable of converting from an input sample rate to a desired lower output sample rate that could be variable and not necessarily a rational fraction of the input rate.
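
    A minimal sketch of the same architecture, assuming made-up rates: a fixed polyphase rational stage does the coarse conversion, and a variable fractional stage (here simple linear interpolation) handles the non-rational residue.

```python
# Two-stage SRC: rational polyphase stage, then fractional interpolation.
import numpy as np
from scipy.signal import resample_poly

fs_in, fs_out = 100e6, 36.123e6            # arbitrary example rates
t = np.arange(4096) / fs_in
x = np.cos(2 * np.pi * 1e6 * t)

# Stage 1: coarse rational conversion close to the target (100 -> 40 MHz).
y1 = resample_poly(x, up=2, down=5)
fs1 = fs_in * 2 / 5

# Stage 2: fine, variable fractional conversion by linear interpolation.
n_out = int(len(y1) * fs_out / fs1)
pos = np.arange(n_out) * fs1 / fs_out      # fractional read positions
i = pos.astype(int)
frac = pos - i
y2 = y1[i] * (1 - frac) + y1[np.minimum(i + 1, len(y1) - 1)] * frac
print(len(x), "->", len(y1), "->", len(y2), "samples")
```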

  6. Modeling and Control of Primary Parallel Isolated Boost Converter

    DEFF Research Database (Denmark)

    Mira Albert, Maria del Carmen; Hernandez Botella, Juan Carlos; Sen, Gökhan

    2012-01-01

    In this paper state space modeling and closed loop controlled operation have been presented for primary parallel isolated boost converter (PPIBC) topology as a battery charging unit. Parasitic resistances have been included to have an accurate dynamic model. The accuracy of the model has been...

  7. On bi-criteria two-stage transportation problem: a case study

    Directory of Open Access Journals (Sweden)

    Ahmad MURAD

    2010-01-01

    The study of the optimum distribution of goods between sources and destinations is one of the important topics in project economics. This importance stems from minimizing the transportation cost, deterioration, time, etc. The classical transportation problem constitutes one of the major areas of application for linear programming. The aim of this problem is to obtain the optimum distribution of goods from different sources to different destinations which minimizes the total transportation cost. From the practical point of view, transportation problems may differ from the classical form. They may contain one or more objective functions, one or more transportation stages, and one or more types of commodity with one or more means of transport. The aim of this paper is to construct an optimization model of the transportation problem for one of the mill-stones companies. The model is formulated as a bi-criteria two-stage transportation problem with a special structure depending on the capacities of the suppliers, the warehouses and the requirements of the destinations. A solution algorithm is introduced to solve this class of bi-criteria two-stage transportation problems, obtaining the set of non-dominated extreme points and the efficient solutions associated with each of them, which enables the decision maker to choose the best one. The solution algorithm is mainly based on the fruitful application of the methods for treating transportation problems, the theory of duality of linear programming and the methods of solving bi-criteria linear programming problems.
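
    Dropping the second criterion for a moment, a single-objective two-stage transportation problem is just a linear program over the source-to-warehouse and warehouse-to-destination flows; the paper's bi-criteria variant would trace the non-dominated set, e.g. by scanning weighted combinations of two such cost matrices. All data below are invented.

```python
# Single-objective two-stage transportation LP solved with scipy.
import numpy as np
from scipy.optimize import linprog

supply = [60, 40]; cap = [50, 70]; demand = [45, 55]
c1 = np.array([[4, 6], [5, 3]])   # source -> warehouse unit costs
c2 = np.array([[2, 7], [4, 3]])   # warehouse -> destination unit costs

I, W, J = 2, 2, 2
nvar = I * W + W * J              # x[i,w] then y[w,j], flattened row-major
c = np.r_[c1.ravel(), c2.ravel()]

A_ub, b_ub, A_eq, b_eq = [], [], [], []
for i in range(I):                # supply limits
    row = np.zeros(nvar); row[i * W:(i + 1) * W] = 1
    A_ub.append(row); b_ub.append(supply[i])
for w in range(W):                # warehouse capacity and flow balance
    xin = np.zeros(nvar); xin[[i * W + w for i in range(I)]] = 1
    out = np.zeros(nvar); out[I * W + w * J: I * W + (w + 1) * J] = 1
    A_ub.append(xin); b_ub.append(cap[w])
    A_eq.append(xin - out); b_eq.append(0.0)
for j in range(J):                # demands must be met exactly
    row = np.zeros(nvar)
    row[[I * W + w * J + j for w in range(W)]] = 1
    A_eq.append(row); b_eq.append(demand[j])

res = linprog(c, A_ub=np.array(A_ub), b_ub=np.array(b_ub),
              A_eq=np.array(A_eq), b_eq=np.array(b_eq), bounds=(0, None))
print("total cost:", res.fun)
print("flows:", res.x.round(1))
```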

  8. Two-dimensional evaluation of an ion plasma produced by pulsed lasers extracted by non-parallel collectors

    International Nuclear Information System (INIS)

    Mahdieh, M H; Gavili, A

    2003-01-01

    Two-dimensional hydrodynamics of ion extraction from quasi-neutral plasmas has been calculated numerically for non-parallel ion extractors, and the results compared with those for the parallel case. The ions were assumed to be initially uniform with a very steep density profile at the boundaries, and held between two non-parallel metal plates acting as cathode and anode with fixed potentials. Experimentally, tunable pulsed lasers can produce such a plasma through stepwise photo-excitation and photo-ionization or multi-photo-ionization processes. Poisson's equation was solved simultaneously with the equations of mass and momentum, assuming the Maxwell-Boltzmann distribution for electrons. Ordinary Cartesian co-ordinates are not suitable for the rotated extractor geometry; therefore, using the 'algebraic method', a transformation from the physical domain into the computational rectangular plane is applied for analysing the irregular boundaries. Such a technique provides adequate resolution for the boundary layer. Using first-order explicit upwind differencing in an appropriate transformed Cartesian co-ordinate system, the hydrodynamics of the plasma ions between the two non-parallel electrodes was evaluated. In these calculations the electric potential, the ion density between the two electrodes, and the extraction time were assessed, considering three separate regions for the plasma, i.e. the ion sheath (n_i >> n_e ≈ 0), the transition region or pre-sheath (n_i = n_e), and the quasi-neutral plasma (n_i - n_e << n_i). The results were compared with those for parallel electrodes. A significant discrepancy was found between the two results. From the calculation, the non-uniform asymmetric potential contour and the ion density contour across the plasma were obtained for the non-parallel electrodes. For comparison with the parallel extractors, we have also obtained almost the same extraction time for the non-parallel extractors

  9. A parallel input composite transimpedance amplifier

    Science.gov (United States)

    Kim, D. J.; Kim, C.

    2018-01-01

    A new approach to high-performance current-to-voltage preamplifier design is presented. The design, using multiple operational amplifiers (op-amps), has a parasitic capacitance compensation network and a composite amplifier topology for fast, precise, and low-noise performance. The input stage, consisting of parallel-linked JFET op-amps, and a high-speed bipolar junction transistor (BJT) gain stage driving the output in the composite amplifier topology, cooperating with the capacitance compensation feedback network, ensure wide-bandwidth stability in the presence of input capacitance above 40 nF. The design is ideal for any two-probe measurement, including high-impedance transport and scanning tunneling microscopy measurements.

  10. Hydrogeological conceptual model development and numerical modelling using CONNECTFLOW, Forsmark modelling stage 2.3

    Energy Technology Data Exchange (ETDEWEB)

    Follin, Sven (SF GeoLogic AB, Taeby (Sweden)); Hartley, Lee; Jackson, Peter; Roberts, David (Serco TAP (United Kingdom)); Marsic, Niko (Kemakta Konsult AB, Stockholm (Sweden))

    2008-05-15

    Three versions of a site descriptive model (SDM) have been completed for the Forsmark area. Version 0 established the state of knowledge prior to the start of the site investigation programme. Version 1.1 was essentially a training exercise and was completed during 2004. Version 1.2 was a preliminary site description and concluded the initial site investigation work (ISI) in June 2005. Three modelling stages are planned for the complete site investigation work (CSI). These are labelled stage 2.1, 2.2 and 2.3, respectively. An important component of each of these stages is to address and continuously try to resolve discipline-specific uncertainties of importance for repository engineering and safety assessment. Stage 2.1 included an updated geological model for Forsmark and aimed to provide feedback from the modelling working group to the site investigation team to enable completion of the site investigation work. Stage 2.2 described the conceptual understanding and the numerical modelling of the bedrock hydrogeology in the Forsmark area based on data freeze 2.2. The present report describes the modelling based on data freeze 2.3, which is the final data freeze in Forsmark. In comparison, data freeze 2.3 is considerably smaller than data freeze 2.2. Therefore, stage 2.3 deals primarily with model confirmation and uncertainty analysis, e.g. verification of important hypotheses made in stage 2.2 and the role of parameter uncertainty in the numerical modelling. On the whole, the work reported here constitutes an addendum to the work reported in stage 2.2. Two changes were made to the CONNECTFLOW code in stage 2.3. These serve to: 1) improve the representation of the hydraulic properties of the regolith, and 2) improve the conditioning of transmissivity of the deformation zones against single-hole hydraulic tests. The changes to the modelling of the regolith were made to improve the consistency with models made with the MIKE SHE code, which involved the introduction

  11. The ongoing investigation of high performance parallel computing in HEP

    CERN Document Server

    Peach, Kenneth J; Böck, R K; Dobinson, Robert W; Hansroul, M; Norton, Alan Robert; Willers, Ian Malcolm; Baud, J P; Carminati, F; Gagliardi, F; McIntosh, E; Metcalf, M; Robertson, L; CERN. Geneva. Detector Research and Development Committee

    1993-01-01

    Past and current exploitation of parallel computing in High Energy Physics is summarized and a list of R & D projects in this area is presented. The applicability of new parallel hardware and software to physics problems is investigated, in the light of the requirements for computing power of LHC experiments and the current trends in the computer industry. Four main themes are discussed (possibilities for a finer grain of parallelism; fine-grain communication mechanism; usable parallel programming environment; different programming models and architectures, using standard commercial products). Parallel computing technology is potentially of interest for offline and vital for real time applications in LHC. A substantial investment in applications development and evaluation of state of the art hardware and software products is needed. A solid development environment is required at an early stage, before mainline LHC program development begins.

  12. The Effect of Bite Registration on the Reproducibility of Parallel Periapical Radiographs Obtained with Two Month Intervals

    Directory of Open Access Journals (Sweden)

    L. Khojastehpour

    2006-06-01

    Statement of Problem: Digital subtraction radiography (DSR) needs reproducible alignment between the x-ray source, the object, and the film to obtain identical projections of the same anatomic region. Purpose: The aim of this study was to evaluate the effect of bite registrations (placed on individual bite blocks) on the reproducibility of parallel periapical radiographs, obtained every 2 months, in patients undergoing periodontal surgery for furcation involvement. Materials and Methods: Ninety-eight parallel periapical radiographs were used in this study. The radiographs were taken with individual bite blocks attached to the beam-guiding device. In order to individualize the bite blocks, bite registrations were fabricated using silicone impression material and were placed on the individual bite blocks. All radiographs in each series were processed under similar conditions and were digitized with a flatbed scanner fitted with a transparency adaptor (HP Scanjet 7400) at 300 dpi resolution. The reproducibility of this method for obtaining similar parallel periapical radiographs was assessed by measuring the horizontal and vertical distances between two selected unchanged reference points on each radiograph and comparing them in each series. The reliability of the measurements was analyzed using the one-way random model intraclass correlation coefficient for the average of raters. Results: For both measurements (horizontal and vertical), statistically significant reliability was found between three repeated radiographs taken at two-month intervals in 16 patients, as well as five repeated radiographs taken at two-month intervals in 10 patients (P<0.001). Conclusion: The results of this study show that bite registration on individual bite blocks is sufficient for obtaining identical parallel periapical radiographs.

  13. Numerical simulations for the coal/oxidant distribution effects between two-stages for multi opposite burners (MOB) gasifier

    International Nuclear Information System (INIS)

    Unar, Imran Nazir; Wang, Lijun; Pathan, Abdul Ghani; Mahar, Rasool Bux; Li, Rundong; Uqaili, M. Aslam

    2014-01-01

    Highlights: • We simulated a double-stage 3D entrained flow coal gasifier with multi-opposite burners. • Various reaction mechanisms have been evaluated against experimental results. • The effects of coal and oxygen distribution between the two stages on the performance of the gasifier have been investigated. • The local coal-to-oxygen ratio affects the overall efficiency of the gasifier. - Abstract: A 3D CFD model for a two-stage entrained flow dry feed coal gasifier with multi opposite burners (MOB) has been developed in this paper. At each stage two opposite nozzles are impinging whereas the two other opposite nozzles are slightly tangential. Various numerical simulations were carried out in standard CFD software to investigate the impacts of coal and oxidant distributions between the two stages of the gasifier. The chemical process was described by a Finite Rate/Eddy Dissipation model. Heterogeneous and homogeneous reactions were defined using published kinetic data, and the realizable k–ε turbulence model was used to solve the turbulence equations. Gas–solid interaction was defined in a Euler–Lagrangian framework. Different reaction mechanisms were first investigated to validate the model against published experimental results. Further investigations were then made with the validated model for important parameters such as species concentrations in the syngas, char conversion, maximum inside temperature and syngas exit temperature. The analysis of the results from the various simulated cases shows that the coal/oxidant distribution between the stages has a great influence on the overall performance of the gasifier. The maximum char conversion was found to be 99.79% with 60% of the coal and 50% of the oxygen at the upper injection level. The minimum char conversion observed was 95.45% with 30% of the coal and 40% of the oxygen at the same level. In general, injecting 50% or more of the total oxygen and coal at the upper level showed optimized performance

  14. Controlling nonsequential double ionization of Ne with parallel-polarized two-color laser pulses.

    Science.gov (United States)

    Luo, Siqiang; Ma, Xiaomeng; Xie, Hui; Li, Min; Zhou, Yueming; Cao, Wei; Lu, Peixiang

    2018-05-14

    We measure the recoil-ion momentum distributions from nonsequential double ionization of Ne by two-color laser pulses consisting of a strong 800-nm field and a weak 400-nm field with parallel polarizations. The ion momentum spectra show pronounced asymmetries in the emission direction, which depend sensitively on the relative phase of the two-color components. Moreover, the peak of the doubly charged ion momentum distribution shifts gradually with the relative phase. The shifted range is much larger than the maximal vector potential of the 400-nm laser field. Those features are well recaptured by a semiclassical model. Through analyzing the correlated electron dynamics, we found that the energy sharing between the two electrons is extremely unequal at the instant of recollision. We further show that the shift of the ion momentum corresponds to the change of the recollision time in the two-color laser field. By tuning the relative phase of the two-color components, the recollision time is controlled with attosecond precision.

  15. Markovian inventory model with two parallel queues, jockeying and impatient customers

    Directory of Open Access Journals (Sweden)

    Jeganathan K.

    2016-01-01

    This article presents a perishable stochastic inventory system under continuous review at a service facility consisting of two parallel queues with jockeying. Each server has its own queue, and jockeying among the queues is permitted. Each queue has finite capacity L. The inventory is replenished according to an (s, S) inventory policy and the replenishing times are assumed to be exponentially distributed. An individual customer is issued a demanded item after a random service time, which is distributed as negative exponential. The lifetime of each item is assumed to be exponential. Customers arrive according to a Poisson process and, on arrival, join the shortest feasible queue. Moreover, if the inventory level is greater than one and one queue is empty while more than one customer is waiting in the other queue, then the customer standing behind the customer being served in that queue is transferred to the empty queue. This prevents one server from being idle while customers are waiting in the other queue. A waiting customer independently reneges from the system after an exponentially distributed amount of time. The joint probability distribution of the inventory level, the number of customers in both queues, and the status of the server is obtained in the steady state. Some important steady-state system performance measures are derived, as is the long-run total expected cost rate.

  16. Professional Parallel Programming with C# Master Parallel Extensions with NET 4

    CERN Document Server

    Hillar, Gastón

    2010-01-01

    Expert guidance for those programming today's dual-core processor PCs. As PC processors explode from one or two to now eight processors, there is an urgent need for programmers to master concurrent programming. This book dives deep into the latest technologies available to programmers for creating professional parallel applications using C#, .NET 4, and Visual Studio 2010. The book covers task-based programming, coordination data structures, PLINQ, thread pools, the asynchronous programming model, and more. It also teaches other parallel programming techniques, such as SIMD and vectorization. Teach

  17. Exploration Of Deep Learning Algorithms Using Openacc Parallel Programming Model

    KAUST Repository

    Hamam, Alwaleed A.

    2017-03-13

    Deep learning is based on a set of algorithms that attempt to model high-level abstractions in data. Specifically, the RBM is a deep learning algorithm used in this project, whose time performance is improved through an efficient parallel implementation with the OpenACC tool and the best possible optimizations, harnessing the massively parallel power of NVIDIA GPUs. GPU development in the last few years has contributed to the growth of deep learning. OpenACC is a directive-based approach to computing in which directives provide compiler hints to accelerate code. The traditional restricted Boltzmann machine is a stochastic neural network that essentially performs a binary version of factor analysis. The RBM is a useful neural-network basis for larger modern deep learning models, such as the Deep Belief Network. RBM parameters are estimated using an efficient training method called Contrastive Divergence. Parallel implementations of the RBM are available using different models such as OpenMP and CUDA, but this project has been the first attempt to apply the OpenACC model to the RBM.

  18. Exploration Of Deep Learning Algorithms Using Openacc Parallel Programming Model

    KAUST Repository

    Hamam, Alwaleed A.; Khan, Ayaz H.

    2017-01-01

    Deep learning is based on a set of algorithms that attempt to model high-level abstractions in data. Specifically, the RBM is a deep learning algorithm used in this project, whose time performance is improved through an efficient parallel implementation with the OpenACC tool and the best possible optimizations, harnessing the massively parallel power of NVIDIA GPUs. GPU development in the last few years has contributed to the growth of deep learning. OpenACC is a directive-based approach to computing in which directives provide compiler hints to accelerate code. The traditional restricted Boltzmann machine is a stochastic neural network that essentially performs a binary version of factor analysis. The RBM is a useful neural-network basis for larger modern deep learning models, such as the Deep Belief Network. RBM parameters are estimated using an efficient training method called Contrastive Divergence. Parallel implementations of the RBM are available using different models such as OpenMP and CUDA, but this project has been the first attempt to apply the OpenACC model to the RBM.
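
    Both records above refer to the Contrastive Divergence training step; a minimal NumPy sketch of one CD-1 update for a binary RBM is given below. Sizes and the learning rate are arbitrary, and this serial version is the data-parallel kernel that OpenACC directives would offload.

```python
# One contrastive-divergence (CD-1) update for a binary RBM in NumPy.
import numpy as np

rng = np.random.default_rng(1)
n_vis, n_hid, lr = 6, 4, 0.1
W = rng.normal(0, 0.1, (n_vis, n_hid))
b_v, b_h = np.zeros(n_vis), np.zeros(n_hid)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def cd1_update(v0):
    # Positive phase: sample hidden units given the data vector.
    ph0 = sigmoid(v0 @ W + b_h)
    h0 = (rng.random(n_hid) < ph0).astype(float)
    # Negative phase: one Gibbs step back to a reconstruction.
    pv1 = sigmoid(h0 @ W.T + b_v)
    v1 = (rng.random(n_vis) < pv1).astype(float)
    ph1 = sigmoid(v1 @ W + b_h)
    # Gradient approximation: <v h>_data - <v h>_model.
    return np.outer(v0, ph0) - np.outer(v1, ph1), v0 - v1, ph0 - ph1

v = (rng.random(n_vis) < 0.5).astype(float)
for _ in range(100):
    dW, dbv, dbh = cd1_update(v)
    W += lr * dW; b_v += lr * dbv; b_h += lr * dbh
print(W.round(2))
```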

  19. Late-stage pharmaceutical R&D and pricing policies under two-stage regulation.

    Science.gov (United States)

    Jobjörnsson, Sebastian; Forster, Martin; Pertile, Paolo; Burman, Carl-Fredrik

    2016-12-01

    We present a model combining the two regulatory stages relevant to the approval of a new health technology: the authorisation of its commercialisation and the insurer's decision about whether to reimburse its cost. We show that the degree of uncertainty concerning the true value of the insurer's maximum willingness to pay for a unit increase in effectiveness has a non-monotonic impact on the optimal price of the innovation, the firm's expected profit and the optimal sample size of the clinical trial. A key result is that there exists a range of values of the uncertainty parameter over which a reduction in uncertainty benefits the firm, the insurer and patients. We consider how different policy parameters may be used as incentive mechanisms, and the incentives to invest in R&D for marginal projects such as those targeting rare diseases. The model is calibrated using data on a new treatment for cystic fibrosis. Copyright © 2016 Elsevier B.V. All rights reserved.

  20. Efficient Parallel Implementation of Active Appearance Model Fitting Algorithm on GPU

    Directory of Open Access Journals (Sweden)

    Jinwei Wang

    2014-01-01

    The active appearance model (AAM) is one of the most powerful model-based object detecting and tracking methods and has been widely used in various situations. However, the high-dimensional texture representation causes very time-consuming computations, which makes the AAM difficult to apply to real-time systems. The emergence of modern graphics processing units (GPUs), which feature a many-core, fine-grained parallel architecture, provides new and promising solutions to overcome the computational challenge. In this paper, we propose an efficient parallel implementation of the AAM fitting algorithm on GPUs. Our design idea is fine-grain parallelism, in which we distribute the texture data of the AAM, in pixels, to thousands of parallel GPU threads for processing, which makes the algorithm fit better into the GPU architecture. We implement our algorithm using the compute unified device architecture (CUDA) on Nvidia's GTX 650 GPU, which has the latest Kepler architecture. To compare the performance of our algorithm with different data sizes, we built sixteen face AAM models with textures of different dimensions. The experimental results show that our parallel AAM fitting algorithm can achieve real-time performance for videos even with very high-dimensional textures.

  1. The Parallel System for Integrating Impact Models and Sectors (pSIMS)

    Science.gov (United States)

    Elliott, Joshua; Kelly, David; Chryssanthacopoulos, James; Glotter, Michael; Jhunjhnuwala, Kanika; Best, Neil; Wilde, Michael; Foster, Ian

    2014-01-01

    We present a framework for massively parallel climate impact simulations: the parallel System for Integrating Impact Models and Sectors (pSIMS). This framework comprises a) tools for ingesting and converting large amounts of data to a versatile datatype based on a common geospatial grid; b) tools for translating this datatype into custom formats for site-based models; c) a scalable parallel framework for performing large ensemble simulations, using any one of a number of different impacts models, on clusters, supercomputers, distributed grids, or clouds; d) tools and data standards for reformatting outputs to common datatypes for analysis and visualization; and e) methodologies for aggregating these datatypes to arbitrary spatial scales such as administrative and environmental demarcations. By automating many time-consuming and error-prone aspects of large-scale climate impacts studies, pSIMS accelerates computational research, encourages model intercomparison, and enhances reproducibility of simulation results. We present the pSIMS design and use example assessments to demonstrate its multi-model, multi-scale, and multi-sector versatility.

  2. Two stage fluid bed-plasma gasification process for solid waste valorisation: Technical review and preliminary thermodynamic modelling of sulphur emissions

    International Nuclear Information System (INIS)

    Morrin, Shane; Lettieri, Paola; Chapman, Chris; Mazzei, Luca

    2012-01-01

    Highlights: ► We investigate sulphur during MSW gasification within a fluid bed-plasma process. ► We review the literature on the feed, sulphur and process principles therein. ► The need for research in this area was identified. ► We perform thermodynamic modelling of the fluid bed stage. ► Initial findings indicate the prominence of solid phase sulphur. - Abstract: Gasification of solid waste for energy has significant potential given an abundant feed supply and strong policy drivers. Nonetheless, significant ambiguities in the knowledge base are apparent. Consequently this study investigates sulphur mechanisms within a novel two stage fluid bed-plasma gasification process. This paper includes a detailed review of gasification and plasma fundamentals in relation to the specific process, along with insight on MSW based feedstock properties and sulphur pollutant therein. As a first step to understanding sulphur partitioning and speciation within the process, thermodynamic modelling of the fluid bed stage has been performed. Preliminary findings, supported by plant experience, indicate the prominence of solid phase sulphur species (as opposed to H₂S) – Na and K based species in particular. Work is underway to further investigate and validate this.

  3. A Two-Stage Framework for 3D Face Reconstruction from RGBD Images.

    Science.gov (United States)

    Wang, Kangkan; Wang, Xianwang; Pan, Zhigeng; Liu, Kai

    2014-08-01

    This paper proposes a new approach for 3D face reconstruction with RGBD images from an inexpensive commodity sensor. The challenges we face are: 1) substantial random noise and corruption are present in low-resolution depth maps; and 2) there is a high degree of variability in pose and face expression. We develop a novel two-stage algorithm that effectively maps low-quality depth maps to realistic face models. Each stage is targeted toward a certain type of noise. The first stage extracts sparse errors from depth patches through data-driven local sparse coding, while the second stage smooths noise on the boundaries between patches and reconstructs the global shape by combining local shapes using our template-based surface refinement. Our approach does not require any markers or user interaction. We perform quantitative and qualitative evaluations on both synthetic and real test sets. Experimental results show that the proposed approach is able to produce high-resolution 3D face models with high accuracy, even if the inputs are of low quality and have large variations in viewpoint and face expression.

  4. A Programming Model for Massive Data Parallelism with Data Dependencies

    International Nuclear Information System (INIS)

    Cui, Xiaohui; Mueller, Frank; Potok, Thomas E.; Zhang, Yongpeng

    2009-01-01

    Accelerating processors can often be more cost and energy effective for a wide range of data-parallel computing problems than general-purpose processors. For graphics processor units (GPUs), this is particularly the case when program development is aided by environments such as NVIDIA's Compute Unified Device Architecture (CUDA), which dramatically reduces the gap between domain-specific architectures and general-purpose programming. Nonetheless, general-purpose GPU (GPGPU) programming remains subject to several restrictions. Most significantly, the separation of host (CPU) and accelerator (GPU) address spaces requires explicit management of GPU memory resources, especially for massive data parallelism that well exceeds the memory capacity of GPUs. One solution to this problem is to transfer data between the GPU and host memories frequently. In this work, we investigate another approach. We run massively data-parallel applications on GPU clusters. We further propose a programming model for massive data parallelism with data dependencies for this scenario. Experience from micro benchmarks and real-world applications shows that our model provides not only ease of programming but also significant performance gains

  5. Two-Stage Robust Security-Constrained Unit Commitment with Optimizable Interval of Uncertain Wind Power Output

    Directory of Open Access Journals (Sweden)

    Dayan Sun

    2017-01-01

    Because wind power spillage is barely considered, the existing robust unit commitment cannot accurately analyze the impacts of wind power accommodation on the on/off schedules and spinning reserve requirements of conventional generators, and it cannot consider the network security limits. In this regard, a novel double-level robust security-constrained unit commitment formulation with an optimizable interval of uncertain wind power output is first proposed in this paper to obtain allowable interval solutions for wind power generation and provide optimal schedules for conventional generators to cope with the uncertainty in wind power generation. The proposed double-level model is difficult to solve because of the invalid dual transform in the solution process caused by the coupling relation between the discrete and continuous variables. Therefore, a two-stage iterative solution method based on Benders Decomposition is also presented. The proposed double-level model is transformed into a single-level, two-stage robust interval unit commitment model by eliminating the coupling relation, and this two-stage model can then be solved by Benders Decomposition iteratively. Simulation studies on a modified IEEE 26-generator reliability test system connected to a wind farm are conducted to verify the effectiveness and advantages of the proposed model and solution method.

  6. Assessing efficiency and effectiveness of Malaysian Islamic banks: A two stage DEA analysis

    Science.gov (United States)

    Kamarudin, Norbaizura; Ismail, Wan Rosmanira; Mohd, Muhammad Azri

    2014-06-01

    Islamic banks in Malaysia are indispensable players in the financial industry with the growing need for syariah-compliant systems. In the banking industry, most recent studies have been concerned only with operational efficiency, and rarely with operational effectiveness. Since the production process of the banking industry can be described as a two-stage process, two-stage Data Envelopment Analysis (DEA) can be applied to measure bank performance. This study was designed to measure the overall performance, in terms of efficiency and effectiveness, of Islamic banks in Malaysia using a two-stage DEA approach. This paper presents the analysis of a DEA model which separates efficiency and effectiveness in order to evaluate the performance of ten selected Islamic banks in Malaysia for the financial year ended 2011. The analysis shows that the average efficiency score is higher than the average effectiveness score; thus we can say that Malaysian Islamic banks were more efficient than effective. Furthermore, none of the banks exhibited best practice in both stages, so a bank with better efficiency does not always have better effectiveness at the same time.
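
    The efficiency stage of such an analysis reduces to one small linear program per bank (decision making unit, DMU); a sketch of the input-oriented CCR envelopment model follows, with invented input/output data. The effectiveness stage would be scored the same way, reusing the first stage's outputs as its inputs.

```python
# Input-oriented CCR DEA efficiency scores via linear programming.
import numpy as np
from scipy.optimize import linprog

X = np.array([[20, 300], [30, 250], [25, 400], [40, 500]], float)  # inputs
Y = np.array([[100], [90], [150], [160]], float)                   # outputs
n, m, s = X.shape[0], X.shape[1], Y.shape[1]

def ccr_efficiency(k):
    # Variables: theta, lambda_1..lambda_n; minimize theta.
    c = np.r_[1.0, np.zeros(n)]
    A_ub, b_ub = [], []
    for i in range(m):      # sum_j lam_j * x_ij - theta * x_ik <= 0
        A_ub.append(np.r_[-X[k, i], X[:, i]]); b_ub.append(0.0)
    for r in range(s):      # -sum_j lam_j * y_rj <= -y_rk
        A_ub.append(np.r_[0.0, -Y[:, r]]); b_ub.append(-Y[k, r])
    res = linprog(c, A_ub=np.array(A_ub), b_ub=np.array(b_ub),
                  bounds=[(0, None)] * (n + 1))
    return res.fun

for k in range(n):
    print(f"DMU {k}: efficiency = {ccr_efficiency(k):.3f}")
```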

  7. Optimising the refrigeration cycle with a two-stage centrifugal compressor and a flash intercooler

    Energy Technology Data Exchange (ETDEWEB)

    Roeyttae, Pekka; Turunen-Saaresti, Teemu; Honkatukia, Juha [Lappeenranta University of Technology, Laboratory of Energy and Environmental Technology, PO Box 20, 53851 Lappeenranta (Finland)

    2009-09-15

    The optimisation of a refrigeration process with a two-stage centrifugal compressor and a flash intercooler is presented in this paper. The two centrifugal compressor stages are on the same shaft and the electric motor is cooled with the refrigerant. The performance of the centrifugal compressor is evaluated on the basis of semi-empirical specific-speed curves, and the effects of the Reynolds number, surface roughness and tip clearance are also taken into account. The thermodynamic and transport properties of the working fluids are modelled with a real-gas model. The condensing and evaporation temperatures, the temperature after the flash intercooler, and the cooling power are fixed values in the process. The aim is to maximise the coefficient of performance (COP). The method of optimisation, the operation of the compressor and flash intercooler, and the method for estimating the electric motor cooling are also discussed in the article. (author)
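
    The benefit of the flash intercooler can be previewed with the textbook idealization: for two ideal-gas compression stages with perfect intercooling, total work is minimized near the geometric mean of the suction and discharge pressures. The numbers below are illustrative only; the paper instead optimizes a real-gas cycle with semi-empirical specific-speed curves.

```python
# Toy check of the optimal intermediate pressure for two-stage compression.
import numpy as np

gamma, cp, T1 = 1.13, 900.0, 278.0          # rough refrigerant-like values
p_low, p_high = 1.0e5, 8.0e5                # suction / discharge pressure, Pa
kappa = (gamma - 1.0) / gamma

def total_work(p_mid):
    w1 = cp * T1 * ((p_mid / p_low) ** kappa - 1.0)
    w2 = cp * T1 * ((p_high / p_mid) ** kappa - 1.0)  # intercooled back to T1
    return w1 + w2

p_grid = np.linspace(1.1e5, 7.9e5, 2000)
best = p_grid[np.argmin(total_work(p_grid))]
print(f"optimal p_mid ~ {best/1e5:.2f} bar; sqrt rule gives "
      f"{np.sqrt(p_low*p_high)/1e5:.2f} bar")
```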

  8. A Two-Stage Layered Mixture Experiment Design for a Nuclear Waste Glass Application-Part 1

    International Nuclear Information System (INIS)

    Cooley, Scott K.; Piepel, Gregory F.; Gan, Hao; Kot, Wing; Pegg, Ian L.

    2003-01-01

    A layered experimental design involving mixture variables was generated to support developing property-composition models for high-level waste (HLW) glasses. The design was generated in two stages, each having unique characteristics. Each stage used a layered design having an outer layer, an inner layer, a center point, and some replicates. The layers were defined by single- and multi-variable constraints. The first stage involved 15 glass components treated as mixture variables. For each layer, vertices were generated and optimal design software was used to select alternative subsets of vertices and calculate design optimality measures. Two partial quadratic mixture models, containing 25 terms for the outer layer and 30 terms for the inner layer, were the basis for the optimal design calculations. Distributions of predicted glass property values were plotted and evaluated for the alternative subsets of vertices. Based on the optimality measures and the predicted property distributions, a "best" subset of vertices was selected for each layer to form a layered design for the first stage. The design for the second stage was selected to augment the first-stage design. The discussion of the second-stage design begins in this Part 1 and is continued in Part 2 (Cooley and Piepel, 2003b)

  9. Performance study of a heat pump driven and hollow fiber membrane-based two-stage liquid desiccant air dehumidification system

    International Nuclear Information System (INIS)

    Zhang, Ning; Yin, Shao-You; Zhang, Li-Zhi

    2016-01-01

    Graphical abstract: A heat pump driven, hollow fiber membrane-based two-stage liquid desiccant air dehumidification system. - Highlights: • A two-stage hollow fiber membrane based air dehumidification system is proposed. • It is a heat pump driven liquid desiccant system. • Performance is improved by about 20% over a single-stage system. • The optimal first-to-second-stage dehumidification area ratio is 1.4. - Abstract: A novel compression heat pump driven and hollow fiber membrane-based two-stage liquid desiccant air dehumidification system is presented. The liquid desiccant droplets are prevented from crossing over into the process air by the semi-permeable membranes. The isenthalpic processes are changed to quasi-isothermal processes by the two-stage dehumidification. The system is set up and a model is proposed for its simulation. Heat and mass capacities in the system, including the membrane modules, the condenser, the evaporator and the heat exchangers, are modeled in detail. The model is also validated experimentally. Compared with a single-stage dehumidification system, the two-stage system has a lower solution concentration exiting the dehumidifier and a lower condensing temperature. Thus, better thermodynamic system performance is realized and the COP can be increased by about 20% under the typical hot and humid conditions of Southern China. The allocations of heat and mass transfer areas in the system are also investigated. It is found that the optimal regeneration-to-dehumidification area ratio is 1.33; the optimal first-to-second-stage dehumidification area ratio is 1.4; and the optimal first-to-second-stage regeneration area ratio is 1.286.

  10. Parallelization of 2-D lattice Boltzmann codes

    International Nuclear Information System (INIS)

    Suzuki, Soichiro; Kaburaki, Hideo; Yokokawa, Mitsuo.

    1996-03-01

    Lattice Boltzmann (LB) codes to simulate two-dimensional fluid flow are developed on the vector parallel computer Fujitsu VPP500 and the scalar parallel computer Intel Paragon XP/S. While a 2-D domain decomposition method is used for the scalar parallel LB code, a 1-D domain decomposition method is used for the vector parallel LB code so that it can be vectorized along the axis perpendicular to the direction of the decomposition. High parallel efficiencies of 95.1% for the vector parallel calculation on 16 processors with a 1152x1152 grid and 88.6% for the scalar parallel calculation on 100 processors with an 800x800 grid are obtained. Performance models are developed to analyze the performance of the LB codes. Our performance models show that the execution speed of the vector parallel code is about one hundred times faster than that of the scalar parallel code with the same number of processors, up to 100 processors. We also analyze the scalability when the available memory of each processor element is kept at its maximum. Our performance model predicts that the execution time of the vector parallel code increases by about 3% on 500 processors. Although the 1-D domain decomposition method has, in general, a drawback in interprocessor communication, the vector parallel LB code is still suitable for large-scale and/or high-resolution simulations. (author)

  11. Parallelization of 2-D lattice Boltzmann codes

    Energy Technology Data Exchange (ETDEWEB)

    Suzuki, Soichiro; Kaburaki, Hideo; Yokokawa, Mitsuo

    1996-03-01

    Lattice Boltzmann (LB) codes to simulate two-dimensional fluid flow are developed on the vector parallel computer Fujitsu VPP500 and the scalar parallel computer Intel Paragon XP/S. While a 2-D domain decomposition method is used for the scalar parallel LB code, a 1-D domain decomposition method is used for the vector parallel LB code so that it can be vectorized along the axis perpendicular to the direction of the decomposition. High parallel efficiencies of 95.1% for the vector parallel calculation on 16 processors with a 1152x1152 grid and 88.6% for the scalar parallel calculation on 100 processors with an 800x800 grid are obtained. Performance models are developed to analyze the performance of the LB codes. Our performance models show that the execution speed of the vector parallel code is about one hundred times faster than that of the scalar parallel code with the same number of processors, up to 100 processors. We also analyze the scalability when the available memory of each processor element is kept at its maximum. Our performance model predicts that the execution time of the vector parallel code increases by about 3% on 500 processors. Although the 1-D domain decomposition method has, in general, a drawback in interprocessor communication, the vector parallel LB code is still suitable for large-scale and/or high-resolution simulations. (author).
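
    The 1-D domain decomposition described in these two records can be pictured as slicing the grid along one axis and refreshing one-cell ghost layers before each streaming step. The serial NumPy sketch below shows the layout only; a real run would perform the exchange with message passing between processors.

```python
# 1-D domain decomposition with periodic ghost-layer exchange (serial sketch).
import numpy as np

nx, ny, nproc = 12, 8, 3
field = np.arange(nx * ny, dtype=float).reshape(nx, ny)

# Split along x and pad each block with one ghost row on each side.
blocks = [np.empty((nx // nproc + 2, ny)) for _ in range(nproc)]
for p, blk in enumerate(blocks):
    blk[1:-1] = field[p * (nx // nproc):(p + 1) * (nx // nproc)]

def exchange_ghosts(blocks):
    # Periodic neighbours: the last interior row of the left block fills the
    # lower ghost row of the current block, and vice versa.
    for p in range(len(blocks)):
        left, right = blocks[p - 1], blocks[(p + 1) % len(blocks)]
        blocks[p][0] = left[-2]      # lower ghost <- left neighbour
        blocks[p][-1] = right[1]     # upper ghost <- right neighbour

exchange_ghosts(blocks)
print(blocks[0][0])   # equals the last interior row of the last block
```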

  12. Steady-state and time-dependent modelling of parallel transport in the scrape-off layer

    DEFF Research Database (Denmark)

    Havlickova, E.; Fundamenski, W.; Naulin, Volker

    2011-01-01

    The one-dimensional fluid code SOLF1D has been used for modelling of plasma transport in the scrape-off layer (SOL) along magnetic field lines, both in steady state and under transient conditions that arise due to plasma turbulence. The presented work summarizes results of SOLF1D with attention...... given to transient parallel transport which reveals two distinct time scales due to the transport mechanisms of convection and diffusion. Time-dependent modelling combined with the effect of ballooning shows propagation of particles along the magnetic field line with Mach number up to M ≈ 1...... temperature calculated in SOLF1D is compared with the approximative model used in the turbulence code ESEL both for steady-state and turbulent SOL. Dynamics of the parallel transport are investigated for a simple transient event simulating the propagation of particles and energy to the targets from a blob...

  13. Tug-of-war model for the two-bandit problem: nonlocally-correlated parallel exploration via resource conservation.

    Science.gov (United States)

    Kim, Song-Ju; Aono, Masashi; Hara, Masahiko

    2010-07-01

    We propose a model - the "tug-of-war (TOW) model" - to conduct unique parallel searches using many nonlocally-correlated search agents. The model is based on the property of a single-celled amoeba, the true slime mold Physarum, which maintains a constant intracellular resource volume while collecting environmental information by concurrently expanding and shrinking its branches. The conservation law entails a "nonlocal correlation" among the branches, i.e., a volume increment in one branch is immediately compensated by volume decrement(s) in the other branch(es). This nonlocal correlation was shown to be useful for decision making in the case of a dilemma. The multi-armed bandit problem is to determine the optimal strategy for maximizing the total reward sum under incompatible demands, by either exploiting the rewards obtained using the already collected information or exploring new information for acquiring higher payoffs involving risks. Our model can efficiently manage the "exploration-exploitation dilemma" and exhibits good performance. The average accuracy rate of our model is higher than those of well-known algorithms such as the modified ε-greedy algorithm and the modified softmax algorithm, especially for relatively difficult problems. Moreover, our model flexibly adapts to changing environments, a property essential for living organisms surviving in uncertain environments.
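
    A simplified version of the TOW rule is easy to state in code: the selected branch's displacement grows or shrinks with the reward, and the other branch is adjusted in the opposite direction so that the total "volume" stays fixed. The sketch below compares it with an ε-greedy baseline on a two-armed bandit; the exact dynamics and parameters differ from the authors' model.

```python
# Simplified tug-of-war (TOW) bandit versus an epsilon-greedy baseline.
import numpy as np

rng = np.random.default_rng(42)
p_true = [0.4, 0.6]                    # unknown reward probabilities
T = 5000

def tow_bandit():
    x = np.zeros(2)                    # branch "volumes" (displacements)
    wins = 0.0
    for _ in range(T):
        k = int(np.argmax(x + rng.normal(0, 1.0, 2)))   # noisy comparison
        r = rng.random() < p_true[k]
        wins += r
        d = 1.0 if r else -1.0
        x[k] += d                      # winning branch grows ...
        x[1 - k] -= d                  # ... the other shrinks (conservation)
    return wins / T

def eps_greedy(eps=0.1):
    q = np.zeros(2); n = np.zeros(2); wins = 0.0
    for _ in range(T):
        k = rng.integers(2) if rng.random() < eps else int(np.argmax(q))
        r = float(rng.random() < p_true[k])
        wins += r
        n[k] += 1
        q[k] += (r - q[k]) / n[k]      # incremental mean reward
    return wins / T

print(f"TOW: {tow_bandit():.3f}  eps-greedy: {eps_greedy():.3f}")
```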

  14. Analysis of Dynamic Behavior of Multiple-Stage Planetary Gear Train Used in Wind Driven Generator

    Directory of Open Access Journals (Sweden)

    Jungang Wang

    2014-01-01

    A dynamic model of a multiple-stage planetary gear train composed of a two-stage planetary gear train and a one-stage parallel axis gear is proposed for use in a wind driven generator, to analyze the influence of revolution speed and mesh error on the dynamic load sharing characteristic based on lumped parameter theory. The dynamic equations of the model are solved numerically to analyze the uniformity of load distribution in the system. It is shown that the load sharing property of the system is significantly affected by mesh error and rotational speed, and that the load sharing coefficients and the change rates of the internal and external meshing of the system clearly differ from each other. The study provides a useful theoretical guideline for the design of the multiple-stage planetary gear train of a wind driven generator.

  15. A cause and effect two-stage BSC-DEA method for measuring the relative efficiency of organizations

    Directory of Open Access Journals (Sweden)

    Seyed Esmaeel Najafi

    2011-01-01

    This paper presents an integration of the balanced scorecard (BSC) with two-stage data envelopment analysis (DEA). The proposed model uses different financial and non-financial perspectives to evaluate the performance of decision making units in different BSC stages. At each stage, a two-stage DEA method is implemented to measure the relative efficiency of the decision making units, and the results are monitored using the cause and effect relationships. An empirical study of a banking sector is also performed using the method developed in this paper, and the results are briefly analyzed.

  16. Two-stage atlas subset selection in multi-atlas based image segmentation

    Energy Technology Data Exchange (ETDEWEB)

    Zhao, Tingting, E-mail: tingtingzhao@mednet.ucla.edu; Ruan, Dan, E-mail: druan@mednet.ucla.edu [The Department of Radiation Oncology, University of California, Los Angeles, California 90095 (United States)

    2015-06-15

    Purpose: Fast growing access to large databases and cloud stored data presents a unique opportunity for multi-atlas based image segmentation and also presents challenges in heterogeneous atlas quality and computation burden. This work aims to develop a novel two-stage method tailored to the special needs in the face of large atlas collection with varied quality, so that high-accuracy segmentation can be achieved with low computational cost. Methods: An atlas subset selection scheme is proposed to substitute a significant portion of the computationally expensive full-fledged registration in the conventional scheme with a low-cost alternative. More specifically, the authors introduce a two-stage atlas subset selection method. In the first stage, an augmented subset is obtained based on a low-cost registration configuration and a preliminary relevance metric; in the second stage, the subset is further narrowed down to a fusion set of desired size, based on full-fledged registration and a refined relevance metric. An inference model is developed to characterize the relationship between the preliminary and refined relevance metrics, and a proper augmented subset size is derived to ensure that the desired atlases survive the preliminary selection with high probability. Results: The performance of the proposed scheme has been assessed with cross validation based on two clinical datasets consisting of manually segmented prostate and brain magnetic resonance images, respectively. The proposed scheme demonstrates comparable end-to-end segmentation performance to the conventional single-stage selection method, but with significant computation reduction. Compared with the alternative computation reduction method, their scheme improves the mean and median Dice similarity coefficient value from (0.74, 0.78) to (0.83, 0.85) and from (0.82, 0.84) to (0.95, 0.95) for prostate and corpus callosum segmentation, respectively, with statistical significance. Conclusions: The authors

  17. Two-stage atlas subset selection in multi-atlas based image segmentation.

    Science.gov (United States)

    Zhao, Tingting; Ruan, Dan

    2015-06-01

    Fast growing access to large databases and cloud stored data presents a unique opportunity for multi-atlas based image segmentation and also presents challenges in heterogeneous atlas quality and computation burden. This work aims to develop a novel two-stage method tailored to the special needs in the face of large atlas collection with varied quality, so that high-accuracy segmentation can be achieved with low computational cost. An atlas subset selection scheme is proposed to substitute a significant portion of the computationally expensive full-fledged registration in the conventional scheme with a low-cost alternative. More specifically, the authors introduce a two-stage atlas subset selection method. In the first stage, an augmented subset is obtained based on a low-cost registration configuration and a preliminary relevance metric; in the second stage, the subset is further narrowed down to a fusion set of desired size, based on full-fledged registration and a refined relevance metric. An inference model is developed to characterize the relationship between the preliminary and refined relevance metrics, and a proper augmented subset size is derived to ensure that the desired atlases survive the preliminary selection with high probability. The performance of the proposed scheme has been assessed with cross validation based on two clinical datasets consisting of manually segmented prostate and brain magnetic resonance images, respectively. The proposed scheme demonstrates comparable end-to-end segmentation performance to the conventional single-stage selection method, but with significant computation reduction. Compared with the alternative computation reduction method, their scheme improves the mean and median Dice similarity coefficient value from (0.74, 0.78) to (0.83, 0.85) and from (0.82, 0.84) to (0.95, 0.95) for prostate and corpus callosum segmentation, respectively, with statistical significance. The authors have developed a novel two-stage atlas

  18. Two-stage atlas subset selection in multi-atlas based image segmentation

    International Nuclear Information System (INIS)

    Zhao, Tingting; Ruan, Dan

    2015-01-01

    Purpose: Fast growing access to large databases and cloud stored data presents a unique opportunity for multi-atlas based image segmentation and also presents challenges in heterogeneous atlas quality and computation burden. This work aims to develop a novel two-stage method tailored to the special needs in the face of large atlas collection with varied quality, so that high-accuracy segmentation can be achieved with low computational cost. Methods: An atlas subset selection scheme is proposed to substitute a significant portion of the computationally expensive full-fledged registration in the conventional scheme with a low-cost alternative. More specifically, the authors introduce a two-stage atlas subset selection method. In the first stage, an augmented subset is obtained based on a low-cost registration configuration and a preliminary relevance metric; in the second stage, the subset is further narrowed down to a fusion set of desired size, based on full-fledged registration and a refined relevance metric. An inference model is developed to characterize the relationship between the preliminary and refined relevance metrics, and a proper augmented subset size is derived to ensure that the desired atlases survive the preliminary selection with high probability. Results: The performance of the proposed scheme has been assessed with cross validation based on two clinical datasets consisting of manually segmented prostate and brain magnetic resonance images, respectively. The proposed scheme demonstrates comparable end-to-end segmentation performance to the conventional single-stage selection method, but with significant computation reduction. Compared with the alternative computation reduction method, their scheme improves the mean and median Dice similarity coefficient value from (0.74, 0.78) to (0.83, 0.85) and from (0.82, 0.84) to (0.95, 0.95) for prostate and corpus callosum segmentation, respectively, with statistical significance. Conclusions: The authors
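
    Schematically, the two-stage selection in these records looks as follows: a cheap relevance metric on heavily downsampled images keeps an augmented subset, and a costlier metric then picks the final fusion set. In this toy sketch, plain image correlation stands in for full-fledged registration, and all images are synthetic.

```python
# Two-stage atlas subset selection, with placeholder similarity metrics.
import numpy as np

rng = np.random.default_rng(0)
target = rng.normal(size=(64, 64))
atlases = [target + rng.normal(scale=s, size=(64, 64))
           for s in np.linspace(0.2, 3.0, 40)]

def cheap_score(a):      # stage 1: coarse, low-cost relevance metric
    d = lambda im: im[::8, ::8].ravel()
    return float(np.corrcoef(d(a), d(target))[0, 1])

def costly_score(a):     # stage 2: surrogate for full-fledged registration
    return float(np.corrcoef(a.ravel(), target.ravel())[0, 1])

n_aug, n_fuse = 12, 5    # augmented subset larger than the final fusion set
stage1 = sorted(range(len(atlases)), key=lambda i: -cheap_score(atlases[i]))
augmented = stage1[:n_aug]
fusion = sorted(augmented, key=lambda i: -costly_score(atlases[i]))[:n_fuse]
print("fusion set:", fusion)
```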

  19. Parallelization and automatic data distribution for nuclear reactor simulations

    Energy Technology Data Exchange (ETDEWEB)

    Liebrock, L.M. [Liebrock-Hicks Research, Calumet, MI (United States)

    1997-07-01

    Detailed attempts at realistic nuclear reactor simulations currently take many times real time to execute on high performance workstations. Even the fastest sequential machine can not run these simulations fast enough to ensure that the best corrective measure is used during a nuclear accident to prevent a minor malfunction from becoming a major catastrophe. Since sequential computers have nearly reached the speed of light barrier, these simulations will have to be run in parallel to make significant improvements in speed. In physical reactor plants, parallelism abounds. Fluids flow, controls change, and reactions occur in parallel with only adjacent components directly affecting each other. These do not occur in the sequentialized manner, with global instantaneous effects, that is often used in simulators. Development of parallel algorithms that more closely approximate the real-world operation of a reactor may, in addition to speeding up the simulations, actually improve the accuracy and reliability of the predictions generated. Three types of parallel architecture (shared memory machines, distributed memory multicomputers, and distributed networks) are briefly reviewed as targets for parallelization of nuclear reactor simulation. Various parallelization models (loop-based model, shared memory model, functional model, data parallel model, and a combined functional and data parallel model) are discussed along with their advantages and disadvantages for nuclear reactor simulation. A variety of tools are introduced for each of the models. Emphasis is placed on the data parallel model as the primary focus for two-phase flow simulation. Tools to support data parallel programming for multiple component applications and special parallelization considerations are also discussed.

  1. Vortex structure behind highly heated two cylinders in parallel arrangements

    International Nuclear Information System (INIS)

    Kurita, Eiichirou; Yahagi, Yuji

    2008-01-01

    Vortex structures behind twin highly heated cylinders in parallel arrangements have been investigated experimentally. The experiments were conducted under the following conditions: cylinder diameter, D = 4 mm; mean flow velocity, U∞ = 1.0 m/s; Reynolds number, Re = 250; cylinder clearance, S/D = 0.5-1.4; and cylinder heat flux, q = 0-72.6 kW/m². For S/D > 1.2, a Karman vortex street is formed alternately behind each cylinder, divided by the slit flow. The slit flow velocity increases with a decrease in S/D and decreases with increasing heat flux. For S/D < 1.2, … (q = 72.6 kW/m²). As a result, the increased local kinematic viscosity and S/D play a key role in the vortex structure and formation behind arrangements of two parallel cylinders. (author)

  2. [Comparison research on two-stage sequencing batch MBR and one-stage MBR].

    Science.gov (United States)

    Yuan, Xin-Yan; Shen, Heng-Gen; Sun, Lei; Wang, Lin; Li, Shi-Feng

    2011-01-01

    Aiming at resolving problems in MBR operation, such as low nitrogen and phosphorus removal efficiency and severe membrane fouling, a comparison between a two-stage sequencing batch MBR (TSBMBR) and a one-stage aerobic MBR has been carried out in this paper. The results indicated that TSBMBR owns the advantages of an SBR in removing nitrogen and phosphorus, which could make up for the deficiency of the traditional one-stage aerobic MBR in nitrogen and phosphorus removal. During the steady operation period, the average effluent NH4+-N, TN and TP concentrations were 2.83, 12.20 and 0.42 mg/L, respectively, which meet the requirements for domestic scenic environment reuse. From the membrane fouling control point of view, TSBMBR has lower SMP in the supernatant, a lower specific trans-membrane flux reduction rate and lower membrane fouling resistance than the one-stage aerobic MBR. The sedimentation and gel layer resistances of TSBMBR were only 6.5% and 33.12% of those of the one-stage aerobic MBR. Besides its high efficiency in removing nitrogen and phosphorus, TSBMBR could effectively reduce sedimentation and gel layer fouling on the membrane surface. Compared with the one-stage MBR, TSBMBR could operate with a higher trans-membrane flux, a lower membrane fouling rate and better pollutant removal.

  3. A two-stage method for inverse medium scattering

    KAUST Repository

    Ito, Kazufumi

    2013-03-01

    We present a novel numerical method for the time-harmonic inverse medium scattering problem of recovering the refractive index from noisy near-field scattered data. The approach consists of two stages: a pruning step that detects the scatterer support, and a resolution-enhancing step with nonsmooth mixed regularization. The first step is strictly direct and of sampling type, and it faithfully detects the scatterer support. The second step is an innovative application of nonsmooth mixed regularization, and it accurately resolves the scatterer size as well as intensities. The nonsmooth model can be efficiently solved by a semi-smooth Newton-type method. Numerical results for two- and three-dimensional examples indicate that the new approach is accurate, computationally efficient, and robust with respect to data noise. © 2012 Elsevier Inc.

  4. Two-stage unified stretched-exponential model for time-dependence of threshold voltage shift under positive-bias-stresses in amorphous indium-gallium-zinc oxide thin-film transistors

    Science.gov (United States)

    Jeong, Chan-Yong; Kim, Hee-Joong; Hong, Sae-Young; Song, Sang-Hun; Kwon, Hyuck-In

    2017-08-01

    In this study, we show that the two-stage unified stretched-exponential model can more exactly describe the time-dependence of the threshold voltage shift (ΔVTH) under long-term positive-bias-stresses than the traditional stretched-exponential model in amorphous indium-gallium-zinc oxide (a-IGZO) thin-film transistors (TFTs). ΔVTH is mainly dominated by electron trapping at short stress times, and the contribution of trap state generation becomes significant with an increase in the stress time. The two-stage unified stretched-exponential model can provide useful information not only for evaluating the long-term electrical stability and lifetime of the a-IGZO TFT but also for understanding the stress-induced degradation mechanism in a-IGZO TFTs.
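
    As a hedged illustration (not the authors' code), a two-term law of this kind — one stretched-exponential term for electron trapping plus one for trap-state generation — can be fitted to stress-time data roughly as follows; the parameter values and data are synthetic assumptions:

```python
# Minimal sketch: fitting a two-term ("two-stage") stretched-exponential
# model dVth(t) = dV1*(1-exp(-(t/tau1)**b1)) + dV2*(1-exp(-(t/tau2)**b2))
# to positive-bias-stress data. All numbers below are synthetic.
import numpy as np
from scipy.optimize import curve_fit

def two_stage(t, dV1, tau1, b1, dV2, tau2, b2):
    return (dV1 * (1 - np.exp(-(t / tau1) ** b1))     # electron trapping
            + dV2 * (1 - np.exp(-(t / tau2) ** b2)))  # trap-state generation

t = np.logspace(0, 5, 30)                 # stress time in seconds (synthetic)
dvth = two_stage(t, 1.2, 3e2, 0.4, 0.8, 5e4, 0.7)
dvth += 0.02 * np.random.default_rng(0).normal(size=t.size)

p0 = [1.0, 1e2, 0.5, 1.0, 1e4, 0.5]       # rough initial guesses
popt, _ = curve_fit(two_stage, t, dvth, p0=p0, maxfev=20000)
print(popt)
```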

  5. Possible two-stage 87Sr evolution in the Stockdale Rhyolite

    International Nuclear Information System (INIS)

    Compston, W.; McDougall, I.; Wyborn, D.

    1982-01-01

    The Rb-Sr total-rock data for the Stockdale Rhyolite, of significance for the Palaeozoic time scale, are more scattered about a single-stage isochron than expected from experimental error. Two-stage ⁸⁷Sr evolution for several of the samples is explored to explain this, as an alternative to variation in the initial ⁸⁷Sr/⁸⁶Sr which is customarily used in single-stage dating models. The deletion of certain samples having very high Rb/Sr removes most of the excess scatter and leads to an estimate of 430 ± 7 m.y. for the age of extrusion. There is a younger alignment of Rb-Sr data within each sampling site at 412 ± 7 m.y. We suggest that the Stockdale Rhyolite is at least 430 m.y. old, that its original range in Rb/Sr was smaller than now observed, and that it experienced a net loss in Sr during later hydrothermal alteration at ca. 412 m.y. (orig.)

  6. Comparative Evaluation and Case Studies of Shared-Memory and Data-Parallel Execution Patterns

    Directory of Open Access Journals (Sweden)

    Xiaodong Zhang

    1999-01-01

    Full Text Available Shared-memory and data-parallel programming models are two important paradigms for scientific applications. Both models provide high-level program abstractions, and simple and uniform views of network structures. The common features of the two models significantly simplify program coding and debugging for scientific applications. However, the underlying execution and overhead patterns are significantly different between the two models due to their programming constraints, and due to different and complex structures of interconnection networks and systems which support the two models. We performed this experimental study to present implications and comparisons of execution patterns on two commercial architectures. We implemented a standard electromagnetic simulation program (EM) and a linear system solver using the shared-memory model on the KSR-1 and the data-parallel model on the CM-5. Our objectives are to examine the execution pattern changes required for an implementation transformation between the two models; to study memory access patterns; to address scalability issues; and to investigate relative costs and advantages/disadvantages of using the two models for scientific computations. Our results indicate that the EM program tends to become computation-intensive in the KSR-1 shared-memory system, and memory-demanding in the CM-5 data-parallel system when the systems and the problems are scaled. The EM program, a highly data-parallel program, performed extremely well, and the linear system solver, a highly control-structured program, suffered significantly in the data-parallel model on the CM-5. Our study provides further evidence that matching execution patterns of algorithms to parallel architectures would achieve better performance.

  7. A Novel Design of 4-Class BCI Using Two Binary Classifiers and Parallel Mental Tasks

    Directory of Open Access Journals (Sweden)

    Tao Geng

    2008-01-01

    Full Text Available A novel 4-class single-trial brain computer interface (BCI) based on two (rather than four or more) binary linear discriminant analysis (LDA) classifiers is proposed, which is called a "parallel BCI." Unlike other BCIs where mental tasks are executed and classified in a serial way one after another, the parallel BCI uses properly designed parallel mental tasks that are executed on both sides of the subject's body simultaneously, which is the main novelty of the BCI paradigm used in our experiments. Each of the two binary classifiers only classifies the mental tasks executed on one side of the subject's body, and the results of the two binary classifiers are combined to give the result of the 4-class BCI. Data were recorded in experiments with both real movement and motor imagery in 3 able-bodied subjects. Artifacts were not detected or removed. Offline analysis has shown that, in some subjects, the parallel BCI can generate a higher accuracy than a conventional 4-class BCI, although both of them have used the same feature selection and classification algorithms.
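
    A minimal hedged sketch of the combination step (not the authors' pipeline; the random features below are stand-ins for the real per-side EEG features):

```python
# Two binary LDA classifiers, one per body side, combined into a 4-class
# decision: (left, right) in {0,1}^2 maps to classes {0, 1, 2, 3}.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(0)
# Synthetic per-side features: 40 trials x 8 features each (assumption).
X_left, X_right = rng.normal(size=(40, 8)), rng.normal(size=(40, 8))
y_left, y_right = rng.integers(0, 2, 40), rng.integers(0, 2, 40)

clf_left = LinearDiscriminantAnalysis().fit(X_left, y_left)
clf_right = LinearDiscriminantAnalysis().fit(X_right, y_right)

# Each classifier solves one binary task; their outputs are combined.
four_class = 2 * clf_left.predict(X_left) + clf_right.predict(X_right)
print(four_class[:10])
```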

  8. Structured Parallel Programming Patterns for Efficient Computation

    CERN Document Server

    McCool, Michael; Robison, Arch

    2012-01-01

    Programming is now parallel programming. Much as structured programming revolutionized traditional serial programming decades ago, a new kind of structured programming, based on patterns, is relevant to parallel programming today. Parallel computing experts and industry insiders Michael McCool, Arch Robison, and James Reinders describe how to design and implement maintainable and efficient parallel algorithms using a pattern-based approach. They present both theory and practice, and give detailed concrete examples using multiple programming models. Examples are primarily given using two of th

  9. A parallelized three-dimensional cellular automaton model for grain growth during additive manufacturing

    Science.gov (United States)

    Lian, Yanping; Lin, Stephen; Yan, Wentao; Liu, Wing Kam; Wagner, Gregory J.

    2018-05-01

    In this paper, a parallelized 3D cellular automaton computational model is developed to predict grain morphology for solidification of metal during the additive manufacturing process. Solidification phenomena are characterized by highly localized events, such as the nucleation and growth of multiple grains. As a result, parallelization requires careful treatment of load balancing between processors as well as interprocess communication in order to maintain a high parallel efficiency. We give a detailed summary of the formulation of the model, as well as a description of the communication strategies implemented to ensure parallel efficiency. Scaling tests on a representative problem with about half a billion cells demonstrate parallel efficiency of more than 80% on 8 processors and around 50% on 64; loss of efficiency is attributable to load imbalance due to near-surface grain nucleation in this test problem. The model is further demonstrated through an additive manufacturing simulation with resulting grain structures showing reasonable agreement with those observed in experiments.

  11. A Two-Stage Penalized Logistic Regression Approach to Case-Control Genome-Wide Association Studies

    Directory of Open Access Journals (Sweden)

    Jingyuan Zhao

    2012-01-01

    Full Text Available We propose a two-stage penalized logistic regression approach to case-control genome-wide association studies. This approach consists of a screening stage and a selection stage. In the screening stage, main-effect and interaction-effect features are screened by using L1-penalized logistic likelihoods. In the selection stage, the retained features are ranked by the logistic likelihood with the smoothly clipped absolute deviation (SCAD) penalty (Fan and Li, 2001) and Jeffrey's prior penalty (Firth, 1993), a sequence of nested candidate models is formed, and the models are assessed by a family of extended Bayesian information criteria (J. Chen and Z. Chen, 2008). The proposed approach is applied to the analysis of the prostate cancer data of the Cancer Genetic Markers of Susceptibility (CGEMS) project of the National Cancer Institute, USA. Simulation studies are carried out to compare the approach with the pair-wise multiple testing approach (Marchini et al., 2005) and the LASSO-patternsearch algorithm (Shi et al., 2007).
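
    A hedged sketch of the screening-then-selection idea (not the authors' code: scikit-learn has no SCAD penalty or EBIC, so stage 2 below uses a plain refit and coefficient ranking as stand-ins; the data are synthetic):

```python
# Stage 1: screen features with an L1-penalized logistic regression.
# Stage 2: refit on survivors and rank them (stand-in for SCAD + EBIC).
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
X = rng.normal(size=(200, 500))                 # synthetic SNP-like features
y = (X[:, 0] - X[:, 1] + 0.5 * rng.normal(size=200) > 0).astype(int)

# Stage 1: keep features with nonzero L1 coefficients.
screen = LogisticRegression(penalty="l1", solver="liblinear", C=0.1).fit(X, y)
kept = np.flatnonzero(screen.coef_[0])

# Stage 2: refit on the retained features and rank by |coefficient|.
refit = LogisticRegression(max_iter=1000).fit(X[:, kept], y)
ranking = kept[np.argsort(-np.abs(refit.coef_[0]))]
print(ranking[:10])                             # features 0 and 1 should lead
```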

  12. The influence of magnetic field strength in ionization stage on ion transport between two stages of a double stage Hall thruster

    International Nuclear Information System (INIS)

    Yu Daren; Song Maojiang; Li Hong; Liu Hui; Han Ke

    2012-01-01

    It is futile for a double stage Hall thruster to have a specially designed ionization stage if the ionized ions cannot enter the acceleration stage. Based on this viewpoint, the ion transport under different magnetic field strengths in the ionization stage is investigated, and the physical mechanisms affecting the ion transport are analyzed in this paper. With a combined experimental and particle-in-cell simulation study, it is found that the ion transport between the two stages is chiefly affected by the potential well, the potential barrier, and the potential drop at the bottom of the potential well. With an increase of the magnetic field strength in the ionization stage, the plasma density becomes larger owing to a deeper potential well. Furthermore, the potential barrier near the intermediate electrode first declines and then rises, while the potential drop at the bottom of the potential well first rises and then declines, as the magnetic field strength in the ionization stage increases. Consequently, both the ion current entering the acceleration stage and the total ion current ejected from the thruster first rise and then decline as the magnetic field strength in the ionization stage increases. Therefore, there is an optimal magnetic field strength in the ionization stage to guide the ion transport between the two stages.

  13. A Parallel Two-fluid Code for Global Magnetic Reconnection Studies

    International Nuclear Information System (INIS)

    Breslau, J.A.; Jardin, S.C.

    2001-01-01

    This paper describes a new algorithm for the computation of two-dimensional resistive magnetohydrodynamic (MHD) and two-fluid studies of magnetic reconnection in plasmas. It has been implemented on several parallel platforms and shows good scalability up to 32 CPUs for reasonable problem sizes. A fixed, nonuniform rectangular mesh is used to resolve the different spatial scales in the reconnection problem. The resistive MHD version of the code uses an implicit/explicit hybrid method, while the two-fluid version uses an alternating-direction implicit (ADI) method. The technique has proven useful for comparing several different theories of collisional and collisionless reconnection

  14. Boltzmann machines as a model for parallel annealing

    NARCIS (Netherlands)

    Aarts, E.H.L.; Korst, J.H.M.

    1991-01-01

    The potential of Boltzmann machines to cope with difficult combinatorial optimization problems is investigated. A discussion of various (parallel) models of Boltzmann machines is given based on the theory of Markov chains. A general strategy is presented for solving (approximately) combinatorial optimization problems.

  15. Reduction of momentum transfer rates by parallel electric fields: A two-fluid demonstration

    International Nuclear Information System (INIS)

    Delamere, P.A.; Stenbaek-Nielsen, H.C.; Otto, A.

    2002-01-01

    Momentum transfer between an ionized gas cloud and the ambient magnetized plasma through which it moves is a general problem in space plasma physics. Obvious examples include the Io-Jupiter interaction, comets, and coronal mass ejections. Active plasma experiments have demonstrated that momentum transfer rates associated with Alfvén wave propagation are poorly understood. Barium injection experiments from the Combined Release and Radiation Effects Satellite (CRRES) have shown that dense ionized clouds are capable of E×B drifting over large distances perpendicular to the magnetic field. The CRRES 'skidding' distances were much larger than predicted by magnetohydrodynamic theory, and it has been proposed that parallel electric fields were a key component in the skidding phenomenon. A two-fluid code was used to demonstrate the role of parallel electric fields in reducing momentum transfer between two distinct plasma populations. In this study, a dense plasma was initialized moving relative to an ambient plasma and perpendicular to B. Parallel electric fields were introduced via a friction term in the electron momentum equation, and the collision frequency was scaled in proportion to the field-aligned current density. The simulation results showed that parallel electric fields decreased the decelerating magnetic tension force on the plasma cloud through a magnetic diffusion/reconnection process

  16. Performance of Air Pollution Models on Massively Parallel Computers

    DEFF Research Database (Denmark)

    Brown, John; Hansen, Per Christian; Wasniewski, Jerzy

    1996-01-01

    To compare the performance and use of three massively parallel SIMD computers, we implemented a large air pollution model on the computers. Using a realistic large-scale model, we gain detailed insight about the performance of the three computers when used to solve large-scale scientific problems...

  17. Research on Parallel Three Phase PWM Converters based on RTDS

    Science.gov (United States)

    Xia, Yan; Zou, Jianxiao; Li, Kai; Liu, Jingbo; Tian, Jun

    2018-01-01

    Parallel operation of converters can increase the capacity of the system, but it may lead to a potential zero-sequence circulating current, so the control of the circulating current is an important goal in the design of parallel inverters. In this paper, the Real Time Digital Simulator (RTDS) is used to model the parallel converter system in real time and to study circulating current suppression. The equivalent model of two parallel converters and the zero-sequence circulating current (ZSCC) were established and analyzed, and then a strategy using variable zero-vector control was proposed to suppress the circulating current. For two parallel modular converters, a hardware-in-the-loop (HIL) study based on RTDS and a practical experiment were implemented; the results prove that the proposed control strategy is feasible and effective.

  18. Design and fabrication of a micro parallel mechanism system using MEMS technologies

    Science.gov (United States)

    Chin, Chi-Te

    A parallel mechanism is seen as an attractive method of fabricating a multi-degree-of-freedom micro-stage on a chip. The research team at Arizona State University has experience with several potential parallel mechanisms that could be scaled down to micron dimensions and fabricated using the silicon process. The researcher developed a micro parallel mechanism that allows for planar motion having two translational motions and one rotational motion (e.g., x, y, θ). The mask design shown in Appendix B is an example of a planar parallel mechanism; however, this design would only have a few discrete positions given the nature of the fully extended or fully retracted electrostatic motor. The researcher proposes using a rotary motor (a comb-drive actuator with a gear chain system) coupled to a rack and pinion for finer increments of linear motion. The rotary motor can behave as a stepper motor by counting drive pulses, which is the basis for a simple open-loop control system. This system was manufactured at the Central Regional MEMS Research Center (CMEMS), National Tsing-Hua University, and supported by the National Science Council, Taiwan. After the microstructures had been generated, the devices were released and an experimental study was performed to demonstrate the feasibility of the proposed micro-stage devices. In this dissertation, the micro-electromechanical system (MEMS) fabrication technologies are introduced. The development of this parallel mechanism system will initially focus on the development of a planar micro-stage. The design of the micro-stage will build on parallel mechanism technology, which has been developed for manufacturing, assembly, and flight simulator applications. A parallel mechanism will give the maximum operating envelope with a minimum number of silicon levels. The ideally proposed mechanism should comprise a user interface, a micro-stage and a non-silicon tool, which is difficult to accomplish with current MEMS technology

  19. An experimental two-stage rat model of lung carcinoma initiated by radon exposure

    International Nuclear Information System (INIS)

    Poncy, J.L.; Laroque, P.; Fritsch, P.; Monchaux, G.; Masse, R.; Chameaud, J.

    1992-01-01

    We present the results of a two-stage biological model of lung carcinogenesis in rats. The histogenesis of these tumors was examined, and the DNA content of lung cells was measured by flow cytometry during the evolving neoplastic stage. Tumors were induced in rat lungs by radon inhalation (1600 WLM) followed by a promoter treatment: six intramuscular injections of 5,6-benzoflavone (25 mg/kg of body weight/injection) every 2 wk. Less than 3 mo after the first injection of benzoflavone, squamous cell carcinoma was observed in the lungs of all rats exposed to radon. The preneoplastic lesions gradually developed as follows: hyperplastic bronchiolar-type cells migrated to the alveoli from cells that proliferated in bronchioles and alveolar ducts; initial lesions were observed in almost all respiratory bronchioles. From some hyperplasias, epidermoid metaplasias arose distally, forming nodular epidermoid lesions in alveoli, which progressed to form squamous papilloma and, finally, epidermoid carcinomas. The histogenesis of these experimentally induced epidermoid carcinomas showed the bronchioloalveolar origin of the tumor. This factor must be considered when comparing these with human lesions; in humans, lung epidermoid carcinomas are thought to arise mainly in the first bronchial generations. The labeling index of pulmonary tissue after incorporation of ³H-thymidine by the cells was 0.2% in control rats. This index reached a value of 1 to 2% in the hyperplastic area of the bronchioles and 10 to 15% in epidermoid nodules and epidermoid tumors, respectively. DNA cytometric analysis was performed on cell suspensions obtained after enzymatic treatment of paraffin sections of lungs from rats sacrificed during different stages of neoplastic transformation. Data showed the early appearance of a triploid cell population that grew during the evolution of nodular epidermoid lesions to epidermoid carcinomas

  20. Efficient Out of Core Sorting Algorithms for the Parallel Disks Model.

    Science.gov (United States)

    Kundeti, Vamsi; Rajasekaran, Sanguthevar

    2011-11-01

    In this paper we present efficient algorithms for sorting on the Parallel Disks Model (PDM). Numerous asymptotically optimal algorithms have been proposed in the literature. However, many of these merge-based algorithms have large underlying constants in the time bounds, because they suffer from a lack of read parallelism on the PDM. The irregular consumption of the runs during the merge affects the read parallelism and contributes to the increased sorting time. In this paper we first introduce a novel idea called dirty sequence accumulation that improves the read parallelism. Secondly, we show analytically that this idea can reduce the number of parallel I/Os required to sort the input close to the lower bound of [Formula: see text]. We experimentally verify our dirty sequence idea with the standard R-way merge and show that our idea can significantly reduce the number of parallel I/Os needed to sort on the PDM.

  1. An inexact two-stage stochastic energy systems planning model for managing greenhouse gas emission at a municipal level

    International Nuclear Information System (INIS)

    Lin, Q.G.; Huang, G.H.

    2010-01-01

    Energy management systems are highly complicated by greenhouse-gas emission reduction issues and a variety of social, economic, political, environmental and technical factors. To address such complexities, municipal energy systems planning models are desired, as they can take account of these factors and their interactions within municipal energy management systems. This research develops an interval-parameter two-stage stochastic municipal energy systems planning model (ITS-MEM) for supporting decisions on energy systems planning and GHG (greenhouse gas) emission management at a municipal level. ITS-MEM is then applied to a case study. The results indicated that the developed model was capable of supporting municipal energy systems planning and environmental management under uncertainty. Solutions of ITS-MEM would provide an effective linkage between pre-regulated environmental policies (GHG-emission reduction targets) and the associated economic implications (GHG-emission credit trading).
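
    As a hedged, toy-scale illustration of the two-stage stochastic structure underlying such planning models (not the ITS-MEM formulation; all coefficients are invented), the deterministic equivalent of a small two-stage recourse problem can be solved with a generic LP solver:

```python
# Toy two-stage stochastic LP (deterministic equivalent):
# choose first-stage capacity x, then scenario-wise recourse purchases y_s.
#   min  c*x + sum_s p_s * q * y_s
#   s.t. x + y_s >= d_s for each demand scenario s;  x, y_s >= 0
import numpy as np
from scipy.optimize import linprog

c, q = 10.0, 25.0                       # capacity vs. recourse unit costs
p = np.array([0.3, 0.5, 0.2])           # scenario probabilities
d = np.array([80.0, 100.0, 130.0])      # scenario demands

# Decision variables: [x, y1, y2, y3]
obj = np.concatenate(([c], p * q))
A_ub = np.hstack((-np.ones((3, 1)), -np.eye(3)))   # -(x + y_s) <= -d_s
b_ub = -d
res = linprog(obj, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None)] * 4)
print(res.x)   # optimal capacity and per-scenario recourse
```

    Here the first-stage decision is fixed before the uncertainty resolves, while the recourse variables adapt per scenario; interval-parameter versions additionally carry lower/upper bounds on the coefficients.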

  2. Error Modeling and Design Optimization of Parallel Manipulators

    DEFF Research Database (Denmark)

    Wu, Guanglei

    …backlash, manufacturing and assembly errors and joint clearances. From the error prediction model, the distributions of the pose errors due to joint clearances are mapped within its constant-orientation workspace and the correctness of the developed model is validated experimentally. … Additionally, using the screw…, dynamic modeling etc. Next, the first-order differential equation of the kinematic closure equation of the planar parallel manipulator is obtained to develop its error model both in polar and Cartesian coordinate systems. The established error model contains the error sources of actuation error…

  3. Single-stage Acetabular Revision During Two-stage THA Revision for Infection is Effective in Selected Patients.

    Science.gov (United States)

    Fink, Bernd; Schlumberger, Michael; Oremek, Damian

    2017-08-01

    The treatment of periprosthetic infections of hip arthroplasties typically involves use of either a single- or two-stage (with implantation of a temporary spacer) revision surgery. In patients with severe acetabular bone deficiencies, either already present or after component removal, spacers cannot be safely implanted. In such hips where it is impossible to use spacers and yet a two-stage revision of the prosthetic stem is recommended, we have combined a two-stage revision of the stem with a single revision of the cup. To our knowledge, this approach has not been reported before. (1) What proportion of patients treated with single-stage acetabular reconstruction as part of a two-stage revision for an infected THA remain free from infection at 2 or more years? (2) What are the Harris hip scores after the first stage and at 2 years or more after the definitive reimplantation? Between June 2009 and June 2014, we treated all patients undergoing surgical treatment for an infected THA using a single-stage acetabular revision as part of a two-stage THA exchange if the acetabular defect classification was Paprosky Types 2B, 2C, 3A, 3B, or pelvic discontinuity and a two-stage procedure was preferred for the femur. The procedure included removal of all components, joint débridement, definitive acetabular reconstruction (with a cage to bridge the defect, and a cemented socket), and a temporary cemented femoral component at the first stage; the second stage consisted of repeat joint and femoral débridement and exchange of the femoral component to a cementless device. During the period noted, 35 patients met those definitions and were treated with this approach. No patients were lost to followup before 2 years; mean followup was 42 months (range, 24-84 months). The clinical evaluation was performed with the Harris hip scores and resolution of infection was assessed by the absence of clinical signs of infection and a C-reactive protein level less than 10 mg/L. All

  4. Efficient Parallel Statistical Model Checking of Biochemical Networks

    Directory of Open Access Journals (Sweden)

    Paolo Ballarini

    2009-12-01

    Full Text Available We consider the problem of verifying stochastic models of biochemical networks against behavioral properties expressed in temporal logic terms. Exact probabilistic verification approaches, such as CSL/PCTL model checking, are undermined by a huge computational demand which rules them out for most real case studies. Less demanding approaches, such as statistical model checking, estimate the likelihood that a property is satisfied by sampling executions out of the stochastic model. We propose a methodology for efficiently estimating the likelihood that an LTL property P holds of a stochastic model of a biochemical network. As with other statistical verification techniques, the methodology we propose uses a stochastic simulation algorithm for generating execution samples; however, there are three key aspects that improve the efficiency: first, the sample generation is driven by on-the-fly verification of P, which results in optimal overall simulation time. Second, the confidence interval estimation for the probability of P to hold is based on an efficient variant of the Wilson method which ensures a faster convergence. Third, the whole methodology is designed in a parallel fashion, and a prototype software tool has been implemented that performs the sampling/verification process in parallel over an HPC architecture.
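
    For the confidence-interval step, a minimal sketch of the standard Wilson score interval (the paper uses an efficient variant; this is the textbook formula) is:

```python
# Wilson score interval for a Bernoulli proportion: k successes
# (runs satisfying the property) out of n simulated runs.
import math

def wilson_interval(k, n, z=1.96):      # z = 1.96 for ~95% confidence
    p_hat = k / n
    denom = 1 + z**2 / n
    center = (p_hat + z**2 / (2 * n)) / denom
    half = (z / denom) * math.sqrt(p_hat * (1 - p_hat) / n + z**2 / (4 * n**2))
    return center - half, center + half

print(wilson_interval(870, 1000))       # e.g., 870 satisfying runs of 1000
```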

  5. Hydraulic Fracture Induced Seismicity During A Multi-Stage Pad Completion in Western Canada: Evidence of Activation of Multiple, Parallel Faults

    Science.gov (United States)

    Maxwell, S.; Garrett, D.; Huang, J.; Usher, P.; Mamer, P.

    2017-12-01

    Following reports of injection induced seismicity in the Western Canadian Sedimentary Basin, regulators have imposed seismic monitoring and traffic light protocols for fracturing operations in specific areas. Here we describe a case study in one of these reservoirs, the Montney Shale in NE British Columbia, where induced seismicity was monitored with a local array during multi-stage hydraulic fracture stimulations on several wells from a single drilling pad. Seismicity primarily occurred during the injection time periods, and correlated with periods of high injection rates and wellhead pressures above fracturing pressures. Sequential hydraulic fracture stages were found to progressively activate several parallel, critically-stressed faults, as illuminated by multiple linear hypocenter patterns in the range between Mw 1 and 3. Moment tensor inversion of larger events indicated a double-couple mechanism consistent with the regional strike-slip stress state and the hypocenter lineations. The critically-stressed faults obliquely cross the well paths which were purposely drilled parallel to the minimum principal stress direction. Seismicity on specific faults started and stopped when fracture initiation points of individual injection stages were proximal to the intersection of the fault and well. The distance ranges when the seismicity occurs is consistent with expected hydraulic fracture dimensions, suggesting that the induced fault slip only occurs when a hydraulic fracture grows directly into the fault and the faults are temporarily exposed to significantly elevated fracture pressures during the injection. Some faults crossed multiple wells and the seismicity was found to restart during injection of proximal stages on adjacent wells, progressively expanding the seismogenic zone of the fault. Progressive fault slip is therefore inferred from the seismicity migrating further along the faults during successive injection stages. An accelerometer was also deployed close

  6. Water-based squeezing flow in the presence of carbon nanotubes between two parallel disks

    Directory of Open Access Journals (Sweden)

    Haq Rizwan Ul

    2016-01-01

    Full Text Available The present study investigates water-based squeezing flow of carbon nanotubes between two parallel disks. Moreover, we have considered magnetohydrodynamic effects normal to the disks. In addition, we have considered two kinds of carbon nanotubes within the base fluid: single-wall carbon nanotubes (SWCNT) and multi-wall carbon nanotubes (MWCNT). A model of this squeezing flow mechanism has been constructed in the form of partial differential equations. The transformed ordinary differential equations are solved numerically with the help of the Runge-Kutta-Fehlberg method. Results for velocity and temperature are constructed against all the emerging parameters. Comparisons between SWCNT and MWCNT are drawn for the skin friction coefficient and the local Nusselt number. Concluding remarks are drawn from the whole analysis.
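
    A hedged sketch of the numerical approach: a generic shooting method driven by an adaptive Runge-Kutta integrator (SciPy's RK45, standing in for Runge-Kutta-Fehlberg). The boundary value problem below is a toy stand-in, not the actual squeezing-flow equations:

```python
# Shooting method with an adaptive Runge-Kutta integrator (RK45) for a
# toy BVP: f'' = -f, f(0) = 0, f(1) = 1. Guess the missing slope f'(0)
# and adjust it with a scalar root finder.
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import brentq

def rhs(t, y):                       # y = [f, f']
    return [y[1], -y[0]]

def endpoint_error(slope):
    sol = solve_ivp(rhs, (0, 1), [0.0, slope], method="RK45", rtol=1e-8)
    return sol.y[0, -1] - 1.0        # mismatch at the far boundary

slope = brentq(endpoint_error, 0.1, 5.0)
print(slope, 1 / np.sin(1.0))        # exact answer: f'(0) = 1/sin(1)
```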

  7. A multi-objective optimization problem for multi-state series-parallel systems: A two-stage flow-shop manufacturing system

    International Nuclear Information System (INIS)

    Azadeh, A.; Maleki Shoja, B.; Ghanei, S.; Sheikhalishahi, M.

    2015-01-01

    This research investigates a redundancy-scheduling optimization problem for a multi-state series-parallel system. The system is a flow shop manufacturing system with multi-state machines. Each manufacturing machine may have different performance rates, including perfect performance, decreased performance and complete failure. Moreover, warm standby redundancy is considered for the redundancy allocation problem. Three objectives are considered for the problem: (1) minimizing system purchasing cost, (2) minimizing makespan, and (3) maximizing system reliability. The universal generating function is employed to evaluate system performance and the overall reliability of the system; a small sketch of this technique follows the highlights below. Since the problem is in the NP-hard class of combinatorial problems, a genetic algorithm (GA) is used to find optimal/near-optimal solutions. Different test problems are generated to evaluate the effectiveness and efficiency of the proposed approach, which is compared to a simulated annealing method. The results show the proposed approach is capable of finding optimal/near-optimal solutions within a very reasonable time. - Highlights: • A redundancy-scheduling optimization problem for a multi-state series-parallel system. • A flow shop with multi-state machines and warm standby redundancy. • Objectives are to optimize system purchasing cost, makespan and reliability. • Different test problems are generated and evaluated by a unique genetic algorithm. • It locates optimal/near-optimal solutions within a very reasonable time
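
    A hedged toy sketch of the universal generating function technique (invented performance rates and demand, not the paper's system): each component maps performance rates to probabilities; series composition takes the minimum rate and parallel composition the sum:

```python
# Universal generating function (UGF) toy: combine multi-state components.
from collections import defaultdict

def compose(u1, u2, op):
    out = defaultdict(float)
    for g1, p1 in u1.items():
        for g2, p2 in u2.items():
            out[op(g1, g2)] += p1 * p2
    return dict(out)

m1 = {0: 0.1, 50: 0.2, 100: 0.7}        # machine states: rate -> probability
m2 = {0: 0.05, 100: 0.95}

parallel = compose(m1, m2, lambda a, b: a + b)       # redundant units add rates
series = compose(parallel, {0: 0.1, 120: 0.9}, min)  # series limits the flow
reliability = sum(p for g, p in series.items() if g >= 100)  # demand = 100
print(series, reliability)
```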

  8. Metabolic profiling based on two-dimensional J-resolved 1H NMR data and parallel factor analysis

    DEFF Research Database (Denmark)

    Yilmaz, Ali; Nyberg, Nils T; Jaroszewski, Jerzy W.

    2011-01-01

    …the intensity variances along the chemical shift axis are taken into account. Here, we describe the use of parallel factor analysis (PARAFAC) as a tool to preprocess a set of two-dimensional J-resolved spectra with the aim of keeping the J-coupling information intact. PARAFAC is a mathematical decomposition… …model was done automatically by evaluating the amount of explained variance and core consistency values. Score plots showing the distribution of objects in relation to each other, and loading plots in the form of two-dimensional pseudo-spectra with the same appearance as the original J-resolved spectra…

  9. Spatial statistics of magnetic field in two-dimensional chaotic flow in the resistive growth stage

    Energy Technology Data Exchange (ETDEWEB)

    Kolokolov, I.V., E-mail: igor.kolokolov@gmail.com [Landau Institute for Theoretical Physics RAS, 119334, Kosygina 2, Moscow (Russian Federation); NRU Higher School of Economics, 101000, Myasnitskaya 20, Moscow (Russian Federation)

    2017-03-18

    The correlation tensors of magnetic field in a two-dimensional chaotic flow of conducting fluid are studied. It is shown that there is a stage of resistive evolution where the field correlators grow exponentially with time. The two- and four-point field correlation tensors are computed explicitly in this stage in the framework of Batchelor–Kraichnan–Kazantsev model. They demonstrate strong temporal intermittency of the field fluctuations and high level of non-Gaussianity in spatial field distribution.

  10. Modeling and optimization of parallel and distributed embedded systems

    CERN Document Server

    Munir, Arslan; Ranka, Sanjay

    2016-01-01

    This book introduces the state-of-the-art in research in parallel and distributed embedded systems, which have been enabled by developments in silicon technology, micro-electro-mechanical systems (MEMS), wireless communications, computer networking, and digital electronics. These systems have diverse applications in domains including military and defense, medical, automotive, and unmanned autonomous vehicles. The emphasis of the book is on the modeling and optimization of emerging parallel and distributed embedded systems in relation to the three key design metrics of performance, power and dependability.

  11. Condition-based maintenance effectiveness for series–parallel power generation system—A combined Markovian simulation model

    International Nuclear Information System (INIS)

    Azadeh, A.; Asadzadeh, S.M.; Salehi, N.; Firoozi, M.

    2015-01-01

    Condition-based maintenance (CBM) is an increasingly applicable policy in the competitive marketplace as a means of improving equipment reliability and efficiency. Not only does maintenance have a close relationship with safety, but its costs also make it an even more attractive issue for researchers. This study proposes a model to evaluate the effectiveness of the CBM policy compared to two other maintenance policies: Corrective Maintenance (CM) and Preventive Maintenance (PM). Maintenance policies are compared through two system performance indicators: reliability and cost. To estimate the reliability and costs of the system, the proposed Markovian discrete-event simulation model is developed under each of these policies. The applicability and usefulness of the proposed Markovian simulation model is illustrated for a series–parallel power generation system. The simulated characteristics of the CBM system include its prognostics efficiency in estimating the remaining useful life of the equipment. Results show that with efficient prognostics, the CBM policy is an effective strategy compared to other maintenance strategies. - Highlights: • A model is developed to evaluate the effectiveness of CBM policy. • Maintenance policies are compared through reliability and cost. • A Markovian simulation model is developed. • A series–parallel power generation system is considered. • CBM is an effective strategy compared to others

  12. Parallel family trees for transfer matrices in the Potts model

    Science.gov (United States)

    Navarro, Cristobal A.; Canfora, Fabrizio; Hitschfeld, Nancy; Navarro, Gonzalo

    2015-02-01

    The computational cost of transfer matrix methods for the Potts model is related to the question: in how many ways can two layers of a lattice be connected? Answering the question leads to the generation of a combinatorial set of lattice configurations. This set defines the configuration space of the problem, and the smaller it is, the faster the transfer matrix can be computed. The configuration space of generic (q, v) transfer matrix methods for strips is on the order of the Catalan numbers, which grow asymptotically as O(4^m), where m is the width of the strip. Other transfer matrix methods with a smaller configuration space do exist, but they make assumptions on the temperature or the number of spin states, or restrict the structure of the lattice. In this paper we propose a parallel algorithm that uses a sub-Catalan configuration space of O(3^m) to build the generic (q, v) transfer matrix in a compressed form. The improvement is achieved by grouping the original set of Catalan configurations into a forest of family trees, in such a way that the solution to the problem is now computed by solving the root node of each family. As a result, the algorithm becomes exponentially faster than the Catalan approach while still being highly parallel. The resulting matrix is stored in a compressed form using O(3^m × 4^m) space, making numerical evaluation and decompression faster than evaluating the matrix in its O(4^m × 4^m) uncompressed form. Experimental results for different sizes of strip lattices show that the parallel family trees (PFT) strategy indeed runs exponentially faster than the Catalan Parallel Method (CPM), especially when dealing with dense transfer matrices. In terms of parallel performance, we report strong-scaling speedups of up to 5.7× when running on an 8-core shared memory machine and 28× for a 32-core cluster. The best balance of speedup and efficiency for the multi-core machine was achieved when using p = 4 processors, while for the cluster

  13. Experimental and numerical studies on two-stage combustion of biomass

    Energy Technology Data Exchange (ETDEWEB)

    Houshfar, Eshan

    2012-07-01

    The fate of the main corrosive compounds, in particular chlorine, was determined in an experimental campaign using fuel mixtures. The corrosion risk associated with the three fuel mixtures was quite different. Grot (a Norwegian term for trees' tops and branches) was found to be a poor corrosion-reduction additive and could not serve as an alternative fuel for co-firing with straw. Peat was found to reduce the corrosive compounds only at high peat additions (50 wt%). Sewage sludge was the best alternative for corrosion reduction, as a 10 wt% addition almost eliminated chlorine from the fly ash. Numerical studies were also performed to estimate the emission level in the flue gas using a comprehensive mechanism in a configuration which simulated two-stage combustion of biomass. Furthermore, a reduction of the comprehensive chemical mechanism was performed, since the mechanism is still complex and needs very long computational times and powerful hardware resources. The selected detailed mechanism in this study contains 81 species and 703 elementary reactions. Necessity analysis was used to determine which species and reactions are of less importance for the predictability of the final result and, hence, can be discarded. For validation, numerical results using the derived reduced mechanism were compared with the results obtained with the original detailed mechanism. The reduced mechanism contains 35 species and 198 reactions, corresponding to a 72% reduction in the number of reactions and, therefore, improving the computational time considerably. Yet the model based on the reduced mechanism correctly predicts concentrations of NOx and CO that are essentially identical to those of the complete mechanism in the range of reaction conditions of interest. The modeling conditions are selected to mimic values in the different ranges of temperature, excess air ratio and residence time, since these variables are the main parameters affecting NOx emissions. (Author)

  14. Theoretical and experimental investigations on the cooling capacity distributions at the stages in the thermally-coupled two-stage Stirling-type pulse tube cryocooler without external precooling

    Science.gov (United States)

    Tan, Jun; Dang, Haizheng

    2017-03-01

    The two-stage Stirling-type pulse tube cryocooler (SPTC) has the advantage of simultaneously providing cooling power at two different temperatures, and the ability to distribute the cooling capacity between the stages is significant for its practical applications. In this paper, a theoretical model of the thermally-coupled two-stage SPTC without external precooling is established based on the electric circuit analogy, with real gas effects considered, and simulations of both the cooling performance and the PV power distribution between stages are conducted. The results indicate that the PV power is inversely proportional to the acoustic impedance of each stage, and that the cooling capacity distribution is determined by the cold finger cooling efficiency and the PV power into each stage together. The design methods for the cold fingers to achieve both the desired PV power and the desired cooling capacity distribution between the stages are summarized. The two-stage SPTC is developed and tested based on the above theoretical investigations, and the experimental results show that it can simultaneously achieve 0.69 W at 30 K and 3.1 W at 85 K with an electric input power of 330 W and a reject temperature of 300 K. The consistency between the simulated and the experimental results is observed, and the theoretical investigations are thus experimentally verified.

  15. A Two-Pass Exact Algorithm for Selection on Parallel Disk Systems.

    Science.gov (United States)

    Mi, Tian; Rajasekaran, Sanguthevar

    2013-07-01

    Numerous OLAP queries process selection operations such as "top N", median, and "top 5%" in data warehousing applications. Selection is a well-studied problem that has numerous applications in the management of data and databases since, typically, any complex data query can be reduced to a series of basic operations such as sorting and selection. Parallel selection has also become an important fundamental operation, especially after parallel databases were introduced. In this paper, we present a deterministic algorithm, Recursive Sampling Selection (RSS), to solve the exact out-of-core selection problem, which we show needs no more than (2 + ε) passes (ε being a very small fraction). We have compared our RSS algorithm with two other algorithms in the literature, namely, Deterministic Sampling Selection (DSS) and QuickSelect on Parallel Disk Systems. Our analysis shows that DSS is a (2 + ε)-pass algorithm when the total number of input elements N is a polynomial in the memory size M (i.e., N = M^c for some constant c). In contrast, our proposed algorithm RSS runs in (2 + ε) passes without any assumptions. Experimental results indicate that both RSS and DSS outperform QuickSelect on Parallel Disk Systems. In particular, the proposed algorithm RSS is more scalable and robust in handling big data when the input size is far greater than the core memory size, including the case of N ≫ M^c.
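
    A hedged, in-memory sketch of the sampling idea behind such two-pass selection algorithms (illustrative only; the real algorithms operate on blocks striped across parallel disks): pass 1 samples the data to bracket the target rank between two pivots, and pass 2 keeps only the elements inside the bracket:

```python
# Two-pass selection sketch: find the k-th smallest of a large sequence
# while keeping only a small bracket of candidates in memory.
import random

def two_pass_select(data, k, sample_size=1000, slack=3.0):
    n = len(data)
    # Pass 1: sample and pick pivots expected to bracket rank k.
    sample = sorted(random.choices(data, k=sample_size))
    pos = k * sample_size / n
    margin = slack * sample_size ** 0.5
    lo = sample[max(0, int(pos - margin))]
    hi = sample[min(sample_size - 1, int(pos + margin))]
    # Pass 2: count elements below the bracket, collect the bracket.
    below = sum(1 for x in data if x < lo)
    bracket = sorted(x for x in data if lo <= x <= hi)
    # (With small probability the bracket misses rank k; a full version
    # would retry with a wider slack.)
    return bracket[k - 1 - below]   # k is 1-based

data = [random.random() for _ in range(10**6)]
k = 500_000
print(two_pass_select(data, k), sorted(data)[k - 1])
```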

  16. A Two-Stage Estimation Method for Random Coefficient Differential Equation Models with Application to Longitudinal HIV Dynamic Data.

    Science.gov (United States)

    Fang, Yun; Wu, Hulin; Zhu, Li-Xing

    2011-07-01

    We propose a two-stage estimation method for random coefficient ordinary differential equation (ODE) models. A maximum pseudo-likelihood estimator (MPLE) is derived based on a mixed-effects modeling approach and its asymptotic properties for population parameters are established. The proposed method does not require repeatedly solving ODEs, and is computationally efficient although it does pay a price with the loss of some estimation efficiency. However, the method does offer an alternative approach when the exact likelihood approach fails due to model complexity and high-dimensional parameter space, and it can also serve as a method to obtain the starting estimates for more accurate estimation methods. In addition, the proposed method does not need to specify the initial values of state variables and preserves all the advantages of the mixed-effects modeling approach. The finite sample properties of the proposed estimator are studied via Monte Carlo simulations and the methodology is also illustrated with application to an AIDS clinical data set.

  17. PEM-PCA: a parallel expectation-maximization PCA face recognition architecture.

    Science.gov (United States)

    Rujirakul, Kanokmon; So-In, Chakchai; Arnonkijpanich, Banchar

    2014-01-01

    Principal component analysis, or PCA, has traditionally been used as one of the feature extraction techniques in face recognition systems, yielding high accuracy while requiring a small number of features. However, the covariance matrix and eigenvalue decomposition stages cause high computational complexity, especially for a large database. Thus, this research presents an alternative approach utilizing an Expectation-Maximization algorithm to reduce the determinant matrix manipulation, resulting in a reduction of the stages' complexity. To improve the computational time, a novel parallel architecture was employed to utilize the benefits of parallelization of matrix computation during the feature extraction and classification stages, including parallel preprocessing and their combinations, in the so-called Parallel Expectation-Maximization PCA (PEM-PCA) architecture. Compared to traditional PCA and its derivatives, the results indicate lower complexity with an insignificant difference in recognition precision, leading to high-speed face recognition systems; that is, speed-ups of over nine and three times relative to PCA and parallel PCA, respectively.
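
    A hedged sketch of the EM route to PCA, which avoids forming the covariance matrix and its eigendecomposition (a serial toy after Roweis's EM-PCA, not the paper's parallel architecture; the data are synthetic):

```python
# EM algorithm for PCA: iterate E-step (latent coordinates) and M-step
# (basis update) instead of eigendecomposing the covariance matrix.
# X is centered with shape (d, n); W converges to span the top-k directions.
import numpy as np

def em_pca(X, k, iters=100, seed=0):
    d, n = X.shape
    W = np.random.default_rng(seed).normal(size=(d, k))
    for _ in range(iters):
        Z = np.linalg.solve(W.T @ W, W.T @ X)   # E-step: latent coordinates
        W = X @ Z.T @ np.linalg.inv(Z @ Z.T)    # M-step: update the basis
    Q, _ = np.linalg.qr(W)                      # orthonormalize the span
    return Q

rng = np.random.default_rng(1)
X = rng.normal(size=(50, 300)) * np.linspace(5, 0.1, 50)[:, None]
X -= X.mean(axis=1, keepdims=True)
print(em_pca(X, k=3).shape)                     # (50, 3) basis vectors
```

    The matrix products in each iteration are exactly the operations the paper parallelizes across the feature extraction and classification stages.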

  18. Parallel Motion Simulation of Large-Scale Real-Time Crowd in a Hierarchical Environmental Model

    Directory of Open Access Journals (Sweden)

    Xin Wang

    2012-01-01

    Full Text Available This paper presents a parallel real-time crowd simulation method based on a hierarchical environmental model. A dynamical model of the complex environment should be constructed to simulate the state transition and propagation of individual motions. By modeling of a virtual environment where virtual crowds reside, we employ different parallel methods on a topological layer, a path layer and a perceptual layer. We propose a parallel motion path matching method based on the path layer and a parallel crowd simulation method based on the perceptual layer. The large-scale real-time crowd simulation becomes possible with these methods. Numerical experiments are carried out to demonstrate the methods and results.

  19. Two-stage decision approach to material accounting

    International Nuclear Information System (INIS)

    Opelka, J.H.; Sutton, W.B.

    1982-01-01

    The validity of the alarm threshold 4σ has been checked for hypothetical large and small facilities using a two-stage decision model in which the diverter's strategic variable is the quantity diverted, and the defender's strategic variables are the alarm threshold and the effectiveness of the physical security and material control systems in the possible presence of a diverter. For large facilities, the material accounting system inherently appears not to be a particularly useful system for the deterrence of diversions, and essentially no improvement can be made by lowering the alarm threshold below 4σ. For small facilities, reduction of the threshold to 2σ or 3σ is a cost-effective change for the accounting system, but is probably less cost-effective than making improvements in the material control and physical security systems

  20. Climate models on massively parallel computers

    International Nuclear Information System (INIS)

    Vitart, F.; Rouvillois, P.

    1993-01-01

    First results obtained on massively parallel computers (Multiple Instruction Multiple Data and Single Instruction Multiple Data) make it possible to consider building coupled models with high resolutions. This would make possible the simulation of thermohaline circulation and other interaction phenomena between atmosphere and ocean. The increase in computer power, and hence the improvement in resolution, will lead us to revise our approximations. The hydrostatic approximation (in ocean circulation) will no longer be valid when the grid mesh falls below a few kilometers: we shall have to find other models. The expertise in numerical analysis acquired at the Center of Limeil-Valenton (CEL-V) will be used again to devise global models taking into account atmosphere, ocean, ice floe and biosphere, allowing climate simulation down to a regional scale

  1. Diffraction of love waves by two parallel perfectly weak half planes

    International Nuclear Information System (INIS)

    Asghar, S.; Zaman, F.D.; Ayub, M.

    1986-04-01

    We consider the diffraction of Love waves by two parallel perfectly weak half planes in a layer overlying a half space. The problem is formulated in terms of the Wiener-Hopf equations in the transformed plane. The transmitted waves are then calculated using the Wiener-Hopf procedure and inverse transforms. (author)

  2. A simple and efficient parallel FFT algorithm using the BSP model

    NARCIS (Netherlands)

    Bisseling, R.H.; Inda, M.A.

    2000-01-01

    In this paper we present a new parallel radix FFT algorithm based on the BSP model. Our parallel algorithm uses the group-cyclic distribution family, which makes it simple to understand and easy to implement. We show how to reduce the communication cost of the algorithm by a factor of three in the case

  3. Final Report on Two-Stage Fast Spectrum Fuel Cycle Options

    International Nuclear Information System (INIS)

    Yang, Won Sik; Lin, C. S.; Hader, J. S.; Park, T. K.; Deng, P.; Yang, G.; Jung, Y. S.; Kim, T. K.; Stauff, N. E.

    2016-01-01

    This report presents the performance characteristics of two "two-stage" fast spectrum fuel cycle options proposed to enhance uranium resource utilization and to reduce nuclear waste generation. One is a two-stage fast spectrum fuel cycle option of continuous recycle of plutonium (Pu) in a fast reactor (FR) and subsequent burning of minor actinides (MAs) in an accelerator-driven system (ADS). The first stage is a sodium-cooled FR fuel cycle starting with low-enriched uranium (LEU) fuel; at the equilibrium cycle, the FR is operated using the recovered Pu and natural uranium without supporting LEU. Pu and uranium (U) are co-extracted from the discharged fuel and recycled in the first stage, and the recovered MAs are sent to the second stage. The second stage is a sodium-cooled ADS in which MAs are burned in an inert matrix fuel form. The discharged fuel of the ADS is reprocessed, and all the recovered heavy metals (HMs) are recycled into the ADS. The other is a two-stage FR/ADS fuel cycle option with MA targets loaded in the FR. The recovered MAs are not directly sent to the ADS, but are partially incinerated in the FR in order to reduce the amount of MAs to be sent to the ADS. This is a heterogeneous recycling option for transuranic (TRU) elements

  4. Two-stage electrolysis to enrich tritium in environmental water

    International Nuclear Information System (INIS)

    Shima, Nagayoshi; Muranaka, Takeshi

    2007-01-01

    We present a two-stage electrolysis procedure to enrich tritium in environmental waters. Tritium is first enriched rapidly in a commercially available electrolyser with a large 50 A current, and then in a newly designed electrolyser, which avoids the memory effect, with a 6 A current. The tritium recovery factor obtained by such two-stage electrolysis was greater than that obtained using the commercially available device alone. Water samples collected in 2006 in lakes and along the Pacific coast of Aomori prefecture, Japan, were electrolyzed using the two-stage method. Tritium concentrations in these samples ranged from 0.2 to 0.9 Bq/L and were half or less of those in samples collected at the same sites in 1992. (author)

  5. Design of a Two-stage High-capacity Stirling Cryocooler Operating below 30K

    Science.gov (United States)

    Wang, Xiaotao; Dai, Wei; Zhu, Jian; Chen, Shuai; Li, Haibing; Luo, Ercang

    High-capacity cryocoolers working below 30 K find many applications such as superconducting motors, superconducting cables and cryopumps. Compared to the GM cryocooler, the Stirling cryocooler can achieve higher efficiency and a more compact structure. Because of these obvious advantages, we have designed a two-stage free-piston Stirling cryocooler system driven by a moving-magnet linear compressor with an operating frequency of 40 Hz and a maximum input electric power of 5 kW. The first stage of the cryocooler is designed to operate at liquid nitrogen temperature and deliver a cooling power of 100 W, while the second stage is expected to simultaneously provide a cooling power of 50 W below 30 K. In order to achieve the best system efficiency, a numerical model based on thermoacoustic theory was developed to optimize the system operating and structural parameters.

  6. Two-stage nonrecursive filter/decimator

    International Nuclear Information System (INIS)

    Yoder, J.R.; Richard, B.D.

    1980-08-01

    A two-stage digital filter/decimator has been designed and implemented to reduce the sampling rate associated with the long-term computer storage of certain digital waveforms. This report describes the design selection and implementation process and serves as documentation for the system actually installed. A filter design with finite-impulse response (nonrecursive) was chosen for implementation via direct convolution. A newly-developed system-test statistic validates the system under different computer-operating environments

  7. Parallelization of the Coupled Earthquake Model

    Science.gov (United States)

    Block, Gary; Li, P. Peggy; Song, Yuhe T.

    2007-01-01

    This Web-based tsunami simulation system allows users to remotely run a model on JPL's supercomputers for a given undersea earthquake. At the time of this reporting, predicting tsunamis over the Internet had never been done before. The new code directly couples the earthquake model and the ocean model on parallel computers and improves simulation speed. Seismometers can only detect information from earthquakes; they cannot detect whether or not a tsunami may occur as a result of the earthquake. When earthquake-tsunami models are coupled with the improved computational speed of modern, high-performance computers and constrained by remotely sensed data, they are able to provide early warnings for those coastal regions at risk. The software is capable of testing NASA's satellite observations of tsunamis. It has been successfully tested for several historical tsunamis, has passed all alpha and beta testing, and is well documented for users.

  8. Parallel R-matrix computation

    International Nuclear Information System (INIS)

    Heggarty, J.W.

    1999-06-01

    For almost thirty years, sequential R-matrix computation has been used by atomic physics research groups around the world to model collision phenomena involving the scattering of electrons or positrons with atomic or molecular targets. As considerable progress has been made in the understanding of fundamental scattering processes, new data, obtained from more complex calculations, are of current interest to experimentalists. Performing such calculations, however, places considerable demands on the computational resources of the target machine, in terms of both processor speed and memory requirements. Indeed, in some instances the computational requirements are so great that the proposed R-matrix calculations are intractable, even when utilising contemporary classic supercomputers. Historically, increases in the computational requirements of R-matrix computation were accommodated by porting the problem codes to a more powerful classic supercomputer. Although this approach has been successful in the past, it is no longer considered a satisfactory solution due to the limitations of current (and future) von Neumann machines. As a consequence, there has been considerable interest in the high-performance multicomputers that have emerged over the last decade, which appear to offer the computational resources required by contemporary R-matrix research. Unfortunately, developing codes for these machines is not as simple a task as developing codes for successive classic supercomputers. The difficulty arises from the considerable differences in the computing models of the two types of machine, and results in the programming of multicomputers being widely acknowledged as a difficult, time-consuming and error-prone task. Nevertheless, unless parallel R-matrix computation is realised, important theoretical and experimental atomic physics research will continue to be hindered. This thesis describes work that was undertaken in

  9. Fast image processing on parallel hardware

    International Nuclear Information System (INIS)

    Bittner, U.

    1988-01-01

    Current digital imaging modalities in the medical field incorporate parallel hardware which is heavily used in the stage of image formation, such as CT/MR image reconstruction or real-time DSA subtraction. In order to make image post-processing as efficient as image acquisition, new software approaches have to be found which take full advantage of the parallel hardware architecture. This paper describes the implementation of a two-dimensional median filter which can serve as an example for the development of such algorithms. The algorithm is analyzed by viewing it as a complete parallel sort of the k pixel values in the chosen window, which leads to a generalization to rank order operators and other closely related filters reported in the literature. A section about the theoretical basis of the algorithm gives hints on how to characterize operations suitable for implementation on pipeline processors and on how to find the appropriate algorithms. Finally, some results on the computation time and the usefulness of median filtering in radiographic imaging are given

  10. Model-driven product line engineering for mapping parallel algorithms to parallel computing platforms

    NARCIS (Netherlands)

    Arkin, Ethem; Tekinerdogan, Bedir

    2016-01-01

    Mapping parallel algorithms to parallel computing platforms requires several activities such as the analysis of the parallel algorithm, the definition of the logical configuration of the platform, the mapping of the algorithm to the logical configuration platform and the implementation of the

  11. Fate of dissolved organic nitrogen in two stage trickling filter process.

    Science.gov (United States)

    Simsek, Halis; Kasi, Murthy; Wadhawan, Tanush; Bye, Christopher; Blonigen, Mark; Khan, Eakalak

    2012-10-15

    Dissolved organic nitrogen (DON) represents a significant portion of the nitrogen in the final effluent of wastewater treatment plants (WWTPs). The biodegradable portion of DON (BDON) can support algal growth and/or consume dissolved oxygen in receiving waters. The fate of DON and BDON had not previously been studied for trickling filter WWTPs. DON and BDON data were collected along the treatment train of a WWTP with a two-stage trickling filter process. DON concentrations in the influent and effluent were 27% and 14% of total dissolved nitrogen (TDN), respectively. The plant removed about 62% and 72% of the influent DON and BDON, respectively, mainly in the trickling filters. The final effluent BDON averaged 1.8 mg/L. BDON was found to be between 51% and 69% of the DON in raw wastewater and after the various treatment units. The fate of DON and BDON through the two-stage trickling filter treatment plant was modeled: the BioWin v3.1 model was successfully applied to simulate ammonia, nitrite, nitrate, TDN, DON and BDON concentrations along the treatment train. The maximum growth rates for ammonia oxidizing bacteria (AOB) and nitrite oxidizing bacteria, and the AOB half saturation constant, influenced the ammonia and nitrate output results. Hydrolysis and ammonification rates influenced all of the nitrogen species in the model output, including BDON.

  12. Robustness analysis of a parallel two-box digital polynomial predistorter for an SOA-based CO-OFDM system

    Science.gov (United States)

    Diouf, C.; Younes, M.; Noaja, A.; Azou, S.; Telescu, M.; Morel, P.; Tanguy, N.

    2017-11-01

    The linearization performance of various digital baseband pre-distortion schemes is evaluated in this paper for a coherent optical OFDM (CO-OFDM) transmitter employing a semiconductor optical amplifier (SOA). In particular, the benefits of using a parallel two-box (PTB) behavioral model, combining a static nonlinear function with a memory polynomial (MP) model, is investigated for mitigating the system nonlinearities and compared to the memoryless and MP models. Moreover, the robustness of the predistorters under different operating conditions and system uncertainties is assessed based on a precise SOA physical model. The PTB scheme proves to be the most effective linearization technique for the considered setup, with an excellent performance-complexity tradeoff over a wide range of conditions.

  13. A two-stage DEA approach for environmental efficiency measurement.

    Science.gov (United States)

    Song, Malin; Wang, Shuhong; Liu, Wei

    2014-05-01

    The slacks-based measure (SBM) model under constant returns to scale has achieved good results in addressing undesirable outputs, such as wastewater and waste gas, when measuring environmental efficiency. However, the traditional SBM model cannot deal with the scenario in which desirable outputs are constant. Based on the axiomatic theory of productivity, this paper carries out systematic research on the SBM model considering undesirable outputs, and further extends the SBM model from the perspective of network analysis. The new model can not only perform efficiency evaluation considering undesirable outputs, but also treat desirable and undesirable outputs separately. The latter advantage resolves the "dependence" problem of outputs, namely that desirable outputs cannot be increased without producing undesirable outputs. The following illustration shows that the efficiency values obtained by the two-stage approach are smaller than those obtained by the traditional SBM model. Our approach provides a more profound analysis of how to improve the environmental efficiency of decision making units.
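
    The SBM model itself is a fractional program that is usually linearized with a Charnes-Cooper transformation; as a simpler, hedged illustration of DEA scoring with undesirable outputs, the sketch below solves a CCR-style input-oriented LP in which undesirable outputs are treated like inputs (a common simplification, not the authors' two-stage network SBM). All data are toy numbers.

```python
import numpy as np
from scipy.optimize import linprog

def dea_efficiency(X, Y, B, o):
    """Input-oriented CCR efficiency of DMU o; undesirable outputs B are
    treated like inputs (a simplification, not the SBM model itself).
    X: (m, J) inputs, Y: (s, J) desirable outputs, B: (u, J) undesirable."""
    J = X.shape[1]
    c = np.zeros(1 + J); c[0] = 1.0                    # minimize theta
    XB = np.vstack([X, B])                             # rows to be contracted
    A_ub = np.hstack([-XB[:, [o]], XB])                # sum_j lam_j*x_j <= theta*x_o
    A_ub = np.vstack([A_ub, np.hstack([np.zeros((Y.shape[0], 1)), -Y])])
    b_ub = np.concatenate([np.zeros(XB.shape[0]), -Y[:, o]])  # sum lam*y >= y_o
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None)] * (1 + J))
    return res.x[0]

# toy data: 2 inputs, 1 desirable and 1 undesirable output, 5 DMUs
X = np.array([[2., 3., 4., 5., 6.], [1., 2., 2., 3., 4.]])
Y = np.array([[10., 14., 15., 16., 17.]])
B = np.array([[1., 2., 1.5, 3., 4.]])
for o in range(5):
    print(f"DMU {o}: efficiency = {dea_efficiency(X, Y, B, o):.3f}")
```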

  14. Error Modeling and Experimental Study of a Flexible Joint 6-UPUR Parallel Six-Axis Force Sensor.

    Science.gov (United States)

    Zhao, Yanzhi; Cao, Yachao; Zhang, Caifeng; Zhang, Dan; Zhang, Jie

    2017-09-29

    By combining a parallel mechanism with integrated flexible joints, a sensor with a large measurement range and high accuracy is realized. However, the main errors of the sensor involve not only assembly errors, but also deformation errors of its flexible legs. Based on a flexible-joint 6-UPUR (a mechanism configuration where U = universal joint, P = prismatic joint, R = revolute joint) parallel six-axis force sensor developed in an earlier phase, assembly and deformation error modeling and analysis of the resulting sensor, with its large measurement range and high accuracy, are presented in this paper. First, an assembly error model is established based on the imaginary kinematic joint method and the Denavit-Hartenberg (D-H) method. Next, a stiffness model is built to solve the stiffness matrix, from which the deformation error model of the sensor is obtained. Then, the first-order kinematic influence coefficient matrix is solved with the synthetic error taken into account. Finally, measurement and calibration experiments are performed with the sensor's hardware and software system. Forced deformation of the force-measuring platform is detected by laser interferometry and analyzed to verify the correctness of the synthetic error model. In addition, the first-order kinematic influence coefficient matrix under actual circumstances is calculated. By comparing the condition numbers and square norms of the coefficient matrices, it is concluded that accounting for the synthetic error is very important at the design stage of the sensor and helps improve its performance so that it meets the needs of actual working environments.
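
    The final comparison mentioned above, condition numbers and square norms of the first-order kinematic influence coefficient matrices with and without the synthetic error, is easy to reproduce for any pair of matrices; the matrices below are placeholders, not the sensor's actual coefficients:

```python
import numpy as np

def matrix_quality(G):
    """Condition number and squared Frobenius norm of a coefficient matrix."""
    return np.linalg.cond(G), np.linalg.norm(G, 'fro') ** 2

G_ideal = np.eye(6)                                   # placeholder 6x6 matrix
G_error = G_ideal + 0.05 * np.random.randn(6, 6)      # with a synthetic error added
print(matrix_quality(G_ideal))    # (1.0, 6.0)
print(matrix_quality(G_error))    # conditioning degrades under the error
```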

  16. Parallel paving: An algorithm for generating distributed, adaptive, all-quadrilateral meshes on parallel computers

    Energy Technology Data Exchange (ETDEWEB)

    Lober, R.R.; Tautges, T.J.; Vaughan, C.T.

    1997-03-01

    Paving is an automated mesh generation algorithm which produces all-quadrilateral elements. It can additionally generate these elements in varying sizes such that the resulting mesh adapts to a function distribution, such as an error function. While powerful, conventional paving is a very serial algorithm in its operation. Parallel paving is the extension of serial paving into parallel environments to perform the same meshing functions as conventional paving, only on distributed, discretized models. This extension allows large, adaptive, parallel finite element simulations to take advantage of paving's meshing capabilities for h-remap remeshing. A significantly modified version of the CUBIT mesh generation code has been developed to host the parallel paving algorithm and demonstrate its capabilities on both two-dimensional and three-dimensional surface geometries, and to compare the resulting parallel-produced meshes to conventionally paved meshes for mesh quality and algorithm performance. Sandia's "tiling" dynamic load balancing code has also been extended to work with the paving algorithm to retain parallel efficiency as subdomains undergo iterative mesh refinement.

  17. Global Analysis of a Model of Viral Infection with Latent Stage and Two Types of Target Cells

    Directory of Open Access Journals (Sweden)

    Shuo Liu

    2013-01-01

    By introducing a probability function describing the latency of infected cells, we unify some models of viral infection with a latent stage. For the case that the probability function is a step function, which implies that the latency period of the infected cells is constant, the corresponding model is a delay differential system. The model with latency delay and two types of target cells is investigated, and the obtained results show that when the basic reproduction number is less than or equal to unity, the infection-free equilibrium is globally stable, that is, the in-host free virus will eventually be cleared out; when the basic reproduction number is greater than unity, the infection equilibrium is globally stable, that is, the viral infection will become chronic and persist in-host. By comparing the basic reproduction numbers of the ordinary differential system and the associated delayed differential system, we conclude that it is necessary to select an appropriate type of probability function for predicting the final outcome of viral infection in-host.

  18. Treatment of corn ethanol distillery wastewater using two-stage anaerobic digestion.

    Science.gov (United States)

    Ráduly, B; Gyenge, L; Szilveszter, Sz; Kedves, A; Crognale, S

    In this study the mesophilic two-stage anaerobic digestion (AD) of corn bioethanol distillery wastewater is investigated in laboratory-scale reactors. Two-stage AD technology separates the different sub-processes of AD into two distinct reactors, enabling the use of optimal conditions for the different microbial consortia involved in the different process phases, and thus allowing higher applicable organic loading rates (OLRs), shorter hydraulic retention times (HRTs) and better conversion rates of the organic matter, as well as a higher methane content of the produced biogas. In our experiments the reactors were operated in semi-continuous phase-separated mode. A specific methane production of 1,092 mL/(L·d) was reached at an OLR of 6.5 g TCOD/(L·d) (TCOD: total chemical oxygen demand) and a total HRT of 21 days (5.7 days in the first-stage and 15.3 days in the second-stage reactor). Although the methane concentration in the second-stage reactor was very high (78.9%), the two-stage AD outperformed the reference single-stage AD (conducted at the same reactor loading rate and retention time) by only a small margin in terms of volumetric methane production rate. This makes it questionable whether the higher methane content of the biogas counterbalances the added complexity of two-stage digestion.

  19. Risk averse optimal operation of a virtual power plant using two stage stochastic programming

    International Nuclear Information System (INIS)

    Tajeddini, Mohammad Amin; Rahimi-Kian, Ashkan; Soroudi, Alireza

    2014-01-01

    A VPP (Virtual Power Plant) is a cluster of energy conversion/storage units which are centrally operated in order to improve technical and economic performance. This paper addresses the optimal operation of a VPP considering the risk factors affecting its daily operation profits. The optimal operation is modelled in both the day-ahead and balancing markets as a two-stage stochastic mixed integer linear program that maximizes the expected profit of a GenCo (generation company). Furthermore, CVaR (Conditional Value at Risk) is used as a risk measure in order to control the risk of low-profit scenarios. The uncertain parameters, including the PV power output, wind power output and day-ahead market prices, are modelled through scenarios. The proposed model is successfully applied to a real case study to show its applicability, and the results are presented and thoroughly discussed. - Highlights: • Virtual power plant modelling considering a set of energy generating and conversion units. • Uncertainty modelling using a two-stage stochastic programming technique. • Risk modelling using conditional value at risk. • Flexible operation of renewable energy resources. • Electricity price uncertainty in day-ahead energy markets
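
    A minimal sketch of the structure described here: a two-stage stochastic LP in which a day-ahead commitment is fixed before uncertain generation and balancing prices are revealed, with low-profit scenarios controlled through the Rockafellar-Uryasev CVaR linearization. All numbers are illustrative, and the paper's full MILP (unit commitment, storage, PV/wind scenario trees) is not reproduced:

```python
import numpy as np
from scipy.optimize import linprog

# toy scenarios: balancing price and realized generation (illustrative numbers)
p = np.array([0.3, 0.4, 0.3])          # scenario probabilities
lam_bal = np.array([35., 50., 80.])    # balancing prices [$/MWh]
g = np.array([60., 45., 30.])          # realized generation [MWh]
lam_da, x_max = 55., 80.               # day-ahead price, max commitment
alpha, beta = 0.95, 0.5                # CVaR level, risk-aversion weight

S = len(p)
# profit_s = lam_da*x + lam_bal_s*(g_s - x) = (lam_da - lam_bal_s)*x + lam_bal_s*g_s
k = lam_da - lam_bal                   # profit slope w.r.t. x per scenario
const = lam_bal * g                    # profit constant per scenario

# variables: [x, eta, z_1..z_S]
# maximize (1-beta)*E[profit] + beta*(eta - sum_s p_s*z_s/(1-alpha))
c = np.concatenate([[-(1 - beta) * (p @ k), -beta], beta * p / (1 - alpha)])
# CVaR rows: eta - z_s - profit_s <= 0
A = np.zeros((S, 2 + S)); A[:, 0] = -k; A[:, 1] = 1.0; A[:, 2:] = -np.eye(S)
res = linprog(c, A_ub=A, b_ub=const,
              bounds=[(0, x_max), (None, None)] + [(0, None)] * S)
x_opt = res.x[0]
print("day-ahead commitment:", round(x_opt, 1), "MWh")
print("expected profit:", round(p @ (k * x_opt + const), 1))
```

    Setting beta = 0 recovers the risk-neutral expected-profit maximizer, while beta = 1 optimizes the CVaR of profit alone.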

  20. A Two-Stage Layered Mixture Experiment Design for a Nuclear Waste Glass Application-Part 2

    International Nuclear Information System (INIS)

    Cooley, Scott K.; Piepel, Gregory F.; Gan, Hao; Kot, Wing; Pegg, Ian L.

    2003-01-01

    Part 1 (Cooley and Piepel, 2003a) describes the first stage of a two-stage experimental design to support property-composition modeling for high-level waste (HLW) glass to be produced at the Hanford Site in Washington state. Each stage used a layered design having an outer layer, an inner layer, a center point, and some replicates. However, the design variables and constraints defining the layers of the experimental glass composition region (EGCR) were defined differently for the second stage than for the first. The first-stage initial design involved 15 components, all treated as mixture variables. The second-stage augmentation design involved 19 components, with 14 treated as mixture variables and 5 treated as non-mixture variables. For each second-stage layer, vertices were generated and optimal design software was used to select alternative subsets of vertices for the design and calculate design optimality measures. A model containing 29 partial quadratic mixture terms plus 5 linear terms for the non-mixture variables was the basis for the optimal design calculations. Predicted property values were plotted for the alternative subsets of second-stage vertices and the first-stage design points. Based on the optimality measures and the predicted property distributions, a "best" subset of vertices was selected for each layer of the second stage to augment the first-stage design.

  1. Rapid Two-stage Versus One-stage Surgical Repair of Interrupted Aortic Arch with Ventricular Septal Defect in Neonates

    Directory of Open Access Journals (Sweden)

    Meng-Lin Lee

    2008-11-01

    Conclusion: The outcome of rapid two-stage repair is comparable to that of one-stage repair. Rapid two-stage repair has the advantages of significantly shorter cardiopulmonary bypass duration and AXC time, and avoids deep hypothermic circulatory arrest. LVOTO remains an unresolved issue, and postoperative aortic arch restenosis can be dilated effectively by percutaneous balloon angioplasty.

  2. The Effect of Effluent Recirculation in a Semi-Continuous Two-Stage Anaerobic Digestion System

    Directory of Open Access Journals (Sweden)

    Karthik Rajendran

    2013-06-01

    The effect of recirculation in increasing the organic loading rate (OLR) and decreasing the hydraulic retention time (HRT) in a semi-continuous two-stage anaerobic digestion system using a continuous stirred tank reactor (CSTR) and an upflow anaerobic sludge bed (UASB) was evaluated. Two parallel processes were in operation for 100 days, one with recirculation (closed system) and the other without recirculation (open system). For this purpose, two structurally different carbohydrate-based substrates were used: starch and cotton. The digestion of starch and cotton in the closed system resulted in production of 91% and 80% of the theoretical methane yield during the first 60 days. In contrast, in the open system the methane yield decreased to 82% and 56% of the theoretical value for starch and cotton, respectively. The OLR could successfully be increased to 4 gVS/L/day for cotton and 10 gVS/L/day for starch. It is concluded that the recirculation supports the microorganisms in effective hydrolysis of the polysaccharides in the CSTR and preserves the nutrients in the system at higher OLRs, thereby improving the overall performance and stability of the process.

  3. Development Of A Parallel Performance Model For The THOR Neutral Particle Transport Code

    Energy Technology Data Exchange (ETDEWEB)

    Yessayan, Raffi; Azmy, Yousry; Schunert, Sebastian

    2017-02-01

    The THOR neutral particle transport code enables simulation of complex geometries for various problems from reactor simulations to nuclear non-proliferation. It is undergoing a thorough V&V requiring computational efficiency. This has motivated various improvements including angular parallelization, outer iteration acceleration, and development of peripheral tools. For guiding future improvements to the code’s efficiency, better characterization of its parallel performance is useful. A parallel performance model (PPM) can be used to evaluate the benefits of modifications and to identify performance bottlenecks. Using INL’s Falcon HPC, the PPM development incorporates an evaluation of network communication behavior over heterogeneous links and a functional characterization of the per-cell/angle/group runtime of each major code component. After evaluating several possible sources of variability, this resulted in a communication model and a parallel portion model. The former’s accuracy is bounded by the variability of communication on Falcon while the latter has an error on the order of 1%.

  4. Teaching basic life support with an automated external defibrillator using the two-stage or the four-stage teaching technique.

    Science.gov (United States)

    Bjørnshave, Katrine; Krogh, Lise Q; Hansen, Svend B; Nebsbjerg, Mette A; Thim, Troels; Løfgren, Bo

    2018-02-01

    Laypersons often hesitate to perform basic life support (BLS) and use an automated external defibrillator (AED) because of self-perceived lack of knowledge and skills. Training may reduce the barrier to intervene. Reduced training time and costs may allow training of more laypersons. The aim of this study was to compare BLS/AED skills' acquisition and self-evaluated BLS/AED skills after instructor-led training with a two-stage versus a four-stage teaching technique. Laypersons were randomized to either two-stage or four-stage teaching technique courses. Immediately after training, the participants were tested in a simulated cardiac arrest scenario to assess their BLS/AED skills. Skills were assessed using the European Resuscitation Council BLS/AED assessment form. The primary endpoint was passing the test (17 of 17 skills adequately performed). A prespecified noninferiority margin of 20% was used. The two-stage teaching technique (n=72, pass rate 57%) was noninferior to the four-stage technique (n=70, pass rate 59%), with a difference in pass rates of -2%; 95% confidence interval: -18 to 15%. Neither were there significant differences between the two-stage and four-stage groups in the chest compression rate (114±12 vs. 115±14/min), chest compression depth (47±9 vs. 48±9 mm) and number of sufficient rescue breaths between compression cycles (1.7±0.5 vs. 1.6±0.7). In both groups, all participants believed that their training had improved their skills. Teaching laypersons BLS/AED using the two-stage teaching technique was noninferior to the four-stage teaching technique, although the pass rate was -2% (95% confidence interval: -18 to 15%) lower with the two-stage teaching technique.
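
    The reported noninferiority conclusion can be checked directly from the quoted numbers: with pass rates of 57% (n = 72) and 59% (n = 70), a Wald 95% confidence interval for the difference in proportions stays above the prespecified -20% margin (the paper's -18 to 15% interval presumably reflects a slightly different interval method or rounding). A quick sketch:

```python
from math import sqrt

p1, n1 = 0.57, 72   # two-stage group pass rate
p2, n2 = 0.59, 70   # four-stage group pass rate
diff = p1 - p2
se = sqrt(p1 * (1 - p1) / n1 + p2 * (1 - p2) / n2)
lo, hi = diff - 1.96 * se, diff + 1.96 * se
print(f"difference: {diff:+.0%}, 95% CI: ({lo:+.0%}, {hi:+.0%})")
# difference: -2%, 95% CI: (-18%, +14%) -- the lower bound is above the
# -20% margin, so the two-stage technique is noninferior at that margin.
```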

  5. A Two Stage Solution Procedure for Production Planning System with Advance Demand Information

    Science.gov (United States)

    Ueno, Nobuyuki; Kadomoto, Kiyotaka; Hasuike, Takashi; Okuhara, Koji

    We model for ‘Naiji System’ which is a unique corporation technique between a manufacturer and suppliers in Japan. We propose a two stage solution procedure for a production planning problem with advance demand information, which is called ‘Naiji’. Under demand uncertainty, this model is formulated as a nonlinear stochastic programming problem which minimizes the sum of production cost and inventory holding cost subject to a probabilistic constraint and some linear production constraints. By the convexity and the special structure of correlation matrix in the problem where inventory for different periods is not independent, we propose a solution procedure with two stages which are named Mass Customization Production Planning & Management System (MCPS) and Variable Mesh Neighborhood Search (VMNS) based on meta-heuristics. It is shown that the proposed solution procedure is available to get a near optimal solution efficiently and practical for making a good master production schedule in the suppliers.

  6. Possible two-stage ⁸⁷Sr evolution in the Stockdale Rhyolite

    Energy Technology Data Exchange (ETDEWEB)

    Compston, W.; McDougall, I. (Australian National Univ., Canberra. Research School of Earth Sciences); Wyborn, D. (Department of Minerals and Energy, Canberra (Australia). Bureau of Mineral Resources)

    1982-12-01

    The Rb-Sr total-rock data for the Stockdale Rhyolite, of significance for the Palaeozoic time scale, are more scattered about a single-stage isochron than expected from experimental error. Two-stage ⁸⁷Sr evolution for several of the samples is explored to explain this, as an alternative to the variation in initial ⁸⁷Sr/⁸⁶Sr which is customarily used in single-stage dating models. The deletion of certain samples having very high Rb/Sr removes most of the excess scatter and leads to an estimate of 430 ± 7 m.y. for the age of extrusion. There is a younger alignment of Rb-Sr data within each sampling site at 412 ± 7 m.y. We suggest that the Stockdale Rhyolite is at least 430 m.y. old, that its original range in Rb/Sr was smaller than now observed, and that it experienced a net loss in Sr during later hydrothermal alteration at ca. 412 m.y.
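
    For context, the ages above come from the standard Rb-Sr isochron relation: the slope m of the ⁸⁷Sr/⁸⁶Sr versus ⁸⁷Rb/⁸⁶Sr isochron gives t = ln(1 + m)/λ. The sketch below uses the conventional ⁸⁷Rb decay constant λ = 1.42e-11 yr⁻¹ and back-computes slopes for the two quoted ages, for illustration only (not the paper's data):

```python
from math import log, exp

LAMBDA_RB87 = 1.42e-11          # 87Rb decay constant [1/yr], conventional value

def isochron_age(slope):
    """Age in Myr from the slope of an 87Sr/86Sr vs 87Rb/86Sr isochron."""
    return log(1.0 + slope) / LAMBDA_RB87 / 1e6

def isochron_slope(age_myr):
    return exp(LAMBDA_RB87 * age_myr * 1e6) - 1.0

for t in (430.0, 412.0):        # extrusion age and later alteration age [Myr]
    print(f"t = {t:.0f} Myr  ->  isochron slope = {isochron_slope(t):.6f}")
```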

  7. runjags: An R Package Providing Interface Utilities, Model Templates, Parallel Computing Methods and Additional Distributions for MCMC Models in JAGS

    Directory of Open Access Journals (Sweden)

    Matthew J. Denwood

    2016-07-01

    The runjags package provides a set of interface functions to facilitate running Markov chain Monte Carlo models in JAGS from within R. Automated calculation of appropriate convergence and sample length diagnostics, user-friendly access to commonly used graphical outputs and summary statistics, and parallelized methods of running JAGS are provided. Template model specifications can be generated using a standard lme4-style formula interface to assist users less familiar with the BUGS syntax. Automated simulation study functions are implemented to facilitate model performance assessment, as well as drop-k type cross-validation studies, using high performance computing clusters such as those provided by the parallel package. A module extension for JAGS is also included within runjags, providing the Pareto family of distributions and a series of minimally informative priors including the DuMouchel and half-Cauchy priors. This paper outlines the primary functions of this package, and gives an illustration of a simulation study to assess the sensitivity of two equivalent model formulations to different prior distributions.

  9. Runway Operations Planning: A Two-Stage Heuristic Algorithm

    Science.gov (United States)

    Anagnostakis, Ioannis; Clarke, John-Paul

    2003-01-01

    The airport runway is a scarce resource that must be shared by different runway operations (arrivals, departures and runway crossings). Given the possible sequences of runway events, careful Runway Operations Planning (ROP) is required if runway utilization is to be maximized. From the perspective of departures, ROP solutions are aircraft departure schedules developed by optimally allocating runway time for departures given the time required for arrivals and crossings. In addition to the obvious objective of maximizing throughput, other objectives, such as guaranteeing fairness and minimizing environmental impact, can also be incorporated into the ROP solution subject to constraints introduced by Air Traffic Control (ATC) procedures. This paper introduces a two-stage heuristic algorithm for solving the ROP problem. In the first stage, sequences of departure class slots and runway crossing slots are generated and ranked based on departure runway throughput under stochastic conditions. In the second stage, the departure class slots are populated with specific flights from the pool of available aircraft by solving an integer program with a Branch & Bound algorithm implementation. Preliminary results from this implementation of the two-stage algorithm on real-world traffic data are presented.

  10. Entropy resistance analyses of a two-stream parallel flow heat exchanger with viscous heating

    International Nuclear Information System (INIS)

    Cheng Xue-Tao; Liang Xin-Gang

    2013-01-01

    Heat exchangers are widely used in industry, and analyses and optimizations of their performance are important topics. In this paper, we define the concept of entropy resistance based on entropy generation analyses of a one-dimensional heat transfer process. With this concept, a two-stream parallel flow heat exchanger with viscous heating is analyzed and discussed. It is found that minimizing the entropy resistance always leads to the maximum heat transfer rate for the discussed two-stream parallel flow heat exchanger, while minimizing the entropy generation rate, the entropy generation numbers, and the revised entropy generation number does not always do so. (general)
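
    For the one-dimensional heat transfer process that motivates the definition, a plausible reading is the following (a hedged sketch; the authors' exact definition appears in the paper):

```latex
\dot S_{\mathrm{gen}} = Q\left(\frac{1}{T_c}-\frac{1}{T_h}\right),
\qquad
R_s \equiv \frac{\dot S_{\mathrm{gen}}}{Q^{2}}
    = \frac{1}{Q}\left(\frac{1}{T_c}-\frac{1}{T_h}\right)
\quad\Longrightarrow\quad
Q = \frac{1/T_c - 1/T_h}{R_s}.
```

    Written this way, for a fixed inverse-temperature difference a smaller entropy resistance means a larger heat transfer rate, which is consistent with the monotonic link between minimum entropy resistance and maximum heat transfer reported in the abstract.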

  11. PERIODIC REVIEW SYSTEM FOR INVENTORY REPLENISHMENT CONTROL FOR A TWO-ECHELON LOGISTICS NETWORK UNDER DEMAND UNCERTAINTY: A TWO-STAGE STOCHASTIC PROGRAMING APPROACH

    OpenAIRE

    Cunha, P.S.A.; Oliveira, F.; Raupp, Fernanda M.P.

    2017-01-01

    Here, we propose a novel methodology for replenishment and control systems for inventories of two-echelon logistics networks using two-stage stochastic programming, considering periodic review and uncertain demands. In addition, to achieve better customer service, we introduce a variable rationing rule to address quantities of the item in short supply. The devised models are reformulated into their deterministic equivalents, resulting in nonlinear mixed-integer programming models, which a...

  12. Parallel algorithms for placement and routing in VLSI design. Ph.D. Thesis

    Science.gov (United States)

    Brouwer, Randall Jay

    1991-01-01

    The computational requirements for high quality synthesis, analysis, and verification of very large scale integration (VLSI) designs have rapidly increased with the fast growing complexity of these designs. Research in the past has focused on the development of heuristic algorithms, special purpose hardware accelerators, or parallel algorithms for the numerous design tasks to decrease the time required for solution. Two new parallel algorithms are proposed for two VLSI synthesis tasks, standard cell placement and global routing. The first algorithm, a parallel algorithm for global routing, uses hierarchical techniques to decompose the routing problem into independent routing subproblems that are solved in parallel. Results are then presented which compare the routing quality to the results of other published global routers and which evaluate the speedups attained. The second algorithm, a parallel algorithm for cell placement and global routing, hierarchically integrates a quadrisection placement algorithm, a bisection placement algorithm, and the previous global routing algorithm. Unique partitioning techniques are used to decompose the various stages of the algorithm into independent tasks which can be evaluated in parallel. Finally, results are presented which evaluate the various algorithm alternatives and compare the algorithm performance to other placement programs. Measurements are presented on the parallel speedups available.

  13. Dynamics of parallel robots from rigid bodies to flexible elements

    CERN Document Server

    Briot, Sébastien

    2015-01-01

    This book starts with a short recapitulation of basic concepts, common to all types of robots (serial, tree-structured, parallel, etc.), that are also necessary for computing the dynamic models of parallel robots. Then, as dynamics requires the use of geometry and kinematics, the general equations of the geometric and kinematic models of parallel robots are given. Next, it is explained that parallel robot dynamic models can be obtained by decomposing the real robot into two virtual systems: a tree-structure robot (equivalent to the robot legs, as if all joints were actuated) plus a free body corresponding to the platform. Thus, the dynamics of rigid tree-structure robots is analyzed and algorithms to obtain their dynamic models in the most compact form are given. The dynamic model of the real rigid parallel robot is obtained by closing the loops through the use of Lagrange multipliers. The problem of dynamic model degeneracy near singularities is treated, and optimal trajectory planning for cro...

  14. Maximally efficient two-stage screening: Determining intellectual disability in Taiwanese military conscripts.

    Science.gov (United States)

    Chien, Chia-Chang; Huang, Shu-Fen; Lung, For-Wey

    2009-01-27

    The purpose of this study was to apply a two-stage screening method to the large-scale intelligence screening of military conscripts. We recruited 99 conscripted soldiers whose educational level was senior high school or lower as participants. Every participant took the Wisconsin Card Sorting Test (WCST) and the Wechsler Adult Intelligence Scale-Revised (WAIS-R). Logistic regression analysis showed that the conceptual level responses (CLR) index of the WCST was the most significant index for determining intellectual disability (ID; FIQ ≤ 84). We used the receiver operating characteristic curve to determine the optimum cut-off point of the CLR. The optimum single cut-off point of the CLR was 66; the two cut-off points were 49 and 66. Comparing two-stage window screening with two-stage positive screening, the area under the curve and the positive predictive value increased, while the cost of the two-stage window screening decreased by 59%. Two-stage window screening is thus more accurate and economical than two-stage positive screening. Our results provide an example for the use of two-stage screening and the possibility that the WCST could replace the WAIS-R in future large-scale screenings for ID.
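
    The cut-off selection step (choosing a CLR threshold from the receiver operating characteristic curve) is straightforward to reproduce; the sketch below uses synthetic scores and Youden's index, not the study's WCST/WAIS-R data:

```python
import numpy as np
from sklearn.metrics import roc_curve, auc

rng = np.random.default_rng(0)
# synthetic CLR-like scores: lower scores in the ID group (illustrative only)
scores = np.concatenate([rng.normal(55, 10, 30),    # ID (FIQ <= 84)
                         rng.normal(75, 10, 69)])   # non-ID
labels = np.concatenate([np.ones(30), np.zeros(69)])  # 1 = ID

# low scores indicate the positive class, so rank by -scores
fpr, tpr, thr = roc_curve(labels, -scores)
print("AUC:", round(auc(fpr, tpr), 3))
best = np.argmax(tpr - fpr)             # Youden's J = sensitivity + specificity - 1
print("optimal cut-off:", round(-thr[best], 1))
```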

  15. A Two-Stage Information-Theoretic Approach to Modeling Landscape-Level Attributes and Maximum Recruitment of Chinook Salmon in the Columbia River Basin.

    Energy Technology Data Exchange (ETDEWEB)

    Thompson, William L.; Lee, Danny C.

    2000-11-01

    Many anadromous salmonid stocks in the Pacific Northwest are at their lowest recorded levels, which has raised questions regarding their long-term persistence under current conditions. A number of factors, such as freshwater spawning and rearing habitat, could potentially influence their numbers. Therefore, we used the latest advances in information-theoretic methods in a two-stage modeling process to investigate relationships between landscape-level habitat attributes and maximum recruitment of 25 index stocks of chinook salmon (Oncorhynchus tshawytscha) in the Columbia River basin. Our first-stage model selection results indicated that the Ricker-type stock-recruitment model with a constant Ricker a (i.e., recruits per spawner at low numbers of fish) across stocks was the only plausible one given these data, which contrasted with previous unpublished findings. Our second-stage results revealed that maximum recruitment of chinook salmon had a strongly negative relationship with the percentage of surrounding subwatersheds categorized as predominantly containing U.S. Forest Service and private moderate-high impact managed forest. That is, our model predicted that average maximum recruitment of chinook salmon would decrease by at least 247 fish for every 33% increase in surrounding subwatersheds categorized as predominantly containing U.S. Forest Service and privately managed forest. Conversely, mean annual air temperature had a positive relationship with salmon maximum recruitment, with an average increase of at least 179 fish for every 2 °C increase in mean annual air temperature.
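
    The first-stage model is the Ricker stock-recruitment curve R = aS·exp(-bS), which becomes linear in its parameters after a log transform: ln(R/S) = ln a - bS. A minimal fitting sketch on synthetic data (the study fitted 25 stocks jointly with a common a and used information-theoretic model selection, which this does not reproduce):

```python
import numpy as np

rng = np.random.default_rng(1)
a_true, b_true = 3.0, 2e-4
S = rng.uniform(500, 5000, 40)                       # spawners
R = a_true * S * np.exp(-b_true * S) * rng.lognormal(0.0, 0.3, 40)

# ln(R/S) = ln(a) - b*S  ->  ordinary least squares on the log scale
slope, intercept = np.polyfit(S, np.log(R / S), 1)
b_hat, a_hat = -slope, np.exp(intercept)
print(f"a = {a_hat:.2f} (true {a_true}), b = {b_hat:.2e} (true {b_true})")
# the Ricker curve peaks at S = 1/b with maximum recruitment a/(b*e)
print(f"max recruitment = {a_hat / (b_hat * np.e):.0f} at S = {1.0 / b_hat:.0f}")
```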

  16. Optimal parallel algorithms for problems modeled by a family of intervals

    Science.gov (United States)

    Olariu, Stephan; Schwing, James L.; Zhang, Jingyuan

    1992-01-01

    A family of intervals on the real line provides a natural model for a vast number of scheduling and VLSI problems. Recently, a number of parallel algorithms to solve a variety of practical problems on such a family of intervals have been proposed in the literature. Computational tools are developed, and it is shown how they can be used for the purpose of devising cost-optimal parallel algorithms for a number of interval-related problems including finding a largest subset of pairwise nonoverlapping intervals, a minimum dominating subset of intervals, along with algorithms to compute the shortest path between a pair of intervals and, based on the shortest path, a parallel algorithm to find the center of the family of intervals. More precisely, with an arbitrary family of n intervals as input, all algorithms run in O(log n) time using O(n) processors in the EREW-PRAM model of computation.
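
    One of the problems mentioned, finding a largest subset of pairwise nonoverlapping intervals, has a classic sequential greedy solution: sort by right endpoint and sweep. The paper's contribution is doing this (and the other interval problems) in O(log n) time with O(n) processors on an EREW-PRAM; the sketch below only shows what is being computed:

```python
def max_nonoverlapping(intervals):
    """Largest subset of pairwise nonoverlapping intervals (greedy by right end)."""
    chosen, last_end = [], float("-inf")
    for lo, hi in sorted(intervals, key=lambda iv: iv[1]):
        if lo >= last_end:          # does not overlap the previously chosen one
            chosen.append((lo, hi))
            last_end = hi
    return chosen

print(max_nonoverlapping([(1, 4), (2, 3), (3, 5), (6, 8), (5, 7)]))
# [(2, 3), (3, 5), (5, 7)] -- endpoint conventions vary; here intervals
# that merely touch are treated as nonoverlapping
```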

  17. Parallel integer sorting with medium and fine-scale parallelism

    Science.gov (United States)

    Dagum, Leonardo

    1993-01-01

    Two new parallel integer sorting algorithms, queue-sort and barrel-sort, are presented and analyzed in detail. These algorithms do not have optimal parallel complexity, yet they show very good performance in practice. Queue-sort is designed for fine-scale parallel architectures which allow the queueing of multiple messages to the same destination. Barrel-sort is designed for medium-scale parallel architectures with a high message passing overhead. Performance results are given from the implementation of queue-sort on a Connection Machine CM-2 and of barrel-sort on a 128-processor iPSC/860. The two implementations are found to be comparable in performance, but not as good as a fully vectorized bucket sort on the Cray YMP.
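
    Both algorithms are distribution sorts: each key is routed to the processor owning its key range and then sorted locally; barrel-sort batches all keys bound for a destination into one large message, which is what suits architectures with high message-passing overhead. A sequential sketch of that pattern (a simplification, not the papers' exact algorithms):

```python
def barrel_sort(keys, n_procs, key_max):
    """Distribution sort: batch keys per destination 'processor', sort locally."""
    width = (key_max + n_procs) // n_procs
    barrels = [[] for _ in range(n_procs)]
    for k in keys:                      # route each key to its range owner;
        barrels[k // width].append(k)   # on a real machine: one bulk message per dest
    out = []
    for b in barrels:                   # each processor sorts its barrel locally
        out.extend(sorted(b))
    return out

data = [9, 3, 14, 0, 7, 11, 2, 5, 13, 1]
assert barrel_sort(data, 4, 15) == sorted(data)
```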

  18. The Two-Word Stage: Motivated by Linguistic or Cognitive Constraints?

    Science.gov (United States)

    Berk, Stephanie; Lillo-Martin, Diane

    2012-01-01

    Child development researchers often discuss a "two-word" stage during language acquisition. However, there is still debate over whether the existence of this stage reflects primarily cognitive or linguistic constraints. Analyses of longitudinal data from two Deaf children, Mei and Cal, not exposed to an accessible first language (American Sign…

  19. Investigation of Mediational Processes Using Parallel Process Latent Growth Curve Modeling

    Science.gov (United States)

    Cheong, JeeWon; MacKinnon, David P.; Khoo, Siek Toon

    2010-01-01

    This study investigated a method to evaluate mediational processes using latent growth curve modeling. The mediator and the outcome measured across multiple time points were viewed as 2 separate parallel processes. The mediational process was defined as the independent variable influencing the growth of the mediator, which, in turn, affected the growth of the outcome. To illustrate modeling procedures, empirical data from a longitudinal drug prevention program, Adolescents Training and Learning to Avoid Steroids, were used. The program effects on the growth of the mediator and the growth of the outcome were examined first in a 2-group structural equation model. The mediational process was then modeled and tested in a parallel process latent growth curve model by relating the prevention program condition, the growth rate factor of the mediator, and the growth rate factor of the outcome. PMID:20157639

  20. A two-stage heating scheme for heat assisted magnetic recording

    Science.gov (United States)

    Xiong, Shaomin; Kim, Jeongmin; Wang, Yuan; Zhang, Xiang; Bogy, David

    2014-05-01

    Heat Assisted Magnetic Recording (HAMR) has been proposed to extend the storage areal density beyond 1 Tb/in² for the next generation of magnetic storage. A near field transducer (NFT) is widely used in HAMR systems to locally heat the magnetic disk during the writing process. However, much of the laser power is absorbed around the NFT, which causes overheating of the NFT and reduces its reliability. In this work, a two-stage heating scheme is proposed to reduce the thermal load by separating the NFT heating process into two individual heating stages, from an optical waveguide and from the NFT, respectively. In the first stage, the optical waveguide is placed in front of the NFT and delivers part of the laser energy directly onto the disk surface to heat it to a peak temperature somewhat lower than the Curie temperature of the magnetic material. The NFT then works as the second heating stage, further heating a smaller area inside the waveguide-heated area to reach the Curie point. The energy applied to the NFT in the second heating stage is reduced compared with a typical single-stage NFT heating system. With this reduced thermal load from the two-stage heating scheme, the lifetime of the NFT can be extended by orders of magnitude under cyclic load conditions.

  1. Efficacy of single-stage and two-stage Fowler–Stephens laparoscopic orchidopexy in the treatment of intraabdominal high testis

    Directory of Open Access Journals (Sweden)

    Chang-Yuan Wang

    2017-11-01

    Conclusion: In the case of a testis with good collateral circulation, single-stage F-S laparoscopic orchidopexy has the same safety and efficacy as the two-stage F-S procedure. Surgical options should be based on comprehensive consideration of intraoperative testicular location, the testicular ischemia test, and the collateral circulation surrounding the testis. Under the appropriate conditions, we propose that single-stage F-S laparoscopic orchidopexy be preferred. It may be appropriate to avoid unnecessary application of the two-stage procedure, which has a higher cost and causes more pain for patients.

  2. Analysis of Parallel Algorithms on SMP Node and Cluster of Workstations Using Parallel Programming Models with New Tile-based Method for Large Biological Datasets.

    Science.gov (United States)

    Shrimankar, D D; Sathe, S R

    2016-01-01

    Sequence alignment is an important tool for describing the relationships between DNA sequences. Many sequence alignment algorithms exist, differing in efficiency, in their models of the sequences, and in the relationship between sequences. The focus of this study is to obtain an optimal alignment between two sequences of biological data, particularly DNA sequences. The algorithm is discussed with particular emphasis on time, speedup, and efficiency optimizations. Parallel programming presents a number of critical challenges to application developers. Today's supercomputers often consist of clusters of SMP nodes. Programming paradigms such as OpenMP and MPI are used to write parallel codes for such architectures. However, OpenMP programs cannot scale beyond a single SMP node, whereas programs written in MPI can span multiple SMP nodes, at the price of internode communication overhead. In this work, we explore the tradeoffs between using OpenMP and MPI. We demonstrate that the communication overhead is significant even in OpenMP loop execution and increases with the number of cores participating. We also present a communication model to approximate the overhead from communication in OpenMP loops. Our results hold for a large variety of input data files. We have developed our own load balancing and cache optimization techniques for the message passing model. Our experimental results show that these techniques give optimum performance of our parallel algorithm for various values of the input parameters, such as sequence size and tile size, on a wide variety of multicore architectures.
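
    The workload being parallelized here is dynamic-programming sequence alignment, in which the score table is filled in tiles and cells on the same anti-diagonal are independent, which is what tile-based OpenMP/MPI decompositions exploit. A minimal sequential Needleman-Wunsch scoring sketch (illustrative; the paper's tiled, load-balanced implementation is not reproduced):

```python
import numpy as np

def needleman_wunsch_score(a, b, match=1, mismatch=-1, gap=-2):
    """Global alignment score via the standard DP recurrence.
    Cells on the same anti-diagonal are independent, which tile-based
    parallel schemes exploit by processing tiles in wavefront order."""
    H = np.zeros((len(a) + 1, len(b) + 1), dtype=int)
    H[:, 0] = gap * np.arange(len(a) + 1)
    H[0, :] = gap * np.arange(len(b) + 1)
    for i in range(1, len(a) + 1):
        for j in range(1, len(b) + 1):
            s = match if a[i - 1] == b[j - 1] else mismatch
            H[i, j] = max(H[i - 1, j - 1] + s,   # substitute / match
                          H[i - 1, j] + gap,     # gap in b
                          H[i, j - 1] + gap)     # gap in a
    return H[-1, -1]

print(needleman_wunsch_score("GATTACA", "GCATGCU"))
```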

  4. On response time and cycle time distributions in a two-stage cyclic queue

    NARCIS (Netherlands)

    Boxma, O.J.; Donk, P.

    1982-01-01

    We consider a two-stage closed cyclic queueing model. For the case of an exponential server at each queue, we derive the joint distribution of the successive response times of a customer at both queues, using a reversibility argument. This joint distribution turns out to have a product form. The
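
    The model is a closed network: N customers cycle forever between two exponential FIFO servers. A small discrete-event sketch that estimates the mean cycle time by simulation (illustrative only; the paper derives the joint response-time distribution exactly via reversibility):

```python
import random

def mean_cycle_time(N=3, mu1=1.0, mu2=2.0, cycles=100_000, seed=42):
    """Closed cyclic network: N customers, two exponential FIFO servers."""
    rng = random.Random(seed)
    q = [list(range(N)), []]          # customers queued/in service per station
    done = [None, None]               # pending service completion time (or None)
    start = {c: 0.0 for c in range(N)}
    rate = (mu1, mu2)
    t, total, count = 0.0, 0.0, 0
    while count < cycles:
        for s in (0, 1):              # start service if server idle, queue nonempty
            if done[s] is None and q[s]:
                done[s] = t + rng.expovariate(rate[s])
        s = min((0, 1), key=lambda i: done[i] if done[i] is not None else float("inf"))
        t = done[s]                   # advance to the next completion
        done[s] = None
        c = q[s].pop(0)               # head-of-line customer finishes station s
        q[1 - s].append(c)
        if s == 1:                    # back at station 0: one full cycle done
            total += t - start[c]
            start[c] = t
            count += 1
    return total / count

print(round(mean_cycle_time(), 3))    # roughly 3.2 here (matches mean value analysis)
```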

  5. Multistage parallel-serial time averaging filters

    International Nuclear Information System (INIS)

    Theodosiou, G.E.

    1980-01-01

    Here, a new time averaging circuit design, the 'parallel filter', is presented, which can reduce the time jitter introduced in time measurements using counters of large dimensions. This parallel filter can be considered as a single-stage unit circuit which can be repeated an arbitrary number of times in series, thus providing a parallel-serial filter. The main advantages of such a filter over a serial one are much lower electronic gate jitter and a shorter time delay for the same amount of total time-uncertainty reduction. (orig.)

  6. The island dynamics model on parallel quadtree grids

    Science.gov (United States)

    Mistani, Pouria; Guittet, Arthur; Bochkov, Daniil; Schneider, Joshua; Margetis, Dionisios; Ratsch, Christian; Gibou, Frederic

    2018-05-01

    We introduce an approach for simulating epitaxial growth by use of an island dynamics model on a forest of quadtree grids, and in a parallel environment. To this end, we use a parallel framework introduced in the context of the level-set method. This framework utilizes: discretizations that achieve a second-order accurate level-set method on non-graded adaptive Cartesian grids for solving the associated free boundary value problem for surface diffusion; and an established library for the partitioning of the grid. We consider the cases with: irreversible aggregation, which amounts to applying Dirichlet boundary conditions at the island boundary; and an asymmetric (Ehrlich-Schwoebel) energy barrier for attachment/detachment of atoms at the island boundary, which entails the use of a Robin boundary condition. We provide the scaling analyses performed on the Stampede supercomputer and numerical examples that illustrate the capability of our methodology to efficiently simulate different aspects of epitaxial growth. The combination of adaptivity and parallelism in our approach enables simulations that are several orders of magnitude faster than those reported in the recent literature and, thus, provides a viable framework for the systematic study of mound formation on crystal surfaces.

  7. A two-stage model in a Bayesian framework to estimate a survival endpoint in the presence of confounding by indication.

    Science.gov (United States)

    Bellera, Carine; Proust-Lima, Cécile; Joseph, Lawrence; Richaud, Pierre; Taylor, Jeremy; Sandler, Howard; Hanley, James; Mathoulin-Pélissier, Simone

    2018-04-01

    Background: Biomarker series can indicate disease progression and predict clinical endpoints. When a treatment is prescribed depending on the biomarker, confounding by indication might be introduced if the treatment modifies the marker profile and the risk of failure. Objective: Our aim was to highlight the flexibility of a two-stage model fitted within a Bayesian Markov chain Monte Carlo framework. For this purpose, we monitored prostate-specific antigen (PSA) in prostate cancer patients treated with external beam radiation therapy. In the presence of rising PSA after external beam radiation therapy, salvage hormone therapy can be prescribed to reduce both the PSA concentration and the risk of clinical failure, an illustration of confounding by indication. We focused on the assessment of the prognostic value of hormone therapy and of the PSA trajectory on the risk of failure. Methods: We used a two-stage model within a Bayesian framework to assess the role of the PSA profile on clinical failure while accounting for a secondary treatment prescribed by indication. We modeled PSA using a hierarchical piecewise linear trajectory with a random changepoint. Residual PSA variability was expressed as a function of the PSA concentration. Covariates in the survival model included hormone therapy, baseline characteristics, and individual predictions of the PSA nadir and its timing and of the PSA slopes before and after the nadir, as provided by the longitudinal process. Results: We showed positive associations between an increased PSA nadir, an earlier changepoint and a steeper post-nadir slope and an increased risk of failure. Importantly, we highlighted a significant benefit of hormone therapy, an effect that was not observed when the PSA trajectory was
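
    The longitudinal submodel described here, a hierarchical piecewise linear trajectory with a subject-specific changepoint, can be written compactly as follows (notation assumed for illustration, not copied from the paper):

```latex
\log \mathrm{PSA}_i(t) \;=\; \beta_{0i} + \beta_{1i}\,t + \beta_{2i}\,(t-\tau_i)_{+} + \varepsilon_i(t),
\qquad (x)_{+} = \max(x,0),
```

    where τ_i is the subject-specific changepoint (nadir timing), β_{1i} and β_{1i}+β_{2i} are the pre- and post-nadir slopes, and the residual variance of ε_i(t) is allowed to depend on the current PSA level; the subject-level quantities (nadir value, τ_i, slopes) then enter the survival model as covariates alongside hormone therapy.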

  8. Numerical Prediction of CCV in a PFI Engine using a Parallel LES Approach

    Energy Technology Data Exchange (ETDEWEB)

    Ameen, Muhsin M; Mirzaeian, Mohsen; Millo, Federico; Som, Sibendu

    2017-10-15

    Cycle-to-cycle variability (CCV) is detrimental to IC engine operation and can lead to partial burn, misfire, and knock. Predicting CCV numerically is extremely challenging for two key reasons. Firstly, high-fidelity methods such as large eddy simulation (LES) are required to accurately resolve the in-cylinder turbulent flowfield both spatially and temporally. Secondly, CCV is experienced over long timescales and hence the simulations need to be performed for hundreds of consecutive cycles. Ameen et al. (Int. J. Eng. Res., 2017) developed a parallel perturbation model (PPM) approach to dissociate this long timescale problem into several shorter timescale problems. The strategy is to perform multiple single-cycle simulations in parallel by effectively perturbing the initial velocity field based on the intensity of the in-cylinder turbulence. This strategy was demonstrated for a motored engine, and it was shown that the mean and variance of the in-cylinder flowfield were captured reasonably well by this approach. In the present study, this PPM approach is extended to simulate the CCV in a fired port-fuel injected (PFI) SI engine. Two operating conditions are considered – a medium CCV operating case corresponding to 2500 rpm and 16 bar BMEP and a low CCV case corresponding to 4000 rpm and 12 bar BMEP. The predictions from this approach are also shown to be similar to those from consecutive LES cycles. Both the consecutive and PPM LES cycles are observed to under-predict the variability in the early stage of combustion. The parallel approach slightly underpredicts the cyclic variability at all stages of combustion as compared to the consecutive LES cycles. However, it is shown that the parallel approach is able to predict the coefficient of variation (COV) of the in-cylinder pressure and burn rate related parameters with sufficient accuracy, and is also able to predict the qualitative trends in CCV with changing operating conditions. The convergence of the statistics

  9. Two-Stage Regularized Linear Discriminant Analysis for 2-D Data.

    Science.gov (United States)

    Zhao, Jianhua; Shi, Lei; Zhu, Ji

    2015-08-01

    Fisher linear discriminant analysis (LDA) involves within-class and between-class covariance matrices. For 2-D data such as images, regularized LDA (RLDA) can improve LDA due to the regularized eigenvalues of the estimated within-class matrix. However, it fails to consider the eigenvectors and the estimated between-class matrix. To improve these two matrices simultaneously, we propose in this paper a new two-stage method for 2-D data, namely a bidirectional LDA (BLDA) in the first stage and the RLDA in the second stage, where both BLDA and RLDA are based on the Fisher criterion that tackles correlation. BLDA performs the LDA under special separable covariance constraints that incorporate the row and column correlations inherent in 2-D data. The main novelty is that we propose a simple but effective statistical test to determine the subspace dimensionality in the first stage. As a result, the first stage reduces the dimensionality substantially while keeping the significant discriminant information in the data. This enables the second stage to perform RLDA in a much lower dimensional subspace, and thus improves the two estimated matrices simultaneously. Experiments on a number of 2-D synthetic and real-world data sets show that BLDA+RLDA outperforms several closely related competitors.
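
    The staged reduction this abstract describes can be sketched in a few lines. The snippet below is a simplified stand-in rather than the authors' exact BLDA/RLDA: stage one projects each 2-D sample onto leading row- and column-scatter directions (a 2DLDA-like reduction), and stage two runs a shrinkage-regularized LDA on the flattened result. The synthetic data, the dimensions p and q, and the use of scikit-learn are illustrative assumptions.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

def bidirectional_reduce(X, y, p, q):
    """Stage 1 (sketch): project each 2-D sample onto the leading
    eigenvectors of the row- and column-mode between-class scatter
    matrices -- a 2DLDA-like stand-in for the paper's BLDA."""
    mean = X.mean(axis=0)
    Sb_row = np.zeros((X.shape[1], X.shape[1]))
    Sb_col = np.zeros((X.shape[2], X.shape[2]))
    for c in np.unique(y):
        d = X[y == c].mean(axis=0) - mean
        Sb_row += np.sum(y == c) * d @ d.T
        Sb_col += np.sum(y == c) * d.T @ d
    U = np.linalg.eigh(Sb_row)[1][:, -p:]   # leading row directions
    V = np.linalg.eigh(Sb_col)[1][:, -q:]   # leading column directions
    # U'XV for every sample, then flatten to a low-dimensional vector
    return np.einsum('ip,nij,jq->npq', U, X, V).reshape(len(X), -1)

rng = np.random.default_rng(0)
X = rng.normal(size=(60, 16, 16))
y = rng.integers(0, 3, size=60)
X[y == 1] += 0.5                 # give one class a mean offset
Z = bidirectional_reduce(X, y, p=4, q=4)
# Stage 2: regularized LDA (shrinkage of the within-class covariance)
rlda = LinearDiscriminantAnalysis(solver='lsqr', shrinkage='auto').fit(Z, y)
print('training accuracy:', rlda.score(Z, y))
```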

  10. Analysis of Parallel Burn Without Crossfeed TSTO RLV Architectures and Comparison to Parallel Burn With Crossfeed and Series Burn Architectures

    Science.gov (United States)

    Smith, Garrett; Phillips, Alan

    2002-01-01

    There are currently three dominant TSTO class architectures. These are Series Burn (SB), Parallel Burn with crossfeed (PBw/cf), and Parallel Burn without crossfeed (PBncf). The goal of this study was to determine what factors uniquely affect PBncf architectures, how each of these factors interact, and to determine from a performance perspective whether a PBncf vehicle could be competitive with a PBw/cf or SB vehicle using equivalent technology and assumptions. In all cases, performance was evaluated on a relative basis for a fixed payload and mission by comparing gross and dry vehicle masses of a closed vehicle. Propellant combinations studied were LOX: LH2 propelled orbiter and booster (HH) and LOX: Kerosene booster with LOX: LH2 orbiter (KH). The study conclusions were: 1) a PBncf orbiter should be throttled as deeply as possible after launch until the staging point. 2) a detailed structural model is essential to accurate architecture analysis and evaluation. 3) a PBncf TSTO architecture is feasible for systems that stage at Mach 7. 3a) HH architectures can achieve a mass growth relative to PBw/cf of ratio and to the position of the orbiter required to align the nozzle heights at liftoff. 5) thrust to weight ratios of 1.3 at liftoff and between 1.0 and 0.9 when staging at Mach 7 appear to be close to ideal for PBncf vehicles. 6) performance for all vehicles studied is better when staged at Mach 7 instead of Mach 5. The study showed that a Series Burn architecture has the lowest gross mass for HH cases, and has the lowest dry mass for KH cases. The potential disadvantages of SB are the required use of an air-start for the orbiter engines and potential CG control issues. A Parallel Burn with crossfeed architecture solves both these problems, but the mechanics of a large bipropellant crossfeed system pose significant technical difficulties. Parallel Burn without crossfeed vehicles start both booster and orbiter engines on the ground and thus avoid both the risk of

  11. Exploiting Thread Parallelism for Ocean Modeling on Cray XC Supercomputers

    Energy Technology Data Exchange (ETDEWEB)

    Sarje, Abhinav [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Jacobsen, Douglas W. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Williams, Samuel W. [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Ringler, Todd [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Oliker, Leonid [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States)

    2016-05-01

    The incorporation of increasing core counts in modern processors used to build state-of-the-art supercomputers is driving application development towards the exploitation of thread parallelism, in addition to distributed memory parallelism, with the goal of delivering efficient high-performance codes. In this work we describe the exploitation of threading and our experiences with it in the context of a real-world ocean modeling application code, MPAS-Ocean. We present detailed performance analysis and comparisons of various approaches and configurations for threading on the Cray XC series supercomputers.

  12. Two-stage effects of awareness cascade on epidemic spreading in multiplex networks

    Science.gov (United States)

    Guo, Quantong; Jiang, Xin; Lei, Yanjun; Li, Meng; Ma, Yifang; Zheng, Zhiming

    2015-01-01

    Human awareness plays an important role in the spread of infectious diseases and the control of propagation patterns. The dynamic process with human awareness is called awareness cascade, during which individuals exhibit herd-like behavior because they are making decisions based on the actions of other individuals [Borge-Holthoefer et al., J. Complex Networks 1, 3 (2013), 10.1093/comnet/cnt006]. In this paper, to investigate the epidemic spreading with awareness cascade, we propose a local awareness controlled contagion spreading model on multiplex networks. By theoretical analysis using a microscopic Markov chain approach and numerical simulations, we find the emergence of an abrupt transition of the epidemic threshold βc with the local awareness ratio α approximating 0.5, which induces two-stage effects on the epidemic threshold and the final epidemic size. These findings indicate that the increase of α can accelerate the outbreak of epidemics. Furthermore, a simple 1D lattice model is investigated to illustrate the two-stage-like sharp transition at αc≈0.5. The results can give us a better understanding of why some epidemics cannot break out in reality and also provide a potential approach to suppressing and controlling awareness cascading systems.

  13. SU-E-J-128: Two-Stage Atlas Selection in Multi-Atlas-Based Image Segmentation

    International Nuclear Information System (INIS)

    Zhao, T; Ruan, D

    2015-01-01

    Purpose: In the new era of big data, multi-atlas-based image segmentation is challenged by heterogeneous atlas quality and the high computation burden of extensive atlas collections, demanding efficient identification of the most relevant atlases. This study aims to develop a two-stage atlas selection scheme to achieve computational economy with a performance guarantee. Methods: We develop a low-cost fusion set selection scheme by introducing a preliminary selection to trim the full atlas collection into an augmented subset, alleviating the need for extensive full-fledged registrations. More specifically, fusion set selection is performed in two successive steps: preliminary selection and refinement. An augmented subset is first roughly selected from the whole atlas collection with a simple registration scheme and the corresponding preliminary relevance metric; the augmented subset is further refined into the desired fusion set size, using full-fledged registration and the associated relevance metric. The main novelty of this work is the introduction of an inference model to relate the preliminary and refined relevance metrics, based on which the augmented subset size is rigorously derived to ensure the desired atlases survive the preliminary selection with high probability. Results: The performance and complexity of the proposed two-stage atlas selection method were assessed using a collection of 30 prostate MR images. It achieved segmentation accuracy comparable to the conventional one-stage method with full-fledged registration, but significantly reduced computation time to 1/3 (from 30.82 to 11.04 min per segmentation). Compared with an alternative one-stage cost-saving approach, the proposed scheme yielded superior performance with mean and median DSC of (0.83, 0.85) compared to (0.74, 0.78). Conclusion: This work has developed a model-guided two-stage atlas selection scheme to achieve significant cost reduction while guaranteeing high segmentation accuracy. The benefit
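
    A minimal sketch of the two-stage selection logic follows. The cheap and full similarity metrics, the toy data, and the fixed augmented-subset size are placeholder assumptions; in the paper the subset size is derived rigorously from an inference model relating the two metrics.

```python
import numpy as np

def select_fusion_set(target, atlases, n_augmented, n_fusion,
                      cheap_metric, full_metric):
    """Two-stage atlas selection (sketch). Stage 1 ranks the whole
    collection with a cheap preliminary metric and keeps an augmented
    subset; stage 2 re-ranks only those survivors with the expensive
    (post-registration) metric and returns the fusion set."""
    prelim = np.array([cheap_metric(target, a) for a in atlases])
    augmented = np.argsort(prelim)[::-1][:n_augmented]          # stage 1
    refined = np.array([full_metric(target, atlases[i]) for i in augmented])
    return augmented[np.argsort(refined)[::-1][:n_fusion]]      # stage 2

# toy demo: plain correlation stands in for both relevance metrics
rng = np.random.default_rng(1)
atlases = [rng.normal(size=256) for _ in range(30)]
target = atlases[3] + 0.1 * rng.normal(size=256)
ncc = lambda a, b: float(np.corrcoef(a, b)[0, 1])
print(select_fusion_set(target, atlases, n_augmented=10, n_fusion=3,
                        cheap_metric=ncc, full_metric=ncc))
```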

  14. Analysis of Three-dimension Viscous Flow in the Model Axial Compressor Stage K1002L

    Science.gov (United States)

    Tribunskaia, K.; Kozhukhov, Y. V.

    2017-08-01

    The main subject investigated in this paper is the axial compressor model stage K1002L. Three simulation models were designed: Scheme 1, an inlet stage model consisting of the IGV (Inlet Guide Vane), rotor and diffuser; Scheme 2, a two-stage model: IGV, first-stage rotor, first-stage diffuser, second-stage rotor, EGV (Exit Guide Vane); Scheme 3, a full-round model: IGV, rotor, diffuser. Numerical investigation of the model stage was performed for four circumferential velocities at the outer diameter (Uout = 125, 160, 180, 210 m/s) within the flow coefficient range ϕ = 0.4 - 0.6. The computational domain was created with ANSYS CFX Workbench. From the simulation results, aerodynamic characteristic curves of adiabatic efficiency and the adiabatic head coefficient, calculated for total parameters, were constructed and compared with data from the full-scale test received at the Central Boiler and Turbine Institution (CBTI); the calculated data were thereby verified. Moreover, the following studies were conducted: comparison of the aerodynamic characteristics of Schemes 1 and 2, and comparison of the sector and full-round models. The analysis and conclusions are supplemented by a gas-dynamic method calculation for axial compressor stages.

  15. Parallelization Experience with Four Canonical Econometric Models Using ParMitISEM

    Directory of Open Access Journals (Sweden)

    Nalan Baştürk

    2016-03-01

    This paper presents the parallel computing implementation of the MitISEM algorithm, labeled Parallel MitISEM. The basic MitISEM algorithm provides an automatic and flexible method to approximate a non-elliptical target density using adaptive mixtures of Student-t densities, where only a kernel of the target density is required. The approximation can be used as a candidate density in Importance Sampling or Metropolis-Hastings methods for Bayesian inference on model parameters and probabilities. We present and discuss four canonical econometric models using a Graphics Processing Unit and a multi-core Central Processing Unit version of the MitISEM algorithm. The results show that the parallelization of the MitISEM algorithm on Graphics Processing Units and multi-core Central Processing Units is straightforward and fast to program using MATLAB. Moreover, the speed performance of the Graphics Processing Unit version is much higher than that of the Central Processing Unit version.
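
    The core of the approach — drawing from a Student-t mixture candidate and evaluating importance weights for a target kernel in parallel — can be illustrated as below. This is a toy sketch in Python rather than the paper's MATLAB/GPU implementation, and it omits the adaptive fitting of the mixture; the bimodal target and the mixture parameters are invented for illustration.

```python
import numpy as np
from multiprocessing import Pool
from scipy import stats

# Candidate: a fixed two-component Student-t mixture (MitISEM builds such
# mixtures adaptively; the parameters here are invented for illustration).
LOCS, SCALES, WEIGHTS, DF = [-1.0, 2.0], [1.0, 0.5], [0.6, 0.4], 5

def log_target_kernel(x):
    # toy unnormalised bimodal target
    return np.logaddexp(stats.norm.logpdf(x, -1, 0.8),
                        stats.norm.logpdf(x, 2, 0.4))

def importance_weights(chunk):
    log_q = np.logaddexp(
        np.log(WEIGHTS[0]) + stats.t.logpdf(chunk, DF, LOCS[0], SCALES[0]),
        np.log(WEIGHTS[1]) + stats.t.logpdf(chunk, DF, LOCS[1], SCALES[1]))
    return np.exp(log_target_kernel(chunk) - log_q)

if __name__ == '__main__':
    rng = np.random.default_rng(2)
    comp = rng.choice(2, size=100_000, p=WEIGHTS)
    draws = stats.t.rvs(DF, loc=np.take(LOCS, comp),
                        scale=np.take(SCALES, comp), random_state=3)
    with Pool(4) as pool:   # evaluate weights on 4 workers in parallel
        w = np.concatenate(pool.map(importance_weights,
                                    np.array_split(draws, 4)))
    print('IS estimate of E[x]:', np.sum(w * draws) / np.sum(w))
```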

  16. Mechatronic Model Based Computed Torque Control of a Parallel Manipulator

    Directory of Open Access Journals (Sweden)

    Zhiyong Yang

    2008-11-01

    With their high speed and accuracy, parallel manipulators have wide application in industry, but many difficulties remain in the actual control process because of time-varying dynamics and coupling. Unfortunately, present-day commercial controllers cannot provide satisfactory performance, as they offer only single-axis linear control. Therefore, for a novel 2-DOF (Degree of Freedom) parallel manipulator called Diamond 600, a control scheme based on a motor-mechanism coupling dynamic model and employing the computed torque control algorithm is presented in this paper. First, the integrated dynamic coupling model is deduced according to the equivalent torques between the mechanical structure and the PM (Permanent Magnet) servomotor. Second, the computed torque controller for the proposed model is described in detail. Finally, a series of numerical simulations and experiments are carried out to test the effectiveness of the system, and the results verify its favourable tracking ability and robustness.
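
    The computed torque law itself is compact: an inner inverse-dynamics loop linearizes the plant and an outer PD loop shapes the error dynamics. The sketch below assumes generic rigid-body model callables M, C and g; the toy gains and constant mass matrix are illustrative stand-ins, not the Diamond 600 coupling model from the paper.

```python
import numpy as np

def computed_torque(q, qd, q_des, qd_des, qdd_des, M, C, g, Kp, Kd):
    """Computed torque control (sketch): cancel the rigid-body dynamics
    tau = M(q) qdd + C(q, qd) + g(q) and impose linear error dynamics
    with PD gains. M, C, g are model callables; for the Diamond 600 they
    would come from the motor-mechanism coupling model in the paper."""
    e, ed = q_des - q, qd_des - qd
    v = qdd_des + Kd @ ed + Kp @ e       # outer-loop acceleration command
    return M(q) @ v + C(q, qd) + g(q)    # inner-loop inverse dynamics

# toy 2-DOF plant with a constant mass matrix
M = lambda q: np.diag([2.0, 1.5])
C = lambda q, qd: 0.1 * qd
g = lambda q: np.zeros(2)
tau = computed_torque(q=np.zeros(2), qd=np.zeros(2),
                      q_des=np.array([0.1, -0.2]), qd_des=np.zeros(2),
                      qdd_des=np.zeros(2), M=M, C=C, g=g,
                      Kp=100 * np.eye(2), Kd=20 * np.eye(2))
print(tau)
```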

  18. Parallel sorting algorithms

    CERN Document Server

    Akl, Selim G

    1985-01-01

    Parallel Sorting Algorithms explains how to use parallel algorithms to sort a sequence of items on a variety of parallel computers. The book reviews the sorting problem, the parallel models of computation, parallel algorithms, and the lower bounds on the parallel sorting problems. The text also presents twenty different algorithms for a variety of architectures, such as linear arrays, mesh-connected computers, and cube-connected computers. Another example where an algorithm can be applied is on shared-memory SIMD (single instruction stream multiple data stream) computers in which the whole sequence to be sorted can fit in the
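
    A canonical example from this literature is odd-even transposition sort for a linear array of processors. The sketch below emulates it sequentially; on real hardware every compare-exchange pair within a phase runs simultaneously, giving O(n) time on n processors.

```python
def odd_even_transposition_sort(a):
    """Odd-even transposition sort, the textbook algorithm for a linear
    array of processors, emulated sequentially here. Within each phase
    every compare-exchange acts on a disjoint pair, so on n processors
    all pairs run at once and the sort finishes after n phases."""
    a = list(a)
    n = len(a)
    for phase in range(n):
        start = phase % 2                 # alternate even and odd phases
        for i in range(start, n - 1, 2):  # independent pairs -> parallel
            if a[i] > a[i + 1]:
                a[i], a[i + 1] = a[i + 1], a[i]
    return a

print(odd_even_transposition_sort([5, 2, 9, 1, 7, 3]))
```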

  19. A New Concept of Two-Stage Multi-Element Resonant-/Cyclo-Converter for Two-Phase IM/SM Motor

    Directory of Open Access Journals (Sweden)

    Mahmud Ali Rzig Abdalmula

    2013-01-01

    The paper deals with a new concept of a power electronic two-phase system with a two-stage DC/AC/AC converter and a two-phase IM/PMSM motor. The proposed two-stage converter comprises: an input resonant boost converter with AC output, a two-phase half-bridge cyclo-converter commutated by the HF AC input voltage, and an induction or synchronous motor. Such a system with an AC interlink has, as a whole unit, better properties than a 3-phase reference VSI inverter: higher efficiency due to soft switching of both converter stages, higher switching frequency, smaller dimensions and weight with fewer power semiconductor switches, and lower cost. In comparison with currently used conventional system configurations, the proposed system features good efficiency of the electronic converters and also good torque overload capability of the two-phase AC induction or synchronous motors. The design of the two-stage multi-element resonant converter and the results of simulation experiments are presented in the paper.

  20. Fast robot kinematics modeling by using a parallel simulator (PSIM)

    International Nuclear Information System (INIS)

    El-Gazzar, H.M.; Ayad, N.M.A.

    2002-01-01

    High-speed computers are strongly needed not only for solving scientific and engineering problems, but also for numerous industrial applications. Such applications include computer-aided design, oil exploration, weather prediction, space applications and safety of nuclear reactors. The rapid development in VLSI technology makes it possible to implement time-consuming algorithms in real-time situations. Parallel processing approaches can now be used to reduce the processing time for models of very high mathematical structure such as the kinematics modeling of a robot manipulator. This system is used to construct and evaluate the performance and cost effectiveness of several proposed methods to solve the Jacobian algorithm. Parallelism is introduced to the algorithms by using different task allocations and dividing the whole job into subtasks. Detailed analysis is performed and results are obtained for the case of six-DOF (degree of freedom) robot arms (Stanford Arm). Execution time comparisons between Von Neumann (uniprocessor) and parallel processor architectures, using the parallel simulator package (PSIM), are presented. The results strongly favour the parallel techniques, with improvements of at least fifty percent. Further studies are needed to determine the optimum number of processors.

  2. Z-buffer image assembly processing in high parallel visualization processing

    International Nuclear Information System (INIS)

    Kaneko, Isamu; Muramatsu, Kazuhiro

    2000-03-01

    On parallel computers with many processors, the domain decomposition method is a popular means of parallel processing. Now that simulation scales are becoming much larger and computations take a long time, simultaneous visualization alongside the actual computation is increasingly needed, and especially for real-time visualization the domain decomposition technique is indispensable. In parallel rendering, the rendered results must be gathered to one processor to compose the integrated picture in the last stage. This integration is usually conducted using Z-buffer values. This process, however, introduces the crucial problems of much slower processing and local memory shortage when parallel processing exceeds several tens of processors. In this report, two new solutions are proposed. One is the adoption of a special operator (Reduce operator) in the parallelization process, and the other is buffer compression by deleting the background information. This report includes the performance results of these new techniques, investigated on the parallel computer Paragon. (author)
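
    The Reduce-operator idea rests on the fact that per-pixel Z-buffer merging is associative, so partial images can be combined pairwise in log2(P) steps instead of gathering everything onto one node. A minimal sketch (background compression omitted):

```python
import numpy as np

def zbuffer_merge(a, b):
    """Merge two partial renderings: per pixel, keep the colour with the
    nearer depth (smaller z). The operation is associative, so it can act
    as the 'Reduce' operator over per-processor images."""
    (col_a, z_a), (col_b, z_b) = a, b
    nearer = z_a <= z_b
    return (np.where(nearer[..., None], col_a, col_b),
            np.where(nearer, z_a, z_b))

# emulate a pairwise (tree) reduction over 8 per-processor buffers:
# 3 = log2(8) combining stages instead of 7 sequential gathers
rng = np.random.default_rng(4)
bufs = [(rng.random((64, 64, 3)), rng.random((64, 64))) for _ in range(8)]
while len(bufs) > 1:
    bufs = [zbuffer_merge(bufs[i], bufs[i + 1])
            for i in range(0, len(bufs), 2)]
color, depth = bufs[0]
print(color.shape, float(depth.max()))
```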

  3. Modeling and control of a parallel waste heat recovery system for Euro-VI heavy-duty diesel engines

    NARCIS (Netherlands)

    Feru, E.; Willems, F.P.T.; Jager, de A.G.; Steinbuch, M.

    2014-01-01

    This paper presents the modeling and control of a waste heat recovery systemfor a Euro-VI heavy-duty truck engine. The considered waste heat recovery system consists of two parallel evaporators with expander and pumps mechanically coupled to the engine crankshaft. Compared to previous work, the

  5. Two species of vortices in massive gauged non-linear sigma models

    International Nuclear Information System (INIS)

    Alonso-Izquierdo, A.; Fuertes, W. García; Guilarte, J. Mateos

    2015-01-01

    Non-linear sigma models with scalar fields taking values on ℂℙⁿ complex manifolds are addressed. In the simplest n=1 case, where the target manifold is the S² sphere, we describe the scalar fields by means of stereographic maps. In this case when the U(1) symmetry is gauged and Maxwell and mass terms are allowed, the model accommodates stable self-dual vortices of two kinds with different energies per unit length and where the Higgs field winds at the cores around the two opposite poles of the sphere. Allowing for dielectric functions in the magnetic field, similar and richer self-dual vortices of different species in the south and north charts can be found by slightly modifying the potential. Two different situations are envisaged: either the vacuum orbit lies on a parallel in the sphere, or one pole and the same parallel form the vacuum orbit. Besides the self-dual vortices of two species, there exist BPS domain walls in the second case. Replacing the Maxwell contribution of the gauge field to the action by the second Chern-Simons secondary class, only possible in (2+1)-dimensional Minkowski space-time, new BPS topological defects of two species appear. Namely, both BPS vortices and domain ribbons in the south and the north charts exist because the vacuum orbit consists of the two poles and one parallel. Formulation of the gauged ℂℙ² model in a reference chart shows a self-dual structure such that BPS semi-local vortices exist. The transition functions to the second or third charts break the U(1)×SU(2) semi-local symmetry, but there is still room for standard self-dual vortices of the second species. The same structures encompassing N complex scalar fields are easily generalized to gauged ℂℙᴺ models.

  7. Two-Stage Multi-Objective Collaborative Scheduling for Wind Farm and Battery Switch Station

    Directory of Open Access Journals (Sweden)

    Zhe Jiang

    2016-10-01

    In order to deal with the uncertainties of wind power, a wind farm and an electric vehicle (EV) battery switch station (BSS) were proposed to work together as an integrated system. In this paper, the collaborative scheduling problems of such a system were studied. Considering the features of the integrated system, three indices are proposed: battery swapping demand curtailment of the BSS, wind curtailment of the wind farm, and generation schedule tracking of the integrated system. In addition, a two-stage multi-objective collaborative scheduling model was designed. In the first stage, a day-ahead model was built based on the theory of dependent chance programming. With the aim of maximizing the realization probabilities of these three operating indices, random fluctuations of wind power and battery switch demand were taken into account simultaneously. In order to explore the capability of the BSS as reserve, the readjustment process of the BSS within each hour was considered in this stage. In addition, the stored energy rather than the charging/discharging power of the BSS during each period was optimized, which provides the basis for hour-ahead correction of the BSS. In the second stage, an hour-ahead model was established. In order to cope with the randomness of wind power and battery swapping demand, the proposed hour-ahead model utilizes ultra-short-term prediction of the wind power and the battery switch demand to schedule the charging/discharging power of the BSS in a rolling manner. Finally, the effectiveness of the proposed models was validated by case studies. The simulation results indicated that the proposed model could realize complementarity between the wind farm and the BSS, reduce the dependence on the power grid, and facilitate the accommodation of wind power.

  8. A comparative study of two phenomenological models of dephasing in series and parallel resistors

    International Nuclear Information System (INIS)

    Bandopadhyay, Swarnali; Chaudhuri, Debasish; Jayannavar, Arun M.

    2010-01-01

    We compare two recent phenomenological models of dephasing using a double barrier and a quantum ring geometry. While the stochastic absorption model generates controlled dephasing leading to Ohm's law for large dephasing strengths, a Gaussian random phase based statistical model shows many inconsistencies.

  9. A staged service innovation model

    NARCIS (Netherlands)

    Song, L.Z.; Song, Michael; Benedetto, Di A.C.

    2009-01-01

    Drawing from the new product development (NPD) literature, service quality literature (SERVQUAL), and empirically grounded research with 53 service innovation decision makers, we develop a staged service innovation model (SIM) for decision makers. We tested the model using empirical data from 329

  10. Verification of Electromagnetic Physics Models for Parallel Computing Architectures in the GeantV Project

    Energy Technology Data Exchange (ETDEWEB)

    Amadio, G.; et al.

    2017-11-22

    An intensive R&D and programming effort is required to accomplish new challenges posed by future experimental high-energy particle physics (HEP) programs. The GeantV project aims to narrow the gap between the performance of the existing HEP detector simulation software and the ideal performance achievable, exploiting the latest advances in computing technology. The project has developed a particle detector simulation prototype capable of transporting particles in parallel through complex geometries, exploiting instruction-level microparallelism (SIMD and SIMT), task-level parallelism (multithreading) and high-level parallelism (MPI), leveraging both multi-core and many-core opportunities. We present preliminary verification results concerning the electromagnetic (EM) physics models developed for parallel computing architectures within the GeantV project. In order to exploit the potential of vectorization and accelerators and to make the physics model effectively parallelizable, advanced sampling techniques have been implemented and tested. In this paper we introduce a set of automated statistical tests in order to verify the vectorized models by checking their consistency with the corresponding Geant4 models and to validate them against experimental data.
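
    One way to automate such consistency checks is a two-sample test between observables sampled from the reference and vectorized models. The sketch below uses a Kolmogorov-Smirnov test as a generic stand-in; the exponential toy distributions and the significance level are illustrative assumptions, not GeantV's actual test suite.

```python
import numpy as np
from scipy import stats

def verify_distributions(reference, vectorized, alpha=0.01):
    """Automated consistency check (sketch): a two-sample
    Kolmogorov-Smirnov test between observables sampled from a reference
    model and from its vectorized reimplementation."""
    stat, p = stats.ks_2samp(reference, vectorized)
    return {'ks_stat': round(stat, 4), 'p': round(p, 4),
            'consistent': p > alpha}

rng = np.random.default_rng(5)
ref = rng.exponential(1.0, 50_000)       # stand-in for a sampled observable
vec_ok = rng.exponential(1.0, 50_000)    # faithful reimplementation
vec_bad = rng.exponential(1.05, 50_000)  # subtly biased one
print(verify_distributions(ref, vec_ok))
print(verify_distributions(ref, vec_bad))
```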

  11. Train Stop Scheduling in a High-Speed Rail Network by Utilizing a Two-Stage Approach

    Directory of Open Access Journals (Sweden)

    Huiling Fu

    2012-01-01

    Among the most commonly used methods of scheduling train stops are practical experience and various “one-step” optimal models. These methods face problems of direct transferability and computational complexity when considering a large-scale high-speed rail (HSR) network such as the one in China. This paper introduces a two-stage approach for train stop scheduling with a goal of efficiently organizing passenger traffic into a rational train stop pattern combination while retaining features of regularity, connectivity, and rapidity (RCR). Based on a three-level station classification definition, a mixed integer programming model and a train operating tactics descriptive model, along with the computing algorithm, are developed and presented for the two stages. A real-world numerical example is presented using the Chinese HSR network as the setting. The performance of the train stop schedule and the applicability of the proposed approach are evaluated from the perspective of maintaining RCR.

  12. EVALUATION OF A TWO-STAGE PASSIVE TREATMENT APPROACH FOR MINING INFLUENCE WATERS

    Science.gov (United States)

    A two-stage passive treatment approach was assessed at bench-scale using two Colorado Mining Influenced Waters (MIWs). The first-stage was a limestone drain with the purpose of removing iron and aluminum and mitigating the potential effects of mineral acidity. The second stage w...

  13. Modelling distribution of evaporating CO2 in parallel minichannels

    DEFF Research Database (Denmark)

    Brix, Wiebke; Kærn, Martin Ryhl; Elmegaard, Brian

    2010-01-01

    The effects of airflow non-uniformity and uneven inlet qualities on the performance of a minichannel evaporator with parallel channels, using CO2 as refrigerant, are investigated numerically. For this purpose a one-dimensional discretised steady-state model was developed, applying well-known empirical correlations. Non-uniform airflow is found to lead to maldistribution of the refrigerant and considerable capacity reduction of the evaporator. Uneven inlet qualities to the different channels show only minor effects on the refrigerant distribution and evaporator capacity as long as the channels are vertically oriented with CO2 flowing upwards. For horizontal channels capacity reductions are found for both non-uniform airflow and uneven inlet qualities. For horizontal minichannels the results are very similar to those obtained using R134a as refrigerant.
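
    The coupling that produces maldistribution is that parallel channels fed from common headers must exhibit equal pressure drops. Below is a toy sketch of solving for the flow split under that constraint; the power-law pressure-drop model and the resistance values are invented stand-ins for the empirical correlations used in the model.

```python
from scipy.optimize import brentq

def channel_dp(m_dot, k):
    """Toy pressure-drop correlation dp = k * m_dot**1.8, a crude
    stand-in for the empirical two-phase correlations in the model."""
    return k * m_dot ** 1.8

def split_flow(m_total, k1, k2):
    """Find the mass-flow split over two parallel channels such that both
    see the same header-to-header pressure drop -- the condition that
    couples the channels and produces maldistribution."""
    residual = lambda m1: channel_dp(m1, k1) - channel_dp(m_total - m1, k2)
    m1 = brentq(residual, 1e-9, m_total - 1e-9)
    return m1, m_total - m1

# non-uniform airflow modelled as a higher effective resistance in channel 1
m1, m2 = split_flow(2.0e-3, k1=1.3e9, k2=1.0e9)
print(f'channel 1: {m1:.2e} kg/s, channel 2: {m2:.2e} kg/s')
```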

  14. Transport fuels from two-stage coal liquefaction

    Energy Technology Data Exchange (ETDEWEB)

    Benito, A.; Cebolla, V.; Fernandez, I.; Martinez, M.T.; Miranda, J.L.; Oelert, H.; Prado, J.G. (Instituto de Carboquimica CSIC, Zaragoza (Spain))

    1994-03-01

    Four Spanish lignites and their vitrinite concentrates were evaluated for coal liquefaction. Correlations between the vitrinite content and conversion in direct liquefaction were observed for the lignites but not for the vitrinite concentrates. The most reactive of the four coals was processed in two-stage liquefaction at a larger scale. First-stage coal liquefaction was carried out in a continuous unit at Clausthal University at a temperature of 400°C, at 20 MPa hydrogen pressure and with anthracene oil as a solvent. The coal conversion obtained was 75.41%, comprising 3.79% gases, 2.58% primary condensate and 69.04% heavy liquids. A hydroprocessing unit was built at the Instituto de Carboquimica for the second-stage coal liquefaction. Whole and deasphalted liquids from the first-stage liquefaction were processed at 450°C and 10 MPa hydrogen pressure, with two commercial catalysts: Harshaw HT-400E (Co-Mo/Al₂O₃) and HT-500E (Ni-Mo/Al₂O₃). The effects of liquid hourly space velocity (LHSV), temperature, gas/liquid ratio and catalyst on the heteroatom content of the liquids were studied, and levels of 5 ppm of nitrogen and 52 ppm of sulphur were reached at 450°C, 10 MPa hydrogen pressure, 0.08 kg H₂/kg feedstock and with the Harshaw HT-500E catalyst. The liquids obtained were hydroprocessed again at 420°C, 10 MPa hydrogen pressure and 0.06 kg H₂/kg feedstock to hydrogenate the aromatic structures. Under these conditions, the aromaticity was reduced considerably, and 39% naphtha and 35% kerosene fractions were obtained. 18 refs., 4 figs., 4 tabs.

  15. A Two-Stage Rural Household Demand Analysis: Microdata Evidence from Jiangsu Province, China

    OpenAIRE

    X.M. Gao; Eric J. Wailes; Gail L. Cramer

    1996-01-01

    In this paper we evaluate economic and demographic effects on China's rural household demand for nine food commodities: vegetables, pork, beef and lamb, poultry, eggs, fish, sugar, fruit, and grain; and five nonfood commodity groups: clothing, fuel, stimulants, housing, and durables. A two-stage budgeting allocation procedure is used to obtain an empirically tractable amalgamative demand system for food commodities which combines an upper-level AIDS model and a lower-level GLES as a modeling f...

  16. Large scale simulations of lattice QCD thermodynamics on Columbia Parallel Supercomputers

    International Nuclear Information System (INIS)

    Ohta, Shigemi

    1989-01-01

    The Columbia Parallel Supercomputer project aims at the construction of a parallel-processing, multi-gigaflop computer optimized for numerical simulations of lattice QCD. The project has three stages: a 16-node, 1/4 GF machine completed in April 1985, a 64-node, 1 GF machine completed in August 1987, and a 256-node, 16 GF machine now under construction. The machines all share a common architecture: a two-dimensional torus formed from a rectangular array of N₁ × N₂ independent and identical processors. A processor is capable of operating in a multi-instruction multi-data mode, except for periods of synchronous interprocessor communication with its four nearest neighbors. Here the thermodynamics simulations on the two working machines are reported. (orig./HSI)

  17. Maximally efficient two-stage screening: Determining intellectual disability in Taiwanese military conscripts

    Directory of Open Access Journals (Sweden)

    Chia-Chang Chien

    2009-01-01

    Objective: The purpose of this study was to apply a two-stage screening method for the large-scale intelligence screening of military conscripts. Methods: We collected 99 conscripted soldiers whose educational levels were senior high school or lower as participants. Every participant was required to take the Wisconsin Card Sorting Test (WCST) and the Wechsler Adult Intelligence Scale-Revised (WAIS-R) assessments. Results: Logistic regression analysis showed the conceptual level responses (CLR) index of the WCST was the most significant index for determining intellectual disability (ID; FIQ ≤ 84). We used the receiver operating characteristic curve to determine the optimum cut-off point of CLR. The optimum single cut-off point of CLR was 66; the two cut-off points were 49 and 66. Comparing the two-stage window screening with the two-stage positive screening, the area under the curve and the positive predictive value increased. Moreover, the cost of the two-stage window screening decreased by 59%. Conclusion: The two-stage window screening is more accurate and economical than the two-stage positive screening. Our results provide an example for the use of two-stage screening and the possibility of the WCST replacing the WAIS-R in large-scale screenings for ID in the future. Keywords: intellectual disability, intelligence screening, two-stage positive screening, Wisconsin Card Sorting Test, Wechsler Adult Intelligence Scale-Revised
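
    The single optimum cut-off point of such a screening index is commonly read off the ROC curve by maximizing Youden's J; the two-cutoff window screening extends the same idea with a second threshold. A sketch on synthetic CLR-like scores follows; the score distributions are invented for illustration.

```python
import numpy as np
from sklearn.metrics import roc_curve

def optimal_cutoff(scores, has_condition):
    """Single screening cut-off maximising Youden's J = sens + spec - 1
    on the ROC curve; a window screening adds a second threshold."""
    fpr, tpr, thr = roc_curve(has_condition, scores)
    i = int(np.argmax(tpr - fpr))
    return thr[i], tpr[i], 1 - fpr[i]

# synthetic CLR-like scores: lower values indicate ID, so negate them
rng = np.random.default_rng(6)
clr = np.concatenate([rng.normal(55, 8, 30), rng.normal(75, 8, 70)])
label = np.concatenate([np.ones(30), np.zeros(70)])   # 1 = ID (FIQ <= 84)
cut, sens, spec = optimal_cutoff(-clr, label)
print(f'screen positive if CLR <= {-cut:.0f}: '
      f'sensitivity {sens:.2f}, specificity {spec:.2f}')
```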

  18. Real-time trajectory optimization on parallel processors

    Science.gov (United States)

    Psiaki, Mark L.

    1993-01-01

    A parallel algorithm has been developed for rapidly solving trajectory optimization problems. The goal of the work has been to develop an algorithm suitable for real-time, on-line optimal guidance through repeated solution of a trajectory optimization problem. The algorithm has been developed on an INTEL iPSC/860 message-passing parallel processor. It uses a zero-order-hold discretization of a continuous-time problem and solves the resulting nonlinear programming problem using a custom-designed augmented Lagrangian nonlinear programming algorithm. The algorithm achieves parallelism of function, derivative, and search direction calculations through the principle of domain decomposition applied along the time axis. It has been encoded and tested on 3 example problems: the Goddard problem, the acceleration-limited, planar minimum-time to the origin problem, and a National Aerospace Plane minimum-fuel ascent guidance problem. Execution times as fast as 118 sec of wall clock time have been achieved for a 128-stage Goddard problem solved on 32 processors. A 32-stage minimum-time problem has been solved in 151 sec on 32 processors. A 32-stage National Aerospace Plane problem required 2 hours when solved on 32 processors. A speed-up factor of 7.2 has been achieved by using 32 nodes instead of 1 node to solve a 64-stage Goddard problem.

  19. PARALLEL ADAPTIVE MULTILEVEL SAMPLING ALGORITHMS FOR THE BAYESIAN ANALYSIS OF MATHEMATICAL MODELS

    KAUST Repository

    Prudencio, Ernesto; Cheung, Sai Hung

    2012-01-01

    In recent years, Bayesian model updating techniques based on measured data have been applied to many engineering and applied science problems. At the same time, parallel computational platforms are becoming increasingly more powerful and are being used more frequently by the engineering and scientific communities. Bayesian techniques usually require the evaluation of multi-dimensional integrals related to the posterior probability density function (PDF) of uncertain model parameters. The fact that such integrals cannot be computed analytically motivates the research of stochastic simulation methods for sampling posterior PDFs. One such algorithm is the adaptive multilevel stochastic simulation algorithm (AMSSA). In this paper we discuss the parallelization of AMSSA, formulating the necessary load balancing step as a binary integer programming problem. We present a variety of results showing the effectiveness of load balancing on the overall performance of AMSSA in a parallel computational environment.

  20. Evolution of a minimal parallel programming model

    International Nuclear Information System (INIS)

    Lusk, Ewing; Butler, Ralph; Pieper, Steven C.

    2017-01-01

    Here, we take a historical approach to our presentation of self-scheduled task parallelism, a programming model with its origins in early irregular and nondeterministic computations encountered in automated theorem proving and logic programming. We show how an extremely simple task model has evolved into a system, asynchronous dynamic load balancing (ADLB), and a scalable implementation capable of supporting sophisticated applications on today’s (and tomorrow’s) largest supercomputers; and we illustrate the use of ADLB with a Green’s function Monte Carlo application, a modern, mature nuclear physics code in production use. Our lesson is that by surrendering a certain amount of generality and thus applicability, a minimal programming model (in terms of its basic concepts and the size of its application programmer interface) can achieve extreme scalability without introducing complexity.
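
    The essence of self-scheduled task parallelism can be shown in miniature: idle workers pull the next available task from a shared pool, so irregular task sizes balance themselves. The sketch below uses Python's multiprocessing as a single-node analogue of ADLB's put/get model, which generalizes this pattern across MPI ranks on full machines.

```python
from multiprocessing import Pool

def work(n):
    """A self-contained unit of work; tasks carry everything they need,
    which is the essence of the self-scheduled task model."""
    return sum(i * i for i in range(n))

if __name__ == '__main__':
    tasks = list(range(1000, 2000))          # irregular task sizes
    with Pool(processes=4) as pool:
        # imap_unordered hands the next task to whichever worker goes
        # idle first: dynamic load balancing in miniature. ADLB scales
        # the same put/get pattern across MPI ranks.
        results = list(pool.imap_unordered(work, tasks, chunksize=1))
    print(len(results), max(results))
```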

  1. Parallelization in Modern C++

    CERN Multimedia

    CERN. Geneva

    2016-01-01

    The traditionally used and well established parallel programming models OpenMP and MPI are both targeting lower level parallelism and are meant to be as language agnostic as possible. For a long time, those models were the only widely available portable options for developing parallel C++ applications beyond using plain threads. This has strongly limited the optimization capabilities of compilers, has inhibited extensibility and genericity, and has restricted the use of those models together with other, modern higher level abstractions introduced by the C++11 and C++14 standards. The recent revival of interest in the industry and wider community for the C++ language has also spurred a remarkable amount of standardization proposals and technical specifications being developed. Those efforts however have so far failed to build a vision on how to seamlessly integrate various types of parallelism, such as iterative parallel execution, task-based parallelism, asynchronous many-task execution flows, continuation s...

  2. Novel design and sensitivity analysis of displacement measurement system utilizing knife edge diffraction for nanopositioning stages.

    Science.gov (United States)

    Lee, ChaBum; Lee, Sun-Kyu; Tarbutton, Joshua A

    2014-09-01

    This paper presents a novel design and sensitivity analysis of a knife edge-based optical displacement sensor that can be embedded in nanopositioning stages. The measurement system consists of a laser, two knife edge locations, two photodetectors, and auxiliary optics components in a simple configuration. The knife edge is installed on the stage parallel to its moving direction and two separated laser beams are incident on the knife edges. While the stage is in motion, the directly transmitted and diffracted light at each knife edge is superposed, producing interference at the detector. The interference is measured with two photodetectors in a differential amplification configuration. The performance of the proposed sensor was mathematically modeled, and the effect of the optical and mechanical parameters, wavelength, beam diameter, distances from laser to knife edge to photodetector, and knife edge topography, on sensor outputs was investigated to obtain a novel analytical method to predict linearity and sensitivity. From the model, all parameters except the beam diameter have a significant influence on the measurement range and sensitivity of the proposed sensing system. To validate the model, two types of knife edges with different edge topography were used for the experiment. By utilizing a shorter wavelength, a smaller sensor distance and higher edge quality, increased measurement sensitivity can be obtained. The model was experimentally validated and the results showed good agreement with the theoretically estimated results. This sensor is expected to be easily implemented into nanopositioning stage applications at low cost, and the mathematical model introduced here can be used as a tool for the design and performance estimation of knife edge-based sensors.
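
    The superposition behind the sensor follows the textbook Fresnel diffraction pattern of a straight knife edge, which is easy to evaluate. The sketch below computes the classical single-edge intensity profile; it is a simplified stand-in for the paper's model, which additionally accounts for edge topography and the differential two-detector readout.

```python
import numpy as np
from scipy.special import fresnel

def knife_edge_intensity(x, wavelength, distance):
    """Classical Fresnel diffraction behind a straight knife edge:
    I/I0 = [(C(w) + 1/2)**2 + (S(w) + 1/2)**2] / 2,
    with w = x * sqrt(2 / (wavelength * distance)), x > 0 on the
    illuminated side and `distance` the edge-to-detector spacing."""
    w = x * np.sqrt(2.0 / (wavelength * distance))
    S, C = fresnel(w)          # scipy returns (S, C) in this order
    return 0.5 * ((C + 0.5) ** 2 + (S + 0.5) ** 2)

x = np.linspace(-50e-6, 150e-6, 5)   # positions across the shadow boundary
I = knife_edge_intensity(x, wavelength=633e-9, distance=10e-3)
print(np.round(I, 3))  # ~0 in the shadow, 0.25 at the geometric edge,
                       # oscillating toward 1 in the illuminated region
```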

  3. Two technicians apply insulation to S-II second stage

    Science.gov (United States)

    1964-01-01

    Two technicians apply insulation to the outer surface of the S-II second stage booster for the Saturn V moon rocket. The towering 363-foot Saturn V was a multi-stage, multi-engine launch vehicle standing taller than the Statue of Liberty. Altogether, the Saturn V engines produced as much power as 85 Hoover Dams.

  4. Distribution of Evaporating CO2 in Parallel Microchannels

    DEFF Research Database (Denmark)

    Brix, Wiebke; Elmegaard, Brian

    2008-01-01

    The impact on the heat exchanger performance due to maldistribution of evaporating CO2 in parallel channels is investigated numerically. A 1D steady state simulation model of a microchannel evaporator is built using correlations from the literature to calculate frictional pressure drop and heat...... transfer coefficients. For two channels in parallel two different cases of maldistribution are studied. Firstly, the impact of a non-uniform air flow is considered, and secondly the impact of maldistribution of the two phases in the inlet manifold is investigated. The results for both cases are compared...

  5. A parallel model for SQL astronomical databases based on solid state storage. Application to the Gaia Archive PostgreSQL database

    Science.gov (United States)

    González-Núñez, J.; Gutiérrez-Sánchez, R.; Salgado, J.; Segovia, J. C.; Merín, B.; Aguado-Agelet, F.

    2017-07-01

    Query planning and optimisation algorithms in the most popular relational databases were developed at a time when hard disk drives were the only storage technology available. The advent of devices with higher parallel random access capacity, such as solid state disks, opens up the way for intra-machine parallel computing over large datasets. We describe a two-phase parallel model for the implementation of heavy analytical processes in single-instance PostgreSQL astronomical databases. This model is applied to two frequent astronomical problems: density maps and crossmatch computation with Quad Tree Cube (Q3C) indexes. They are implemented as part of the relational database infrastructure for the Gaia Archive and their performance is assessed. An improvement by a factor of 28.40 over sequential execution is observed in the reference implementation for a histogram computation. Speedup ratios of 3.7 and 4.0 are attained for the reference positional crossmatches considered. We observe large performance enhancements over sequential execution for both CPU- and disk-access-intensive computations, suggesting these methods might be useful with the growing data volumes in Astronomy.
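
    The two-phase pattern — parallel worker connections scanning disjoint partitions, followed by a cheap merge — can be sketched as follows for the density-map (histogram) case. The connection string, table and column names are hypothetical, and the psycopg2/multiprocessing combination is an illustrative stand-in for the paper's implementation.

```python
import numpy as np
from multiprocessing import Pool
import psycopg2

DSN = 'dbname=gaia'                      # hypothetical connection string
BINS = np.linspace(0.0, 360.0, 361)

def partial_histogram(bounds):
    """Phase 1: each worker opens its own connection and scans one
    disjoint right-ascension partition."""
    lo, hi = bounds
    with psycopg2.connect(DSN) as conn, conn.cursor() as cur:
        cur.execute('SELECT ra FROM gaia_source '
                    'WHERE ra >= %s AND ra < %s', (lo, hi))
        ra = np.fromiter((row[0] for row in cur), dtype=float)
    return np.histogram(ra, bins=BINS)[0]

if __name__ == '__main__':
    edges = np.linspace(0.0, 360.0, 9)              # 8 disjoint partitions
    chunks = list(zip(edges[:-1], edges[1:]))
    with Pool(8) as pool:                           # phase 1 in parallel
        partials = pool.map(partial_histogram, chunks)
    density = np.sum(partials, axis=0)              # phase 2: cheap merge
    print(int(density.sum()), 'sources binned')
```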

  6. Parallel algorithms for interactive manipulation of digital terrain models

    Science.gov (United States)

    Davis, E. W.; Mcallister, D. F.; Nagaraj, V.

    1988-01-01

    Interactive three-dimensional graphics applications, such as terrain data representation and manipulation, require extensive arithmetic processing. Massively parallel machines are attractive for this application since they offer high computational rates, and grid-connected architectures provide a natural mapping for grid-based terrain models. Presented here are algorithms for data movement on the Massively Parallel Processor (MPP) in support of pan and zoom functions over large data grids. This is an extension of earlier work that demonstrated real-time performance of graphics functions on grids equal in size to the physical dimensions of the MPP. When the dimensions of a data grid exceed the processing array size, data is packed in the array memory. Windows of the total data grid are interactively selected for processing. Movement of packed data is needed to distribute items across the array for efficient parallel processing. Execution time for data movement was found to exceed that for the arithmetic aspects of the graphics functions. Performance figures are given for routines written in MPP Pascal.

  7. Rock Mechanics Forsmark. Site descriptive modelling Forsmark stage 2.2

    Energy Technology Data Exchange (ETDEWEB)

    Glamheden, Rune; Fredriksson, Anders (Golder Associates AB (SE)); Roeshoff, Kennert; Karlsson, Johan (Berg Bygg Konsult AB (SE)); Hakami, Hossein (Itasca Geomekanik AB (SE)); Christiansson, Rolf (Swedish Nuclear Fuel and Waste Management Co., Stockholm (SE))

    2007-12-15

    The Swedish Nuclear Fuel and Waste Management Company (SKB) is undertaking site characterisation at two different locations, Forsmark and Laxemar/Simpevarp, with the objective of siting a geological repository for spent nuclear fuel. The characterisation of a site is an integrated work carried out by several disciplines including geology, rock mechanics, thermal properties, hydrogeology, hydrogeochemistry and surface systems. This report presents the rock mechanics model of the Forsmark site up to stage 2.2. The scope of work has included compilation and analysis of primary data of intact rock and fractures, estimation of the rock mass mechanical properties and estimation of the in situ state of stress at the Forsmark site. The laboratory results on intact rock and fractures in the target volume demonstrate a good quality rock mass that is strong, stiff and relatively homogeneous. The homogeneity is also supported by the lithological and the hydrogeological models. The properties of the rock mass have been initially estimated by two separate modelling approaches, one empirical and one theoretical. An overall final estimate of the rock mass properties were achieved by integrating the results from the two models via a process termed 'Harmonization'. Both the tensile tests, carried out perpendicular and parallel to the foliation, and the theoretical analyses of the rock mass properties in directions parallel and perpendicular to the major principal stress, result in parameter values almost independent of direction. This indicates that the rock mass in the target volume is isotropic. The rock mass quality in the target volume appears to be of high and uniform quality. Those portions with reduced rock mass quality that do exist are mainly related to sections with increased fracture frequency. Such sections are associated with deformation zones according to the geological description. The results of adjacent rock domains and fracture domains of the target

  8. Parallel performance of TORT on the CRAY J90: Model and measurement

    International Nuclear Information System (INIS)

    Barnett, A.; Azmy, Y.Y.

    1997-10-01

    A limitation on the parallel performance of TORT on the CRAY J90 is the amount of extra work introduced by the multitasking algorithm itself. The extra work beyond that of the serial version of the code, called overhead, arises from the synchronization of the parallel tasks and the accumulation of results by the master task. The goal of recent updates to TORT was to reduce the time consumed by these activities. To help understand which components of the multitasking algorithm contribute significantly to the overhead, a parallel performance model was constructed and compared to measurements of actual timings of the code
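
    A performance model of this kind typically decomposes wall time into a divisible compute term plus overhead terms for synchronization and the master's accumulation of results. The sketch below fits such a model to timings; the functional form and the numbers are invented for illustration and are not the actual TORT model.

```python
import numpy as np
from scipy.optimize import curve_fit

def wall_time(p, t_comp, t_sync, t_gather):
    """Illustrative overhead model: perfectly divisible compute plus a
    synchronisation term and a master-accumulation term that grow with
    the number of tasks. Not the actual TORT model."""
    return t_comp / p + t_sync * np.log2(p) + t_gather * p

procs = np.array([1.0, 2.0, 4.0, 8.0, 16.0])
measured = np.array([100.0, 52.0, 28.5, 17.0, 13.0])  # hypothetical timings
(t_comp, t_sync, t_gather), _ = curve_fit(wall_time, procs, measured,
                                          p0=[100.0, 1.0, 0.1])
print(f'compute {t_comp:.1f}s, sync {t_sync:.2f}s, gather {t_gather:.3f}s')
speedup = (wall_time(1, t_comp, t_sync, t_gather) /
           wall_time(32, t_comp, t_sync, t_gather))
print('predicted speedup on 32 CPUs:', round(speedup, 2))
```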

  9. Implementing parallel spreadsheet models for health policy decisions: The impact of unintentional errors on model projections.

    Science.gov (United States)

    Bailey, Stephanie L; Bono, Rose S; Nash, Denis; Kimmel, April D

    2018-01-01

    Spreadsheet software is increasingly used to implement systems science models informing health policy decisions, both in academia and in practice where technical capacity may be limited. However, spreadsheet models are prone to unintentional errors that may not always be identified using standard error-checking techniques. Our objective was to illustrate, through a methodologic case study analysis, the impact of unintentional errors on model projections by implementing parallel model versions. We leveraged a real-world need to revise an existing spreadsheet model designed to inform HIV policy. We developed three parallel versions of a previously validated spreadsheet-based model; versions differed by the spreadsheet cell-referencing approach (named single cells; column/row references; named matrices). For each version, we implemented three model revisions (re-entry into care; guideline-concordant treatment initiation; immediate treatment initiation). After standard error-checking, we identified unintentional errors by comparing model output across the three versions. Concordant model output across all versions was considered error-free. We calculated the impact of unintentional errors as the percentage difference in model projections between model versions with and without unintentional errors, using +/-5% difference to define a material error. We identified 58 original and 4,331 propagated unintentional errors across all model versions and revisions. Over 40% (24/58) of original unintentional errors occurred in the column/row reference model version; most (23/24) were due to incorrect cell references. Overall, >20% of model spreadsheet cells had material unintentional errors. When examining error impact along the HIV care continuum, the percentage difference between versions with and without unintentional errors ranged from +3% to +16% (named single cells), +26% to +76% (column/row reference), and 0% (named matrices). Standard error-checking techniques may not
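
    The parallel-versions technique reduces to running independently built implementations on identical inputs and flagging outputs that differ materially (the study's +/-5% criterion). A toy sketch with a deliberately buggy third version:

```python
import numpy as np

def compare_versions(inputs, implementations, tol=0.05):
    """Run independently built implementations of the same model on
    identical inputs and flag outputs that deviate from the first
    (reference) version by more than the material-error tolerance."""
    out = np.array([f(**inputs) for f in implementations])
    flagged = np.abs((out - out[0]) / out[0]) > tol
    return out, flagged

# three versions of a toy care-continuum calculation; version C carries a
# bug standing in for a wrong cell reference
def version_a(n, p_link, p_treat): return n * p_link * p_treat
def version_b(n, p_link, p_treat): return n * (p_link * p_treat)
def version_c(n, p_link, p_treat): return n * p_link * p_link   # bug!

out, flagged = compare_versions({'n': 1000, 'p_link': 0.8, 'p_treat': 0.6},
                                [version_a, version_b, version_c])
print(out)       # [480. 480. 640.]
print(flagged)   # [False False  True]
```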

  10. Two-Stage Method Based on Local Polynomial Fitting for a Linear Heteroscedastic Regression Model and Its Application in Economics

    Directory of Open Access Journals (Sweden)

    Liyun Su

    2012-01-01

    We introduce the extension of local polynomial fitting to the linear heteroscedastic regression model. First, local polynomial fitting is applied to estimate the heteroscedastic function, and the coefficients of the regression model are then obtained by generalized least squares. One noteworthy feature of our approach is that we avoid testing for heteroscedasticity by improving the traditional two-stage method. Owing to the nonparametric technique of local polynomial estimation, we do not need to know the heteroscedastic function, so the estimation precision can be improved when the heteroscedastic function is unknown. Furthermore, we focus on comparison of parameters and reach an optimal fit. We also verify the asymptotic normality of the parameters through numerical simulations. Finally, the approach is applied to a case in economics, indicating that the method is effective in finite-sample situations.
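
    A minimal sketch of the two-stage idea, with a local-constant kernel smoother standing in for the paper's local polynomial fit of the variance function (bandwidth and data are illustrative):

        # Two-stage estimator sketch: (1) fit OLS and smooth the squared
        # residuals with a kernel estimate of the variance function,
        # (2) refit by generalized (weighted) least squares. A local-
        # constant smoother stands in for the paper's local polynomial.
        import numpy as np

        rng = np.random.default_rng(0)
        n = 300
        x = rng.uniform(0, 10, n)
        sigma = 0.2 + 0.3 * x                        # heteroscedastic noise
        y = 1.0 + 2.0 * x + sigma * rng.standard_normal(n)
        X = np.column_stack([np.ones(n), x])

        # Stage 1: OLS, then kernel smoothing of squared residuals.
        beta_ols, *_ = np.linalg.lstsq(X, y, rcond=None)
        resid2 = (y - X @ beta_ols) ** 2

        def nw_variance(x0, h=0.8):
            w = np.exp(-0.5 * ((x - x0) / h) ** 2)   # Gaussian kernel
            return w @ resid2 / w.sum()

        var_hat = np.array([nw_variance(xi) for xi in x])

        # Stage 2: weighted least squares with weights 1/sigma^2(x).
        W = 1.0 / var_hat
        XtWX = X.T @ (W[:, None] * X)
        beta_gls = np.linalg.solve(XtWX, X.T @ (W * y))
        print("OLS:", beta_ols, " two-stage GLS:", beta_gls)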

  11. A Two-Stage DEA to Analyze the Effect of Entrance Deregulation on Iranian Insurers: A Robust Approach

    OpenAIRE

    Jalali Naini, Seyed Gholamreza; Nouralizadeh, Hamid Reza

    2012-01-01

    We use a two-stage data envelopment analysis (DEA) model to analyze the effects of entrance deregulation on efficiency in the Iranian insurance market. In the first stage, we propose a robust optimization approach to overcome the sensitivity of DEA results to uncertainty in the output parameters. The efficiency of each ongoing insurer is estimated using our proposed robust DEA model. The insurers are then ranked based on their relative efficiency scores for an eight-year...
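
    As a point of reference for the first-stage scoring, here is a sketch of the standard (non-robust) input-oriented CCR DEA envelopment model; the paper's robust variant additionally protects against output uncertainty. Inputs, outputs, and dimensions here are hypothetical:

        # Standard input-oriented CCR DEA envelopment model, solved per
        # decision-making unit (DMU): min theta s.t. X@lam <= theta*x_o,
        # Y@lam >= y_o, lam >= 0. Data are hypothetical.
        import numpy as np
        from scipy.optimize import linprog

        def ccr_efficiency(X, Y, o):
            """X: inputs x DMUs, Y: outputs x DMUs, o: index of the DMU."""
            m, n = X.shape
            s = Y.shape[0]
            c = np.r_[1.0, np.zeros(n)]                 # minimize theta
            A_ub = np.block([[-X[:, [o]], X],           # X@lam - theta*x_o <= 0
                             [np.zeros((s, 1)), -Y]])   # -Y@lam <= -y_o
            b_ub = np.r_[np.zeros(m), -Y[:, o]]
            res = linprog(c, A_ub=A_ub, b_ub=b_ub,
                          bounds=[(None, None)] + [(0, None)] * n)
            return res.fun

        X = np.array([[4.0, 6.0, 9.0], [3.0, 2.0, 5.0]])  # 2 inputs, 3 insurers
        Y = np.array([[5.0, 7.0, 8.0]])                   # 1 output
        for o in range(3):
            print(f"DMU {o}: efficiency = {ccr_efficiency(X, Y, o):.3f}")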

  12. A Green's function method for two-dimensional reactive solute transport in a parallel fracture-matrix system

    Science.gov (United States)

    Chen, Kewei; Zhan, Hongbin

    2018-06-01

    The reactive solute transport in a single fracture bounded by upper and lower matrixes is a classical problem that captures the dominant factors affecting transport behavior beyond the pore scale. A parallel fracture-matrix system, which considers the interaction among multiple parallel fractures, is an extension of the single fracture-matrix system. Existing analytical or semi-analytical solutions for solute transport in a parallel fracture-matrix system simplify the problem to various degrees, such as neglecting the transverse dispersion in the fracture and/or the longitudinal diffusion in the matrix. The difficulty of solving the full two-dimensional (2-D) problem lies in the calculation of the mass exchange between the fracture and the matrix. In this study, we propose an innovative Green's function approach to address the 2-D reactive solute transport in a parallel fracture-matrix system. The flux at the interface is calculated numerically. It is found that the transverse dispersion in the fracture can be safely neglected due to the small scale of the fracture aperture. However, neglecting the longitudinal matrix diffusion overestimates the concentration profile near the solute entrance face and underestimates it at the far side. The error caused by neglecting the longitudinal matrix diffusion decreases with increasing Peclet number, and the longitudinal matrix diffusion has no obvious influence on the concentration profile in the long term. The developed model is applied to a dense non-aqueous-phase liquid (DNAPL) contamination field case in the New Haven Arkose of Connecticut, USA, to estimate trichloroethylene (TCE) behavior over 40 years. The ratio of TCE mass stored in the matrix to the injected TCE mass rises above 90% in less than 10 years.

  13. Enhancing parallelism of tile bidiagonal transformation on multicore architectures using tree reduction

    KAUST Repository

    Ltaief, Hatem

    2012-01-01

    The objective of this paper is to enhance the parallelism of the tile bidiagonal transformation using tree reduction on multicore architectures. First introduced by Ltaief et al. [LAPACK Working Note #247, 2011], the bidiagonal transformation using tile algorithms with a two-stage approach has shown very promising results on square matrices. However, for tall and skinny matrices, the inherent problem of processing the panel in a domino-like fashion generates unnecessary sequential tasks. By using tree reduction, the panel is horizontally split, which creates another dimension of parallelism and engenders many concurrent tasks to be dynamically scheduled on the available cores. The results reported in this paper are very encouraging. The new tile bidiagonal transformation, targeting tall and skinny matrices, outperforms the state-of-the-art numerical linear algebra libraries LAPACK V3.2 and Intel MKL ver. 10.3 by up to a 29-fold speedup and the standard two-stage PLASMA BRD by up to a 20-fold speedup, on an eight-socket hexa-core AMD Opteron multicore shared-memory system. © 2012 Springer-Verlag.
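
    The tree-reduction idea itself is easy to sketch: combining panels pairwise takes log2(n) rounds of independent combines instead of a sequential domino chain. Threads below stand in for the paper's dynamically scheduled tile tasks:

        # Tree (pairwise) reduction sketch: each round combines disjoint
        # pairs concurrently, so the chain of n-1 sequential combines
        # becomes log2(n) rounds of independent work.
        from concurrent.futures import ThreadPoolExecutor

        def tree_reduce(items, combine):
            with ThreadPoolExecutor() as pool:
                while len(items) > 1:
                    pairs = [(items[i], items[i + 1])
                             for i in range(0, len(items) - 1, 2)]
                    reduced = list(pool.map(lambda p: combine(*p), pairs))
                    if len(items) % 2:        # odd element carries over
                        reduced.append(items[-1])
                    items = reduced
            return items[0]

        print(tree_reduce(list(range(16)), lambda a, b: a + b))  # 120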

  14. Unsteady free convection MHD flow between two heated vertical parallel conducting plates

    International Nuclear Information System (INIS)

    Sanyal, D.C.; Adhikari, A.

    2006-01-01

    Unsteady free convection flow of a viscous incompressible electrically conducting fluid between two heated conducting vertical parallel plates subjected to a uniform transverse magnetic field is considered. Approximate analytical solutions for the velocity, induced field and temperature distribution are obtained for small and large values of the magnetic Reynolds number. The problem is also extended to the thermometric case. (author)

  15. Geology - Background complementary studies. Forsmark modelling stage 2.2

    Energy Technology Data Exchange (ETDEWEB)

    Stephens, Michael B. [Geological Survey of Sweden, Uppsala (Sweden); Skagius, Kristina [Kemakta Konsult AB, Stockholm (Sweden)] (eds.)

    2007-09-15

    During Forsmark model stage 2.2, seven complementary geophysical and geological studies were initiated by the geological modelling team, in direct connection with and as a background support to the deterministic modelling of deformation zones. One of these studies involved a field control on the character of two low magnetic lineaments with NNE and NE trends inside the target volume. The interpretation of these lineaments formed one of the late deliveries to SKB that took place after the data freeze for model stage 2.2 and during the initial stage of the modelling work. Six studies involved a revised processing and analysis of reflection seismic, refraction seismic and selected oriented borehole radar data, all of which had been presented earlier in connection with the site investigation programme. A prime aim of all these studies was to provide a better understanding of the geological significance of indirect geophysical data to the geological modelling team. Such essential interpretative work was lacking in the material acquired in connection with the site investigation programme. The results of these background complementary studies are published together in this report. The titles and authors of the seven background complementary studies are presented below. Summaries of the results of each study, with a focus on the implications for the geological modelling of deformation zones, are presented in the master geological report, SKB-R--07-45. The sections in the master report, where reference is made to each background complementary study and where the summaries are placed, are also provided. The individual reports are listed in the order that they are referred to in the master geological report and as they appear in this report. 1. Scan line fracture mapping and magnetic susceptibility measurements across two low magnetic lineaments with NNE and NE trend, Forsmark. Jesper Petersson, Ulf B. Andersson and Johan Berglund. 2. Integrated interpretation of surface and

  17. Large Scale Parallel DNA Detection by Two-Dimensional Solid-State Multipore Systems.

    Science.gov (United States)

    Athreya, Nagendra Bala Murali; Sarathy, Aditya; Leburton, Jean-Pierre

    2018-04-23

    We describe a scalable device design of a dense array of multiple nanopores made from nanoscale semiconductor materials to detect and identify translocations of many biomolecules in a massively parallel detection scheme. We use molecular dynamics coupled to nanoscale device simulations to illustrate the ability of this device setup to uniquely identify DNA parallel translocations. We show that the transverse sheet currents along membranes are immune to the crosstalk effects arising from simultaneous translocations of biomolecules through multiple pores, due to their ability to sense only the local potential changes. We also show that electronic sensing across the nanopore membrane offers a higher detection resolution compared to ionic current blocking technique in a multipore setup, irrespective of the irregularities that occur while fabricating the nanopores in a two-dimensional membrane.

  18. Fundamental Parallel Algorithms for Private-Cache Chip Multiprocessors

    DEFF Research Database (Denmark)

    Arge, Lars Allan; Goodrich, Michael T.; Nelson, Michael

    2008-01-01

    ... about the way cores are interconnected, for we assume that all inter-processor communication occurs through the memory hierarchy. We study several fundamental problems, including prefix sums, selection, and sorting, which often form the building blocks of other parallel algorithms. Indeed, we present ... two sorting algorithms, a distribution sort and a mergesort. Our algorithms are asymptotically optimal in terms of parallel cache accesses and space complexity under reasonable assumptions about the relationships between the number of processors, the size of memory, and the size of cache blocks. ... In addition, we study sorting lower bounds in a computational model, which we call the parallel external-memory (PEM) model, that formalizes the essential properties of our algorithms for private-cache CMPs.
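
    One of the cited building blocks, the prefix sum, illustrates the block-structured pattern such models analyze: each processor scans its own block, the block totals are scanned, and the offsets are applied. A sketch, with a process pool standing in for the private-cache cores:

        # Block-structured parallel prefix sums: local scans per block,
        # a scan of block totals, then an offset pass. A process pool
        # stands in for the private-cache cores of a CMP.
        from concurrent.futures import ProcessPoolExecutor
        from itertools import accumulate

        def local_scan(block):
            return list(accumulate(block))

        def parallel_prefix_sums(data, p=4):
            size = -(-len(data) // p)                    # ceil division
            blocks = [data[i:i + size] for i in range(0, len(data), size)]
            with ProcessPoolExecutor(max_workers=p) as pool:
                scanned = list(pool.map(local_scan, blocks))
            offsets = [0] + list(accumulate(b[-1] for b in scanned))[:-1]
            return [v + off for b, off in zip(scanned, offsets) for v in b]

        if __name__ == "__main__":
            print(parallel_prefix_sums(list(range(1, 17))))  # 1, 3, 6, 10, ...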

  19. Analysis of clinical complication data for radiation hepatitis using a parallel architecture model

    International Nuclear Information System (INIS)

    Jackson, A.; Haken, R.K. ten; Robertson, J.M.; Kessler, M.L.; Kutcher, G.J.; Lawrence, T.S.

    1995-01-01

    Purpose: The detailed knowledge of dose volume distributions available from the three-dimensional (3D) conformal radiation treatment of tumors in the liver (reported elsewhere) offers new opportunities to quantify the effect of volume on the probability of producing radiation hepatitis. We aim to test a new parallel architecture model of normal tissue complication probability (NTCP) with these data. Methods and Materials: Complication data and dose volume histograms from a total of 93 patients with normal liver function, treated on a prospective protocol with 3D conformal radiation therapy and intraarterial hepatic fluorodeoxyuridine, were analyzed with a new parallel architecture model. Patient treatments fell into six categories differing in doses delivered and volumes irradiated. By modeling the radiosensitivity of liver subunits, we are able to use dose volume histograms to calculate the fraction of the liver damaged in each patient. A complication results if this fraction exceeds the patient's functional reserve. To determine the patient distribution of functional reserves and the subunit radiosensitivity, the maximum likelihood method was used to fit the observed complication data. Results: The parallel model fit the complication data well, although uncertainties on the functional reserve distribution and subunit radiosensitivity are highly correlated. Conclusion: The observed radiation hepatitis complications show a threshold effect that can be described well with a parallel architecture model. However, additional independent studies are required to better determine the parameters defining the functional reserve distribution and subunit radiosensitivity.
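
    The parallel-architecture calculation can be sketched directly from the description above: a dose-response gives each subunit's damage probability, the dose volume histogram gives the damaged fraction, and the complication probability is the chance that this fraction exceeds the patient's functional reserve. All parameter values below are hypothetical, not the fitted ones:

        # Parallel-architecture NTCP sketch: logistic subunit dose-
        # response, damaged fraction from the DVH, and a normal
        # distribution of functional reserves. Parameters hypothetical.
        import math

        def subunit_damage(dose, d50=35.0, k=3.0):
            """Logistic subunit dose-response probability."""
            return 1.0 / (1.0 + (d50 / dose) ** k) if dose > 0 else 0.0

        def damaged_fraction(dvh):
            """dvh: list of (dose_Gy, fractional_volume) bins."""
            return sum(v * subunit_damage(d) for d, v in dvh)

        def ntcp(dvh, reserve_mean=0.5, reserve_sd=0.1):
            """P(functional reserve < damaged fraction), normal reserve."""
            z = (damaged_fraction(dvh) - reserve_mean) / reserve_sd
            return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

        dvh = [(10.0, 0.3), (30.0, 0.4), (50.0, 0.3)]
        print(f"damaged fraction: {damaged_fraction(dvh):.2f}")
        print(f"NTCP: {ntcp(dvh):.2f}")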

  20. Teaching Scientific Computing: A Model-Centered Approach to Pipeline and Parallel Programming with C

    Directory of Open Access Journals (Sweden)

    Vladimiras Dolgopolovas

    2015-01-01

    The aim of this study is to present an approach to the introduction to pipeline and parallel computing, using a model of the multiphase queueing system. Pipeline computing, including software pipelines, is among the key concepts in modern computing and electronics engineering. Modern computer science and engineering education requires a comprehensive curriculum, so the introduction to pipeline and parallel computing is an essential topic to be included. At the same time, the topic is among the most motivating due to its comprehensive multidisciplinary and technical requirements. To enhance the educational process, the paper proposes a novel model-centered framework and develops the relevant learning objects. This allows implementing an educational platform for a constructivist learning process, enabling learners' experimentation with the provided programming models, building learners' competences in modern scientific research and computational thinking, and capturing the relevant technical knowledge. It also provides an integral platform that allows a simultaneous and comparative introduction to pipelining and parallel computing. The programming language C was chosen for developing the programming models, with the message passing interface (MPI) and OpenMP as parallelization tools.

  1. A new model for reliability optimization of series-parallel systems with non-homogeneous components

    International Nuclear Information System (INIS)

    Feizabadi, Mohammad; Jahromi, Abdolhamid Eshraghniaye

    2017-01-01

    In discussions related to reliability optimization using redundancy allocation, one of the structures that has attracted the attention of many researchers is the series-parallel structure. In models previously presented for reliability optimization of series-parallel systems, there is a restrictive assumption that all components of a subsystem must be homogeneous. This constraint limits system designers in selecting components and prevents achieving higher levels of reliability. In this paper, a new model is proposed for reliability optimization of series-parallel systems which makes possible the use of non-homogeneous components in each subsystem. As a result of this flexibility, the process of supplying system components becomes easier. To solve the proposed model, since the redundancy allocation problem (RAP) belongs to the NP-hard class of optimization problems, a genetic algorithm (GA) is developed. The computational results of the designed GA are indicative of the high performance of the proposed model in increasing system reliability and decreasing costs. - Highlights: • In this paper, a new model is proposed for reliability optimization of series-parallel systems. • In previous models, there is a restricting assumption that all components of a subsystem must be homogeneous. • The presented model allows the subsystems' components to be non-homogeneous where required. • The computational results demonstrate the high performance of the proposed model in improving reliability and reducing costs.
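
    The objective such a GA maximizes is easy to state: with parallel redundancy inside each subsystem and the subsystems in series, the system reliability is the product below, now with possibly different component reliabilities inside a subsystem. A minimal sketch with hypothetical values:

        # Series-parallel reliability with non-homogeneous components:
        # each subsystem fails only if all of its (possibly different)
        # components fail; the system works only if every subsystem works.
        from math import prod

        def subsystem_reliability(component_reliabilities):
            return 1.0 - prod(1.0 - r for r in component_reliabilities)

        def system_reliability(subsystems):
            return prod(subsystem_reliability(s) for s in subsystems)

        # Mixed components per subsystem (hypothetical reliabilities):
        design = [[0.90, 0.85], [0.95], [0.80, 0.80, 0.75]]
        print(f"R_system = {system_reliability(design):.4f}")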

  2. Reduced-Order Structure-Preserving Model for Parallel-Connected Three-Phase Grid-Tied Inverters: Preprint

    Energy Technology Data Exchange (ETDEWEB)

    Johnson, Brian B [National Renewable Energy Laboratory (NREL), Golden, CO (United States); Purba, Victor [University of Minnesota; Jafarpour, Saber [University of California, Santa Barbara; Bullo, Francesco [University of California, Santa Barbara; Dhople, Sairaj [University of Minnesota

    2017-08-31

    Given that next-generation infrastructures will contain large numbers of grid-connected inverters and that these interfaces will be satisfying a growing fraction of system load, it is imperative to analyze the impacts of power electronics on such systems. However, since each inverter model has a relatively large number of dynamic states, it would be impractical to execute complex system models in which the full dynamics of each inverter are retained. To address this challenge, we derive a reduced-order structure-preserving model for parallel-connected grid-tied three-phase inverters. Here, each inverter in the system is assumed to have a full-bridge topology and an LCL filter at the point of common coupling, and the control architecture for each inverter includes a current controller, a power controller, and a phase-locked loop for grid synchronization. We outline a structure-preserving reduced-order inverter model for the setting where the parallel inverters are each designed such that the filter components and controller gains scale linearly with the power rating. By structure preserving, we mean that the reduced-order three-phase inverter model is also composed of an LCL filter, a power controller, a current controller, and a PLL. That is, we show that the system of parallel inverters can be modeled exactly as one aggregated inverter unit, and this equivalent model has the same number of dynamical states as an individual inverter in the paralleled system. Numerical simulations validate the reduced-order models.
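
    The aggregation rule can be sketched as follows: if per-unit filter components and gains scale with rating, then N identical paralleled inverters collapse to one equivalent inverter of N times the rating, with series impedances divided by N and shunt elements multiplied by N. Parameter names and values below are hypothetical, not the paper's:

        # Aggregation sketch for N identical paralleled inverters:
        # series impedances divide by N, shunt elements multiply by N,
        # preserving the single-inverter model structure. All names and
        # values are hypothetical.

        def aggregate_inverter(params, n):
            return {
                "rating_kva": params["rating_kva"] * n,
                "Lf": params["Lf"] / n,   # filter inductances (series)
                "Lg": params["Lg"] / n,
                "Cf": params["Cf"] * n,   # filter capacitance (shunt)
                "Rf": params["Rf"] / n,   # parasitic resistance (series)
            }

        unit = {"rating_kva": 50.0, "Lf": 1.0e-3, "Lg": 0.2e-3,
                "Cf": 24.0e-6, "Rf": 0.06}
        print(aggregate_inverter(unit, n=10))   # one 500-kVA equivalent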

  3. Design and Analysis of a Split Deswirl Vane in a Two-Stage Refrigeration Centrifugal Compressor

    Directory of Open Access Journals (Sweden)

    Jeng-Min Huang

    2014-09-01

    This study numerically investigated the influence of using the second row of a double-row deswirl vane as the inlet guide vane of the second stage on the performance of the first stage in a two-stage refrigeration centrifugal compressor. The working fluid was R134a, and the turbulence model was the Spalart-Allmaras model. The parameters discussed included the cutting position of the deswirl vane, the staggered angle of the two rows of vanes, and the rotation angle of the second row. The results showed that the performance at a staggered angle of 7.5° was better than that at 15° or 22.5°. When the staggered angle was 7.5°, the performance when cutting at 1/3 and 1/2 of the original deswirl vane length differed only slightly from that of the original vane but was clearly better than that when cutting at 2/3. When the staggered angle was 15°, the cutting position influenced the performance only slightly. At a low flow rate prone to surge, when the second row at a staggered angle of 7.5°, cut at half the vane length, was rotated 10°, the efficiency was reduced by only about 0.6%, and 10% of the swirl remained as the preswirl of the second stage, which is generally better than the other designs.

  4. OpenMP parallelization of a gridded SWAT (SWATG)

    Science.gov (United States)

    Zhang, Ying; Hou, Jinliang; Cao, Yongpan; Gu, Juan; Huang, Chunlin

    2017-12-01

    Large-scale, long-term and high spatial resolution simulation is a common issue in environmental modeling. A gridded Hydrologic Response Unit (HRU)-based Soil and Water Assessment Tool (SWATG), which integrates a grid modeling scheme with different spatial representations, also presents such problems: the computation time limits applications of very high resolution, large-scale watershed modeling. The OpenMP (Open Multi-Processing) parallel application interface was therefore integrated with SWATG (called SWATGP) to accelerate grid modeling at the HRU level. Such a parallel implementation takes better advantage of the computational power of a shared-memory computer system. We conducted two experiments at multiple temporal and spatial scales of hydrological modeling using SWATG and SWATGP on a high-end server. At 500-m resolution, SWATGP was found to be up to nine times faster than SWATG in modeling a roughly 2000 km² watershed with one CPU and a 15-thread configuration. The study results demonstrate that parallel models save considerable time relative to traditional sequential simulation runs. Parallel computation of environmental models is beneficial for model applications, especially at large spatial and temporal scales and at high resolutions. The proposed SWATGP model is thus a promising tool for large-scale and high-resolution water resources research and management, in addition to offering data fusion and model coupling ability.
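
    The underlying pattern is a parallel loop over independent HRU computations; the paper uses OpenMP inside the model's native code, but the same fan-out can be sketched with a Python process pool and a hypothetical per-HRU stub:

        # Shared-memory fan-out over independent HRU computations,
        # analogous in spirit to an OpenMP parallel-for over the HRU
        # loop. The per-HRU water-balance stub is hypothetical.
        from concurrent.futures import ProcessPoolExecutor

        def simulate_hru(hru):
            """Stub for one gridded HRU's daily water-balance step."""
            precip, et, storage = hru
            return max(0.0, precip - et - storage)   # runoff

        def simulate_day(hrus, workers=15):
            with ProcessPoolExecutor(max_workers=workers) as pool:
                return list(pool.map(simulate_hru, hrus, chunksize=256))

        if __name__ == "__main__":
            hrus = [(10.0, 3.0, 2.0)] * 10_000   # hypothetical 500-m grid
            print(sum(simulate_day(hrus)))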

  5. He²⁺ HEATING VIA PARAMETRIC INSTABILITIES OF PARALLEL PROPAGATING ALFVÉN WAVES WITH AN INCOHERENT SPECTRUM

    Energy Technology Data Exchange (ETDEWEB)

    He, Peng; Gao, Xinliang; Lu, Quanming; Wang, Shui, E-mail: gaoxl@mail.ustc.edu.cn [CAS Key Laboratory of Geospace Environment, Department of Geophysics and Planetary Science, University of Science and Technology of China, Hefei 230026 (China)

    2016-08-10

    The preferential heating of heavy ions in the solar corona and solar wind has been a long-standing hot topic. In this paper we use a one-dimensional hybrid simulation model to investigate the heating of He²⁺ particles during the parametric instabilities of parallel propagating Alfvén waves with an incoherent spectrum. The evolution of the parametric instabilities proceeds in two stages, with heavy-ion heating occurring throughout. In the first stage, density fluctuations are generated by the modulation of the pump Alfvén waves with a spectrum, which then results in rapid coupling with the pump Alfvén waves and a cascade of the magnetic fluctuations. In the second stage, each pump Alfvén wave decays into a forward density mode and a backward daughter Alfvén mode, similar to the case of a monochromatic pump Alfvén wave. In both stages, perpendicular heating of He²⁺ particles occurs. This is caused by the cyclotron resonance between He²⁺ particles and the high-frequency magnetic fluctuations, whereas the Landau resonance between He²⁺ particles and the density fluctuations leads to parallel heating of He²⁺ particles. The influence of the drift velocity between the protons and the He²⁺ particles on the heating of He²⁺ particles is also discussed in this paper.

  6. Device for two-stage cementing of casing

    Energy Technology Data Exchange (ETDEWEB)

    Kudimov, D A; Goncharevskiy, Ye N; Luneva, L G; Shchelochkov, S N; Shil' nikova, L N; Tereshchenko, V G; Vasiliev, V A; Volkova, V V; Zhdokov, K I

    1981-01-01

    A device is claimed for two-stage cementing of casing. It consists of a body with lateral plugging vents, upper and lower movable sleeves, a check valve with axial channels situated in the lower sleeve, and a displacement limiter for the lower sleeve. To improve the cementing of the casing by preventing overflow of cementing fluids from the annular space into the first-stage casing, the limiter is equipped with a spring rod capable of covering the axial channels of the check valve while in operating mode. In addition, the rod is equipped in its upper part with a reinforced area under the axial channels of the check valve.

  7. Toward a model framework of generalized parallel componential processing of multi-symbol numbers.

    Science.gov (United States)

    Huber, Stefan; Cornelsen, Sonja; Moeller, Korbinian; Nuerk, Hans-Christoph

    2015-05-01

    In this article, we propose and evaluate a new model framework of parallel componential multi-symbol number processing, generalizing the idea of parallel componential processing of multi-digit numbers to the case of negative numbers by treating the polarity sign similarly to a single digit. In a first step, we evaluated this account by defining and investigating a sign-decade compatibility effect for the comparison of positive and negative numbers, which extends the unit-decade compatibility effect in two-digit number processing. We then evaluated whether the model can account for previous findings in negative number processing. In a magnitude comparison task, in which participants had to single out the larger of two integers, we observed a reliable sign-decade compatibility effect, with prolonged reaction times for incompatible pairs (e.g., -97 vs. +53, in which the number with the larger decade digit has the smaller, i.e., negative, polarity sign) as compared with sign-decade compatible number pairs (e.g., -53 vs. +97). Moreover, an analysis of participants' eye fixation behavior corroborated our model of parallel componential processing of multi-symbol numbers. These results are discussed in light of concurrent theoretical notions about negative number processing. On the basis of the present results, we propose a generalized integrated model framework of parallel componential multi-symbol processing. (c) 2015 APA, all rights reserved.

  8. Pilot investigation of two-stage biofiltration for removal of natural organic matter in drinking water treatment.

    Science.gov (United States)

    Fu, Jie; Lee, Wan-Ning; Coleman, Clark; Meyer, Melissa; Carter, Jason; Nowack, Kirk; Huang, Ching-Hua

    2017-01-01

    A pilot study employing two parallel trains of two-stage biofiltration, i.e., a sand/anthracite (SA) biofilter followed by a biologically active granular activated carbon (GAC) contactor, was conducted to test the efficiency, feasibility and stability of biofiltration for removing natural organic matter (NOM) after coagulation in a drinking water treatment plant. Results showed the biofiltration process could effectively remove turbidity and NOM (24% of dissolved organic carbon (DOC), >57% of UV254, and >44% of SUVA254), where the SA biofilters showed a strong capacity for turbidity removal, while the GAC contactors played the dominant role in NOM removal. The vertical profile of water quality in the GAC contactors indicated the middle-upper portion was the critical zone for the removal of NOM, where relatively higher adsorption and enhanced biological removal were afforded. Fluorescence excitation-emission matrix (EEM) analysis of NOM showed that the GAC contactors effectively decreased the content of the humic-like component, while the protein-like component was refractory to the biofiltration process. Nutrient (NH4-N and PO4-P) supplementation applied upstream of one of the two-stage biofiltration trains (called engineered biofiltration) stimulated the growth of microorganisms and showed a modest effect on promoting the biological removal of small non-aromatic constituents of NOM. Redundancy analysis (RDA) indicated influent UV254 was the most explanatory water quality parameter for the GAC contactors' treatment performance, and a high load of UV254 would result in significantly reduced removals of UV254 and SUVA254. Copyright © 2016 Elsevier Ltd. All rights reserved.

  9. New physics beyond the standard model of particle physics and parallel universes

    Energy Technology Data Exchange (ETDEWEB)

    Plaga, R. [Franzstr. 40, 53111 Bonn (Germany)]. E-mail: rainer.plaga@gmx.de

    2006-03-09

    It is shown that if, and only if, 'parallel universes' exist, an electroweak vacuum that is expected to have decayed since the big bang with high probability might exist. It would neither necessarily render our existence unlikely nor could it be observed. In this special case, the observation of certain combinations of Higgs-boson and top-quark masses (for which the standard model predicts such a decay) cannot be interpreted as evidence for new physics at low energy scales. The question of whether parallel universes exist is of interest to our understanding of the standard model of particle physics.

  10. Optimization of Two-Stage Peltier Modules: Structure and Exergetic Efficiency

    Directory of Open Access Journals (Sweden)

    Cesar Ramirez-Lopez

    2012-08-01

    In this paper we undertake the theoretical analysis of a two-stage semiconductor thermoelectric module (TEM) which contains an arbitrary and different number of thermocouples, n1 and n2, in each stage (pyramid-styled TEM). The analysis is based on a dimensionless entropy balance set of equations. We study the effects of n1 and n2, the electric currents flowing through each stage, the applied temperatures and the thermoelectric properties of the semiconductor materials on the exergetic efficiency. Our main result implies that the electric currents flowing in each stage must necessarily be different, with a ratio of about 4.3, if the best thermal performance and the highest possible temperature difference between the cold and hot sides of the device are pursued. This fact had not been pointed out before for pyramid-styled two-stage TEMs.
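
    The coupling between the stages comes down to an interstage heat balance, which a short sketch can make explicit: solve for the temperature Tm at which the first stage absorbs exactly what the second stage rejects, then read off the cooling power and COP. Couple counts, currents, and material properties below are illustrative, not the paper's optimum:

        # Two-stage Peltier sketch: find the interstage temperature Tm
        # where stage 1 absorbs exactly the heat stage 2 rejects, then
        # report cooling power Qc and COP. All values are illustrative.
        from scipy.optimize import brentq

        ALPHA, R, K = 4.2e-4, 0.016, 0.004   # per couple: V/K, ohm, W/K

        def q_cold(n, i, t_cold, t_hot):
            """Heat absorbed at a stage's cold face (W)."""
            return n * (ALPHA * i * t_cold - 0.5 * i**2 * R - K * (t_hot - t_cold))

        def power(n, i, t_cold, t_hot):
            """Electrical power drawn by a stage (W)."""
            return n * (i**2 * R + ALPHA * i * (t_hot - t_cold))

        def two_stage(n1, n2, i1, i2, tc, th):
            def balance(tm):  # stage-1 absorption minus stage-2 rejection
                q_rej2 = q_cold(n2, i2, tc, tm) + power(n2, i2, tc, tm)
                return q_cold(n1, i1, tm, th) - q_rej2
            tm = brentq(balance, tc, th)
            qc = q_cold(n2, i2, tc, tm)
            cop = qc / (power(n1, i1, tm, th) + power(n2, i2, tc, tm))
            return qc, cop, tm

        qc, cop, tm = two_stage(n1=12, n2=8, i1=1.5, i2=1.5, tc=260.0, th=300.0)
        print(f"Qc = {qc:.2f} W, COP = {cop:.2f}, Tm = {tm:.1f} K")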

  11. What makes us think? A three-stage dual-process model of analytic engagement.

    Science.gov (United States)

    Pennycook, Gordon; Fugelsang, Jonathan A; Koehler, Derek J

    2015-08-01

    The distinction between intuitive and analytic thinking is common in psychology. However, while often being quite clear on the characteristics of the two processes ('Type 1' processes are fast, autonomous, intuitive, etc. and 'Type 2' processes are slow, deliberative, analytic, etc.), dual-process theorists have been heavily criticized for being unclear on the factors that determine when an individual will think analytically or rely on their intuition. We address this issue by introducing a three-stage model that elucidates the bottom-up factors that cause individuals to engage Type 2 processing. According to the model, multiple Type 1 processes may be cued by a stimulus (Stage 1), leading to the potential for conflict detection (Stage 2). If successful, conflict detection leads to Type 2 processing (Stage 3), which may take the form of rationalization (i.e., the Type 1 output is verified post hoc) or decoupling (i.e., the Type 1 output is falsified). We tested key aspects of the model using a novel base-rate task where stereotypes and base-rate probabilities cued the same (non-conflict problems) or different (conflict problems) responses about group membership. Our results support two key predictions derived from the model: (1) conflict detection and decoupling are dissociable sources of Type 2 processing and (2) conflict detection sometimes fails. We argue that considering the potential stages of reasoning allows us to distinguish early (conflict detection) and late (decoupling) sources of analytic thought. Errors may occur at both stages and, as a consequence, bias arises from both conflict monitoring and decoupling failures. Copyright © 2015 Elsevier Inc. All rights reserved.

  12. Integrated Task And Data Parallel Programming: Language Design

    Science.gov (United States)

    Grimshaw, Andrew S.; West, Emily A.

    1998-01-01

    This research investigates the combination of task and data parallel language constructs within a single programming language. There are a number of applications that exhibit properties which would be well served by such an integrated language. Examples include global climate models, aircraft design problems, and multidisciplinary design optimization problems. Our approach incorporates data parallel language constructs into an existing, object-oriented, task parallel language. The language will support creation and manipulation of parallel classes and objects of both types (task parallel and data parallel). Ultimately, the language will allow data parallel and task parallel classes to be used either as building blocks or managers of parallel objects of either type, thus allowing the development of single and multi-paradigm parallel applications. 1995 Research Accomplishments: In February I presented a paper at Frontiers '95 describing the design of the data parallel language subset. During the spring I wrote and defended my dissertation proposal. Since that time I have developed a runtime model for the language subset. I have begun implementing the model and hand-coding simple examples which demonstrate the language subset. I have identified an astrophysical fluid flow application which will validate the data parallel language subset. 1996 Research Agenda: Milestones for the coming year include implementing a significant portion of the data parallel language subset over the Legion system. Using simple hand-coded methods, I plan to demonstrate (1) concurrent task and data parallel objects and (2) task parallel objects managing both task and data parallel objects. My next steps will focus on constructing a compiler and implementing the fluid flow application with the language. Concurrently, I will conduct a search for a real-world application exhibiting both task and data parallelism within the same program. Additional 1995 Activities: During the fall I collaborated

  13. Connectionist Models and Parallelism in High Level Vision.

    Science.gov (United States)

    1985-01-01

    Jerome A. Feldman; Grant N00014-82-K-0193. Background and Overview: Computer science is just beginning to look seriously at parallel computation; it may turn out that ... the chair. The program includes intermediate level networks that compute more complex joints and ones that compute parallelograms in the image.

  14. PEM-PCA: A Parallel Expectation-Maximization PCA Face Recognition Architecture

    Directory of Open Access Journals (Sweden)

    Kanokmon Rujirakul

    2014-01-01

    Principal component analysis (PCA) has traditionally been used as a feature extraction technique in face recognition systems, yielding high accuracy while requiring a small number of features. However, the covariance matrix and eigenvalue decomposition stages cause high computational complexity, especially for a large database. Thus, this research presents an alternative approach utilizing an Expectation-Maximization algorithm to reduce the determinant matrix manipulation, thereby reducing the complexity of those stages. To improve the computational time, a novel parallel architecture was employed to exploit parallelization of matrix computation during the feature extraction and classification stages, including parallel preprocessing and their combinations, the so-called Parallel Expectation-Maximization PCA (PEM-PCA) architecture. Compared to traditional PCA and its derivatives, the results indicate lower complexity with an insignificant difference in recognition precision, leading to high-speed face recognition systems with speed-ups of over nine and three times relative to PCA and parallel PCA, respectively.
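
    The Expectation-Maximization route to PCA can be sketched in a few lines (a Roweis-style alternation, without the parallelization that is the paper's contribution):

        # EM for PCA: alternate projecting the data onto the current
        # basis (E-step) and refitting the basis (M-step), avoiding an
        # explicit covariance eigendecomposition.
        import numpy as np

        def em_pca(X, k, n_iter=100, seed=0):
            """X: (n_samples, n_features). Returns a (n_features, k) basis."""
            Xc = X - X.mean(axis=0)                  # center the data
            rng = np.random.default_rng(seed)
            W = rng.standard_normal((X.shape[1], k))
            for _ in range(n_iter):
                # E-step: coordinates of the data in the current basis.
                Z = np.linalg.solve(W.T @ W, W.T @ Xc.T)        # (k, n)
                # M-step: basis that best reconstructs the data.
                W = Xc.T @ Z.T @ np.linalg.inv(Z @ Z.T)         # (d, k)
            Q, _ = np.linalg.qr(W)   # orthonormal principal subspace
            return Q

        X = np.random.default_rng(1).standard_normal((500, 20)) @ \
            np.diag(np.linspace(3, 0.1, 20))
        print(em_pca(X, k=2).shape)   # (20, 2)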

  15. Two-stage open-loop velocity compensating method applied to multi-mass elastic transmission system

    Directory of Open Access Journals (Sweden)

    Zhang Deli

    2014-02-01

    In this paper, a novel vibration-suppression open-loop control method for a multi-mass system is proposed, which uses a two-stage velocity compensating algorithm and a fuzzy I + P controller. The compensating method is based on model-based control theory and provides a damping effect on the mechanical part of the system. The mathematical model of the multi-mass system is built and reduced to estimate the velocities of the masses, and the velocity difference between adjacent masses is calculated dynamically. A 3-mass system is treated as the composition of two 2-mass systems in order to realize the two-stage compensating algorithm. Instead of using a typical PI controller in the velocity compensating loop, a fuzzy I + P controller is designed, and its input variables are chosen according to their impact on the system, which differs from conventional fuzzy PID controller design rules. Simulations and experimental results show that the proposed velocity compensating method is effective in suppressing vibration in a 3-mass system and performs better when the designed fuzzy I + P controller is utilized in the control system.
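
    Underlying the fuzzy controller is the plain I + P structure, in which the integral term acts on the velocity error while the proportional term acts on the measured velocity, softening setpoint kicks relative to an ordinary PI controller. A minimal sketch with illustrative gains (the paper adapts them with fuzzy rules):

        # Plain (non-fuzzy) I + P controller: integral action on the
        # velocity error, proportional action on the measured velocity.
        # Gains and timing are illustrative only.
        class IPController:
            def __init__(self, ki, kp, dt):
                self.ki, self.kp, self.dt = ki, kp, dt
                self.integral = 0.0

            def update(self, reference, measured):
                self.integral += self.ki * (reference - measured) * self.dt
                return self.integral - self.kp * measured

        ctrl = IPController(ki=8.0, kp=0.5, dt=1e-3)
        print(ctrl.update(reference=100.0, measured=92.0))  # one control step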

  16. Methane and Environmental Change during the Paleocene-Eocene Thermal Maximum (PETM): Modeling the PETM Onset as a Two-stage Event

    Science.gov (United States)

    Carozza, David A.; Mysak, Lawrence A.; Schmidt, Gavin A.

    2011-01-01

    An atmospheric CH4 box model coupled to a global carbon cycle box model is used to constrain the carbon emission associated with the PETM and to assess the role of CH4 during this event. A range of atmospheric and oceanic emission scenarios representing different amounts, rates, and isotopic signatures of emitted carbon are used to model the PETM onset. The first 3 kyr of the onset, a pre-isotope-excursion stage, is simulated by the atmospheric release of 900 to 1100 Pg C of CH4 with a delta C-13 of -22 to -30 per mil. For a global average warming of 3 °C, a release of CO2 to the ocean and CH4 to the atmosphere totalling 900 to 1400 Pg C, with a delta C-13 of -50 to -60 per mil, simulates the subsequent 1-kyr isotope excursion stage. To explain the observations, the carbon must have been released over at most 500 years. The first-stage results cannot be associated with any known PETM hypothesis. However, the second-stage results are consistent with a methane hydrate source. More than a single source of carbon is required to explain the PETM onset.
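
    The kind of constraint such box models impose can be illustrated with the basic isotope mass balance: adding a mass of carbon with signature d_source to a reservoir of mass M0 and signature d0 shifts the reservoir to the mixed value below. The reservoir size and baseline here are illustrative assumptions, not the paper's calibrated values:

        # Isotope mass-balance sketch: the delta C-13 shift produced by
        # mixing an added carbon mass into a surface reservoir.
        # Reservoir size and baseline are illustrative assumptions.

        def mixed_delta13c(m0, d0, m_added, d_source):
            """delta C-13 of the reservoir after instantaneous mixing."""
            return (m0 * d0 + m_added * d_source) / (m0 + m_added)

        m0, d0 = 40_000.0, 0.0   # Pg C and per mil (assumed values)
        for m, ds in [(1000.0, -26.0), (1200.0, -55.0)]:
            shift = mixed_delta13c(m0, d0, m, ds) - d0
            print(f"{m:.0f} Pg C at {ds} per mil -> {shift:+.2f} per mil shift")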

  17. Repetitive, small-bore two-stage light gas gun

    International Nuclear Information System (INIS)

    Combs, S.K.; Foust, C.R.; Fehling, D.T.; Gouge, M.J.; Milora, S.L.

    1991-01-01

    A repetitive two-stage light gas gun for high-speed pellet injection has been developed at Oak Ridge National Laboratory. In general, applications of the two-stage light gas gun have been limited to only single shots, with a finite time (at least minutes) needed for recovery and preparation for the next shot. The new device overcomes problems associated with repetitive operation, including rapidly evacuating the propellant gases, reloading the gun breech with a new projectile, returning the piston to its initial position, and refilling the first- and second-stage gas volumes to the appropriate pressure levels. In addition, some components are subjected to and must survive severe operating conditions, which include rapid cycling to high pressures and temperatures (up to thousands of bars and thousands of kelvins) and significant mechanical shocks. Small plastic projectiles (4-mm nominal size) and helium gas have been used in the prototype device, which was equipped with a 1-m-long pump tube and a 1-m-long gun barrel, to demonstrate repetitive operation (up to 1 Hz) at relatively high pellet velocities (up to 3000 m/s). The equipment is described, and experimental results are presented. 124 refs., 6 figs., 5 tabs

  18. Two-Stage Liver Transplantation with Temporary Porto-Middle Hepatic Vein Shunt

    Directory of Open Access Journals (Sweden)

    Giovanni Varotti

    2010-01-01

    Two-stage liver transplantation (LT) has been reported for cases of fulminant liver failure that can lead to toxic hepatic syndrome, or of massive hemorrhages resulting in uncontrollable bleeding. Technically, the first stage of the procedure consists of a total hepatectomy with preservation of the recipient's inferior vena cava (IVC), followed by the creation of a temporary end-to-side porto-caval shunt (TPCS). The second stage consists of removing the TPCS and implanting a liver graft when one becomes available. We report a case of a two-stage total hepatectomy and LT in which a temporary end-to-end anastomosis between the portal vein and the middle hepatic vein (TPMHV) was performed as an alternative to the classic end-to-side TPCS. The creation of a TPMHV proved technically feasible and showed some advantages compared to the standard TPCS. In cases in which a two-stage LT with side-to-side caval reconstruction is utilized, the TPMHV can be considered a safe and effective alternative to the standard TPCS.

  19. SWAMP+: multiple subsequence alignment using associative massive parallelism

    Energy Technology Data Exchange (ETDEWEB)

    Steinfadt, Shannon Irene [Los Alamos National Laboratory; Baker, Johnnie W [KENT STATE UNIV.

    2010-10-18

    A new parallel algorithm SWAMP+ incorporates the Smith-Waterman sequence alignment on an associative parallel model known as ASC. It is a highly sensitive parallel approach that expands traditional pairwise sequence alignment. This is the first parallel algorithm to provide multiple non-overlapping, non-intersecting subsequence alignments with the accuracy of Smith-Waterman. The efficient algorithm provides multiple alignments similar to BLAST while creating a better workflow for the end users. The parallel portions of the code run in O(m+n) time using m processors. When m = n, the algorithmic analysis becomes O(n) with a coefficient of two, yielding a linear speedup. Implementation of the algorithm on the SIMD ClearSpeed CSX620 confirms this theoretical linear speedup with real timings.
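
    For reference, the recurrence that SWAMP+ maps onto the associative parallel (ASC) model is the standard Smith-Waterman local-alignment dynamic program; the cells a SIMD machine fills simultaneously are those on a common anti-diagonal (i + j constant) of this table. A serial sketch:

        # Serial reference for the Smith-Waterman local-alignment
        # recurrence; SWAMP+'s parallelism comes from filling all cells
        # on an anti-diagonal at once, which is not shown here.

        def smith_waterman(a, b, match=2, mismatch=-1, gap=-1):
            rows, cols = len(a) + 1, len(b) + 1
            h = [[0] * cols for _ in range(rows)]
            best = 0
            for i in range(1, rows):
                for j in range(1, cols):
                    score = match if a[i - 1] == b[j - 1] else mismatch
                    h[i][j] = max(0,
                                  h[i - 1][j - 1] + score,   # substitution
                                  h[i - 1][j] + gap,         # deletion
                                  h[i][j - 1] + gap)         # insertion
                    best = max(best, h[i][j])
            return best

        print(smith_waterman("ACACACTA", "AGCACACA"))   # best local score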

  20. Two-stage perceptual learning to break visual crowding.

    Science.gov (United States)

    Zhu, Ziyun; Fan, Zhenzhi; Fang, Fang

    2016-01-01

    When a target is presented with nearby flankers in the peripheral visual field, it becomes harder to identify, which is referred to as crowding. Crowding sets a fundamental limit of object recognition in peripheral vision, preventing us from fully appreciating cluttered visual scenes. We trained adult human subjects on a crowded orientation discrimination task and investigated whether crowding could be completely eliminated by training. We discovered a two-stage learning process with this training task. In the early stage, when the target and flankers were separated beyond a certain distance, subjects acquired a relatively general ability to break crowding, as evidenced by the fact that the breaking of crowding could transfer to another crowded orientation, even a crowded motion stimulus, although the transfer to the opposite visual hemi-field was weak. In the late stage, like many classical perceptual learning effects, subjects' performance gradually improved and showed specificity to the trained orientation. We also found that, when the target and flankers were spaced too finely, training could only reduce, rather than completely eliminate, the crowding effect. This two-stage learning process illustrates a learning strategy for our brain to deal with the notoriously difficult problem of identifying peripheral objects in clutter. The brain first learned to solve the "easy and general" part of the problem (i.e., improving the processing resolution and segmenting the target and flankers) and then tackle the "difficult and specific" part (i.e., refining the representation of the target).