WorldWideScience

Sample records for two-stage parallel model

  1. Running Efficiency and S&T Contribution to Regional Wastes' Treatment in China based on Parallel and Two-stage DEA models

    Directory of Open Access Journals (Sweden)

    Ning Li

    2013-05-01

    Full Text Available In this study, we apply parallel and two-stage DEA models to measure the running efficiency and S&T contribution to regional wastes' treatment in China. Rapid industrial development has often come at the expense of the natural living environment. Because of the greenhouse effect, poor air and water quality, improperly disposed solid waste and other environmental pollution problems, regional environments are under tremendous pressure. To relieve this pressure and keep development in China sustainable, decision makers have begun to focus on optimal measures for protecting the ecological environment. Novel parallel and two-stage DEA models were applied to evaluate the efficiency of regional wastes' treatment in China. Since wastes can be divided into three types, i.e. waste water, waste gas and solid waste, we classified the corresponding treatments into three modes. The multiple parallel DEA methodology was then applied to calculate the treatment efficiency of these three modes of wastes' treatment in 30 provincial regions in China. Taking S&T inputs as a pivotal factor in wastes' treatment, the two-stage DEA model was applied to calculate the S&T contribution rate to wastes' treatment in the same 30 provincial regions. Based on the calculation results, decision-making information can be drawn for each region in China.
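
    DEA efficiency scores of the kind used here reduce to small linear programs. Below is a minimal sketch of a standard input-oriented CCR envelopment model solved with scipy; it is a generic single-stage DEA, not the authors' parallel/two-stage formulation, and the three-region data are invented.

```python
# Input-oriented CCR DEA efficiency scores (envelopment form).
# Generic single-stage DEA sketch with made-up data, not the paper's
# parallel / two-stage formulation.
import numpy as np
from scipy.optimize import linprog

def dea_ccr_efficiency(X, Y):
    """X: (n_dmus, n_inputs), Y: (n_dmus, n_outputs). Returns theta per DMU."""
    n, m = X.shape
    s = Y.shape[1]
    thetas = []
    for o in range(n):
        # Decision variables: [theta, lambda_1, ..., lambda_n]
        c = np.zeros(n + 1)
        c[0] = 1.0                       # minimise theta
        A_ub = np.zeros((m + s, n + 1))
        b_ub = np.zeros(m + s)
        # Input constraints: sum_j lambda_j * x_ij <= theta * x_io
        A_ub[:m, 0] = -X[o]
        A_ub[:m, 1:] = X.T
        # Output constraints: sum_j lambda_j * y_rj >= y_ro
        A_ub[m:, 1:] = -Y.T
        b_ub[m:] = -Y[o]
        res = linprog(c, A_ub=A_ub, b_ub=b_ub,
                      bounds=[(0, None)] * (n + 1), method="highs")
        thetas.append(res.x[0])
    return np.array(thetas)

# Toy data: 3 regions, 1 input (treatment effort), 1 output (waste treated)
X = np.array([[2.0], [4.0], [8.0]])
Y = np.array([[2.0], [3.0], [4.0]])
print(dea_ccr_efficiency(X, Y))          # region 0 is efficient (theta = 1)
```

    A score of 1 marks an efficient unit; lower scores give the proportional input reduction needed to reach the frontier.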

  2. Two-Stage Modelling Of Random Phenomena

    Science.gov (United States)

    Barańska, Anna

    2015-12-01

    The main objective of this publication was to present a two-stage algorithm for modelling random phenomena, based on multidimensional function modelling, using the examples of modelling the real estate market for the purpose of real estate valuation and of estimating model parameters of foundations' vertical displacements. The first stage of the presented algorithm includes the selection of a suitable form of the function model. In classical algorithms based on function modelling, the prediction of the dependent variable is the value obtained directly from the model. The better the model reflects the relationship between the independent variables and their effect on the dependent variable, the more reliable is the model value. In this paper, an algorithm has been proposed which adjusts the value obtained from the model with a random correction determined from the residuals of the model for those cases which, in a separate analysis, were considered most similar to the object for which we want to model the dependent variable. The effect of applying the developed quantitative procedures for calculating the corrections, and qualitative methods for assessing similarity, on the final outcome of the prediction and its accuracy was examined by statistical methods, mainly using appropriate parametric tests of significance. The presented algorithm is designed to bring the value of the dependent variable of the studied phenomenon close to its value in reality and, at the same time, to have it "smoothed out" by a well-fitted modelling function.
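
    The two-stage idea, a global function model whose prediction is adjusted by a residual correction drawn from the most similar training cases, can be sketched as follows. The linear model, the Euclidean similarity measure and the data are simplifying assumptions, not the paper's procedure.

```python
# Stage 1: fit a global function model; Stage 2: correct its prediction
# with the mean residual of the k most similar training cases.
# Hypothetical data and a plain linear model for illustration.
import numpy as np

def two_stage_predict(X_train, y_train, x_new, k=3):
    # Stage 1: linear function model fitted by least squares
    A = np.column_stack([np.ones(len(X_train)), X_train])
    coef, *_ = np.linalg.lstsq(A, y_train, rcond=None)
    residuals = y_train - A @ coef
    base = np.concatenate([[1.0], np.atleast_1d(x_new)]) @ coef
    # Stage 2: random correction from the k most similar cases
    dists = np.linalg.norm(X_train - x_new, axis=1)
    nearest = np.argsort(dists)[:k]
    return base + residuals[nearest].mean()

X_train = np.array([[1.0], [2.0], [3.0], [4.0]])
y_train = np.array([1.1, 1.9, 3.2, 3.8])
print(two_stage_predict(X_train, y_train, np.array([2.5])))
```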

  3. Two-stage local M-estimation of additive models

    Institute of Scientific and Technical Information of China (English)

    JIANG JianCheng; LI JianTao

    2008-01-01

    This paper studies local M-estimation of the nonparametric components of additive models. A two-stage local M-estimation procedure is proposed for estimating the additive components and their derivatives. Under very mild conditions, the proposed estimators of each additive component and its derivative are jointly asymptotically normal and share the same asymptotic distributions as they would if the other components were known. The established asymptotic results also hold for two particular local M-estimations: the local least squares and least absolute deviation estimations. However, for general two-stage local M-estimation with continuous and nonlinear ψ-functions, the implementation is time-consuming. To reduce the computational burden, one-step approximations to the two-stage local M-estimators are developed. The one-step estimators are shown to achieve the same efficiency as the fully iterative two-stage local M-estimators, which makes two-stage local M-estimation more feasible in practice. The proposed estimators inherit the advantages, and at the same time overcome the disadvantages, of the local least-squares based smoothers. In addition, the practical implementation of the proposed estimation is considered in detail. Simulations demonstrate the merits of the two-stage local M-estimation, and a real example illustrates the performance of the methodology.
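
    A one-step approximation to an M-estimator, the computational device described in the abstract, can be illustrated on the simplest possible case: a location model with the Huber ψ-function. The paper's setting (local polynomial fitting of additive components) is considerably more elaborate; this is only the core idea.

```python
# One-step M-estimation sketch: a single Newton-Raphson step from a
# cheap pilot estimate, here for a location model with the Huber psi.
import numpy as np

def huber_psi(r, c=1.345):
    return np.clip(r, -c, c)

def one_step_m_location(y, c=1.345):
    theta0 = np.median(y)                    # pilot estimate
    r = y - theta0
    psi = huber_psi(r, c)
    dpsi = (np.abs(r) <= c).astype(float)    # derivative of psi
    # Single Newton step: theta1 = theta0 + sum(psi) / sum(psi')
    return theta0 + psi.sum() / dpsi.sum()

y = np.array([0.9, 1.1, 1.0, 1.2, 0.8, 9.0])  # one gross outlier
print(one_step_m_location(y))
```

    The one-step estimate stays near the bulk of the data, unlike the sample mean, which the outlier drags upward.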

  5. Dynamic Modelling of the Two-stage Gasification Process

    DEFF Research Database (Denmark)

    Gøbel, Benny; Henriksen, Ulrik B.; Houbak, Niels

    1999-01-01

    A two-stage gasification pilot plant was designed and built as a co-operative project between the Technical University of Denmark and the company REKA. A dynamic, mathematical model of the two-stage pilot plant was developed to serve as a tool for optimising the process and the operating conditions of the gasification plant. The model consists of modules corresponding to the different elements in the plant. The modules are coupled together through mass and heat conservation. Results from the model are compared with experimental data obtained during steady and unsteady operation of the pilot plant. A good...

  6. Two-Stage Bulk Electron Heating in the Diffusion Region of Anti-Parallel Symmetric Reconnection

    CERN Document Server

    Le, Ari; Daughton, William

    2016-01-01

    Electron bulk energization in the diffusion region during anti-parallel symmetric reconnection entails two stages. First, the inflowing electrons are adiabatically trapped and energized by an ambipolar parallel electric field. Next, the electrons gain energy from the reconnection electric field as they undergo meandering motion. These collisionless mechanisms have been described previously, and they lead to highly structured electron velocity distributions. Nevertheless, a simplified control-volume analysis gives estimates for how the net effective heating scales with the upstream plasma conditions, in agreement with fully kinetic simulations and spacecraft observations.

  7. A Two-stage Polynomial Method for Spectrum Emissivity Modeling

    OpenAIRE

    Qiu, Qirong; Liu, Shi; Teng, Jing; Yan, Yong

    2015-01-01

    Spectral emissivity is a key parameter in temperature measurement by radiation methods, but it is not easy to determine in a combustion environment due to the interrelated influence of temperature and wavelength on the radiation. In multi-wavelength radiation thermometry, knowing the spectral emissivity of the material is a prerequisite. However, in many circumstances such a property is a complex function of temperature and wavelength, and reliable models are yet to be sought. In this study, a two-stage...
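
    The two-stage polynomial idea, fit the wavelength dependence first and then model each coefficient as a function of temperature, might be sketched as follows on synthetic data; the paper's actual polynomial orders and measurements are not reproduced here.

```python
# Two-stage polynomial fit for spectral emissivity eps(T, lam):
# stage 1 fits a polynomial in wavelength at each temperature,
# stage 2 fits each stage-1 coefficient as a polynomial in temperature.
# Synthetic data; orders and values are invented.
import numpy as np

temps = np.array([800.0, 1000.0, 1200.0, 1400.0])   # K
lams = np.linspace(1.0, 5.0, 9)                     # wavelength grid (um)

# Synthetic "measured" emissivity, smooth in T and lambda
eps = np.array([[0.5 + 1e-4 * T - 0.02 * lam + 0.001 * lam**2
                 for lam in lams] for T in temps])

# Stage 1: polynomial in wavelength at each temperature
stage1 = np.array([np.polyfit(lams, eps[i], deg=2) for i in range(len(temps))])

# Stage 2: each stage-1 coefficient as a polynomial in temperature
stage2 = [np.polyfit(temps, stage1[:, j], deg=1) for j in range(stage1.shape[1])]

def emissivity(T, lam):
    coefs = [np.polyval(c, T) for c in stage2]   # rebuild stage-1 coefs at T
    return np.polyval(coefs, lam)

print(emissivity(1100.0, 2.5))   # interpolated between calibration temperatures
```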

  8. Research on Two-channel Interleaved Two-stage Paralleled Buck DC-DC Converter for Plasma Cutting Power Supply

    DEFF Research Database (Denmark)

    Yang, Xi-jun; Qu, Hao; Yao, Chen

    2014-01-01

    For high-power plasma power supplies, the multi-channel interleaved multi-stage paralleled Buck DC-DC converter is the first choice owing to its high efficiency and flexibility. In this paper, a two-channel interleaved two-stage paralleled Buck DC-DC converter powered by a three-phase AC power supply...

  9. Stabilizing effect of cannibalism in a two stages population model.

    Science.gov (United States)

    Rault, Jonathan; Benoît, Eric; Gouzé, Jean-Luc

    2013-03-01

    In this paper we build a prey-predator model with a discrete weight structure for the predator. This model conserves the number of individuals and the biomass, and both growth and reproduction of the predator depend on the food ingested. Moreover, the model allows cannibalism, meaning that the predator can eat not only the prey but also other predators. We focus on a simple version with two weight classes or stages (larvae and adults) and present some general mathematical results. In the last part, we assume that the dynamics of the prey is fast compared to that of the predator in order to go further in the analysis, and eventually conclude that under some conditions cannibalism can stabilize the system: more precisely, an equilibrium that is unstable without cannibalism becomes almost globally stable with some cannibalism. Some numerical simulations are done to illustrate this result.
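
    A toy stage-structured model in the spirit of this abstract can be written down and integrated directly: prey n, larval predators p1 and adult predators p2, with adults consuming larvae at a cannibalism rate kappa. All equations and parameter values below are invented for illustration, not the authors' model.

```python
# Toy two-stage predator model with cannibalism (invented equations):
# adults eat prey and larvae; larvae mature into adults at rate m.
import numpy as np
from scipy.integrate import solve_ivp

def rhs(t, z, r=1.0, K=10.0, a=0.5, kappa=0.2, m=0.4,
        mu1=0.1, mu2=0.1, e=0.5):
    n, p1, p2 = z
    predation = a * n * p2               # adults eating prey
    cannibalism = kappa * p1 * p2        # adults eating larvae
    dn = r * n * (1 - n / K) - predation
    dp1 = e * (predation + cannibalism) - m * p1 - mu1 * p1 - cannibalism
    dp2 = m * p1 - mu2 * p2
    return [dn, dp1, dp2]

sol = solve_ivp(rhs, (0.0, 100.0), [5.0, 1.0, 1.0])
print(sol.y[:, -1])                      # state (n, p1, p2) at t = 100
```

    Varying kappa between 0 and larger values in such a sketch is a way to probe the stabilizing effect the paper analyses rigorously.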

  10. The Sausage Machine: A New Two-Stage Parsing Model.

    Science.gov (United States)

    Frazier, Lyn; Fodor, Janet Dean

    1978-01-01

    The human sentence parsing device assigns phrase structure to sentences in two steps. The first stage parser assigns lexical and phrasal nodes to substrings of words. The second stage parser then adds higher nodes to link these phrasal packages together into a complete phrase marker. This model is compared with others. (Author/RD)

  11. TWO-STAGE DECISION MODEL OF SOY FOOD CONSUMPTION BEHAVIOR

    OpenAIRE

    Rimal, Arbindra; Balasubramanian, Siva K.; Moon, Wanki

    2004-01-01

    Our study examined the role of soy health benefits in consumers' soy consumption decisions. Given the large number of respondents who reported no consumption of soy products per month, it was important to model the decision of whether or not to participate in the soy market separately from the consumption intensity decision. Estimation results demonstrate that knowledge of health benefits affects both the likelihood of participation and consumption intensity. That is, consumers with higher soy hea...

  12. Loss Function Based Ranking in Two-Stage, Hierarchical Models

    Science.gov (United States)

    Lin, Rongheng; Louis, Thomas A.; Paddock, Susan M.; Ridgeway, Greg

    2009-01-01

    Performance evaluation of health services providers burgeons. Similarly, analyzing spatially related health information, ranking teachers and schools, and identifying differentially expressed genes are increasing in prevalence and importance. Goals include valid and efficient ranking of units for profiling and league tables, identification of excellent and poor performers and of the most differentially expressed genes, and determining “exceedances” (how many and which unit-specific true parameters exceed a threshold). These data and inferential goals require a hierarchical, Bayesian model that accounts for nesting relations and identifies both population values and random effects for unit-specific parameters. Furthermore, the Bayesian approach coupled with optimizing a loss function provides a framework for computing non-standard inferences such as ranks and histograms. Estimated ranks that minimize Squared Error Loss (SEL) between the true and estimated ranks have been investigated. The posterior mean ranks minimize SEL and are “general purpose,” relevant to a broad spectrum of ranking goals. However, other loss functions, and optimizing ranks that are tuned to application-specific goals, require identification and evaluation. For example, when the goal is to identify the relatively good (e.g., in the upper 10%) or relatively poor performers, a loss function that penalizes classification errors produces estimates that minimize the error rate. We construct loss functions that address this and other goals, developing a unified framework that facilitates generating candidate estimates, comparing approaches and producing data-analytic performance summaries. We compare performance for a fully parametric, hierarchical model with Gaussian sampling distribution under Gaussian and a mixture of Gaussians prior distributions. We illustrate approaches via analysis of standardized mortality ratio data from the United States Renal Data System. Results show that SEL
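
    The SEL-optimal ranks discussed here have a simple closed form given posterior samples: rank the units within each draw, average those ranks over draws, then rank the averages. A sketch with hypothetical posterior draws:

```python
# Posterior-mean ranks (the SEL-optimal rank estimates): rank each
# posterior draw, average the per-draw ranks, then rank the averages.
# The posterior samples below are simulated for illustration.
import numpy as np

def sel_optimal_ranks(samples):
    """samples: (n_draws, n_units) posterior draws. Returns 1-based ranks."""
    # Rank within each draw (1 = smallest parameter value)
    per_draw_ranks = samples.argsort(axis=1).argsort(axis=1) + 1
    posterior_mean_ranks = per_draw_ranks.mean(axis=0)
    return posterior_mean_ranks.argsort().argsort() + 1

rng = np.random.default_rng(0)
true_means = np.array([0.0, 1.0, 2.0, 3.0])
samples = rng.normal(true_means, 0.1, size=(1000, 4))
print(sel_optimal_ranks(samples))   # well-separated units rank 1..4 in order
```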

  13. A comprehensive study of task coalescing for selecting parallelism granularity in a two-stage bidiagonal reduction

    KAUST Repository

    Haidar, Azzam

    2012-05-01

    We present new high performance numerical kernels combined with advanced optimization techniques that significantly increase the performance of parallel bidiagonal reduction. Our approach is based on developing efficient fine-grained computational tasks as well as reducing overheads associated with their high-level scheduling during the so-called bulge chasing procedure that is an essential phase of a scalable bidiagonalization procedure. In essence, we coalesce multiple tasks in a way that reduces the time needed to switch execution context between the scheduler and useful computational tasks. At the same time, we maintain the crucial information about the tasks and their data dependencies between the coalescing groups. This is the necessary condition to preserve numerical correctness of the computation. We show our annihilation strategy based on multiple applications of single orthogonal reflectors. Despite non-trivial characteristics in computational complexity and memory access patterns, our optimization approach smoothly applies to the annihilation scenario. The coalescing positively influences another equally important aspect of the bulge chasing stage: the memory reuse. For the tasks within the coalescing groups, the data is retained in high levels of the cache hierarchy and, as a consequence, operations that are normally memory-bound increase their ratio of computation to off-chip communication and become compute-bound which renders them amenable to efficient execution on multicore architectures. The performance for the new two-stage bidiagonal reduction is staggering. Our implementation results in up to 50-fold and 12-fold improvement (∼130 Gflop/s) compared to the equivalent routines from LAPACK V3.2 and Intel MKL V10.3, respectively, on an eight socket hexa-core AMD Opteron multicore shared-memory system with a matrix size of 24000 x 24000. 
Last but not least, we provide a comprehensive study on the impact of the coalescing group size in terms of cache

  14. The CSS and The Two-Staged Methods for Parameter Estimation in SARFIMA Models

    Directory of Open Access Journals (Sweden)

    Erol Egrioglu

    2011-01-01

    Full Text Available Seasonal Autoregressive Fractionally Integrated Moving Average (SARFIMA) models are used in the analysis of seasonal time series with long-memory dependence. Two methods, the conditional sum of squares (CSS) method and the two-staged method introduced by Hosking (1984), have been proposed to estimate the parameters of SARFIMA models. However, no simulation study comparing them has been conducted in the literature, so it is not known how these methods behave under different parameter settings and sample sizes in SARFIMA models. The aim of this study is to show the behavior of these methods by a simulation study. Based on the simulation results, the advantages and disadvantages of both methods under different parameter settings and sample sizes are discussed by comparing the root mean square error (RMSE) obtained by the CSS and two-staged methods. The comparison shows that the CSS method produces better results than the two-staged method.
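
    The CSS idea can be sketched for the simplest fractional model, ARFIMA(0,d,0): fractionally difference the series with a candidate d and minimize the sum of squared residuals. The seasonal and ARMA parts of a full SARFIMA model are omitted in this illustration.

```python
# Conditional-sum-of-squares (CSS) estimation of the fractional
# differencing parameter d for ARFIMA(0,d,0); seasonal/ARMA terms omitted.
import numpy as np
from scipy.optimize import minimize_scalar

def fracdiff(y, d):
    """Apply (1-B)^d to series y via expanding binomial weights."""
    n = len(y)
    pi = np.empty(n)
    pi[0] = 1.0
    for k in range(1, n):
        pi[k] = pi[k - 1] * (k - 1 - d) / k
    return np.array([pi[:t + 1][::-1] @ y[:t + 1] for t in range(n)])

def css_d(y):
    """CSS estimate of d: minimise the residual sum of squares."""
    obj = lambda d: np.sum(fracdiff(y, d) ** 2)
    return minimize_scalar(obj, bounds=(0.01, 0.49), method="bounded").x

# Simulate ARFIMA(0,d,0), d = 0.3, by fractional integration of noise
rng = np.random.default_rng(1)
n, d_true = 1000, 0.3
eps = rng.normal(size=n)
psi = np.empty(n)
psi[0] = 1.0
for k in range(1, n):
    psi[k] = psi[k - 1] * (k - 1 + d_true) / k
y = np.array([psi[:t + 1][::-1] @ eps[:t + 1] for t in range(n)])

print(css_d(y))   # estimate of d, near 0.3
```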

  15. Two-stage estimation in copula models used in family studies

    DEFF Research Database (Denmark)

    Andersen, Elisabeth Anne Wreford

    2005-01-01

    In this paper register based family studies provide the motivation for studying a two-stage estimation procedure in copula models for multivariate failure time data. The asymptotic properties of the estimators in both parametric and semi-parametric models are derived, generalising the approach by...
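
    The two-stage recipe, estimate the margins first and then plug their CDF values into the copula likelihood, can be sketched with exponential margins and a Clayton copula standing in for the survival models of the paper.

```python
# Two-stage (margins-then-copula) estimation sketch: exponential margins
# and a Clayton copula; data are simulated, not family-study data.
import numpy as np
from scipy.optimize import minimize_scalar

def clayton_neg_loglik(theta, u, v):
    lc = (np.log(1 + theta) - (theta + 1) * (np.log(u) + np.log(v))
          - (2 + 1 / theta) * np.log(u ** -theta + v ** -theta - 1))
    return -lc.sum()

rng = np.random.default_rng(2)
n, theta_true = 2000, 2.0
# Sample (u, v) from a Clayton copula via the conditional method
u = rng.uniform(size=n)
w = rng.uniform(size=n)
v = ((w ** (-theta_true / (1 + theta_true)) - 1) * u ** -theta_true
     + 1) ** (-1 / theta_true)
x = -np.log(1 - u) / 0.5          # exponential margins, rates 0.5 and 1.5
y = -np.log(1 - v) / 1.5

# Stage 1: marginal MLE (exponential rate = 1/mean), then transform to (0,1)
u_hat = 1 - np.exp(-x / x.mean())
v_hat = 1 - np.exp(-y / y.mean())

# Stage 2: copula parameter by likelihood with plugged-in margins
res = minimize_scalar(lambda t: clayton_neg_loglik(t, u_hat, v_hat),
                      bounds=(0.1, 10.0), method="bounded")
print(res.x)                      # estimate of theta, near 2
```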

  17. Perceived Health Benefits and Soy Consumption Behavior: Two-Stage Decision Model Approach

    OpenAIRE

    Moon, Wanki; Balasubramanian, Siva K.; Rimal, Arbindra

    2005-01-01

    A two-stage decision model is developed to assess the effect of perceived soy health benefits on consumers' decisions with respect to soy food. The first stage captures whether or not to consume soy food, while the second stage reflects how often to consume. A conceptual/analytical framework is also employed, combining Lancaster's characteristics model and Fishbein's multi-attribute model. Results show that perceived soy health benefits significantly influence both decision stages. Further, c...

  18. Two-stage residual inclusion estimation: addressing endogeneity in health econometric modeling.

    Science.gov (United States)

    Terza, Joseph V; Basu, Anirban; Rathouz, Paul J

    2008-05-01

    The paper focuses on two estimation methods that have been widely used to address endogeneity in empirical research in health economics and health services research: two-stage predictor substitution (2SPS) and two-stage residual inclusion (2SRI). 2SPS is the rote extension (to nonlinear models) of the popular linear two-stage least squares estimator. The 2SRI estimator is similar, except that in the second-stage regression the endogenous variables are not replaced by first-stage predictors; instead, first-stage residuals are included as additional regressors. In a generic parametric framework, we show that 2SRI is consistent and 2SPS is not. Results from a simulation study and an illustrative example also recommend against 2SPS and favor 2SRI. Our findings are important given that there are many prominent examples of the application of inconsistent 2SPS in the recent literature. This study can be used as a guide by future researchers in health economics who are confronted with endogeneity in their empirical work.
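
    A minimal 2SRI sketch: regress the endogenous variable on an instrument, then include the first-stage residuals as an extra regressor in the nonlinear second stage. The exponential-mean outcome model, the small Newton solver and the simulated data below are illustrative assumptions, not the paper's specification.

```python
# Two-stage residual inclusion (2SRI) sketch with a Poisson outcome.
# Simulated data; variable names are illustrative.
import numpy as np

def poisson_newton(X, y, iters=50):
    """Poisson regression, E[y|X] = exp(X @ beta), by plain Newton."""
    beta = np.zeros(X.shape[1])
    for _ in range(iters):
        mu = np.exp(np.clip(X @ beta, -30, 30))   # clip guards overflow
        grad = X.T @ (y - mu)
        hess = X.T @ (X * mu[:, None])
        beta += np.linalg.solve(hess, grad)
    return beta

rng = np.random.default_rng(3)
n = 5000
z = rng.normal(size=n)                   # instrument
u = rng.normal(size=n)                   # unobserved confounder
x = 0.8 * z + u + rng.normal(size=n)     # endogenous regressor
y = rng.poisson(np.exp(0.5 * x - u))     # true effect of x is 0.5

# Stage 1: regress x on the instrument, keep the residuals
Z = np.column_stack([np.ones(n), z])
g, *_ = np.linalg.lstsq(Z, x, rcond=None)
resid = x - Z @ g

# Stage 2: include first-stage residuals as an additional regressor
X2 = np.column_stack([np.ones(n), x, resid])
beta_2sri = poisson_newton(X2, y)
print(beta_2sri[1])                      # coefficient on x, near 0.5
```

    Fitting the same model without the residual column (the naive regression) is badly biased here, which is the endogeneity problem 2SRI corrects.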

  19. A two-stage broadcast message propagation model in social networks

    Science.gov (United States)

    Wang, Dan; Cheng, Shun-Jun

    2016-11-01

    Message propagation in social networks is becoming a popular topic in complex networks. One type of message in social networks is the broadcast message: a message which has a unique destination that is unknown to the publisher, such as a 'lost and found' notice. Its propagation always has two stages. Because of this feature, rumor propagation models and epidemic propagation models have difficulty describing this message's propagation accurately. In this paper, an improved two-stage susceptible-infected-removed model is proposed. We introduce the concepts of the first forwarding probability and the second forwarding probability. Another part of our work quantifies how the chance of successful message transmission at each level is influenced by multiple factors, including the topology of the network, the receiving probability, the first-stage forwarding probability, the second-stage forwarding probability, and the length of the shortest path between the publisher and the relevant destination. The proposed model has been simulated on real networks and the results demonstrate its effectiveness.
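
    A minimal simulation of the two-forwarding-probability idea: nodes one hop from the publisher forward with probability p1, all later nodes with probability p2, and propagation stops once the destination (unknown to the publisher) receives the message. The graph and probabilities are invented; this is not the paper's SIR model.

```python
# Two-stage broadcast-message propagation sketch on a small graph.
import random

def broadcast(graph, source, dest, p1, p2, rng):
    received = {source} | set(graph[source])   # publisher always sends
    if dest in received:
        return True
    frontier = list(graph[source])
    hop = 1
    while frontier:
        nxt = []
        p = p1 if hop == 1 else p2             # stage 1 vs stage 2
        for node in frontier:
            if rng.random() > p:
                continue                       # node declines to forward
            for nb in graph[node]:
                if nb not in received:
                    received.add(nb)
                    nxt.append(nb)
        if dest in received:
            return True
        frontier = nxt
        hop += 1
    return False

graph = {0: [1, 2], 1: [0, 3], 2: [0, 3], 3: [1, 2, 4], 4: [3]}
rng = random.Random(4)
hits = sum(broadcast(graph, 0, 4, 0.9, 0.5, rng) for _ in range(1000))
print(hits / 1000)                             # empirical delivery rate
```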

  20. Colorimetric characterization of liquid crystal display using an improved two-stage model

    Institute of Scientific and Technical Information of China (English)

    Yong Wang; Haisong Xu

    2006-01-01

    An improved two-stage model of colorimetric characterization for liquid crystal displays (LCDs) was proposed. The model includes an S-shaped nonlinear function with four coefficients for each channel to fit the tone reproduction curve (TRC), and a linear transfer matrix with black-level correction. For comparison with the simple model (SM), gain-offset-gamma (GOG), S-curve and three one-dimensional look-up tables (3-1D LUTs) models, an identical LCD was characterized and the color differences were calculated and summarized using the set of 7 × 7 × 7 digital-to-analog converter (DAC) triplets as test data. The experimental results showed that the proposed model outperformed the GOG and SM models, and came close to the S-curve model and the 3-1D LUTs method.
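
    The two-stage structure, a per-channel nonlinearity followed by a linear matrix with black-level correction, can be sketched as below. A plain gamma curve stands in for the paper's four-coefficient S-shaped TRC, and the matrix and offsets are invented.

```python
# Two-stage display characterization sketch: per-channel TRC, then a
# 3x3 RGB->XYZ matrix with black-level correction. Hypothetical numbers.
import numpy as np

M = np.array([[41.0, 35.0, 18.0],      # RGB -> XYZ primary matrix
              [21.0, 71.0,  7.0],
              [ 2.0, 12.0, 95.0]])
XYZ_black = np.array([0.2, 0.2, 0.3])  # black-level offset
gamma = np.array([2.2, 2.2, 2.2])      # stands in for the S-shaped TRC

def dac_to_xyz(rgb_dac):
    """rgb_dac: digital counts 0..255 for the three channels."""
    # Stage 1: tone-reproduction curve per channel
    scalars = (np.asarray(rgb_dac) / 255.0) ** gamma
    # Stage 2: linear mixing plus black-level correction
    return M @ scalars + XYZ_black

print(dac_to_xyz([255, 255, 255]))     # white point
print(dac_to_xyz([0, 0, 0]))           # black level
```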

  1. STOCHASTIC DISCRETE MODEL OF TWO-STAGE ISOLATION SYSTEM WITH RIGID LIMITERS

    Institute of Scientific and Technical Information of China (English)

    HE Hua; FENG Qi; SHEN Rong-ying; WANG Yu

    2006-01-01

    The possible intermittent impacts of a two-stage isolation system with rigid limiters have been investigated. The isolation system is under periodic external excitation disturbed by small stationary Gaussian white noise after shock. Considering the maximal impact, the zero-order approximate stochastic discrete model and the first-order approximate stochastic model are developed for the period after shock. The real isolation system of an MTU diesel engine is used to evaluate the established model. After calculation of the numerical example, the effects of noise excitation on the isolation system are discussed. The results show that the behaviour of the system is complicated due to intermittent impact, that the difference between the zero-order model and the first-order model may be great, and that the effect of even small noise is obvious. The results may be expected to be useful to naval designers.

  2. Experimental and modeling study of a two-stage pilot scale high solid anaerobic digester system.

    Science.gov (United States)

    Yu, Liang; Zhao, Quanbao; Ma, Jingwei; Frear, Craig; Chen, Shulin

    2012-11-01

    This study established a comprehensive model to configure a new two-stage high-solid anaerobic digester (HSAD) system designed for the highly degradable organic fraction of municipal solid waste (OFMSW). The HSAD reactor as the first stage was naturally separated into two zones due to biogas floatation and the low specific gravity of the solid waste. The solid waste was retained in the upper zone while only the liquid leachate resided in the lower zone of the HSAD reactor. Continuous stirred-tank reactor (CSTR) and advective-diffusive reactor (ADR) models were constructed in series to describe the whole system. Anaerobic Digestion Model No. 1 (ADM1) was used as the reaction kinetics and incorporated into each reactor module. Compared with the experimental data, the simulation results indicated that the model was able to predict the pH, volatile fatty acid (VFA) and biogas production well.

  3. Two-Stage Single-Compartment Models to Evaluate Dissolution in the Lower Intestine.

    Science.gov (United States)

    Markopoulos, Constantinos; Vertzoni, Maria; Symillides, Mira; Kesisoglou, Filippos; Reppas, Christos

    2015-09-01

    The purpose was to propose two-stage single-compartment models for evaluating dissolution characteristics in the distal ileum and ascending colon, under conditions simulating bioavailability and bioequivalence studies in the fasted and fed state, by using the mini-paddle and the compendial flow-through apparatus (closed-loop mode). Immediate-release products of two highly dosed active pharmaceutical ingredients (APIs), sulfasalazine and L-870,810, and one mesalamine colon-targeting product were used for evaluating their usefulness. Change of medium composition from one simulating the conditions in distal ileum (SIFileum) to one simulating the conditions in ascending colon in the fasted state and in the fed state was achieved by adding an appropriate solution to SIFileum. Data with immediate-release products suggest that dissolution in the lower intestine is substantially different than in the upper intestine and is affected by regional pH differences > type/intensity of fluid convection > differences in concentration of other luminal components. Asacol® (400 mg/tab) was more sensitive to type/intensity of fluid convection. In all cases, data were in line with available human data. Two-stage single-compartment models may be useful for the evaluation of dissolution in the lower intestine. The impact of type/intensity of fluid convection and viscosity of media on luminal performance of other APIs and drug products requires further exploration.

  4. ADM1-based modeling of methane production from acidified sweet sorghum extract in a two-stage process

    DEFF Research Database (Denmark)

    Antonopoulou, Georgia; Gavala, Hariklia N.; Skiadas, Ioannis

    2012-01-01

    The present study focused on the application of the Anaerobic Digestion Model 1 on the methane production from acidified sorghum extract generated from a hydrogen producing bioreactor in a two-stage anaerobic process. The kinetic parameters for hydrogen and volatile fatty acids consumption were estimated through fitting of the model equations to the data obtained from batch experiments. The simulation of the continuous reactor performance at all HRTs tested (20, 15 and 10 d) was very satisfactory. Specifically, the largest deviation of the theoretical predictions from the experimental data was 12% for the methane production rate at the HRT of 20 d, while the deviation values for the 15 and 10 d HRTs were 1.9% and 1.1%, respectively. The model predictions regarding pH, methane percentage in the gas phase and COD removal were in very good agreement with the experimental data, with a deviation...

  5. Prey-Predator Model with Two-Stage Infection in Prey: Concerning Pest Control

    Directory of Open Access Journals (Sweden)

    Swapan Kumar Nandi

    2015-01-01

    Full Text Available A prey-predator model system is developed; specifically, a disease is introduced into the prey population. Here the prey population is taken as the pest, and the predators consume the selected pest. Moreover, we assume that the prey species is infected with a viral disease, forming susceptible and two-stage infected classes, and that the early stage of infected prey is more vulnerable to predation by the predator. Also, it is assumed that the later stage of infected pests is not eaten by the predator. Different equilibria of the system are investigated, their stability analysis is carried out, and Hopf bifurcation of the system around the interior equilibria is discussed. A modified model has been constructed by considering an alternative food source for the predator population, and the dynamical behavior of the modified model has been investigated. We demonstrate the analytical results by numerical analysis with a simulated set of parameter values.

  6. Planning an Agricultural Water Resources Management System: A Two-Stage Stochastic Fractional Programming Model

    Directory of Open Access Journals (Sweden)

    Liang Cui

    2015-07-01

    Full Text Available Irrigation water management is crucial for agricultural production and livelihood security in many regions and countries throughout the world. In this study, a two-stage stochastic fractional programming (TSFP) method is developed for planning an agricultural water resources management system under uncertainty. TSFP can provide an effective linkage between conflicting economic benefits and the associated penalties; it can also balance conflicting objectives and maximize the system's marginal benefit per unit of input under uncertainty. The developed TSFP method is applied to a real case of agricultural water resources management in the Zhangweinan River Basin, China, which is one of the main food and cotton producing regions in north China and faces serious water shortage. The results demonstrate that the TSFP model is advantageous in balancing conflicting objectives and reflecting complicated relationships among multiple system factors. Results also indicate that, under the optimized irrigation target, the optimized water allocation rates of the Minyou Channel and Zhangnan Channel are 57.3% and 42.7%, respectively, which adapts to changes in the actual agricultural water resources management problem. Compared with the inexact two-stage water management (ITSP) method, TSFP could more effectively address the sustainable water management problem, provide more information regarding tradeoffs between multiple input factors and system benefits, and help water managers maintain sustainable water resources development of the Zhangweinan River Basin.
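
    Linear fractional objectives of the kind TSFP handles (benefit per unit of input) can be converted to an ordinary LP by the classical Charnes-Cooper transformation. A toy example, unrelated to the paper's water-allocation model:

```python
# Charnes-Cooper transformation of a linear fractional program.
# Toy problem: maximise (2*x1 + x2) / (x1 + x2 + 1)
#              s.t. x1 + x2 <= 3, x >= 0.
# Substitute y = t*x with t*(denominator) = 1, giving the LP:
#   maximise 2*y1 + y2
#   s.t. y1 + y2 - 3*t <= 0,  y1 + y2 + t = 1,  y, t >= 0.
import numpy as np
from scipy.optimize import linprog

c = [-2.0, -1.0, 0.0]                    # minimise the negative objective
A_ub = [[1.0, 1.0, -3.0]]
b_ub = [0.0]
A_eq = [[1.0, 1.0, 1.0]]
b_eq = [1.0]
res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
              bounds=[(0, None)] * 3, method="highs")
t = res.x[2]
x = res.x[:2] / t                        # recover the original variables
print(x, -res.fun)                       # x = [3, 0], optimal ratio 1.5
```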

  7. Beyond two-stage models for lung carcinogenesis in the Mayak workers: implications for plutonium risk.

    Directory of Open Access Journals (Sweden)

    Sascha Zöllner

    Full Text Available Mechanistic multi-stage models are used to analyze lung-cancer mortality after plutonium exposure in the Mayak workers cohort, with follow-up until 2008. Besides the established two-stage model with clonal expansion, models with three mutation stages as well as a model with two distinct pathways to cancer are studied. The results suggest that three-stage models offer an improved description of the data. The best-fitting models point to a mechanism where radiation increases the rate of clonal expansion. This is interpreted in terms of changes in cell-cycle control mediated by bystander signaling or repopulation following cell killing. No statistical evidence for a two-pathway model is found. To elucidate the implications of the different models for radiation risk, several exposure scenarios are studied. Models with a radiation effect at an early stage show a delayed response and a pronounced drop-off with older ages at exposure. Moreover, the dose-response relationship is strongly nonlinear for all three-stage models, revealing a marked increase above a critical dose.

  8. A two-stage storage routing model for green roof runoff detention.

    Science.gov (United States)

    Vesuviano, Gianni; Sonnenwald, Fred; Stovin, Virginia

    2014-01-01

    Green roofs have been adopted in urban drainage systems to control the total quantity and volumetric flow rate of runoff. Modern green roof designs are multi-layered, their main components being vegetation, substrate and, in almost all cases, a separate drainage layer. Most current hydrological models of green roofs combine the modelling of the separate layers into a single process; these models have limited predictive capability for roofs not sharing the same design. An adaptable, generic, two-stage model for a system consisting of a granular substrate over a hard plastic 'egg box'-style drainage layer and fibrous protection mat is presented. The substrate and drainage layer/protection mat are modelled separately by previously verified sub-models. Controlled storm events are applied to a green roof system in a rainfall simulator. The time-series modelled runoff is compared to the monitored runoff for each storm event. The modelled runoff profiles are accurate (mean Rt^2 = 0.971), but further characterization of the substrate component is required for the model to be generically applicable to other roof configurations with different substrate.
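
    The two-stage routing structure can be sketched as two nonlinear reservoirs in series; the power-law storage-discharge relation and all parameter values here are invented for illustration, not the paper's verified sub-models.

```python
# Two-stage storage routing sketch: rainfall passes through a substrate
# store, then a drainage-layer store, each drained by Q = k * S**n.
# All parameters are invented, not a calibrated green-roof model.
import numpy as np

def route(rain, dt=60.0, k1=5e-5, n1=1.5, k2=5e-3, n2=1.2):
    """rain: rainfall rates (mm/s), one value per time step of dt seconds."""
    s1 = s2 = 0.0                         # storage depths (mm)
    runoff = []
    for p in rain:
        q1 = k1 * s1 ** n1                # substrate outflow (mm/s)
        q2 = k2 * s2 ** n2                # drainage-layer outflow (mm/s)
        s1 += (p - q1) * dt
        s2 += (q1 - q2) * dt
        runoff.append(q2)
    return np.array(runoff), s1, s2

rain = np.zeros(360)                      # 6-hour record at 60 s steps
rain[:60] = 0.005                         # 1-hour storm at 0.005 mm/s
q, s1, s2 = route(rain)
print(q.max())                            # peak attenuated below 0.005
```

    The explicit loop conserves mass exactly: total rainfall equals total runoff plus the water still held in the two stores.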

  9. Modified landfill gas generation rate model of first-order kinetics and two-stage reaction

    Institute of Scientific and Technical Information of China (English)

    Jiajun CHEN; Hao WANG; Na ZHANG

    2009-01-01

    This investigation was carried out to establish a new domestic landfill gas (LFG) generation rate model that takes into account the impact of leachate recirculation. The first-order kinetics and two-stage reaction (FKTSR) model of the LFG generation rate includes mechanisms of the nutrient balance for the biochemical reaction in two main stages. In this study, the FKTSR model was modified by the introduction of an outflow function and an organic acid conversion coefficient in order to represent the in-situ condition of nutrient loss through leachate. Laboratory experiments were carried out to simulate the impact of leachate recirculation and verify the modified FKTSR model. The model calibration was then calculated by using the experimental data. The results suggested that the new model was in line with the experimental data. The main parameters of the modified FKTSR model, including the LFG production potential (L0), the reaction rate constant in the first stage (K1), and the reaction rate constant in the second stage (K2), of 64.746 L, 0.202 d-1, and 0.338 d-1, respectively, were comparable to the old ones of 42.069 L, 0.231 d-1, and 0.231 d-1. The new model is better able to explain the mechanisms involved in LFG generation.
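The two-stage first-order idea can be illustrated with a schematic cumulative-gas curve in which each stage decays exponentially at its own rate. This is not the modified FKTSR model itself (which adds the outflow function and acid conversion coefficient); the stage split f is an assumed value, and L0, K1, K2 are reused from the abstract purely for illustration.

```python
import math

def lfg_cumulative(t, L0=64.746, K1=0.202, K2=0.338, f=0.5):
    """Cumulative LFG volume (L) at time t (days) for a schematic
    two-stage first-order model: stage 1 contributes a fraction f of
    the potential L0, stage 2 the remainder, each decaying at its own
    first-order rate."""
    return L0 * (1.0 - f * math.exp(-K1 * t) - (1.0 - f) * math.exp(-K2 * t))
```

The curve starts at zero and approaches L0 asymptotically; fitting f, K1, and K2 against measured gas volumes is where the experimental calibration enters.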

  10. Adaptive Urban Stormwater Management Using a Two-stage Stochastic Optimization Model

    Science.gov (United States)

    Hung, F.; Hobbs, B. F.; McGarity, A. E.

    2014-12-01

    In many older cities, stormwater results in combined sewer overflows (CSOs) and consequent water quality impairments. Because of the expense of traditional approaches for controlling CSOs, cities are considering the use of green infrastructure (GI) to reduce runoff and pollutants. Examples of GI include tree trenches, rain gardens, green roofs, and rain barrels. However, the cost and effectiveness of GI are uncertain, especially at the watershed scale. We present a two-stage stochastic extension of the Stormwater Investment Strategy Evaluation (StormWISE) model (A. McGarity, JWRPM, 2012, 111-24) to explicitly model and optimize these uncertainties in an adaptive management framework. A two-stage model represents the immediate commitment of resources ("here & now") followed by later investment and adaptation decisions ("wait & see"). A case study is presented for Philadelphia, which intends to extensively deploy GI over the next two decades (PWD, "Green City, Clean Water - Implementation and Adaptive Management Plan," 2011). After first-stage decisions are made, the model updates the stochastic objective and constraints (learning). We model two types of "learning" about GI cost and performance. One assumes that learning occurs over time, is automatic, and does not depend on what has been done in stage one (basic model). The other considers learning resulting from active experimentation and learning-by-doing (advanced model). Both require expert probability elicitations, and learning from research and monitoring is modelled by Bayesian updating (as in S. Jacobi et al., JWRPM, 2013, 534-43). The model allocates limited financial resources to GI investments over time to achieve multiple objectives with a given reliability. Objectives include minimizing construction and O&M costs; achieving nutrient, sediment, and runoff volume targets; and community concerns, such as aesthetics, CO2 emissions, heat islands, and recreational values. 
CVaR (Conditional Value at Risk) and
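The "here & now" / "wait & see" structure can be illustrated with a toy two-stage recourse problem: choose a green-infrastructure capacity x now, then pay a recourse penalty for runoff not captured in each later scenario. This is a deliberately simplified sketch, not the StormWISE formulation; the costs, probabilities, and demands are hypothetical.

```python
# Toy two-stage stochastic program: first-stage capacity x at unit
# cost c, second-stage penalty q per unit of uncaptured runoff.
# All numbers are hypothetical.

def expected_cost(x, scenarios, c=2.0, q=5.0):
    """First-stage cost plus probability-weighted recourse cost."""
    return c * x + sum(p * q * max(d - x, 0.0) for p, d in scenarios)

scenarios = [(0.5, 10.0), (0.3, 20.0), (0.2, 40.0)]  # (probability, runoff)
best_x = min((expected_cost(x, scenarios), x) for x in range(0, 41))[1]
```

Capacity is worth adding as long as the marginal penalty saved, q times the probability of exceeding x, outweighs the marginal cost c; with these numbers the search settles at x = 20 rather than covering the rare worst case.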

  11. Two-stage-six-objective calibration of a hydrodynamic-based sediment transport model for the Mekong Delta

    Science.gov (United States)

    Viet Dung, Nguyen; Van Manh, Nguyen; Merz, Bruno; Apel, Heiko

    2014-05-01

    formulate a two-stage calibration by firstly estimating parameters for the HD module to fit the three objective functions and then estimating the parameters for the AD module to fit the remaining three objective functions. We developed a wrapper code implementing the parallel version of NSGA-II, controlling the whole calibration process of both stages. This reduces the computational time significantly and facilitates the calibration in due time. The calibration results imply that the proposed two-stage-six-objective calibration procedure provides meaningful parameter sets fulfilling the different objectives in a Pareto sense, even for such a large scale 2D quasi hydrodynamic-based sediment transport model within a highly complex study domain like the Mekong Delta.

  12. Predictive Modeling of a Two-Stage Gearbox towards Fault Detection

    Directory of Open Access Journals (Sweden)

    Edward J. Diehl

    2016-01-01

    Full Text Available This paper presents a systematic approach to the modeling and analysis of a benchmark two-stage gearbox test bed to characterize gear fault signatures when processed with harmonic wavelet transform (HWT analysis. The eventual goal of condition monitoring is to be able to interpret vibration signals from nonstationary machinery in order to identify the type and severity of gear damage. To advance towards this goal, a lumped-parameter model that can be analyzed efficiently is developed which characterizes the gearbox vibratory response at the system level. The model parameters are identified through correlated numerical and experimental investigations. The model fidelity is validated first by spectrum analysis, using constant speed experimental data, and secondly by HWT analysis, using nonstationary experimental data. Model prediction and experimental data are compared for healthy gear operation and a seeded fault gear with a missing tooth. The comparison confirms that both the frequency content and the predicted, relative response magnitudes match with physical measurements. The research demonstrates that the modeling method in combination with the HWT data analysis has the potential for facilitating successful fault detection and diagnosis for gearbox systems.

  13. THE MATHEMATICAL MODEL DEVELOPMENT OF THE ETHYLBENZENE DEHYDROGENATION PROCESS KINETICS IN A TWO-STAGE ADIABATIC CONTINUOUS REACTOR

    Directory of Open Access Journals (Sweden)

    V. K. Bityukov

    2015-01-01

    Full Text Available The article is devoted to the mathematical modeling of the kinetics of ethylbenzene dehydrogenation in a two-stage adiabatic reactor with a catalytic bed functioning on continuous technology. The analysis of the chemical reactions taking place parallel to the main reaction of styrene formation has been carried out, on the basis of which a number of assumptions were made, and proceeding from these a kinetic scheme describing the mechanism of the chemical reactions during the dehydrogenation process was developed. A mathematical model of the dehydrogenation process, describing the dynamics of the chemical reactions taking place in each of the two stages of the reactor block at a constant temperature, is developed. The rate constants of the direct and reverse reactions of formation and exhaustion of each component of the reacting mixture were estimated. The dynamics of the starting material (ethylbenzene) concentration was obtained, as well as the formation dynamics of styrene and all byproducts of dehydrogenation (benzene, toluene, ethylene, carbon, hydrogen, etc.). The calculated variations of the component composition of the reaction mixture during its passage through the first and second stages of the reactor showed that the proposed mathematical description adequately reproduces the kinetics of the process under investigation. This demonstrates the adequacy of the developed model, as well as the validity of the estimated rate constants, which enables the use of the model for calculating the kinetics of ethylbenzene dehydrogenation under nonisothermal conditions in order to determine the optimal temperature trajectory of the reactor operation. In the future, this will reduce energy and resource consumption, increase the volume of produced styrene and improve the economic indexes of the process.
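The role of the reverse reaction in the main dehydrogenation step can be sketched with a one-reaction toy model (ethylbenzene <-> styrene + hydrogen) integrated by explicit Euler. The side reactions are omitted, and the rate constants kf and kr are hypothetical, not the values estimated in the article.

```python
# Toy kinetics for the main reversible reaction only: EB <-> ST + H2.
# kf, kr, dt, and the initial concentration are hypothetical.

def integrate(eb0=1.0, kf=0.8, kr=0.2, dt=0.01, steps=1000):
    """Euler-integrate the net styrene formation rate
    r = kf*[EB] - kr*[ST]*[H2]; returns final concentrations."""
    eb, st, h2 = eb0, 0.0, 0.0
    for _ in range(steps):
        r = kf * eb - kr * st * h2   # net forward rate
        eb -= r * dt
        st += r * dt
        h2 += r * dt
    return eb, st, h2
```

At long times the mixture settles where the forward and reverse rates balance (kf·[EB] = kr·[ST]·[H2]), which is why estimating both rate constants, as the article does for each component, is essential for predicting conversion.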

  14. Focused ultrasound simultaneous irradiation/MRI imaging, and two-stage general kinetic model.

    Directory of Open Access Journals (Sweden)

    Sheng-Yao Huang

    Full Text Available Many studies have investigated how to use focused ultrasound (FUS) to temporarily disrupt the blood-brain barrier (BBB) in order to facilitate the delivery of medication into lesion sites in the brain. In this study, through the setup of a real-time system, FUS irradiation and injections of ultrasound contrast agent (UCA) and Gadodiamide (Gd), an MRI contrast agent, can be conducted simultaneously during MRI scanning. By using this real-time system, we were able to investigate in detail how the general kinetic model (GKM) is used to estimate Gd penetration in the FUS-irradiated area in a rat's brain resulting from UCA concentration changes after single FUS irradiation. A two-stage GKM was proposed to estimate the Gd penetration in the FUS-irradiated area in a rat's brain under experimental conditions with repeated FUS irradiation combined with different UCA concentrations. The results showed that the focal increase in the transfer rate constant Ktrans caused by BBB disruption was dependent on the dose of UCA. Moreover, the amount of in vivo penetration of Evans blue in the FUS-irradiated area in a rat's brain under various FUS irradiation experimental conditions was assessed to show the positive correlation with the transfer rate constants. Compared to the GKM method, the two-stage GKM is more suitable for estimating the transfer rate constants of the brain treated with repeated FUS irradiations. This study demonstrated that the entire process of BBB disruption by FUS can be quantitatively monitored by real-time dynamic contrast-enhanced magnetic resonance imaging (DCE-MRI).
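The underlying GKM is commonly written as the arterial plasma curve convolved with an exponential residue function governed by Ktrans and kep. A minimal single-stage sketch (not the two-stage variant proposed in the record, and with hypothetical parameter values) is:

```python
import math

# Single-stage general-kinetic-model sketch: tissue concentration is a
# discrete convolution of the plasma curve cp with exp(-kep*t), scaled
# by Ktrans. The ktrans and kep defaults are hypothetical.

def tissue_curve(cp, dt, ktrans=0.05, kep=0.2):
    """cp: arterial plasma concentration samples at spacing dt;
    returns the modelled tissue concentration Ct at the same times."""
    ct = []
    for i in range(len(cp)):
        acc = 0.0
        for j in range(i + 1):
            acc += cp[j] * math.exp(-kep * (i - j) * dt) * dt
        ct.append(ktrans * acc)
    return ct
```

Fitting ktrans voxel-by-voxel to DCE-MRI uptake curves is what lets the focal increase after FUS-induced BBB disruption be quantified; the two-stage variant in the record extends this fitting to repeated irradiations.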

  15. Modelling of two-stage anaerobic digestion using the IWA Anaerobic Digestion Model No. 1 (ADM1).

    Science.gov (United States)

    Blumensaat, F; Keller, J

    2005-01-01

    The aim of the study presented was to implement a process model to simulate the dynamic behaviour of a pilot-scale process for anaerobic two-stage digestion of sewage sludge. The model implemented was initiated to support experimental investigations of the anaerobic two-stage digestion process. The model concept implemented in the simulation software package MATLAB/Simulink is a derivative of the IWA Anaerobic Digestion Model No.1 (ADM1) that has been developed by the IWA task group for mathematical modelling of anaerobic processes. In the present study the original model concept has been adapted and applied to replicate a two-stage digestion process. Testing procedures, including balance checks and 'benchmarking' tests, were carried out to verify the accuracy of the implementation. These combined measures ensured a faultless model implementation without numerical inconsistencies. Parameters for both the thermophilic and the mesophilic process stages have been estimated successfully using data from lab-scale experiments described in literature. Due to the high number of parameters in the structured model, it was necessary to develop a customised procedure that limited the range of parameters to be estimated. The accuracy of the optimised parameter sets has been assessed against experimental data from pilot-scale experiments. Under these conditions, the model predicted reasonably well the dynamic behaviour of a two-stage digestion process at pilot scale.

  16. Flexible distributions for triple-goal estimates in two-stage hierarchical models

    Science.gov (United States)

    Paddock, Susan M.; Ridgeway, Greg; Lin, Rongheng; Louis, Thomas A.

    2009-01-01

    Performance evaluations often aim to achieve goals such as obtaining estimates of unit-specific means, ranks, and the distribution of unit-specific parameters. The Bayesian approach provides a powerful way to structure models for achieving these goals. While no single estimate can be optimal for achieving all three inferential goals, the communication and credibility of results will be enhanced by reporting a single estimate that performs well for all three. Triple-goal estimates [Shen and Louis, 1998. Triple-goal estimates in two-stage hierarchical models. J. Roy. Statist. Soc. Ser. B 60, 455–471] have this performance and are appealing for performance evaluations. Because triple-goal estimates rely more heavily on the entire distribution than do posterior means, they are more sensitive to misspecification of the population distribution; we therefore present various strategies to robustify triple-goal estimates by using nonparametric distributions. We evaluate performance based on the correctness and efficiency of the robustified estimates under several scenarios and compare empirical Bayes and fully Bayesian approaches to model the population distribution. We find that when data are quite informative, conclusions are robust to model misspecification. However, with less information in the data, conclusions can be quite sensitive to the choice of population distribution. Generally, use of a nonparametric distribution pays very little in efficiency when a parametric population distribution is valid, but successfully protects against model misspecification. PMID:19603088

  17. A two-stage cascade model of BOLD responses in human visual cortex.

    Directory of Open Access Journals (Sweden)

    Kendrick N Kay

    Full Text Available Visual neuroscientists have discovered fundamental properties of neural representation through careful analysis of responses to controlled stimuli. Typically, different properties are studied and modeled separately. To integrate our knowledge, it is necessary to build general models that begin with an input image and predict responses to a wide range of stimuli. In this study, we develop a model that accepts an arbitrary band-pass grayscale image as input and predicts blood oxygenation level dependent (BOLD) responses in early visual cortex as output. The model has a cascade architecture, consisting of two stages of linear and nonlinear operations. The first stage involves well-established computations (local oriented filters and divisive normalization), whereas the second stage involves novel computations: compressive spatial summation (a form of normalization) and a variance-like nonlinearity that generates selectivity for second-order contrast. The parameters of the model, which are estimated from BOLD data, vary systematically across visual field maps: compared to primary visual cortex, extrastriate maps generally have larger receptive field size, stronger levels of normalization, and increased selectivity for second-order contrast. Our results provide insight into how stimuli are encoded and transformed in successive stages of visual processing.
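As a toy illustration of the second-stage computations only: a power-law exponent below one stands in for compressive spatial summation, and a sample-variance term stands in for the variance-like second-order-contrast nonlinearity. The weights and exponent are hypothetical, not parameters estimated from BOLD data.

```python
# Toy second-stage sketch: weighted pooling of first-stage outputs,
# compressed by a power law (n < 1), plus a variance-like term.
# All parameter values are hypothetical.

def second_stage(first_stage_outputs, weights, n=0.5):
    pooled = sum(w * x for w, x in zip(weights, first_stage_outputs))
    mean = sum(first_stage_outputs) / len(first_stage_outputs)
    var_term = sum((x - mean) ** 2 for x in first_stage_outputs) / len(first_stage_outputs)
    return pooled ** n + var_term   # compressive summation + contrast term
```

The compression is visible directly: scaling all inputs by four only doubles the pooled term when n = 0.5, mimicking the sub-linear spatial summation reported for extrastriate maps.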

  18. New higher-order Godunov code for modelling performance of two-stage light gas guns

    Science.gov (United States)

    Bogdanoff, D. W.; Miller, R. J.

    1995-01-01

    A new quasi-one-dimensional Godunov code for modeling two-stage light gas guns is described. The code is third-order accurate in space and second-order accurate in time. A very accurate Riemann solver is used. Friction and heat transfer to the tube wall for gases and dense media are modeled and a simple nonequilibrium turbulence model is used for gas flows. The code also models gunpowder burn in the first-stage breech. Realistic equations of state (EOS) are used for all media. The code was validated against exact solutions of Riemann's shock-tube problem, impact of dense media slabs at velocities up to 20 km/sec, flow through a supersonic convergent-divergent nozzle and burning of gunpowder in a closed bomb. Excellent validation results were obtained. The code was then used to predict the performance of two light gas guns (1.5 in. and 0.28 in.) in service at the Ames Research Center. The code predictions were compared with measured pressure histories in the powder chamber and pump tube and with measured piston and projectile velocities. Very good agreement between computational fluid dynamics (CFD) predictions and measurements was obtained. Actual powder-burn rates in the gun were found to be considerably higher (60-90 percent) than predicted by the manufacturer and the behavior of the piston upon yielding appears to differ greatly from that suggested by low-strain rate tests.

  19. A cooperation model based on CVaR measure for a two-stage supply chain

    Science.gov (United States)

    Xu, Xinsheng; Meng, Zhiqing; Shen, Rui

    2015-07-01

    In this paper, we introduce a cooperation model (CM) for the two-stage supply chain consisting of a manufacturer and a retailer. In this model, it is supposed that the objective of the manufacturer is to maximise his/her profit while the objective of the retailer is to minimise his/her CVaR while controlling the risk originating from fluctuation in market demand. In reality, the manufacturer and the retailer would like to choose their own decisions as to wholesale price and order quantity to optimise their own objectives, so the expected decision of the manufacturer and that of the retailer may conflict with each other. To achieve cooperation, then, the manufacturer and the retailer both need to make some concessions. The proposed model aims to coordinate the decisions of the manufacturer and the retailer, and to balance the concessions of the two in their cooperation. We introduce an s*-optimal equilibrium solution in this model, which can decide the minimum concession that the manufacturer and the retailer need to make for their cooperation, and prove that the s*-optimal equilibrium solution can be obtained by solving a goal programming problem. Further, the case of different concessions made by the manufacturer and the retailer is also discussed. Numerical results show that the CM is efficient in dealing with the cooperation between the manufacturer and the retailer.
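The retailer's risk measure can be sketched directly from its definition: CVaR at level alpha is the expected loss over the worst (1 - alpha) fraction of scenarios. The scenario losses below are hypothetical numbers, not data from the paper.

```python
# Minimal empirical CVaR sketch over a discrete set of loss scenarios.

def cvar(losses, alpha=0.8):
    """Mean of the worst (1 - alpha) fraction of the given losses."""
    tail = sorted(losses, reverse=True)                 # worst losses first
    k = max(1, int(round(len(losses) * (1 - alpha))))   # size of the tail
    return sum(tail[:k]) / k

risk = cvar([1.0, 2.0, 5.0, 9.0, 3.0], alpha=0.8)  # mean of the worst 20%
```

Unlike plain expected loss, CVaR ignores favourable scenarios entirely, which is why a retailer minimising it behaves more conservatively in its order-quantity decision than a profit-maximising manufacturer.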

  20. A Risk-Based Interval Two-Stage Programming Model for Agricultural System Management under Uncertainty

    Directory of Open Access Journals (Sweden)

    Ye Xu

    2016-01-01

    Full Text Available Nonpoint source (NPS) pollution caused by agricultural activities is a main reason that water quality in watersheds deteriorates. Moreover, pollution control is accompanied by falling revenues for the agricultural system. How to design and generate a cost-effective and environmentally friendly agricultural production pattern is a critical issue for local managers. In this study, a risk-based interval two-stage programming model (RBITSP) was developed. Compared to the general ITSP model, the significant contribution of the RBITSP model is that it emphasizes the importance of financial risk under various probabilistic levels, rather than concentrating only on expected economic benefit; here risk is expressed as the probability of not meeting a target profit under each individual scenario realization. This effectively avoids the inaccuracy of solutions caused by a traditional expected-value objective function and generates a variety of solutions through adjusting weight coefficients, reflecting the trade-off between system economy and reliability. A case study of agricultural production management in the Tai Lake watershed was used to demonstrate the superiority of the proposed model. The obtained results could be a base for designing land-structure adjustment patterns and farmland retirement schemes and for balancing system benefit, system-failure risk, and water-body protection.

  1. An Enhanced Droop Control Scheme for Resilient Active Power Sharing in Paralleled Two-Stage PV Inverter Systems

    DEFF Research Database (Denmark)

    Liu, Hongpeng; Yang, Yongheng; Wang, Xiongfei

    2016-01-01

    Traditional droop-controlled systems assume that the generators are able to provide sufficient power as required. This is however not always true, especially in renewable systems, where the energy sources (e.g., photovoltaic source) may not be able to provide enough power (or even loss of power generation) due to the intermittency. In that case, unbalance in active power generation may occur among the paralleled systems. Additionally, most droop-controlled systems have been assumed to be a single dc-ac inverter with a fixed dc input source. The dc-dc converter as the front-end of a two...... strategy is carried out. Experiments have verified the effectiveness of the proposed droop control scheme.

  2. A two-stage logistic regression-ANN model for the prediction of distress banks: Evidence from 11 emerging countries

    National Research Council Canada - National Science Library

    Shu Ling Lin

    2010-01-01

      This paper proposes a new approach of two-stage hybrid model of logistic regression-ANN for the construction of a financial distress warning system for banking industry in emerging market during 1998-2006...

  3. A Two-Stage Queue Model to Optimize Layout of Urban Drainage System considering Extreme Rainstorms

    Directory of Open Access Journals (Sweden)

    Xinhua He

    2017-01-01

    Full Text Available An extreme rainstorm is a main cause of urban floods when the urban drainage system cannot discharge stormwater successfully. This paper investigates the distribution features of rainstorms and the draining process of urban drainage systems and uses a two-stage single-counter queue method M/M/1→M/D/1 to model the urban drainage system. The model emphasizes the randomness of extreme rainstorms, the fuzziness of the draining process, and the construction and operation cost of the drainage system. Its two objectives are the total cost of construction and operation and the overall sojourn time of stormwater. An improved genetic algorithm is redesigned to solve this complex nondeterministic problem, incorporating the stochastic and fuzzy characteristics of the whole drainage process. A numerical example in Shanghai illustrates how to implement the model, and comparisons with alternative algorithms show its performance in computational flexibility and efficiency. Discussions on the sensitivity of four main parameters, that is, the number of pump stations, drainage pipe diameter, rainstorm precipitation intensity, and confidence levels, are also presented to provide guidance for designing urban drainage systems.
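The steady-state sojourn time of such a tandem can be sketched with textbook formulas: the M/M/1 stage contributes 1/(mu1 - lambda), and since the departure process of a stable M/M/1 queue is again Poisson, the M/D/1 stage can use the Pollaczek-Khinchine result for deterministic service. The rates below are hypothetical, not the Shanghai case-study values.

```python
# Back-of-envelope mean sojourn time for an M/M/1 -> M/D/1 tandem.
# lam: Poisson arrival rate; mu1, mu2: service rates of the two stages.

def tandem_sojourn(lam, mu1, mu2):
    assert lam < mu1 and lam < mu2, "both stages must be stable"
    w1 = 1.0 / (mu1 - lam)                               # M/M/1 sojourn time
    rho2 = lam / mu2
    w2 = 1.0 / mu2 + rho2 / (2.0 * mu2 * (1.0 - rho2))   # M/D/1 (P-K formula)
    return w1 + w2

total = tandem_sojourn(lam=2.0, mu1=3.0, mu2=4.0)
```

This kind of closed-form evaluation gives the stormwater sojourn-time objective for a candidate layout; the paper's genetic algorithm then searches over layouts (pump stations, pipe diameters) against both objectives.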

  4. Two-stage collaborative global optimization design model of the CHPG microgrid

    Science.gov (United States)

    Liao, Qingfen; Xu, Yeyan; Tang, Fei; Peng, Sicheng; Yang, Zheng

    2017-06-01

    With the continuous development of technology and the reduction of investment costs, the proportion of renewable energy in the power grid is becoming higher and higher because of its clean and environmentally friendly characteristics, which may require larger-capacity energy storage devices and increase the cost. A two-stage collaborative global optimization design model of the combined-heat-power-and-gas (abbreviated as CHPG) microgrid is proposed in this paper, to minimize the cost by using virtual storage without extending the existing storage system. P2G technology is used as virtual multi-energy storage in CHPG, which can coordinate the operation of the electric energy network and the natural gas network at the same time. Demand response is also a kind of virtual storage, including economic guidance for the DGs and heat pumps on the demand side and priority scheduling of controllable loads. The two kinds of storage coordinate to smooth the high-frequency and low-frequency fluctuations of renewable energy, respectively, and simultaneously achieve a lower-cost operation scheme. Finally, the feasibility and superiority of the proposed design model are demonstrated in a simulation of a CHPG microgrid.

  5. Viroporins, Examples of the Two-Stage Membrane Protein Folding Model

    Directory of Open Access Journals (Sweden)

    Luis Martinez-Gil

    2015-06-01

    Full Text Available Viroporins are small, α-helical, hydrophobic virus-encoded proteins, engineered to form homo-oligomeric hydrophilic pores in the host membrane. Viroporins participate in multiple steps of the viral life cycle, from entry to budding. As any other membrane protein, viroporins have to find a way to bury their hydrophobic regions into the lipid bilayer. Once within the membrane, the hydrophobic helices of viroporins interact with each other to form higher-ordered structures required to correctly perform their porating activities. This two-step process resembles the two-stage model proposed for membrane protein folding by Engelman and Popot. In this review we use the membrane protein folding model as a leading thread to analyze the mechanism and forces behind the membrane insertion and folding of viroporins. We start by describing the transmembrane segment architecture of viroporins, including the number and sequence characteristics of their membrane-spanning domains. Next, we connect the differences found among viroporin families to their viral genome organization, and conclude by focusing on the pathways used by viroporins in their way to the membrane and on the transmembrane helix-helix interactions required to achieve proper folding and assembly.

  6. Complex Dynamics of a Continuous Bertrand Duopoly Game Model with Two-Stage Delay

    Directory of Open Access Journals (Sweden)

    Junhai Ma

    2016-07-01

    Full Text Available This paper studies a continuous Bertrand duopoly game model with two-stage delay. Our aim is to investigate the influence of delay and weight on the complex dynamic characteristics of the system. We obtain the bifurcation point of the system with respect to the delay parameter by calculation. In addition, the dynamic properties of the system are simulated by power spectrum, attractor, bifurcation diagram, the largest Lyapunov exponent, 3D surface chart, 4D cubic chart, 2D parameter bifurcation diagram, and 3D parameter bifurcation diagram. The results show that the stability of the system depends on the delay and weight; in order to maintain price stability and ensure the firms' profit, the firms must keep the parameters in a reasonable region. Otherwise, the system will lose stability and even fall into chaos, which will cause fluctuations in prices so that the firms cannot be profitable. Finally, chaos control of the system is carried out by a control strategy of state-variable feedback and parameter variation, which effectively avoids the damage of chaos to the economic system. Therefore, the results of this study have important practical significance for making decisions with multi-stage delay for oligopoly firms.

  7. A two-stage flexible flow-shop scheduling problem with m identical parallel machines on one stage and a batch processor on the other stage

    Institute of Scientific and Technical Information of China (English)

    HE Long-min; SUN Shi-jie; CHENG Ming-bao

    2008-01-01

    This paper considers a hybrid two-stage flow-shop scheduling problem with m identical parallel machines on one stage and a batch processor on the other stage. The processing time of job Jj on any of the m identical parallel machines is aj ≡ a (j ∈ N), and the processing time of job Jj on the batch processor M is bj (j ∈ N). We take the makespan (Cmax) as the minimization objective. In this paper, for the problem FSMP-BI (m identical parallel machines on the first stage and a batch processor on the second stage), based on the algorithm given by Sung and Choung for the problem 1 | rj, BI | Cmax under the constraint of a given processing sequence, we develop an optimal dynamic programming Algorithm H1 that runs in max{O(n log n), O(nB)} time. A symmetric Algorithm H2 with the same max{O(n log n), O(nB)} time bound is then given for the problem BI-FSMP (a batch processor on the first stage and m identical parallel machines on the second stage).

  8. Two-Stage orders sequencing system for mixed-model assembly

    Science.gov (United States)

    Zemczak, M.; Skolud, B.; Krenczyk, D.

    2015-11-01

    In the paper, the authors focus on the NP-hard problem of order sequencing, formulated similarly to the Car Sequencing Problem (CSP). The object of the research is an assembly line in an automotive industry company, on which a few different models of products, each in a certain number of versions, are assembled on shared resources set in a line. Such a production type is usually described as mixed-model production, and arose from the necessity of manufacturing customized products on the basis of very specific orders from single clients. Producers are nowadays obliged to give each client the possibility to determine a large number of features of the product they are willing to buy, as the competition in the automotive market is intense. Due to the previously mentioned nature of the problem (NP-hard), in the given time period only satisfactory solutions are sought, as an optimal solution method has not yet been found. Most of the researchers who implemented inexact methods (e.g. evolutionary algorithms) to solve sequencing problems dropped the research after the testing phase, as they were not able to obtain reproducible results and encountered problems while determining the quality of the received solutions. Therefore a new approach to solving the problem, presented in this paper as a sequencing system, is being developed. The sequencing system consists of a set of determined rules implemented into a computer environment. The system itself works in two stages. The first of them is connected with the determination of a place in the storage buffer to which certain production orders should be sent. In the second stage of functioning, precise sets of sequences are determined and evaluated for certain parts of the storage buffer under certain criteria.

  9. CFD Modelling of Bore Erosion in Two-Stage Light Gas Guns

    Science.gov (United States)

    Bogdanoff, D. W.

    1998-01-01

    A well-validated quasi-one-dimensional computational fluid dynamics (CFD) code for the analysis of the internal ballistics of two-stage light gas guns is modified to explicitly calculate the ablation of steel from the gun bore and the incorporation of the ablated wall material into the hydrogen working gas. The modified code is used to model 45 shots made with the NASA Ames 0.5 inch light gas gun over an extremely wide variety of gun operating conditions. Good agreement is found between the experimental and theoretical piston velocities (maximum errors of +/-2% to +/-6%) and maximum powder pressures (maximum errors of +/-10% with good igniters). Overall, the agreement between the experimental and numerically calculated gun erosion values (within a factor of 2) was judged to be reasonably good, considering the complexity of the processes modelled. Experimental muzzle velocities agree very well (maximum errors of 0.5-0.7 km/sec) with theoretical muzzle velocities calculated with loading of the hydrogen gas with the ablated barrel wall material. Comparison of results for pump tube volumes of 100%, 60% and 40% of an initial benchmark value shows that, at the higher muzzle velocities, operation at 40% pump tube volume produces much lower hydrogen loading and gun erosion and substantially lower maximum pressures in the gun. Large muzzle velocity gains (2.4-5.4 km/sec) are predicted upon driving the gun harder (that is, upon using higher powder loads and/or lower hydrogen fill pressures) when hydrogen loading is neglected; much smaller muzzle velocity gains (1.1-2.2 km/sec) are predicted when hydrogen loading is taken into account. These smaller predicted velocity gains agree well with those achieved in practice. CFD snapshots of the hydrogen mass fraction, density and pressure of the in-bore medium are presented for a very erosive shot.

  10. A two-stage method for microcalcification cluster segmentation in mammography by deformable models

    Energy Technology Data Exchange (ETDEWEB)

    Arikidis, N.; Kazantzi, A.; Skiadopoulos, S.; Karahaliou, A.; Costaridou, L., E-mail: costarid@upatras.gr [Department of Medical Physics, School of Medicine, University of Patras, Patras 26504 (Greece); Vassiou, K. [Department of Anatomy, School of Medicine, University of Thessaly, Larissa 41500 (Greece)

    2015-10-15

    Purpose: Segmentation of microcalcification (MC) clusters in x-ray mammography is a difficult task for radiologists. Accurate segmentation is a prerequisite for quantitative image analysis of MC clusters and subsequent feature extraction and classification in computer-aided diagnosis schemes. Methods: In this study, a two-stage semiautomated segmentation method of MC clusters is investigated. The first stage is targeted to accurate and time-efficient segmentation of the majority of the particles of a MC cluster, by means of a level set method. The second stage is targeted to shape refinement of selected individual MCs, by means of an active contour model. Both methods are applied in the framework of a rich scale-space representation, provided by the wavelet transform at integer scales. Segmentation reliability of the proposed method in terms of inter- and intraobserver agreement was evaluated in a case sample of 80 MC clusters originating from the digital database for screening mammography, corresponding to 4 morphology types (punctate: 22, fine linear branching: 16, pleomorphic: 18, and amorphous: 24) of MC clusters, assessing radiologists’ segmentations quantitatively by two distance metrics (Hausdorff distance—HDIST{sub cluster}, average of minimum distance—AMINDIST{sub cluster}) and the area overlap measure (AOM{sub cluster}). The effect of the proposed segmentation method on MC cluster characterization accuracy was evaluated in a case sample of 162 pleomorphic MC clusters (72 malignant and 90 benign). Ten MC cluster features, targeted to capture morphologic properties of individual MCs in a cluster (area, major length, perimeter, compactness, and spread), were extracted and a correlation-based feature selection method yielded a feature subset to feed in a support vector machine classifier. Classification performance of the MC cluster features was estimated by means of the area under receiver operating characteristic curve (Az ± Standard Error) utilizing

  11. Modifications of some simple One-stage Randomized Response Models to Two-stage in complex surveys

    Directory of Open Access Journals (Sweden)

    Mohammad Rafiq

    2016-06-01

    Full Text Available Warner (1965) introduced a Randomized Response Technique (RRT) to minimize bias due to non-response or false response. Thereafter, several researchers have made significant contributions to the development and modification of different Randomized Response Models. We have modified a few one-stage Simple Randomized Response Models to two-stage randomized response models in complex surveys and found that our developed models are more efficient.
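
    The estimators involved are simple enough to sketch in code. Below is a minimal simulation, assuming a Mangat-and-Singh-style two-stage design (the respondent answers truthfully with probability T, and otherwise falls back to the Warner device with probability P); the prevalence and design parameters are illustrative, not taken from the paper.

```python
import random

def warner_estimate(yes_prop, P):
    # Warner (1965): lam = P*pi + (1-P)*(1-pi)  =>  pi = (lam - (1-P)) / (2P - 1)
    return (yes_prop - (1 - P)) / (2 * P - 1)

def two_stage_estimate(yes_prop, T, P):
    # Two-stage design: lam = T*pi + (1-T)*(P*pi + (1-P)*(1-pi))
    return (yes_prop - (1 - T) * (1 - P)) / (T + (1 - T) * (2 * P - 1))

def simulate(pi, T, P, n, rng):
    # Simulate n randomized responses for a sensitive trait of prevalence pi.
    yes = 0
    for _ in range(n):
        sensitive = rng.random() < pi
        if rng.random() < T:              # stage 1: direct truthful answer
            yes += sensitive
        elif rng.random() < P:            # stage 2: "are you in group A?"
            yes += sensitive
        else:                             # stage 2: "are you NOT in group A?"
            yes += not sensitive
    return yes / n

rng = random.Random(1)
lam_hat = simulate(pi=0.30, T=0.5, P=0.7, n=200_000, rng=rng)
pi_hat = two_stage_estimate(lam_hat, T=0.5, P=0.7)
```

    With T = 0.5 and P = 0.7, an observed "yes" proportion of 0.36 inverts to a prevalence estimate of 0.30, and the simulated estimate recovers the true prevalence up to sampling noise.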

  12. An inexact mixed risk-aversion two-stage stochastic programming model for water resources management under uncertainty.

    Science.gov (United States)

    Li, W; Wang, B; Xie, Y L; Huang, G H; Liu, L

    2015-02-01

    Uncertainties exist in the water resources system, while traditional two-stage stochastic programming is risk-neutral and compares the random variables (e.g., total benefit) to identify the best decisions. To deal with the risk issues, a risk-aversion inexact two-stage stochastic programming model is developed for water resources management under uncertainty. The model is a hybrid methodology of interval-parameter programming, the conditional value-at-risk measure, and a general two-stage stochastic programming framework. The method extends the traditional two-stage stochastic programming method by enabling uncertainties presented as probability density functions and discrete intervals to be effectively incorporated within the optimization framework. It can not only provide information on the benefits of the allocation plan to the decision makers but also measure the extreme expected loss on the second-stage penalty cost. The developed model was applied to a hypothetical case of water resources management. Results showed that it could help managers generate feasible and balanced risk-aversion allocation plans, and analyze the trade-offs between system stability and economy.
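
    As an illustration of the risk-aversion idea (not the paper's full interval-parameter model), the sketch below solves a toy two-stage water-allocation problem by grid search, trading off the expected recourse cost against the conditional value-at-risk (CVaR) of that cost; all scenario data, benefit/penalty coefficients and the risk weight beta are hypothetical.

```python
# scenario set for available water: (amount, probability) -- hypothetical numbers
SCENARIOS = [(3.0, 0.2), (5.0, 0.5), (7.0, 0.3)]
BENEFIT, PENALTY = 100.0, 240.0  # per unit delivered / per unit of shortfall

def second_stage_cost(x, w):
    # Recourse cost for allocation target x when w units are actually available.
    shortfall = max(x - w, 0.0)
    return PENALTY * shortfall - BENEFIT * min(x, w)

def cvar(loss_prob_pairs, alpha):
    # Discrete CVaR_alpha: mean loss over the worst (1 - alpha) probability mass.
    pairs = sorted(loss_prob_pairs, reverse=True)      # worst losses first
    tail_mass, acc, total = 1.0 - alpha, 0.0, 0.0
    for loss, p in pairs:
        take = min(p, tail_mass - acc)
        if take <= 0.0:
            break
        total += loss * take
        acc += take
    return total / tail_mass

def objective(x, beta=0.5, alpha=0.9):
    # Risk-averse objective: expected cost plus beta times CVaR of the cost.
    expected = sum(p * second_stage_cost(x, w) for w, p in SCENARIOS)
    risk = cvar([(second_stage_cost(x, w), p) for w, p in SCENARIOS], alpha)
    return expected + beta * risk

# enumerate first-stage allocation targets on a grid and pick the best
grid = [0.1 * i for i in range(81)]
risk_neutral = min((sum(p * second_stage_cost(x, w) for w, p in SCENARIOS), x)
                   for x in grid)[1]
risk_averse = min((objective(x), x) for x in grid)[1]
```

    Raising beta shifts the first-stage allocation target from the risk-neutral optimum (5 units in this toy data) toward the conservative one (3 units), illustrating the trade-off between system economy and stability.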

  13. Assessing vanadium and arsenic exposure of people living near a petrochemical complex with two-stage dispersion models

    Energy Technology Data Exchange (ETDEWEB)

    Chio, Chia-Pin; Yuan, Tzu-Hsuen [Institute of Occupational Medicine and Industrial Hygiene, College of Public Health, National Taiwan University, Taipei, Taiwan (China); Shie, Ruei-Hao [Institute of Occupational Medicine and Industrial Hygiene, College of Public Health, National Taiwan University, Taipei, Taiwan (China); Green Energy and Environment Research Laboratories, Industrial Technology Research Institute, Hsinchu, Taiwan (China); Chan, Chang-Chuan, E-mail: ccchan@ntu.edu.tw [Institute of Occupational Medicine and Industrial Hygiene, College of Public Health, National Taiwan University, Taipei, Taiwan (China)

    2014-04-01

    Highlights: • Two-stage dispersion models can estimate exposures to hazardous air pollutants. • Spatial distribution of V levels is derived for sources without known emission rates. • A distance-to-source gradient is found for V levels from a petrochemical complex. • Two-stage dispersion is useful for modeling air pollution in resource-limited areas. - Abstract: The goal of this study is to demonstrate that it is possible to construct a two-stage dispersion model empirically for the purpose of estimating air pollution levels in the vicinity of petrochemical plants. We studied oil refineries and coal-fired power plants in the No. 6 Naphtha Cracking Complex, a 2,603-ha area situated on the central west coast of Taiwan. The pollutants targeted were vanadium (V) from oil refineries and arsenic (As) from coal-fired power plants. We applied a backward fitting method to determine emission rates of V and As, with 192 PM{sub 10} filters originally collected between 2009 and 2012. Our first-stage model estimated emission rates of V and As (median and 95% confidence intervals) at 0.0202 (0.0040–0.1063) and 0.1368 (0.0398–0.4782) g/s, respectively. In our second-stage model, the predicted zone-average concentrations showed a strong correlation with V, but a poor correlation with As. Our findings show that two-stage dispersion models are relatively precise for estimating V levels at residents’ addresses near the petrochemical complex, but they did not work as well for As levels. In conclusion, our model-based approach can be widely used for modeling exposure to air pollution from industrial areas in countries with limited resources.
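
    A minimal sketch of the backward-fitting idea, under strong simplifying assumptions (a single source, a ground-level Gaussian plume with hypothetical dispersion coefficients, and multiplicative measurement noise); none of the numbers below come from the study.

```python
import math, random

def unit_concentration(x, y, u=2.0):
    # Ground-level Gaussian-plume concentration per unit emission rate (1 g/s)
    # at downwind distance x and crosswind offset y (metres); the dispersion
    # coefficients are hypothetical stability-class power laws.
    sigma_y = 0.22 * x ** 0.9
    sigma_z = 0.12 * x ** 0.9
    return math.exp(-(y * y) / (2 * sigma_y ** 2)) / (math.pi * u * sigma_y * sigma_z)

def backfit_emission_rate(receptors, measured):
    # Least-squares "backward fit": choose Q minimizing sum_i (C_i - Q*K_i)^2,
    # which gives Q = sum(K_i * C_i) / sum(K_i^2).
    K = [unit_concentration(x, y) for x, y in receptors]
    return sum(k * c for k, c in zip(K, measured)) / sum(k * k for k in K)

rng = random.Random(7)
receptors = [(500.0, 0.0), (1000.0, 50.0), (2000.0, 100.0), (3000.0, 0.0)]
true_Q = 0.02  # g/s, the same order of magnitude as the vanadium estimate above
measured = [true_Q * unit_concentration(x, y) * rng.uniform(0.8, 1.2)
            for x, y in receptors]
Q_hat = backfit_emission_rate(receptors, measured)
```

    The second stage would then run the plume model forward with the fitted Q to predict concentrations at residents' addresses.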

  14. A multi-regional two-stage cournot model for analyzing competition in the German electricity market

    OpenAIRE

    Ellersdorfer, Ingo

    2005-01-01

    In this paper a model based analysis of competition in the German electricity market is presented. Applying a multi-regional two-stage model, which captures interregional transmission constraints and the impact of forward trading on spot market decisions, potential for exercising market power of the four dominant electricity producers has been found. Assuming Cournot behavior in the spot market, it has been shown to what extent network reinforcement between Germany and some of its neighboring...

  15. New numerical model for thermal quenching mechanism in quartz based on two-stage thermal stimulation of thermoluminescence model

    Directory of Open Access Journals (Sweden)

    Ahmed Kadari

    2015-11-01

    Full Text Available The effect of thermal quenching plays an important role in the thermoluminescence (TL) of quartz, on which many applications of TL are based. Studies of the stability and kinetics of the 325 °C thermoluminescence peak in quartz by Wintle (1975) showed the occurrence of thermal quenching, i.e., the decrease in luminescence efficiency with rising temperature. The thermal quenching of thermoluminescence in quartz has been studied experimentally by several authors. The simulation work presented in the literature is based on the single-stage thermal stimulation model of thermoluminescence, yet the mechanisms of this effect remain incompletely understood. This paper presents a new numerical model for thermal quenching in quartz, using the previously published two-stage thermal stimulation of thermoluminescence model.
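
    For context, thermal quenching is commonly described by a Mott-Seitz-type efficiency factor. A minimal sketch, using the activation energy W and dimensionless constant C usually quoted for quartz after Wintle (1975):

```python
import math

K_B = 8.617e-5  # Boltzmann constant in eV/K

def quenching_efficiency(T, W=0.64, C=2.8e7):
    # Mott-Seitz luminescence efficiency: eta(T) = 1 / (1 + C*exp(-W/(k_B*T))).
    # W = 0.64 eV and C = 2.8e7 are the values commonly cited for quartz.
    return 1.0 / (1.0 + C * math.exp(-W / (K_B * T)))

eta_room = quenching_efficiency(300.0)   # near unity at room temperature
eta_peak = quenching_efficiency(600.0)   # strongly quenched near the 325 C peak
```

    The sharp drop of eta(T) across the readout temperature range is what any TL simulation, single- or two-stage, must reproduce.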

  16. Hadoop-based two-stage parallel fuzzy c-Means clustering algorithm

    Institute of Scientific and Technical Information of China (English)

    胡吉朝; 黄红艳

    2016-01-01

    Under the MapReduce mechanism, communication time occupies too large a share of the algorithm's runtime, which limits its practical value. To address this, we propose a Hadoop-based two-stage parallel fuzzy c-Means clustering algorithm for classifying extremely large data sets. First, we improve the MPI communication management method under the MapReduce mechanism, using a membership-management-protocol mode to synchronize member management with the MapReduce reduce operation. Second, we replace the global individual reduce operation with a reduce operation over groups of typical individuals, and define a two-stage buffering algorithm. Finally, the first-stage buffer further reduces the data volume entering the second-stage MapReduce operation, limiting as far as possible the negative impact of big data on the algorithm. On this basis, simulations using an artificial big-data test set and the KDD CUP 99 intrusion test set show that the algorithm guarantees the required clustering precision while effectively accelerating execution.
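
    For reference, the two alternating updates that the Map and Reduce stages parallelize are the standard fuzzy c-Means iteration. A minimal serial sketch on 1-D toy data (not the paper's distributed implementation):

```python
import random

def fuzzy_c_means(data, c=2, m=2.0, iters=50, seed=0):
    # Serial fuzzy c-Means on 1-D data; a MapReduce version distributes the
    # membership update (map side) and the weighted sums (reduce side).
    rng = random.Random(seed)
    centers = rng.sample(data, c)
    n = len(data)
    for _ in range(iters):
        # membership update: u_ik = 1 / sum_j (d_ik / d_jk)^(2/(m-1))
        U = []
        for x in data:
            d = [abs(x - v) + 1e-12 for v in centers]
            U.append([1.0 / sum((d[i] / d[j]) ** (2.0 / (m - 1.0))
                                for j in range(c)) for i in range(c)])
        # center update: weighted means with weights u^m
        centers = [sum(U[k][i] ** m * data[k] for k in range(n)) /
                   sum(U[k][i] ** m for k in range(n)) for i in range(c)]
    return sorted(centers)

centers = fuzzy_c_means([1.0, 1.2, 0.8, 5.0, 5.3, 4.7])
```

    On this toy set the two centers settle near the two cluster means, 1.0 and 5.0.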

  17. A queuing-theory-based interval-fuzzy robust two-stage programming model for environmental management under uncertainty

    Science.gov (United States)

    Sun, Y.; Li, Y. P.; Huang, G. H.

    2012-06-01

    In this study, a queuing-theory-based interval-fuzzy robust two-stage programming (QB-IRTP) model is developed through introducing queuing theory into an interval-fuzzy robust two-stage (IRTP) optimization framework. The developed QB-IRTP model can not only address highly uncertain information for the lower and upper bounds of interval parameters but also be used for analysing a variety of policy scenarios that are associated with different levels of economic penalties when the promised targets are violated. Moreover, it can reflect uncertainties in queuing theory problems. The developed method has been applied to a case of long-term municipal solid waste (MSW) management planning. Interval solutions associated with different waste-generation rates, different waiting costs and different arriving rates have been obtained. They can be used for generating decision alternatives and thus help managers to identify desired MSW management policies under various economic objectives and system reliability constraints.
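
    As a small illustration of the queuing component embedded in such a model, the standard steady-state M/M/1 formulas link arrival and service rates (say, of waste-hauling trucks at a receiving facility) to an expected waiting cost; the rates and unit cost below are hypothetical.

```python
def mm1_metrics(lam, mu):
    # Steady-state M/M/1 formulas; requires utilization rho = lam/mu < 1.
    assert lam < mu, "arrival rate must be below service rate"
    rho = lam / mu
    W = 1.0 / (mu - lam)     # mean time in system
    Wq = rho / (mu - lam)    # mean waiting time in queue
    L = lam * W              # mean number in system (Little's law)
    Lq = lam * Wq            # mean queue length
    return {"rho": rho, "L": L, "Lq": Lq, "W": W, "Wq": Wq}

def hourly_waiting_cost(lam, mu, cost_per_truck_hour):
    # Expected queuing cost per hour: arrivals/hour * mean wait * unit cost.
    return lam * mm1_metrics(lam, mu)["Wq"] * cost_per_truck_hour

m = mm1_metrics(4.0, 5.0)                 # 4 trucks/h arriving, 5/h served
cost = hourly_waiting_cost(4.0, 5.0, 10.0)
```

    Terms like this waiting cost are what the QB-IRTP framework carries into the objective function alongside the usual waste-management costs.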

  18. Parallel Software Model Checking

    Science.gov (United States)

    2015-01-08

    Report documentation page, January 2015. Title: Parallel Software Model Checking. Performing organization: Software Engineering Institute, Carnegie Mellon University, Pittsburgh, PA 15213. Team members: Sagar Chaki, Arie Gurfinkel.

  19. A manufacturing quality assessment model based-on two stages interval type-2 fuzzy logic

    Science.gov (United States)

    Purnomo, Muhammad Ridwan Andi; Helmi Shintya Dewi, Intan

    2016-01-01

    This paper presents the development of an assessment model for manufacturing quality using Interval Type-2 Fuzzy Logic (IT2-FL). The proposed model is developed based on one of the building blocks of sustainable supply chain management (SSCM), namely the benefits of SCM, and focuses on quality. The proposed model can be used to predict the quality level of the production chain in a company. The quality of production affects the quality of the product. In practice, the quality of production is unique for every type of production system; hence, expert opinion plays a major role in developing the assessment model. The model becomes more complicated when the data contain ambiguity and uncertainty. In this study, IT2-FL is used to model the ambiguity and uncertainty. A case study taken from a company in Yogyakarta shows that the proposed manufacturing quality assessment model works well in determining the quality level of production.

  20. A Two-Stage Approach to Synthesizing Covariance Matrices in Meta-Analytic Structural Equation Modeling

    Science.gov (United States)

    Cheung, Mike W. L.; Chan, Wai

    2009-01-01

    Structural equation modeling (SEM) is widely used as a statistical framework to test complex models in behavioral and social sciences. When the number of publications increases, there is a need to systematically synthesize them. Methodology of synthesizing findings in the context of SEM is known as meta-analytic SEM (MASEM). Although correlation…

  2. A Two-Stage Combining Classifier Model for the Development of Adaptive Dialog Systems.

    Science.gov (United States)

    Griol, David; Iglesias, José Antonio; Ledezma, Agapito; Sanchis, Araceli

    2016-02-01

    This paper proposes a statistical framework to develop user-adapted spoken dialog systems. The proposed framework integrates two main models. The first model is used to predict the user's intention during the dialog. The second model uses this prediction and the history of the dialog up to the current moment to predict the next system response. This prediction is performed with an ensemble-based classifier trained for each of the tasks considered, so that a better selection of the next system response can be attained by weighting the outputs of these specialized classifiers. The codification of the information and the definition of data structures to store the data supplied by the user throughout the dialog make the estimation of the models from the training data and practical domains manageable. We describe our proposal and its application and detailed evaluation in a practical spoken dialog system.

  3. Two-stage Turing model for generating pigment patterns on the leopard and the jaguar

    Science.gov (United States)

    Liu, R. T.; Liaw, S. S.; Maini, P. K.

    2006-07-01

    Based on the results of phylogenetic analysis, which showed that flecks are the primitive pattern of the felid family and all other patterns including rosettes and blotches develop from it, we construct a Turing reaction-diffusion model which generates spot patterns initially. Starting from this spotted pattern, we successfully generate patterns of adult leopards and jaguars by tuning parameters of the model in the subsequent phase of patterning.
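
    The mechanism behind the initial spot formation is the Turing (diffusion-driven) instability: a steady state that is stable without diffusion becomes unstable at finite wavenumber once the inhibitor diffuses sufficiently faster than the activator. A minimal sketch of the dispersion relation, using a hypothetical Jacobian rather than the paper's actual kinetics:

```python
def growth_rate(k2, J, Du, Dv):
    # Largest real part of the eigenvalues of the reaction-diffusion system
    # linearized about the homogeneous steady state, at squared wavenumber k2.
    fu, fv, gu, gv = J
    a, d = fu - Du * k2, gv - Dv * k2
    tr, det = a + d, a * d - fv * gu
    disc = tr * tr - 4.0 * det
    return (tr + disc ** 0.5) / 2.0 if disc >= 0 else tr / 2.0

# hypothetical activator-inhibitor Jacobian, stable without diffusion
J = (0.5, 1.0, -1.0, -1.5)        # (f_u, f_v, g_u, g_v)
Du, Dv = 1.0, 40.0                # inhibitor diffuses much faster

rates = [(growth_rate((0.05 * i) ** 2, J, Du, Dv), 0.05 * i)
         for i in range(1, 120)]
max_rate, k_star = max(rates)
```

    A positive maximal growth rate at some k > 0, together with decay at k = 0, is exactly the signature of spontaneous patterning; the corresponding wavelength 2*pi/k sets the spot spacing that parameter tuning then reshapes into rosettes or blotches.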

  4. A Two-Stage Process Model of Sensory Discrimination: An Alternative to Drift-Diffusion.

    Science.gov (United States)

    Sun, Peng; Landy, Michael S

    2016-11-02

    Discrimination of the direction of motion of a noisy stimulus is an example of sensory discrimination under uncertainty. For stimuli that are extended in time, reaction time is quicker for larger signal values (e.g., discrimination of opposite directions of motion compared with neighboring orientations) and larger signal strength (e.g., stimuli with higher contrast or motion coherence, that is, lower noise). The standard model of neural responses (e.g., in lateral intraparietal cortex) and reaction time for discrimination is drift-diffusion. This model makes two clear predictions. (1) The effects of signal strength and value on reaction time should interact multiplicatively because the diffusion process depends on the signal-to-noise ratio. (2) If the diffusion process is interrupted, as in a cued-response task, the time to decision after the cue should be independent of the strength of accumulated sensory evidence. In two experiments with human participants, we show that neither prediction holds. A simple alternative model is developed that is consistent with the results. In this estimate-then-decide model, evidence is accumulated until estimation precision reaches a threshold value. Then, a decision is made with duration that depends on the signal-to-noise ratio achieved by the first stage. Sensory decision-making under uncertainty is usually modeled as the slow accumulation of noisy sensory evidence until a threshold amount of evidence supporting one of the possible decision outcomes is reached. Furthermore, it has been suggested that this accumulation process is reflected in neural responses, e.g., in lateral intraparietal cortex. We derive two behavioral predictions of this model and show that neither prediction holds. We introduce a simple alternative model in which evidence is accumulated until a sufficiently precise estimate of the stimulus is achieved, and then that estimate is used to guide the discrimination decision. This model is consistent with the
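
    For reference, the standard drift-diffusion model that the paper argues against can be simulated in a few lines; this sketch (with illustrative parameters) reproduces the two classic effects the abstract mentions: weaker signals give slower and less accurate decisions.

```python
import random

def ddm_trial(drift, rng, threshold=1.0, dt=0.001):
    # One drift-diffusion trial: noisy evidence accumulates until it hits
    # +threshold (correct) or -threshold (error); unit diffusion coefficient.
    x, t = 0.0, 0.0
    sqrt_dt = dt ** 0.5
    while abs(x) < threshold:
        x += drift * dt + sqrt_dt * rng.gauss(0.0, 1.0)
        t += dt
    return t, x > 0.0

rng = random.Random(42)
strong = [ddm_trial(2.0, rng) for _ in range(300)]   # high signal strength
weak = [ddm_trial(0.5, rng) for _ in range(300)]     # low signal strength
mean_rt_strong = sum(t for t, _ in strong) / len(strong)
mean_rt_weak = sum(t for t, _ in weak) / len(weak)
acc_strong = sum(ok for _, ok in strong) / len(strong)
acc_weak = sum(ok for _, ok in weak) / len(weak)
```

    The estimate-then-decide alternative replaces the single accumulate-to-bound race with a first stage that accumulates until estimation precision is reached, followed by a decision whose duration depends on the achieved signal-to-noise ratio.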

  5. Assessing vanadium and arsenic exposure of people living near a petrochemical complex with two-stage dispersion models.

    Science.gov (United States)

    Chio, Chia-Pin; Yuan, Tzu-Hsuen; Shie, Ruei-Hao; Chan, Chang-Chuan

    2014-04-30

    The goal of this study is to demonstrate that it is possible to construct a two-stage dispersion model empirically for the purpose of estimating air pollution levels in the vicinity of petrochemical plants. We studied oil refineries and coal-fired power plants in the No. 6 Naphtha Cracking Complex, a 2,603-ha area situated on the central west coast of Taiwan. The pollutants targeted were vanadium (V) from oil refineries and arsenic (As) from coal-fired power plants. We applied a backward fitting method to determine emission rates of V and As, with 192 PM10 filters originally collected between 2009 and 2012. Our first-stage model estimated emission rates of V and As (median and 95% confidence intervals) at 0.0202 (0.0040-0.1063) and 0.1368 (0.0398-0.4782) g/s, respectively. In our second-stage model, the predicted zone-average concentrations showed a strong correlation with V, but a poor correlation with As. Our findings show that two-stage dispersion models are relatively precise for estimating V levels at residents' addresses near the petrochemical complex, but they did not work as well for As levels. In conclusion, our model-based approach can be widely used for modeling exposure to air pollution from industrial areas in countries with limited resources.

  6. Simplified mechanistic model for the two-stage anaerobic degradation of sewage sludge.

    Science.gov (United States)

    Donoso-Bravo, Andrés; Pérez-Elvira, Sara; Fdz-Polanco, Fernando

    2015-01-01

    Two-phase anaerobic systems are being increasingly implemented for the treatment of both sewage sludge and the organic fraction of municipal solid waste. Despite the large number of mathematical models of anaerobic digestion, few have been applied to two-phase systems. In this study, a three-reaction mechanistic model has been developed, implemented and validated using experimental data from a long-term two-phase anaerobic digestion (TPAD) system treating sewage sludge. A sensitivity analysis shows that the most influential parameters of the model are those related to the hydrolysis reaction and the activity of methanogens in the thermophilic reactor. The calibration procedure highlights a noticeable growth rate of the thermophilic methanogens throughout the evaluation period. Overall, all the measured variables are properly predicted by the model during both the calibration and the cross-validation periods. The model's representation of the organic matter behaviour is quite good. The most important disagreements are observed for the biogas production, especially during the validation period. The whole application procedure underlines the ability of the model to properly predict the behaviour of this bioprocess.
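
    The three-reaction structure can be illustrated with a toy mass-balance integration (first-order hydrolysis of particulates followed by Monod-type methanogenic uptake); all parameters and initial conditions below are hypothetical, not the calibrated values from the study.

```python
def simulate_digester(days=30.0, dt=0.01):
    # Euler integration of a minimal reaction chain: particulate matter
    # --hydrolysis--> soluble substrate --uptake--> biomass growth + methane.
    k_hyd = 0.3                                   # 1/day, first-order hydrolysis
    mu_max, Ks, Y, k_dec = 0.4, 0.5, 0.1, 0.02    # Monod methanogenesis
    Xp, S, X, CH4 = 10.0, 0.5, 0.2, 0.0           # g/L state variables
    for _ in range(int(days / dt)):
        r_hyd = k_hyd * Xp                 # hydrolysis rate
        mu = mu_max * S / (Ks + S)         # specific growth rate of methanogens
        r_upt = mu * X / Y                 # substrate uptake rate
        Xp += -r_hyd * dt
        S += (r_hyd - r_upt) * dt
        X += (mu - k_dec) * X * dt         # growth minus decay
        CH4 += (1.0 - Y) * r_upt * dt      # non-assimilated fraction to methane
    return Xp, S, CH4

Xp_end, S_end, CH4_total = simulate_digester()
```

    After 30 simulated days, nearly all particulate matter is hydrolyzed and most of the substrate has been converted, so the cumulative methane approaches (1 - Y) times the degraded load.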

  7. Two-Stage Analysis on Models for Quantitative Differentiation of Early-Pathological Bladder States

    Directory of Open Access Journals (Sweden)

    Nina Kalyagina

    2014-01-01

    Full Text Available A mathematical simulation method was developed for visualization of the diffusely reflected light on the surface of three-layered models of the urinary bladder wall. Five states, from normal to precancerous, of the urinary bladder epithelium were simulated. With the use of solutions of classical electrodynamics equations, scattering coefficients μs and asymmetry parameters g of the bladder epithelium were found in order to perform Monte Carlo calculations. The results, compared with the experimental studies, have revealed the influence of the changes in absorption and scattering properties on diffuse-reflectance signal distributions on the surfaces of the modelled media.
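
    The Monte Carlo part can be illustrated with a deliberately crude 1-D photon-weight random walk; this is not the study's layered 3-D code, and the optical coefficients are hypothetical, but it reproduces the qualitative effect the authors exploit: higher absorption lowers the diffuse-reflectance signal.

```python
import math, random

def diffuse_reflectance(mu_s, mu_a, n_photons=5000, max_steps=500, seed=3):
    # Photons enter a semi-infinite medium at z = 0, take exponentially
    # distributed free paths, lose a fraction of weight (the single-scattering
    # albedo) at each interaction, and rescatter isotropically; weight crossing
    # back through z = 0 is scored as diffuse reflectance.
    rng = random.Random(seed)
    mu_t = mu_s + mu_a
    albedo = mu_s / mu_t
    reflected = 0.0
    for _ in range(n_photons):
        z, cos_t, weight = 0.0, 1.0, 1.0
        for _ in range(max_steps):
            z += cos_t * (-math.log(1.0 - rng.random()) / mu_t)  # free path
            if z <= 0.0:
                reflected += weight
                break
            weight *= albedo                       # partial absorption
            cos_t = rng.uniform(-1.0, 1.0)         # isotropic rescattering
    return reflected / n_photons

r_low_abs = diffuse_reflectance(mu_s=100.0, mu_a=1.0)    # weakly absorbing layer
r_high_abs = diffuse_reflectance(mu_s=100.0, mu_a=10.0)  # more absorbing layer
```

    Comparing the two runs shows the drop in surface signal that distinguishes the more absorbing (pathological) epithelium states in the full model.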

  8. A two-stage model for a day-ahead paratransit planning problem

    NARCIS (Netherlands)

    Cremers, Maria L. A. G.; Klein Haneveld, Willem K.; van der Vlerk, Maarten H.

    We consider a dynamic planning problem for paratransit transportation. The focus is on a decision to take one day ahead: which requests to serve with own vehicles, and which requests to subcontract to taxis? We call this problem the day-ahead paratransit planning problem. The developed model is a

  10. A two-stage unsupervised learning algorithm reproduces multisensory enhancement in a neural network model of the corticotectal system.

    Science.gov (United States)

    Anastasio, Thomas J; Patton, Paul E

    2003-07-30

    Multisensory enhancement (MSE) is the augmentation of the response to sensory stimulation of one modality by stimulation of a different modality. It has been described for multisensory neurons in the deep superior colliculus (DSC) of mammals, which function to detect, and direct orienting movements toward, the sources of stimulation (targets). MSE would seem to improve the ability of DSC neurons to detect targets, but many mammalian DSC neurons are unimodal. MSE requires descending input to DSC from certain regions of parietal cortex. Paradoxically, the descending projections necessary for MSE originate from unimodal cortical neurons. MSE, and the puzzling findings associated with it, can be simulated using a model of the corticotectal system. In the model, a network of DSC units receives primary sensory input that can be augmented by modulatory cortical input. Connection weights from primary and modulatory inputs are trained in stages one (Hebb) and two (Hebb-anti-Hebb), respectively, of an unsupervised two-stage algorithm. Two-stage training causes DSC units to extract information concerning simulated targets from their inputs. It also causes the DSC to develop a mixture of unimodal and multisensory units. The percentage of DSC multisensory units is determined by the proportion of cross-modal targets and by primary input ambiguity. Multisensory DSC units develop MSE, which depends on unimodal modulatory connections. Removal of the modulatory influence greatly reduces MSE but has little effect on DSC unit responses to stimuli of a single modality. The correspondence between model and data suggests that two-stage training captures important features of self-organization in the real corticotectal system.
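
    The flavor of the stage-one Hebbian phase can be sketched with Oja's normalized Hebb rule, which drives a unit's weights toward the dominant correlation in its inputs (here, two correlated "cross-modal" channels win over an uncorrelated one); this is a schematic stand-in, not the paper's two-stage corticotectal network.

```python
import random

def oja_stage(patterns, n_in, eta=0.05, iters=1000, seed=0):
    # Hebbian learning with Oja's normalization:
    #   w <- w + eta * y * (x - y * w),  where y = w . x
    rng = random.Random(seed)
    w = [rng.uniform(-0.1, 0.1) for _ in range(n_in)]
    for _ in range(iters):
        x = rng.choice(patterns)
        y = sum(wi * xi for wi, xi in zip(w, x))
        w = [wi + eta * y * (xi - y * wi) for wi, xi in zip(w, x)]
    return w

# two correlated channels (think: visual and auditory input driven by the
# same cross-modal target) plus one independent channel
patterns = [[1.0, 1.0, 0.0], [1.0, 1.0, 0.0], [0.0, 0.0, 1.0]]
w = oja_stage(patterns, 3)
```

    The weight vector converges to (roughly) equal weights on the two correlated channels and near-zero weight on the independent one; the paper's second, Hebb-anti-Hebb stage then trains the modulatory connections that produce MSE on top of such units.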

  11. Effects of Risk Aversion on Market Outcomes: A Stochastic Two-Stage Equilibrium Model

    DEFF Research Database (Denmark)

    Kazempour, Jalal; Pinson, Pierre

    2016-01-01

    This paper evaluates how different risk preferences of electricity producers alter the market-clearing outcomes. Toward this goal, we propose a stochastic equilibrium model for electricity markets with two settlements, i.e., day-ahead and balancing, in which a number of conventional and stochastic...... by its optimality conditions, resulting in a mixed complementarity problem. Numerical results from a case study based on the IEEE one-area reliability test system are derived and discussed....

  12. Probabilistic Quantitative Precipitation Forecasting using a Two-Stage Spatial Model

    Science.gov (United States)

    2008-04-08

    The two-stage spatial model is driven by latent Gaussian processes: the first process drives the precipitation occurrence decision via a truncation; the second process drives precipitation amounts via a spatially varying anamorphosis or transformation function (Chilès and Delfiner 1999, p. 381).
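
    The two-stage construction can be mimicked in a few lines: a latent Gaussian variable is truncated to decide occurrence and transformed to produce amounts; the power-law transformation and all parameters below are stand-ins for a fitted anamorphosis.

```python
import random

def two_stage_precip(mean_z, n, rng, power=1.5, scale=4.0):
    # Latent Gaussian Z drives both stages: occurrence by truncation
    # (rain iff Z > 0) and rain amount by a power-law transformation of Z.
    amounts = []
    for _ in range(n):
        z = rng.gauss(mean_z, 1.0)
        amounts.append(scale * z ** power if z > 0.0 else 0.0)
    return amounts

rng = random.Random(11)
dry_site = two_stage_precip(-0.8, 5000, rng)   # low latent mean: mostly dry
wet_site = two_stage_precip(0.4, 5000, rng)    # high latent mean: mostly wet
p_rain_dry = sum(a > 0 for a in dry_site) / len(dry_site)
p_rain_wet = sum(a > 0 for a in wet_site) / len(wet_site)
```

    The same latent field thus produces both the rain/no-rain probability and a skewed, nonnegative amount distribution, which is what makes the construction convenient for probabilistic forecasts.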

  13. Two-stage model in perceptual learning: toward a unified theory.

    Science.gov (United States)

    Shibata, Kazuhisa; Sagi, Dov; Watanabe, Takeo

    2014-05-01

    Training or exposure to a visual feature leads to a long-term improvement in performance on visual tasks that employ this feature. Such performance improvements and the processes that govern them are called visual perceptual learning (VPL). As an ever greater volume of research accumulates in the field, we have reached a point where a unifying model of VPL should be sought. A new wave of research findings has exposed diverging results along three major directions in VPL: specificity versus generalization of VPL, lower versus higher brain locus of VPL, and task-relevant versus task-irrelevant VPL. In this review, we propose a new theoretical model that suggests the involvement of two different stages in VPL: a low-level, stimulus-driven stage, and a higher-level stage dominated by task demands. If experimentally verified, this model would not only constructively unify the current divergent results in the VPL field, but would also lead to a significantly better understanding of visual plasticity, which may, in turn, lead to interventions to ameliorate diseases affecting vision and other pathological or age-related visual and nonvisual declines.

  14. Research on Operation Strategy for Bundled Wind-thermal Generation Power Systems Based on Two-Stage Optimization Model

    Science.gov (United States)

    Sun, Congcong; Wang, Zhijie; Liu, Sanming; Jiang, Xiuchen; Sheng, Gehao; Liu, Tianyu

    2017-05-01

    Wind power has the advantages of being clean and non-polluting, and the development of bundled wind-thermal generation power systems (BWTGSs) is one of the important means of improving the wind power accommodation rate and implementing the “clean alternative” on the generation side. A two-stage optimization strategy for BWTGSs considering wind speed forecasting results and load characteristics is proposed. By taking into account short-term wind speed forecasts on the generation side and load characteristics on the demand side, a two-stage optimization model for BWTGSs is formulated. Using the environmental benefit index of BWTGSs as the objective function, and supply-demand balance and generator operation as the constraints, the first-stage optimization model is developed with chance-constrained programming theory. Using the operating cost of BWTGSs as the objective function, the second-stage optimization model is developed with a greedy algorithm. An improved PSO algorithm is employed to solve the model, and numerical tests verify the effectiveness of the proposed strategy.

  15. A Mathematical Programming Model for Tactical Planning with Set-up Continuity in a Two-stage Ceramic Firm

    Directory of Open Access Journals (Sweden)

    David Pérez Perales

    2016-07-01

    Full Text Available It is known that capacity issues in tactical production plans in a hierarchical context are relevant, since their inaccurate determination may lead to unrealistic or simply non-feasible plans at the operational level. Semi-continuous industrial processes, such as ceramic ones, often imply large setups, and their consideration is crucial for accurate capacity estimation. However, in most production planning models developed in a hierarchical context at this tactical (aggregated) level, setup changes are not explicitly considered. Their consideration includes not only decisions about lot sizing of production, but also allocation, known as the Capacitated Lot Sizing and Loading Problem (CLSLP). However, the CLSLP does not account for set-up continuity, which is especially important in contexts with lengthy and costly set-ups and where product families' minimum run lengths are similar to planning periods. In this work, a mixed integer linear programming (MILP) model for a two-stage ceramic firm which accounts for lot sizing and loading decisions, including minimum lot sizes and set-up continuity between two consecutive periods, is proposed. Set-up continuity is modelled by considering only which product families are produced at the beginning and at the end of each period, not the complete sequence. The model is solved over a simplified two-stage real case from a Spanish ceramic firm. The obtained results confirm its validity.

  16. Two-stage dissipation in a superconducting microbridge: experiment and modeling

    Energy Technology Data Exchange (ETDEWEB)

    Del Rio, L; Altshuler, E [Superconductivity Laboratory, IMRE-Physics Faculty, University of Havana, 10400 Havana (Cuba); Niratisairak, S; Haugen, Oe; Johansen, T H [Department of Physics, University of Oslo, 0316 Oslo (Norway); Davidson, B A [INFM-TASC Area Science Park, Basovizza (Italy); Testa, G; Sarnelli, E [Cybernetic Institute of the CNR, Via Campi Flegrei 34, 80078, Pozzuoli (Italy)

    2010-08-15

    Using fluorescent microthermal imaging we have investigated the origin of 'two-step' behavior in I-V curves for a current-carrying YBa{sub 2}Cu{sub 3}O{sub x} superconducting bridge. High resolution temperature maps reveal that as the applied current increases the first step in the voltage corresponds to local dissipation (hot spot), whereas the second step is associated with the onset of global dissipation throughout the entire bridge. A quantitative explanation of the experimental results is provided by a simple model for an inhomogeneous superconductor, assuming that the hot spot nucleates at a location with slightly depressed superconducting properties.

  17. A GRASP model in network design for two-stage supply chain

    Directory of Open Access Journals (Sweden)

    Hassan Javanshir

    2011-04-01

    Full Text Available We consider a capacitated facility location problem (CFLP) which contains a production facility and distribution centers (DCs) supplying retailers' demand. The primary purpose is to locate the distribution centers in the network, and the objective is to minimize the sum of fixed facility location, pipeline inventory, safety stock and lost sales costs. We use a greedy randomized adaptive search procedure (GRASP) to solve the model. The preliminary results indicate that the proposed method could provide competitive results in a reasonable amount of time.
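A minimal GRASP sketch for a facility-location problem of this flavour. The costs are hypothetical and the objective is plain fixed-cost plus service-cost (the paper's model also prices pipeline inventory, safety stock and lost sales); construction uses a restricted candidate list, followed by a close/swap local search.

```python
import random

# Toy uncapacitated facility-location instance (hypothetical data).
fixed = [12.0, 10.0, 15.0, 9.0]        # DC opening costs
serve = [[3, 8, 6, 9],                 # serve[j][i]: cost for DC j to supply retailer i
         [7, 2, 5, 8],
         [4, 6, 2, 3],
         [8, 7, 6, 2]]

def total_cost(open_dcs):
    if not open_dcs:
        return float("inf")
    assign = sum(min(serve[j][i] for j in open_dcs) for i in range(4))
    return assign + sum(fixed[j] for j in open_dcs)

def grasp(iters=200, alpha=0.3, seed=1):
    rng, best = random.Random(seed), None
    for _ in range(iters):
        # Greedy randomized construction: add a DC drawn from the restricted
        # candidate list (RCL) while doing so improves the solution.
        sol = set()
        while True:
            gains = {j: total_cost(sol | {j}) for j in range(4) if j not in sol}
            improving = {j: c for j, c in gains.items() if c < total_cost(sol)}
            if not improving:
                break
            lo, hi = min(improving.values()), max(improving.values())
            rcl = [j for j, c in improving.items() if c <= lo + alpha * (hi - lo)]
            sol.add(rng.choice(rcl))
        # Local search: try closing one DC or swapping it for a closed one.
        improved = True
        while improved:
            improved = False
            for j in list(sol):
                for cand in [sol - {j}] + [sol - {j} | {k} for k in range(4) if k not in sol]:
                    if total_cost(cand) < total_cost(sol):
                        sol, improved = set(cand), True
        if best is None or total_cost(sol) < total_cost(best):
            best = set(sol)
    return best, total_cost(best)

sol, cost = grasp()
print(sol, cost)
```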

  18. Two-stage Hidden Markov Model in Gesture Recognition for Human Robot Interaction

    Directory of Open Access Journals (Sweden)

    Nhan Nguyen-Duc-Thanh

    2012-07-01

    Full Text Available Hidden Markov Model (HMM) is very rich in mathematical structure and hence can form the theoretical basis for use in a wide range of applications, including gesture representation. Most research in this field, however, uses HMM only for recognizing simple gestures, while HMM can definitely be applied to whole-gesture-meaning recognition. This is very effectively applicable in Human-Robot Interaction (HRI). In this paper, we introduce an approach for HRI in which not only can the human naturally control the robot by hand gesture, but the robot can also recognize what kind of task it is executing. The main idea behind this method is the two-stage Hidden Markov Model. The first HMM recognizes the prime, command-like gestures. Based on the sequence of prime gestures recognized in the first stage, which represents the whole action, the second HMM performs task recognition. Another contribution of this paper is that we use a mixed Gaussian output distribution in the HMM to improve the recognition rate. In the experiments, we also compare different numbers of hidden states and mixture components to obtain the optimal ones, and compare with other methods to evaluate performance.
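The two-stage idea can be sketched with discrete HMMs: a first bank of HMMs labels observation segments as prime gestures, and a second HMM classifies the resulting label sequence as a task. Everything below — gesture names, symbols, and all probabilities (shared transition matrix included) — is a toy illustration, not the paper's trained mixed-Gaussian models.

```python
import numpy as np

def forward_loglik(obs, pi, A, B):
    """Log-likelihood of a discrete observation sequence under an HMM,
    computed with the scaled forward algorithm."""
    alpha = pi * B[:, obs[0]]
    ll = np.log(alpha.sum())
    alpha /= alpha.sum()
    for o in obs[1:]:
        alpha = (alpha @ A) * B[:, o]
        ll += np.log(alpha.sum())
        alpha /= alpha.sum()
    return ll

# Stage 1: hand-specified 2-state HMMs for two prime gestures (toy numbers;
# for brevity both stages reuse the same pi and A).
pi = np.array([0.6, 0.4])
A  = np.array([[0.7, 0.3], [0.4, 0.6]])
prime_hmms = {
    "wave":  np.array([[0.8, 0.1, 0.1], [0.1, 0.8, 0.1]]),  # favours symbols 0/1
    "point": np.array([[0.1, 0.1, 0.8], [0.1, 0.8, 0.1]]),  # favours symbols 2/1
}

def recognize_prime(obs):
    return max(prime_hmms, key=lambda g: forward_loglik(obs, pi, A, prime_hmms[g]))

# Stage 2: an HMM over the *sequence of prime-gesture labels* picks the task.
label_id = {"wave": 0, "point": 1}
task_hmms = {
    "fetch": np.array([[0.3, 0.7], [0.1, 0.9]]),  # point-heavy label sequences
    "greet": np.array([[0.7, 0.3], [0.9, 0.1]]),  # wave-heavy label sequences
}

def recognize_task(segments):
    labels = [label_id[recognize_prime(s)] for s in segments]
    return max(task_hmms, key=lambda t: forward_loglik(labels, pi, A, task_hmms[t]))

# A gesture stream split into segments: symbols 0/1 read as "wave", 2/1 as "point".
segments = [[0, 0, 1, 0], [2, 1, 2, 2], [2, 2, 1, 2]]
print(recognize_task(segments))
```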

  20. ADM1-based modeling of methane production from acidified sweet sorghum extract in a two stage process.

    Science.gov (United States)

    Antonopoulou, Georgia; Gavala, Hariklia N; Skiadas, Ioannis V; Lyberatos, Gerasimos

    2012-02-01

    The present study focused on the application of the Anaerobic Digestion Model 1 on the methane production from acidified sorghum extract generated from a hydrogen producing bioreactor in a two-stage anaerobic process. The kinetic parameters for hydrogen and volatile fatty acids consumption were estimated through fitting of the model equations to the data obtained from batch experiments. The simulation of the continuous reactor performance at all HRTs tested (20, 15, and 10d) was very satisfactory. Specifically, the largest deviation of the theoretical predictions against the experimental data was 12% for the methane production rate at the HRT of 20d while the deviation values for the 15 and 10d HRT were 1.9% and 1.1%, respectively. The model predictions regarding pH, methane percentage in the gas phase and COD removal were in very good agreement with the experimental data with a deviation less than 5% for all steady states. Therefore, the ADM1 is a valuable tool for process design in the case of a two-stage anaerobic process as well.

  1. One- and two-stage Arrhenius models for pharmaceutical shelf life prediction.

    Science.gov (United States)

    Fan, Zhewen; Zhang, Lanju

    2015-01-01

    One of the most challenging aspects of pharmaceutical development is the demonstration and estimation of chemical stability. It is imperative that pharmaceutical products be stable for two or more years. Long-term stability studies are required to support such a shelf life claim at registration. However, during drug development, to facilitate formulation and dosage form selection, an accelerated stability study under stressed storage conditions is preferred to quickly obtain a good prediction of shelf life under ambient storage conditions. Such a prediction typically uses the Arrhenius equation, which describes the relationship between degradation rate and temperature (and humidity). Existing methods usually rely on the assumption of normality of the errors. In addition, shelf life projection is usually based on the confidence band of a regression line. However, the coverage probability of a method is often overlooked or under-reported. In this paper, we introduce two nonparametric bootstrap procedures for shelf life estimation based on accelerated stability testing, and compare them with a one-stage nonlinear Arrhenius prediction model. Our simulation results demonstrate that the one-stage nonlinear Arrhenius method has significantly lower coverage than nominal levels. Our bootstrap methods gave better coverage and led to shelf life predictions closer to those based on long-term stability data.
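A minimal version of the residual-bootstrap idea on synthetic accelerated data. The rate constants, temperatures, noise level, and the 95%-of-label-claim shelf-life definition are all illustrative assumptions, not the paper's data or its exact procedures.

```python
import numpy as np

rng = np.random.default_rng(0)
R = 8.314  # gas constant, J/(mol K)

# Synthetic accelerated-stability data: first-order degradation rate
# constants measured at stressed temperatures (40-70 C, in kelvin).
T = np.array([313.15, 323.15, 333.15, 343.15])
true_lnA, true_Ea = 30.0, 90e3
lnk = true_lnA - true_Ea / (R * T) + rng.normal(0, 0.05, T.size)

def fit_and_predict(lnk_obs):
    # Linear Arrhenius fit: ln k = lnA - (Ea/R) * (1/T), extrapolated to 25 C.
    slope, intercept = np.polyfit(1.0 / T, lnk_obs, 1)
    k25 = np.exp(intercept + slope / 298.15)
    # Shelf life: time for potency to fall from 100% to 95% under first-order decay.
    return np.log(100 / 95) / k25

X = np.column_stack([np.ones(T.size), 1.0 / T])
coef, *_ = np.linalg.lstsq(X, lnk, rcond=None)
resid = lnk - X @ coef

point = fit_and_predict(lnk)
# Residual bootstrap: refit on fitted values plus resampled residuals.
boot = np.array([fit_and_predict(X @ coef + rng.choice(resid, T.size, True))
                 for _ in range(2000)])
lo, hi = np.percentile(boot, [5, 95])
print(f"shelf life {point:.1f} (90% interval {lo:.1f}-{hi:.1f}) time units")
```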

  2. Two-stage model of radon-induced malignant lung tumors in rats: effects of cell killing

    Science.gov (United States)

    Luebeck, E. G.; Curtis, S. B.; Cross, F. T.; Moolgavkar, S. H.

    1996-01-01

    A two-stage stochastic model of carcinogenesis is used to analyze lung tumor incidence in 3750 rats exposed to varying regimens of radon carried on a constant-concentration uranium ore dust aerosol. New to this analysis is a parameterization of the model such that cell killing by the alpha particles could be included. The model contains parameters characterizing the rate of the first mutation, the net proliferation rate of initiated cells, the ratio of the rates of cell loss (cell killing plus differentiation) and cell division, and the lag time between the appearance of the first malignant cell and the tumor. Data analysis was by standard maximum likelihood estimation techniques. Results indicate that the rate of the first mutation is dependent on radon and consistent with in vitro rates measured experimentally, and that the rate of the second mutation is not dependent on radon. An initial sharp rise in the net proliferation rate of initiated cells was found with increasing exposure rate (denoted model I), which leads to an unrealistically high cell-killing coefficient. A second model (model II) was studied, in which the initial rise was attributed to promotion via a step function, implying that it is due not to radon but to the uranium ore dust. This model resulted in values for the cell-killing coefficient consistent with those found for cells in vitro. An "inverse dose-rate" effect is seen, i.e. an increase in the lifetime probability of tumor with a decrease in exposure rate. This is attributed in large part to promotion of intermediate lesions. Model II is preferable on biological grounds, since it yields a plausible cell-killing coefficient and attributes promotion to an agent such as uranium ore dust. This analysis presents evidence that a two-stage model describes the data adequately and generates hypotheses regarding the mechanism of radon-induced carcinogenesis.

  3. A Decision-making Model for a Two-stage Production-delivery System in SCM Environment

    Science.gov (United States)

    Feng, Ding-Zhong; Yamashiro, Mitsuo

    A decision-making model is developed for an optimal production policy in a two-stage production-delivery system that incorporates a fixed-quantity supply of finished goods to a buyer at a fixed interval of time. First, a general cost model is formulated considering both the supplier (of raw materials) and buyer (of finished products) sides. Then an optimal solution to the problem is derived on the basis of the cost model. Using the proposed model and its optimal solution, one can determine the optimal production lot size for each stage, the optimal number of shipments of semi-finished goods, and the optimal quantity of semi-finished goods transported each time to meet the lumpy demand of consumers. We also examine the sensitivity of raw material ordering and production lot size to changes in ordering cost, transportation cost and manufacturing setup cost. A pragmatic computational approach is proposed to obtain integer approximate solutions in operational situations. Finally, we give some numerical examples.
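The flavour of such a model can be sketched with a simplified two-stage cost function, minimized by grid search over the two integer decisions. The cost structure and every figure below are illustrative stand-ins, not the paper's formulation: q units go to the buyer at fixed intervals, a second-stage lot covers m deliveries, a first-stage (raw material) lot covers n second-stage lots, and each second-stage lot triggers one semi-finished transport.

```python
D, q = 4800, 100                 # annual demand, fixed delivery quantity
S1, S2, ct = 200.0, 40.0, 10.0   # stage-1 setup, stage-2 setup, transport cost
h1, h2 = 2.0, 6.0                # holding cost rates at stages 1 and 2

def total_cost(n, m):
    stage1_setups = S1 * D / (n * m * q)          # D/(n*m*q) stage-1 lots per year
    stage2_setups = (S2 + ct) * D / (m * q)       # setup + transport per stage-2 lot
    holding = (h1 * n + h2) * m * q / 2.0         # average cycle inventories
    return stage1_setups + stage2_setups + holding

# Exhaustive search over the integer decisions (cheap for small grids).
best_n, best_m = min(((n, m) for n in range(1, 21) for m in range(1, 21)),
                     key=lambda nm: total_cost(*nm))
best_tc = total_cost(best_n, best_m)
print(best_n, best_m, round(best_tc, 2))
```

The same grid-search pattern is the simplest way to check a closed-form optimum, which is the kind of "pragmatic computational approach" the record alludes to.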

  4. An improved two stages dynamic programming/artificial neural network solution model to the unit commitment of thermal units

    Energy Technology Data Exchange (ETDEWEB)

    Abbasy, N.H. [College of Technological Studies, Shuwaikh (Kuwait); Elfayoumy, M.K. [Univ. of Alexandria (Egypt). Dept. of Electrical Engineering

    1995-11-01

    An improved two-stage solution model for the unit commitment of thermal units is developed in this paper. In the first stage a pre-schedule is generated using a high-quality trained artificial neural network (ANN). A dynamic programming (DP) algorithm is implemented and applied in the second stage for the final determination of the commitment states. The developed solution model avoids the complications imposed by the generation of the variable window structure proposed by other techniques. A unified approach for the treatment of the ANN is also developed in the paper. The validity of the proposed technique is proved via numerical applications to both sample and small practical power systems. 12 refs, 9 tabs

  5. Acquisition of nonlinear forward optics in generative models: two-stage "downside-up" learning for occluded vision.

    Science.gov (United States)

    Tajima, Satohiro; Watanabe, Masataka

    2011-03-01

    We propose a two-stage learning method which implements occluded visual scene analysis into a generative model, a type of hierarchical neural network with bi-directional synaptic connections. Here, top-down connections simulate forward optics to generate predictions for the sensory-driven low-level representation, whereas bottom-up connections function to send the prediction error, the difference between the sensory-based and the predicted low-level representation, to higher areas. The prediction error is then used to update the high-level representation to obtain better agreement with the visual scene. Although the actual forward optics is highly nonlinear and the accuracy of simulated forward optics is crucial for these types of models, the majority of previous studies have only investigated linear and simplified cases of forward optics. Here we take occluded vision as an example of nonlinear forward optics, where an object in front completely masks out the object behind. We propose a two-stage learning method inspired by the staged development of infant visual capacity. In the primary learning stage, a minimal set of object bases is acquired within a linear generative model using the conventional unsupervised learning scheme. In the secondary learning stage, an auxiliary multi-layer neural network is trained to acquire nonlinear forward optics by supervised learning. The important point is that the high-level representation of the linear generative model serves as the input and the sensory-driven low-level representation provides the desired output. Numerical simulations show that occluded visual scene analysis can indeed be implemented by the proposed method. Furthermore, considering the format of input to the multi-layer network and analysis of hidden-layer units leads to the prediction that whole object representation of partially occluded objects, together with complex intermediate representation as a consequence of nonlinear transformation from non-occluded to

  6. Two-Stage Nerve Graft in Severe Scar: A Time-Course Study in a Rat Model

    Directory of Open Access Journals (Sweden)

    Shayan Zadegan

    2015-04-01

    According to the EPT and WRL, the two-stage nerve graft showed significant improvement (P=0.020 and P=0.017, respectively). The TOA showed no significant difference between the two groups. The total vascular index was significantly higher in the two-stage nerve graft group (P

  7. Mathematical modeling of a continuous alcoholic fermentation process in a two-stage tower reactor cascade with flocculating yeast recycle.

    Science.gov (United States)

    de Oliveira, Samuel Conceição; de Castro, Heizir Ferreira; Visconti, Alexandre Eliseu Stourdze; Giudici, Reinaldo

    2015-03-01

    Experiments of continuous alcoholic fermentation of sugarcane juice with flocculating yeast recycle were conducted in a system of two 0.22-L tower bioreactors in series, operated at a range of dilution rates (D1 = D2 = 0.27-0.95 h(-1)), a constant recycle ratio (α = FR/F = 4.0) and a sugar concentration in the feed stream (S0) of around 150 g/L. The data obtained in these experimental conditions were used to adjust the parameters of a mathematical model previously developed for the single-stage process. This model considers each of the tower bioreactors as a perfectly mixed continuous reactor and the kinetics of cell growth and product formation takes into account the limitation by substrate and the inhibition by ethanol and biomass, as well as the substrate consumption for cellular maintenance. The model predictions agreed satisfactorily with the measurements taken in both stages of the cascade. The major differences with respect to the kinetic parameters previously estimated for a single-stage system were observed for the maximum specific growth rate, for the inhibition constants of cell growth and for the specific rate of substrate consumption for cell maintenance. Mathematical models were validated and used to simulate alternative operating conditions as well as to analyze the performance of the two-stage process against that of the single-stage process.
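A stripped-down cascade model in the same spirit can be integrated to steady state. The kinetics here are Monod growth with linear ethanol inhibition, no cell recycle, and illustrative parameters rather than the paper's fitted values.

```python
import numpy as np

mu_max, Ks, Pmax = 0.4, 5.0, 90.0   # 1/h, g/L, g/L (illustrative)
Yxs, Yps, D = 0.05, 0.45, 0.2       # yields (g/g), dilution rate (1/h)
S0 = 150.0                          # feed sugar concentration, g/L

def mu(S, P):
    # Monod growth limited by sugar, linearly inhibited by ethanol.
    return mu_max * S / (Ks + S) * max(1.0 - P / Pmax, 0.0)

def stage_rates(X, S, P, Xin, Sin, Pin):
    g = mu(S, P)
    dX = D * (Xin - X) + g * X
    dS = D * (Sin - S) - g * X / Yxs
    dP = D * (Pin - P) + (Yps / Yxs) * g * X
    return dX, dS, dP

# Integrate both stages to (approximate) steady state with explicit Euler.
y = np.array([1.0, S0, 0.0, 1.0, S0, 0.0])  # X1, S1, P1, X2, S2, P2
dt = 0.05
for _ in range(40_000):                      # 2000 h of simulated time
    d1 = stage_rates(*y[:3], 0.0, S0, 0.0)   # stage 1 fed with fresh juice
    d2 = stage_rates(*y[3:], *y[:3])         # stage 2 fed with stage-1 broth
    y = y + dt * np.array(d1 + d2)

X1, S1, P1, X2, S2, P2 = y
print(f"S1={S1:.1f} P1={P1:.1f}  S2={S2:.1f} P2={P2:.1f} g/L")
```

The second stage finishes the conversion the first stage leaves incomplete, which is the basic rationale for operating the towers in cascade.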

  8. Modelling the removal of volatile pollutants under transient conditions in a two-stage bioreactor using artificial neural networks.

    Science.gov (United States)

    López, M Estefanía; Rene, Eldon R; Boger, Zvi; Veiga, María C; Kennes, Christian

    2017-02-15

    A two-stage biological waste gas treatment system consisting of a first-stage biotrickling filter (BTF) and a second-stage biofilter (BF) was tested for the removal of a gas-phase methanol (M), hydrogen sulphide (HS) and α-pinene (P) mixture. The bioreactors were tested with two types of shock loads, i.e., long-term (66 h) low-to-medium concentration loads, and short-term (12 h) low-to-high concentration loads. M and HS were removed in the BTF, reaching maximum elimination capacities (ECmax) of 684 and 33 g m(-3) h(-1), respectively. P was removed better in the second-stage BF, with an ECmax of 130 g m(-3) h(-1). The performance was modelled using two multi-layer perceptrons (MLPs) that employed the error backpropagation with momentum algorithm, in order to predict the removal efficiencies (RE, %) of methanol (REM), hydrogen sulphide (REHS) and α-pinene (REP), respectively. It was observed that an MLP with the topology 3-4-2 was able to predict REM and REHS in the BTF, while a topology of 3-3-1 was able to approximate REP in the BF. The results show that artificial neural network (ANN) based models can effectively be used to model the transient-state performance of bioprocesses treating gas-phase pollutants.
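A 3-4-2 perceptron trained by error backpropagation with momentum — the method named in the record — can be sketched in a few lines. The data here are a synthetic stand-in for the bioreactor measurements, and all hyper-parameters are illustrative.

```python
import numpy as np

rng = np.random.default_rng(42)

# Synthetic stand-in: three normalized inputs -> two "removal efficiencies" in [0, 1].
X = rng.uniform(0, 1, (200, 3))
Y = np.column_stack([np.clip(0.8 * X[:, 0] + 0.1 * X[:, 1], 0, 1),
                     np.clip(0.6 * X[:, 1] + 0.3 * X[:, 2], 0, 1)])

sig = lambda z: 1.0 / (1.0 + np.exp(-z))
W1, b1 = rng.normal(0, 0.5, (3, 4)), np.zeros(4)   # 3 inputs -> 4 hidden
W2, b2 = rng.normal(0, 0.5, (4, 2)), np.zeros(2)   # 4 hidden -> 2 outputs
vW1, vb1 = np.zeros_like(W1), np.zeros_like(b1)
vW2, vb2 = np.zeros_like(W2), np.zeros_like(b2)
lr, mom = 0.3, 0.9

def forward(X):
    H = sig(X @ W1 + b1)
    return H, sig(H @ W2 + b2)

_, out = forward(X)
loss0 = np.mean((out - Y) ** 2)
for _ in range(3000):
    H, out = forward(X)
    # Backpropagate the mean-squared error through both sigmoid layers.
    d_out = (out - Y) * out * (1 - out) / len(X)
    d_hid = (d_out @ W2.T) * H * (1 - H)
    # Momentum update: velocity = mom * velocity - lr * gradient.
    vW2 = mom * vW2 - lr * (H.T @ d_out)
    vb2 = mom * vb2 - lr * d_out.sum(0)
    vW1 = mom * vW1 - lr * (X.T @ d_hid)
    vb1 = mom * vb1 - lr * d_hid.sum(0)
    W2 += vW2; b2 += vb2; W1 += vW1; b1 += vb1

_, out = forward(X)
loss1 = np.mean((out - Y) ** 2)
print(f"MSE before {loss0:.4f} -> after {loss1:.4f}")
```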

  9. Stability and multiattractor dynamics of a toggle switch based on a two-stage model of stochastic gene expression.

    Science.gov (United States)

    Strasser, Michael; Theis, Fabian J; Marr, Carsten

    2012-01-04

    A toggle switch consists of two genes that mutually repress each other. This regulatory motif is active during cell differentiation and is thought to act as a memory device, being able to choose and maintain cell fate decisions. Commonly, this switch has been modeled in a deterministic framework where transcription and translation are lumped together. In this description, bistability occurs for transcription factor cooperativity, whereas autoactivation leads to a tristable system with an additional undecided state. In this contribution, we study the stability and dynamics of a two-stage gene expression switch within a probabilistic framework inspired by the properties of the Pu/Gata toggle switch in myeloid progenitor cells. We focus on low mRNA numbers, high protein abundance, and monomeric transcription-factor binding. Contrary to the expectation from a deterministic description, this switch shows complex multiattractor dynamics without autoactivation and cooperativity. Most importantly, the four attractors of the system, which only emerge in a probabilistic two-stage description, can be identified with committed and primed states in cell differentiation. To begin, we study the dynamics of the system and infer the mechanisms that move the system between attractors using both the quasipotential and the probability flux of the system. Next, we show that the residence times of the system in one of the committed attractors are geometrically distributed. We derive an analytical expression for the parameter of the geometric distribution, therefore completely describing the statistics of the switching process and elucidate the influence of the system parameters on the residence time. Moreover, we find that the mean residence time increases linearly with the mean protein level. This scaling also holds for a one-stage scenario and for autoactivation. Finally, we study the implications of this distribution for the stability of a switch and discuss the influence of the
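The two-stage (mRNA plus protein) toggle with monomeric, non-cooperative repression can be simulated exactly with Gillespie's stochastic simulation algorithm. The rate constants below are illustrative choices in the same regime the record describes (low mRNA numbers, higher protein abundance), not the Pu/Gata parameters.

```python
import numpy as np

rng = np.random.default_rng(7)

# Rates: transcription, translation, mRNA decay, protein decay, repression scale.
k_m, k_p, g_m, g_p, K = 2.0, 10.0, 1.0, 0.1, 20.0

def propensities(s):
    ma, pa, mb, pb = s
    return np.array([
        k_m * K / (K + pb),    # transcription of gene a, repressed by protein b
        k_m * K / (K + pa),    # transcription of gene b, repressed by protein a
        g_m * ma, g_m * mb,    # mRNA degradation
        k_p * ma, k_p * mb,    # translation
        g_p * pa, g_p * pb,    # protein degradation
    ])

# State change per reaction, state = (ma, pa, mb, pb).
stoich = np.array([[ 1, 0, 0, 0], [0, 0,  1, 0],
                   [-1, 0, 0, 0], [0, 0, -1, 0],
                   [ 0, 1, 0, 0], [0, 0, 0,  1],
                   [ 0,-1, 0, 0], [0, 0, 0, -1]])

state, t = np.array([0, 50, 0, 0]), 0.0   # start committed to gene a
while t < 200.0:
    a = propensities(state)
    a0 = a.sum()
    t += rng.exponential(1.0 / a0)            # time to next reaction
    state = state + stoich[rng.choice(8, p=a / a0)]

print(t, state)
```

Recording which attractor the state occupies over many such runs is how the geometrically distributed residence times described above would be estimated empirically.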

  10. A two-stage optimization model for emergency material reserve layout planning under uncertainty in response to environmental accidents.

    Science.gov (United States)

    Liu, Jie; Guo, Liang; Jiang, Jiping; Jiang, Dexun; Liu, Rentao; Wang, Peng

    2016-06-05

    In emergency management for pollution accidents, the efficiency of emergency rescue can be strongly influenced by a reasonable assignment of the available emergency materials to the related risk sources. In this study, a two-stage optimization framework is developed for emergency material reserve layout planning under uncertainty, to identify material warehouse locations and emergency material reserve schemes in the pre-accident phase in preparation for potential environmental accidents. The framework integrates a hierarchical clustering analysis - improved center of gravity (HCA-ICG) model and a material warehouse location - emergency material allocation (MWL-EMA) model. First, decision alternatives are generated using HCA-ICG to identify newly built emergency material warehouses for risk sources that cannot be served by existing ones in a time-effective manner. Second, an emergency material reserve plan is obtained using MWL-EMA to ensure that emergency materials are prepared in advance in a cost-effective manner. The optimization framework is then applied to emergency management system planning in Jiangsu province, China. The results demonstrate that the developed framework could not only facilitate material warehouse selection but also effectively provide emergency materials for emergency operations with a quick response.
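The center-of-gravity step for a single cluster of risk sources can be sketched as follows. The coordinates and weights are hypothetical, and the classic Weiszfeld iteration is used here as a stand-in for the paper's improved center-of-gravity (ICG) refinement.

```python
import math

# One cluster of risk sources: (x, y) in km, weight = required emergency material.
sources = [(2.0, 3.0, 40.0), (8.0, 1.0, 25.0), (6.0, 7.0, 35.0)]

# Initial guess: the plain weighted centroid (simple center of gravity).
wsum = sum(w for _, _, w in sources)
x = sum(w * xi for xi, _, w in sources) / wsum
y = sum(w * yi for _, yi, w in sources) / wsum

# Weiszfeld iterations: converge to the point minimizing the
# weighted Euclidean distance to all sources (the geometric median).
for _ in range(100):
    num_x = num_y = den = 0.0
    for xi, yi, w in sources:
        d = math.hypot(x - xi, y - yi) or 1e-12   # guard against division by zero
        num_x += w * xi / d
        num_y += w * yi / d
        den += w / d
    x, y = num_x / den, num_y / den

cost = sum(w * math.hypot(x - xi, y - yi) for xi, yi, w in sources)
print(f"warehouse at ({x:.2f}, {y:.2f}), weighted distance {cost:.1f}")
```

Running this once per HCA cluster yields one candidate warehouse location per cluster, the input the second-stage MWL-EMA allocation would then work with.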

  11. Simultaneous Estimation of Model State Variables and Observation and Forecast Biases Using a Two-Stage Hybrid Kalman Filter

    Science.gov (United States)

    Pauwels, V. R. N.; DeLannoy, G. J. M.; Hendricks Franssen, H.-J.; Vereecken, H.

    2013-01-01

    In this paper, we present a two-stage hybrid Kalman filter to estimate both observation and forecast bias in hydrologic models, in addition to state variables. The biases are estimated using the discrete Kalman filter, and the state variables using the ensemble Kalman filter. A key issue in this multi-component assimilation scheme is the exact partitioning of the difference between observation and forecasts into state, forecast bias and observation bias updates. Here, the error covariances of the forecast bias and the unbiased states are calculated as constant fractions of the biased state error covariance, and the observation bias error covariance is a function of the observation prediction error covariance. In a series of synthetic experiments, focusing on the assimilation of discharge into a rainfall-runoff model, it is shown that both static and dynamic observation and forecast biases can be successfully estimated. The results indicate a strong improvement in the estimation of the state variables and resulting discharge as opposed to the use of a bias-unaware ensemble Kalman filter. Furthermore, minimal code modification in existing data assimilation software is needed to implement the method. The results suggest that a better performance of data assimilation methods should be possible if both forecast and observation biases are taken into account.
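A much-simplified scalar sketch of the partitioning idea: an ensemble Kalman filter updates the state while a discrete Kalman filter tracks a constant forecast bias, with the bias error covariance taken as a fixed fraction of the forecast covariance. All numbers are illustrative, and only forecast bias is estimated here (the paper also handles observation bias).

```python
import numpy as np

rng = np.random.default_rng(3)

a, q, r = 0.9, 0.1, 0.2        # dynamics, model noise std, observation noise std
b_true = 2.0                   # forecast bias, unknown to the filter
N, steps = 50, 500             # ensemble size, assimilation cycles
gamma = 0.3                    # bias covariance as a fraction of state covariance

x_true = 0.0
ens = rng.normal(0.0, 1.0, N)  # analysis ensemble
b_hat = 0.0

for _ in range(steps):
    x_true = a * x_true + rng.normal(0, q)
    y = x_true + rng.normal(0, r)
    # Biased model forecast for each ensemble member.
    ens = a * ens + b_true + rng.normal(0, q, N)
    corrected = ens - b_hat
    Px = corrected.var(ddof=1)
    d = y - corrected.mean()                  # innovation
    # Stage 1: discrete KF update of the forecast-bias estimate.
    Pb = gamma * Px
    Kb = Pb / (Px + Pb + r**2)
    b_hat -= Kb * d
    corrected = ens - b_hat
    # Stage 2: EnKF state update with perturbed observations.
    K = Px / (Px + r**2)
    ens = corrected + K * (y + rng.normal(0, r, N) - corrected)

print(f"estimated forecast bias {b_hat:.2f} (true {b_true})")
```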

  12. Simultaneous estimation of model state variables and observation and forecast biases using a two-stage hybrid Kalman filter

    Directory of Open Access Journals (Sweden)

    V. R. N. Pauwels

    2013-04-01

    Full Text Available In this paper, we present a two-stage hybrid Kalman filter to estimate both observation and forecast bias in hydrologic models, in addition to state variables. The biases are estimated using the Discrete Kalman Filter, and the state variables using the Ensemble Kalman Filter. A key issue in this multi-component assimilation scheme is the exact partitioning of the difference between observation and forecasts into state, forecast bias and observation bias updates. Here, the error covariances of the forecast bias and the unbiased states are calculated as constant fractions of the biased state error covariance, and the observation bias error covariance is a function of the observation prediction error covariance. In a series of synthetic experiments, focusing on the assimilation of discharge into a rainfall-runoff model, it is shown that both static and dynamic observation and forecast biases can be successfully estimated. The results indicate a strong improvement in the estimation of the state variables and resulting discharge as opposed to the use of a bias-unaware Ensemble Kalman Filter. The results suggest that a better performance of data assimilation methods should be possible if both forecast and observation biases are taken into account.

  13. A two-stage approach in solving the state probabilities of the multi-queue M/G/1 model

    Science.gov (United States)

    Chen, Mu-Song; Yen, Hao-Wei

    2016-04-01

    The M/G/1 model is the fundamental basis of queueing analysis in many network systems. Usually, the study of the M/G/1 model is limited by the assumptions of a single queue and infinite capacity. In practice, however, these postulations may not be valid, particularly when dealing with many real-world problems. In this paper, a two-stage state-space approach is devoted to solving the state probabilities of the multi-queue finite-capacity M/G/1 model, i.e. q-M/G/1/Ki with Ki buffers in the ith queue. The state probabilities at departure instants are determined by solving a set of state transition equations. Afterward, an embedded Markov chain analysis is applied to derive the state probabilities at arbitrary time instants from another set of state balance equations. Closed forms of the state probabilities are also presented, with theorems, for reference. Applying Little's theorem then yields the corresponding results for queue lengths and average waiting times. Simulation experiments have demonstrated the correctness of the proposed approaches.
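The embedded-chain construction at departure instants is easy to reproduce for a single queue. Deterministic service (M/D/1/K) is used here only so the arrival-count probabilities stay short; the record's model treats the general multi-queue case.

```python
import numpy as np
from math import exp, factorial

lam, s, K = 0.8, 1.0, 5          # arrival rate, service time, system capacity

# a[j] = P(j Poisson arrivals during one deterministic service time).
a = np.array([exp(-lam * s) * (lam * s) ** j / factorial(j) for j in range(K)])

# Embedded chain over "customers left behind at a departure" (0 .. K-1).
P = np.zeros((K, K))
for i in range(K):
    base = 0 if i == 0 else i - 1     # queue after the departure, before new arrivals
    room = K - 1 - base               # arrivals that can still be accepted
    for m in range(room):
        P[i, base + m] += a[m]
    P[i, K - 1] += 1.0 - a[:room].sum()   # overflow mass: the buffer fills up

# Stationary distribution of the embedded chain: pi P = pi, sum(pi) = 1.
A = np.vstack([P.T - np.eye(K), np.ones(K)])
b = np.append(np.zeros(K), 1.0)
pi, *_ = np.linalg.lstsq(A, b, rcond=None)
print(np.round(pi, 4))
```

These are the departure-instant probabilities; the second stage of the record's approach then converts them to arbitrary-time probabilities through a further set of balance equations.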

  14. FunSAV: predicting the functional effect of single amino acid variants using a two-stage random forest model.

    Directory of Open Access Journals (Sweden)

    Mingjun Wang

    Full Text Available Single amino acid variants (SAVs) are the most abundant form of known genetic variation associated with human disease. Successful prediction of the functional impact of SAVs from sequences can thus lead to an improved understanding of the underlying mechanisms of why a SAV may be associated with a certain disease. In this work, we constructed a high-quality structural dataset that contained 679 high-quality protein structures with 2,048 SAVs by collecting human genetic variant data from multiple resources and dividing them into two categories, i.e., disease-associated and neutral variants. We built a two-stage random forest (RF) model, termed FunSAV, to predict the functional effect of SAVs by combining sequence, structure and residue-contact-network features with other additional features that were not explored in previous studies. Importantly, a two-step feature selection procedure was proposed to select the most important and informative features that contribute to the prediction of disease association of SAVs. In cross-validation experiments on the benchmark dataset, FunSAV achieved a good prediction performance with an area under the curve (AUC) of 0.882, which is competitive with, and in some cases better than, other existing tools including SIFT, SNAP, Polyphen2, PANTHER, nsSNPAnalyzer and PhD-SNP. The source code of FunSAV and the datasets can be downloaded at http://sunflower.kuicr.kyoto-u.ac.jp/sjn/FunSAV.

  15. Prognostic meta-signature of breast cancer developed by two-stage mixture modeling of microarray data

    Directory of Open Access Journals (Sweden)

    Ghosh Debashis

    2004-12-01

    Full Text Available Abstract Background An increasing number of studies have profiled tumor specimens using distinct microarray platforms and analysis techniques. With the accumulating amount of microarray data, one of the most intriguing yet challenging tasks is to develop robust statistical models to integrate the findings. Results By applying a two-stage Bayesian mixture modeling strategy, we were able to assimilate and analyze four independent microarray studies to derive an inter-study validated "meta-signature" associated with breast cancer prognosis. Combining multiple studies (n = 305 samples) on a common probability scale, we developed a 90-gene meta-signature, which was strongly associated with survival in breast cancer patients. Given the set of independent studies using different microarray platforms, which included spotted cDNAs, Affymetrix GeneChip, and inkjet oligonucleotides, the individually identified classifiers yielded gene sets predictive of survival in each study cohort. The study-specific gene signatures, however, had minimal overlap with each other, and performed poorly in pairwise cross-validation. The meta-signature, on the other hand, accommodated such heterogeneity and achieved comparable or better prognostic performance when compared with the individual signatures. Furthermore, compared with a global standardization method, the mixture-model-based data transformation demonstrated superior properties for data integration and provided a solid basis for building classifiers at the second stage. Functional annotation revealed that genes involved in cell cycle and signal transduction activities were over-represented in the meta-signature. Conclusion The mixture modeling approach unifies disparate gene expression data on a common probability scale allowing for robust, inter-study validated prognostic signatures to be obtained. With the emerging utility of microarrays for cancer prognosis, it will be important to establish paradigms to meta

  16. Two-stage Lagrangian modeling of ignition processes in ignition quality tester and constant volume combustion chambers

    KAUST Repository

    Alfazazi, Adamu

    2016-08-10

    The ignition characteristics of isooctane and n-heptane in an ignition quality tester (IQT) were simulated using a two-stage Lagrangian (TSL) model, which is a zero-dimensional (0-D) reactor network method. The TSL model was also used to simulate the ignition delay of n-dodecane and n-heptane in a constant volume combustion chamber (CVCC), which is archived in the engine combustion network (ECN) library (http://www.ca.sandia.gov/ecn). A detailed chemical kinetic model for gasoline surrogates from the Lawrence Livermore National Laboratory (LLNL) was utilized for the simulation of n-heptane and isooctane. Additional simulations were performed using an optimized gasoline surrogate mechanism from RWTH Aachen University. Validations of the simulated data were also performed with experimental results from an IQT at KAUST. For simulation of n-dodecane in the CVCC, two n-dodecane kinetic models from the literature were utilized. The primary aim of this study is to test the ability of TSL to replicate ignition timings in the IQT and the CVCC. The agreement between the model and the experiment is acceptable except for isooctane in the IQT and n-heptane and n-dodecane in the CVCC. The ability of the simulations to replicate observable trends in ignition delay times with regard to changes in ambient temperature and pressure allows the model to provide insights into the reactions contributing towards ignition. Thus, the TSL model was further employed to investigate the physical and chemical processes responsible for controlling the overall ignition under various conditions. The effects of exothermicity, ambient pressure, and ambient oxygen concentration on first stage ignition were also studied. Increasing ambient pressure and oxygen concentration was found to shorten the overall ignition delay time, but does not affect the timing of the first stage ignition. Additionally, the temperature at the end of the first stage ignition was found to increase at higher ambient pressure.

  17. A High-Resolution Two-Stage Satellite Model to Estimate PM2.5 Concentrations in China

    Science.gov (United States)

    Liu, Y.; Ma, Z.; Hu, X.; Yang, K.

    2014-12-01

    With rapid economic development and urbanization, severe and widespread PM2.5 pollution in China has attracted nationwide attention. Study of the health impact of PM2.5 exposure has been hindered, however, by the limited coverage of ground measurements from recently established regulatory monitoring networks. Estimating ground-level PM2.5 from satellite remote sensing is a promising new method to evaluate the spatial and temporal patterns of PM2.5 exposure. We developed a two-stage spatial statistical model to estimate daily mean PM2.5 concentrations at 10 km resolution in 2013 in China using MODIS Collection 6 AOD, assimilated meteorology, population density, and land use parameters. A custom inverse variance weighting approach was developed to combine MODIS Dark Target (DT) and Deep Blue (DB) AOD to optimize coverage. Compared with the AERONET AOD measurements, our combined AOD (R2=0.80, mean bias = 0.07) performs similarly to MODIS' combined AOD (R2=0.81, mean bias = 0.07), but has 90% greater coverage. We used a first-stage linear mixed effects model to represent the temporal variability of PM2.5 and a second-stage generalized additive model to represent its spatial contrast. The overall model cross-validation R2 and relative prediction error are 0.80 and 30%, respectively. PM2.5 levels exhibit strong seasonal patterns, with the highest national mean concentrations in winter (75 µg/m3) and the lowest in summer (30 µg/m3). Elevated annual mean PM2.5 levels are predicted in the North China Plain and the Sichuan Basin, with maximum annual PM2.5 concentrations higher than 130 µg/m3 and 110 µg/m3, respectively. Our results also indicate that over 94% of the Chinese population lives in areas that exceed the WHO Air Quality Interim Target-1 standard (35 μg/m3). The exceptions include Taiwan, Hainan, Yunnan, Tibet, and North Inner Mongolia.
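The inverse-variance weighting step for combining the two AOD products can be sketched generically. The per-product error variances below are assumed values (in practice they would be estimated against AERONET), and the paper's custom scheme differs in detail; the fallback to whichever retrieval is available is what extends coverage.

```python
import numpy as np

var_dt, var_db = 0.010, 0.016      # assumed error variances of DT and DB AOD

def combine(dt, db):
    """Inverse-variance weighted mean of two AOD retrievals; where one
    product is missing (NaN), fall back to the other."""
    dt, db = np.asarray(dt, float), np.asarray(db, float)
    w_dt, w_db = 1.0 / var_dt, 1.0 / var_db
    both = (w_dt * dt + w_db * db) / (w_dt + w_db)
    return np.where(np.isnan(dt), db,
           np.where(np.isnan(db), dt, both))

dt = np.array([0.30, np.nan, 0.55])    # Dark Target retrievals
db = np.array([0.36, 0.42, np.nan])    # Deep Blue retrievals
combined = combine(dt, db)
print(combined)
```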

  18. Decentralized two-stage sewage treatment by chemical-biological flocculation combined with microalgae biofilm for nutrient immobilization in a roof installed parallel plate reactor.

    Science.gov (United States)

    Zamalloa, Carlos; Boon, Nico; Verstraete, Willy

    2013-02-01

    In this lab-scale study, domestic wastewater is subjected to chemical-biological adsorption (A-stage), followed by treatment in an innovative roof-installed parallel plate microalgae biofilm reactor for nutrient immobilization (I-stage). The A-stage process was operated at a hydraulic retention time (HRT) of 1 h and a solids retention time of 1 day, with FeSO4 as flocculant. The I-stage, which received the effluent of the A-stage process, was operated at an HRT of 1 day and exposed to natural light. The overall system removed on average 74% of the total chemical oxygen demand, 82% of the total suspended solids, 67% of the total nitrogen and 96% of the total phosphorus in the wastewater. The design involves a relatively low capital and operating cost, on the order of 0.5 €/m3 of wastewater treated. These aspects suggest that the A/I process can be used as a decentralized domestic wastewater treatment system.

  19. Modelling of Two-Stage Anaerobic Treating Wastewater from a Molasses-Based Ethanol Distillery with the IWA Anaerobic Digestion Model No.1

    OpenAIRE

    Kittikhun Taruyanon; Sarun Tejasen

    2010-01-01

    This paper presents the application of ADM1 model to simulate the dynamic behaviour of a two-stage anaerobic treatment process treating the wastewater generated from the ethanol distillery process. The laboratory-scale process comprised an anaerobic continuous stirred tank reactor (CSTR) and an upflow anaerobic sludge blanket (UASB) connecting in series, was used to treat wastewater from the ethanol distillery process. The CSTR and UASB hydraulic retention times (HRT) were 12 and 70 hours, re...

  20. Modelling of Two-Stage Anaerobic Treating Wastewater from a Molasses-Based Ethanol Distillery with the IWA Anaerobic Digestion Model No.1

    Directory of Open Access Journals (Sweden)

    Kittikhun Taruyanon

    2010-03-01

    This paper presents the application of the ADM1 model to simulate the dynamic behaviour of a two-stage anaerobic treatment process treating the wastewater generated from an ethanol distillery. The laboratory-scale process comprised an anaerobic continuous stirred tank reactor (CSTR) and an upflow anaerobic sludge blanket (UASB) reactor connected in series. The CSTR and UASB hydraulic retention times (HRT) were 12 and 70 hours, respectively. The model was developed based on the ADM1 basic structure and implemented in the simulation software AQUASIM. The simulated results were compared with measured data obtained from the laboratory-scale two-stage process. The sensitivity analysis identified the maximum specific uptake rate (km) and half-saturation constant (Ks) of acetate degraders and sulfate-reducing bacteria as the kinetic parameters that most strongly affected the process behaviour; these were further estimated. The study concluded that the model could predict the dynamic behaviour of a two-stage anaerobic treatment process treating ethanol distillery wastewater of varying influent strength with reasonable accuracy.

  1. The construction of two-stage tests

    NARCIS (Netherlands)

    Adema, Jos J.

    1988-01-01

    Although two-stage testing is not the most efficient form of adaptive testing, it has some advantages. In this paper, linear programming models are given for the construction of two-stage tests. In these models, practical constraints with respect to, among other things, test composition and administration.

  2. Two-stage formation model of the Junggar basin basement: Constraints to the growth style of Central Asian Orogenic Belt

    Science.gov (United States)

    He, Dengfa

    2016-04-01

    retro-arc or inter-arc basin belts from north to south, such as Santanghu-Suosuoquan-Emin, Wucaiwan-Dongdaohaizi-Mahu (the Mahu block sank as a bathyal basin during this phase) and Fukang-western well Pen1 sag accordingly. Thirdly, the closure of these retro-arc or inter-arc basins, migrating gradually toward the south, led to the collision and amalgamation of the above-mentioned island arcs during the Carboniferous, constituting the basic framework of the Junggar 'block'. Fourthly, the emplacement of large-scale mantle-derived magmas occurred in the latest Carboniferous to Early Permian. For instance, well Mahu 5 penetrated latest Carboniferous basalts with a thickness of over 20 m, and these mantle-derived magmas consolidated the above-mentioned island-arc-collaged blocks. Therefore, the Junggar basin basement mainly comprises a pre-Carboniferous collaged basement, and its formation is characterized by a two-stage growth model, involving the Carboniferous lateral growth of island arcs and the latest Carboniferous to Early Permian vertical crustal growth related to emplacement and underplating of the mantle-derived magmas. In the Middle Permian, the Junggar Basin was dominated by a series of stable intra-continental sag basins from west to east, such as the Mahu, Shawan, western well Pen1, Dongdaohaizi-Wucaiwan-Dajing and Fukang-Jimusaer sag lake-basins. The thick Middle Permian source rocks (e.g., the Lower Wu'erhe, Lucaogou, and Pingdiquan Formations) developed in these basins, suggesting that the Junggar Basin had entered the 'intra-cratonic sag' stage of basin evolution. Since then, no strong thermal tectonic event could result in crustal growth. The present crustal thickness of the Junggar Basin is 45-52 km, which was mainly formed before the latest Early Permian. Subsequently, the Junggar Basin experienced a rapid cooling process during the Late Permian to Triassic.
These events constrain the formation timing of the Junggar basin basement to be before the latest Early

  3. Modeling and Implementing Two-Stage AdaBoost for Real-Time Vehicle License Plate Detection

    Directory of Open Access Journals (Sweden)

    Moon Kyou Song

    2014-01-01

    License plate (LP) detection is the most important part of an automatic LP recognition system. In previous years, different methods, techniques, and algorithms have been developed for LP detection (LPD) systems. This paper proposes the automatic detection of car LPs via image-processing techniques based on classifiers and machine-learning algorithms. We propose a real-time and robust method for LPD systems using the two-stage adaptive boosting (AdaBoost) algorithm combined with different image preprocessing techniques. Haar-like features are used to compute and select features from LP images. The AdaBoost algorithm is used to classify parts of an image within a search window, by means of a trained strong classifier, as either LP or non-LP. Adaptive thresholding is used as the image preprocessing method for images of insufficient quality for LPD. This method is faster and more accurate than most existing LPD methods. Experimental results demonstrate that the average LPD rate is 98.38% and the computational time is approximately 49 ms.
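The paper's two-stage structure and Haar-like feature extraction are specific to its pipeline, but the core AdaBoost step it relies on (reweighting training windows and accumulating weighted votes of weak classifiers) can be sketched generically. The one-dimensional threshold stump below is an illustrative stand-in for the trained weak classifiers, not the paper's implementation:

```python
import numpy as np

def train_adaboost(X, y, n_rounds=10):
    """Discrete AdaBoost over threshold stumps.

    X: (n_samples, n_features) feature responses (e.g. Haar-like values),
    y: labels in {-1, +1} (non-LP vs. LP window).
    Returns a list of weak classifiers (feature, threshold, polarity, alpha).
    """
    n, d = X.shape
    w = np.full(n, 1.0 / n)          # sample weights, reweighted each round
    model = []
    for _ in range(n_rounds):
        best = None
        for j in range(d):
            for thr in np.unique(X[:, j]):
                for pol in (1, -1):
                    pred = np.where(pol * (X[:, j] - thr) >= 0, 1, -1)
                    err = w[pred != y].sum()
                    if best is None or err < best[0]:
                        best = (err, j, thr, pol, pred)
        err, j, thr, pol, pred = best
        err = max(err, 1e-12)        # avoid log(0) on a perfect stump
        alpha = 0.5 * np.log((1 - err) / err)
        w *= np.exp(-alpha * y * pred)
        w /= w.sum()
        model.append((j, thr, pol, alpha))
    return model

def predict(model, X):
    """Sign of the weighted stump votes: +1 = LP window, -1 = non-LP."""
    X = np.atleast_2d(X)
    score = np.zeros(len(X))
    for j, thr, pol, alpha in model:
        score += alpha * np.where(pol * (X[:, j] - thr) >= 0, 1, -1)
    return np.where(score >= 0, 1, -1)
```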

  4. A Computational Model for Two-stage 4K-Pulse Tube Cooler: Part I.Theoretical Model and Numerical Method

    Institute of Scientific and Technical Information of China (English)

    Y.L. Ju; A.T.A.M. de Waele

    2001-01-01

    A new mixed Eulerian-Lagrangian computational model for simulating and visualizing the internal processes and the variations of dynamic parameters of a two-stage pulse tube cooler (PTC) operating in the 4 K temperature region has been developed. We use the Lagrangian method, a set of moving grids, to follow the exact tracks of gas particles as they move with the pressure oscillation in the pulse tube, avoiding numerical false diffusion. The Eulerian approach, a set of fixed computational grids, is used to simulate the variations of dynamic parameters in the regenerator. A variety of physical factors, such as the real thermal properties of helium, multi-layered magnetic regenerative materials, pressure drop and heat transfer in the regenerator, and heat exchangers, are taken into account in this model. The present model is effective for visualizing the internal physical processes in 4 K pulse tube coolers.

  5. Cellular automata a parallel model

    CERN Document Server

    Mazoyer, J

    1999-01-01

    Cellular automata can be viewed both as computational models and as modelling systems of real processes. This volume emphasises the first aspect. In articles written by leading researchers, sophisticated massively parallel algorithms (firing squad, life, Fischer's primes recognition) are treated. Their computational power and the specific complexity classes they determine are surveyed, while some recent results in relation to chaos from a new dynamic systems point of view are also presented. Audience: This book will be of interest to specialists of theoretical computer science and the parallelism challenge.

  6. Consideration sets, intentions and the inclusion of "don't know" in a two-stage model for voter choice

    NARCIS (Netherlands)

    Paap, R; van Nierop, E; van Heerde, HJ; Wedel, M; Franses, PH; Alsem, KJ

    2005-01-01

    We present a statistical model for voter choice that incorporates a consideration set stage and final vote intention stage. The first stage involves a multivariate probit (MVP) model to describe the probabilities that a candidate or a party gets considered. The second stage of the model is a

  7. Dynamic Modeling and Control Studies of a Two-Stage Bubbling Fluidized Bed Adsorber-Reactor for Solid-Sorbent CO{sub 2} Capture

    Energy Technology Data Exchange (ETDEWEB)

    Modekurti, Srinivasarao; Bhattacharyya, Debangsu; Zitney, Stephen E.

    2013-07-31

    A one-dimensional, non-isothermal, pressure-driven dynamic model has been developed for a two-stage bubbling fluidized bed (BFB) adsorber-reactor for solid-sorbent carbon dioxide (CO{sub 2}) capture using Aspen Custom Modeler® (ACM). The BFB model for the flow of gas through a continuous phase of downward moving solids considers three regions: emulsion, bubble, and cloud-wake. Both the upper and lower reactor stages are of overflow-type configuration, i.e., the solids leave from the top of each stage. In addition, dynamic models have been developed for the downcomer that transfers solids between the stages and the exit hopper that removes solids from the bottom of the bed. The models of all auxiliary equipment such as valves and gas distributor have been integrated with the main model of the two-stage adsorber reactor. Using the developed dynamic model, the transient responses of various process variables such as CO{sub 2} capture rate and flue gas outlet temperatures have been studied by simulating typical disturbances such as change in the temperature, flowrate, and composition of the incoming flue gas from pulverized coal-fired power plants. In control studies, the performance of a proportional-integral-derivative (PID) controller, feedback-augmented feedforward controller, and linear model predictive controller (LMPC) are evaluated for maintaining the overall CO{sub 2} capture rate at a desired level in the face of typical disturbances.
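As a minimal illustration of the PID option evaluated in the control studies, the sketch below holds a toy first-order "capture rate" process at its setpoint. The plant model and gains are illustrative assumptions, not the ACM reactor model or the tuned controller from the study:

```python
class PID:
    """Discrete PID controller in positional form."""
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_err = 0.0

    def step(self, setpoint, measurement):
        err = setpoint - measurement
        self.integral += err * self.dt
        deriv = (err - self.prev_err) / self.dt
        self.prev_err = err
        return self.kp * err + self.ki * self.integral + self.kd * deriv

def simulate_capture(setpoint=0.90, x0=0.50, steps=300, dt=0.1):
    """Drive a toy first-order process dx/dt = (u - x)/tau (tau = 1)
    toward the CO2-capture-rate setpoint, with the PID output u acting
    as the manipulated variable (e.g. sorbent circulation rate)."""
    pid = PID(kp=2.0, ki=1.0, kd=0.0, dt=dt)
    x = x0
    for _ in range(steps):
        u = pid.step(setpoint, x)
        x += dt * (u - x)
    return x
```

The integral term is what removes the steady-state offset: at equilibrium the error is zero and the accumulated integral alone supplies the control input needed to hold the setpoint.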

  8. Structured model of bacterial growth and tests with activated sludge in a one-stage and two-stage chemostat

    NARCIS (Netherlands)

    Harder, A.

    1979-01-01

    A kinetic model for a growing culture of micro-organisms was developed that correlated the biochemical structure of cells with quantitative physiological behaviour. The three-compartment model was adequate for simulation of continuous, batch and transient experiments with activated sludge fed on van

  9. Efficient posterior exploration of a high-dimensional groundwater model from two-stage MCMC simulation and polynomial chaos expansion

    NARCIS (Netherlands)

    Laloy, E.; Rogiers, B.; Vrugt, J.A.; Mallants, D.; Jacques, D.

    2013-01-01

    This study reports on two strategies for accelerating posterior inference of a highly parameterized and CPU-demanding groundwater flow model. Our method builds on previous stochastic collocation approaches, e.g., Marzouk and Xiu (2009) and Marzouk and Najm (2009), and uses generalized polynomial

  10. Bank Mergers Performance and the Determinants of Singaporean Banks’ Efficiency: An Application of Two-Stage Banking Models

    Directory of Open Access Journals (Sweden)

    Fadzlan Sufian

    2007-01-01

    An event-study window analysis of Data Envelopment Analysis (DEA) is employed in this study to investigate the effect of mergers and acquisitions on the efficiency of Singaporean domestic banking groups. The results suggest that the mergers have resulted in a higher post-merger mean overall efficiency of Singaporean banking groups. However, from the scale efficiency perspective, our findings do not support further consolidation in the Singaporean banking sector. We find mixed evidence on the efficiency characteristics of the acquirer and target banks. Hence, the findings do not fully support the hypothesis that a more (less) efficient bank becomes the acquirer (target). In most cases, our results further confirm the hypothesis that the acquiring bank's mean overall efficiency improves (deteriorates) post-merger as a result of merging with a more (less) efficient bank. A Tobit regression model is employed to determine the factors affecting bank performance, and the results suggest that bank profitability has a significantly positive impact on bank efficiency, whereas poor loan quality has a significantly negative influence on bank performance.

  11. TEMPERATURE FIELD MODEL OF TWO-STAGE UNDERGROUND COAL GASIFICATION

    Institute of Scientific and Technical Information of China (English)

    杨兰和

    2001-01-01

    Two-stage underground coal gasification is an effective method for producing water gas with a high heating value, and temperature is the key factor governing the production process. On the basis of a model test, a mathematical model for the two-dimensional, non-linear, unsteady temperature field is established by analyzing the distribution law of the temperature field for combustion and gasification of the coal seam in the stove, and by outlining and treating the boundary conditions. A method for selecting the model parameters is introduced. The mathematical model is solved by the control volume method, and the calculation results are analysed. The agreement between calculated and measured values indicates that the numerical simulation of the dynamic temperature field of the coal seam medium in the gasification stove is correct. It thereby provides the necessary theoretical basis for further quantitative study of the underground coal gasification process.
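The control-volume solution of a two-dimensional unsteady temperature field can be illustrated with a minimal explicit update on a uniform grid. Uniform properties and fixed (Dirichlet) boundary values are simplifying assumptions here; the paper's model handles the actual stove boundary conditions and non-linear properties:

```python
def heat_step(T, alpha, dx, dt):
    """One explicit control-volume update of dT/dt = alpha * laplacian(T)
    on a uniform 2D grid with fixed boundary values.

    Each interior cell exchanges flux with its four neighbours; the scheme
    is stable when the mesh Fourier number r = alpha*dt/dx**2 <= 0.25.
    """
    n, m = len(T), len(T[0])
    new = [row[:] for row in T]          # boundaries stay fixed
    r = alpha * dt / dx ** 2
    for i in range(1, n - 1):
        for j in range(1, m - 1):
            new[i][j] = T[i][j] + r * (T[i + 1][j] + T[i - 1][j]
                                       + T[i][j + 1] + T[i][j - 1]
                                       - 4.0 * T[i][j])
    return new
```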

  12. The Presentation of a Two Stages Model for an Optimum Operation of a Hybrid System of Wind-Pumped Storage Power Plant in the Power Market Environment

    Directory of Open Access Journals (Sweden)

    Mehdi Akbarpour

    2012-11-01

    In this study we present a new method for operating in the power market environment. One weakness in the utilization of wind units is the strong dependence of the output power level on wind; given the high uncertainty in wind speed prediction, forecasts of wind unit production capacity also carry errors. Moreover, since generators of this type are not controllable, it is better to operate them in combined systems. This study presents a new two-stage model for the optimal operation of a hybrid wind-pumped storage power plant in the power market environment, enabling wind power producers to participate successfully in the market. In the suggested hybrid wind-pumped storage system, the modeling is done in two stages for optimal participation in the power market with the highest possible profit. In the first stage, the model is optimized with respect to uncertainty in the predicted power price and wind power production, in order to bid power into the market for maximum profit. In the second stage, the model is optimized with respect to uncertainty in wind power production, in order to gain the most profit while paying the smallest penalty for imbalances in the market during system operation. The Particle Swarm Optimization (PSO) algorithm is used for the optimization. Finally, a numerical example is solved to apply the proposed model and its results are analyzed. The results show that the model is an appropriate method for operating this combined system in a market environment.
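The abstract does not report the swarm size, inertia weight, or acceleration coefficients used; the sketch below is a generic minimizing PSO with illustrative defaults, of the kind that could drive the bidding and operation decisions:

```python
import random

def pso(f, bounds, n_particles=20, iters=100, w=0.7, c1=1.5, c2=1.5, seed=0):
    """Minimal global-best particle swarm optimization (minimization).

    f: objective taking a list of coordinates; bounds: per-dimension
    (lo, hi) box constraints.  Returns (best position, best value).
    """
    rng = random.Random(seed)
    dim = len(bounds)
    pos = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    pbest_val = [f(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                lo, hi = bounds[d]
                pos[i][d] = min(max(pos[i][d] + vel[i][d], lo), hi)
            val = f(pos[i])
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], val
                if val < gbest_val:
                    gbest, gbest_val = pos[i][:], val
    return gbest, gbest_val
```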

  13. A Two-Stage Algorithm for the Closed-Loop Location-Inventory Problem Model Considering Returns in E-Commerce

    Directory of Open Access Journals (Sweden)

    Yanhui Li

    2014-01-01

    Facility location and inventory control are critical and highly related problems in the design of logistics systems for e-commerce. Meanwhile, the return ratio in Internet sales is significantly higher than in traditional business. Focusing on this problem in e-commerce logistics systems, we formulate a closed-loop location-inventory problem model considering returned merchandise, to minimize the total cost produced in both the forward and reverse logistics networks. To solve this nonlinear mixed programming model, an effective two-stage heuristic algorithm named LRCAC is designed by combining Lagrangian relaxation with the ant colony algorithm (AC). Results of numerical examples show that LRCAC outperforms the ant colony algorithm (AC) in terms of optimal solution and computing stability. The proposed model is able to help managers make the right decisions in an e-commerce environment.

  14. Measuring demand for flat water recreation using a two-stage/disequilibrium travel cost model with adjustment for overdispersion and self-selection

    Science.gov (United States)

    McKean, John R.; Johnson, Donn; Taylor, R. Garth

    2003-04-01

    An alternate travel cost model is applied to an on-site sample to estimate the value of flat water recreation on the impounded lower Snake River. Four contiguous reservoirs would be eliminated if the dams are breached to protect endangered Pacific salmon and steelhead trout. The empirical method applies truncated negative binomial regression with adjustment for endogenous stratification. The two-stage decision model assumes that recreationists allocate their time among work and leisure prior to deciding among consumer goods. The allocation of time and money among goods in the second stage is conditional on the predetermined work time and income. The second stage is a disequilibrium labor market which also applies if employers set work hours or if recreationists are not in the labor force. When work time is either predetermined, fixed by contract, or nonexistent, recreationists must consider separate prices and budgets for time and money.

  15. An overview of consumers' two-stage choice behavior model

    Institute of Scientific and Technical Information of China (English)

    赵藜; 田澎; 李相勇

    2012-01-01

    This article surveys the research on consumers' two-stage choice behavior models. It introduces the underlying theory and discusses the main research topics and methods of these models in the context of brand marketing and revenue management. It also summarizes the main contributions and limitations of current research. Finally, it proposes some promising areas for future study.

  16. Efficient evaluation of small failure probability in high-dimensional groundwater contaminant transport modeling via a two-stage Monte Carlo method: FAILURE PROBABILITY

    Energy Technology Data Exchange (ETDEWEB)

    Zhang, Jiangjiang [Zhejiang Provincial Key Laboratory of Agricultural Resources and Environment, Institute of Soil and Water Resources and Environmental Science, College of Environmental and Resource Sciences, Zhejiang University, Hangzhou China; Li, Weixuan [Pacific Northwest National Laboratory, Richland Washington USA; Lin, Guang [Department of Mathematics and School of Mechanical Engineering, Purdue University, West Lafayette Indiana USA; Zeng, Lingzao [Zhejiang Provincial Key Laboratory of Agricultural Resources and Environment, Institute of Soil and Water Resources and Environmental Science, College of Environmental and Resource Sciences, Zhejiang University, Hangzhou China; Wu, Laosheng [Department of Environmental Sciences, University of California, Riverside California USA

    2017-03-01

    In decision-making for groundwater management and contamination remediation, it is important to accurately evaluate the probability of the occurrence of a failure event. For small failure probability analysis, a large number of model evaluations are needed in the Monte Carlo (MC) simulation, which is impractical for CPU-demanding models. One approach to alleviate the computational cost caused by the model evaluations is to construct a computationally inexpensive surrogate model instead. However, using a surrogate approximation can cause an extra error in the failure probability analysis. Moreover, constructing accurate surrogates is challenging for high-dimensional models, i.e., models containing many uncertain input parameters. To address these issues, we propose an efficient two-stage MC approach for small failure probability analysis in high-dimensional groundwater contaminant transport modeling. In the first stage, a low-dimensional representation of the original high-dimensional model is sought with Karhunen–Loève expansion and sliced inverse regression jointly, which allows for the easy construction of a surrogate with polynomial chaos expansion. Then a surrogate-based MC simulation is implemented. In the second stage, the small number of samples that are close to the failure boundary are re-evaluated with the original model, which corrects the bias introduced by the surrogate approximation. The proposed approach is tested with a numerical case study and is shown to be 100 times faster than the traditional MC approach in achieving the same level of estimation accuracy.
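The Karhunen-Loève expansion, sliced inverse regression, and polynomial chaos surrogate are the paper's specific machinery, but the second-stage correction itself can be sketched generically: score every sample with the cheap surrogate, and re-run the expensive model only for samples whose surrogate output lies near the failure boundary. The band width and the toy model/surrogate pair below are illustrative assumptions:

```python
import random

def two_stage_failure_prob(model, surrogate, sample, n=100000,
                           band=0.1, threshold=1.0, seed=0):
    """Two-stage Monte Carlo estimate of P(model(x) > threshold).

    Samples whose surrogate output falls within `band` of the threshold
    are re-evaluated with the expensive model, correcting the bias the
    surrogate would otherwise introduce at the failure boundary.
    Returns (failure probability estimate, number of exact evaluations).
    """
    rng = random.Random(seed)
    failures = 0
    n_exact = 0
    for _ in range(n):
        x = sample(rng)
        s = surrogate(x)
        if abs(s - threshold) <= band:
            n_exact += 1
            failures += model(x) > threshold   # expensive, but rare
        else:
            failures += s > threshold           # trust the surrogate
    return failures / n, n_exact
```

In the test below the surrogate carries a small constant bias, yet the estimate matches the true failure probability because every borderline sample is re-checked exactly.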

  17. Generalized Spatial Two Stage Least Squares Estimation of Spatial Autoregressive Models with Autoregressive Disturbances in the Presence of Endogenous Regressors and Many Instruments

    Directory of Open Access Journals (Sweden)

    Fei Jin

    2013-05-01

    This paper studies the generalized spatial two stage least squares (GS2SLS) estimation of spatial autoregressive models with autoregressive disturbances when there are endogenous regressors with many valid instruments. Using many instruments may improve the efficiency of estimators asymptotically, but the bias might be large in finite samples, making the inference inaccurate. We consider the case where the number of instruments K increases with, but at a rate slower than, the sample size, and derive the approximate mean square errors (MSE) that account for the trade-offs between bias and variance, for both the GS2SLS estimator and a bias-corrected GS2SLS estimator. A criterion function for the optimal choice of K can be based on the approximate MSEs. Monte Carlo experiments are provided to show the performance of our procedure for choosing K.
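The two-stage core of the estimator is ordinary 2SLS: regress the endogenous regressors on the instruments, then regress the outcome on the fitted values. The sketch below is generic 2SLS, not the full GS2SLS estimator, which additionally models the spatial autoregressive error and the many-instrument correction:

```python
import numpy as np

def tsls(y, X, Z):
    """Two-stage least squares with an intercept added to both stages.

    X: endogenous regressor(s); Z: instrument(s).  Returns the coefficient
    vector [intercept, slopes...] of the second-stage regression of y on
    the first-stage fitted values.
    """
    y = np.asarray(y, dtype=float)
    X = np.column_stack([np.ones(len(y)), X])
    Z = np.column_stack([np.ones(len(y)), Z])
    # Stage 1: project the regressors onto the instrument space.
    gamma, *_ = np.linalg.lstsq(Z, X, rcond=None)
    Xhat = Z @ gamma
    # Stage 2: OLS of y on the fitted values gives the 2SLS estimator.
    beta, *_ = np.linalg.lstsq(Xhat, y, rcond=None)
    return beta
```

On simulated data with an endogenous regressor (the error enters both x and y), OLS is biased upward while the instrumented estimate recovers the true slope.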

  18. Optimizing the Two-Stage Supply Chain Inventory Model with Full Information Sharing and Two Backorders Costs Using Hybrid Geometric-Algebraic Method

    Directory of Open Access Journals (Sweden)

    Mohamed E. Seliaman

    2013-01-01

    We consider the case of a two-stage serial supply chain system. This supply chain system involves a single vendor who supplies a single buyer with a single product. The vendor's production rate is assumed finite. In addition, the demand at the buyer is assumed deterministic. In order to coordinate their replenishment policies and jointly optimize their operational costs, the two supply chain partners fully share their relevant information. For this purpose, we develop an integrated inventory replenishment model assuming linear and fixed backorder costs. We then use a hybrid geometric-algebraic method to derive the optimal replenishment policy and the minimum supply chain total cost in closed form.
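To illustrate the closed-form flavor of such results, the classic single-stage EOQ with planned backorders is shown below. This is a textbook special case, not the paper's vendor-buyer model (which covers both fixed and linear backorder costs across two stages):

```python
import math

def eoq_with_backorders(demand, setup_cost, holding_cost, backorder_cost):
    """Classic EOQ with linear backorder cost.

    demand: units per period; setup_cost: per order; holding_cost and
    backorder_cost: per unit per period.  Returns (Q*, B*): the optimal
    order quantity and the maximum backorder level,
        Q* = sqrt(2*D*K/h * (h+p)/p),   B* = Q* * h / (h+p).
    """
    ratio = (holding_cost + backorder_cost) / backorder_cost
    q = math.sqrt(2.0 * demand * setup_cost * ratio / holding_cost)
    b = q * holding_cost / (holding_cost + backorder_cost)
    return q, b
```

Allowing backorders inflates the order quantity by sqrt((h+p)/p) relative to plain EOQ, and as the backorder penalty p grows large, B* shrinks toward zero and Q* falls back to the plain EOQ value.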

  19. Parallel computing in enterprise modeling.

    Energy Technology Data Exchange (ETDEWEB)

    Goldsby, Michael E.; Armstrong, Robert C.; Shneider, Max S.; Vanderveen, Keith; Ray, Jaideep; Heath, Zach; Allan, Benjamin A.

    2008-08-01

    This report presents the results of our efforts to apply high-performance computing to entity-based simulations with a multi-use plugin for parallel computing. We use the term 'entity-based simulation' to describe a class of simulation which includes both discrete event simulation and agent based simulation. What simulations of this class share, and what differs from more traditional models, is that the result sought is emergent from a large number of contributing entities. Logistics, economic, and social simulations are members of this class, where things or people are organized or self-organize to produce a solution. Entity-based problems never have an a priori ergodic principle that will greatly simplify calculations. Because the results of entity-based simulations can only be realized at scale, scalable computing is de rigueur for large problems. Having said that, the absence of a spatial organizing principle makes the decomposition of the problem onto processors problematic. In addition, practitioners in this domain commonly use the Java programming language, which presents its own problems in a high-performance setting. The plugin we have developed, called the Parallel Particle Data Model, overcomes both of these obstacles and is now being used by two Sandia frameworks: the Decision Analysis Center and the Seldon social simulation facility. While the ability to engage U.S.-sized problems is now available to the Decision Analysis Center, this plugin is central to the success of Seldon. Because Seldon relies on computationally intensive cognitive sub-models, this work is necessary to achieve the scale necessary for realistic results. With the recent upheavals in the financial markets, and the inscrutability of terrorist activity, this simulation domain will likely need a capability with ever greater fidelity. High-performance computing will play an important part in enabling that greater fidelity.

  20. A Two-Stage Information-Theoretic Approach to Modeling Landscape-Level Attributes and Maximum Recruitment of Chinook Salmon in the Columbia River Basin.

    Energy Technology Data Exchange (ETDEWEB)

    Thompson, William L.; Lee, Danny C.

    2000-11-01

    Many anadromous salmonid stocks in the Pacific Northwest are at their lowest recorded levels, which has raised questions regarding their long-term persistence under current conditions. There are a number of factors, such as freshwater spawning and rearing habitat, that could potentially influence their numbers. Therefore, we used the latest advances in information-theoretic methods in a two-stage modeling process to investigate relationships between landscape-level habitat attributes and maximum recruitment of 25 index stocks of chinook salmon (Oncorhynchus tshawytscha) in the Columbia River basin. Our first-stage model selection results indicated that the Ricker-type stock-recruitment model with a constant Ricker a (i.e., recruits-per-spawner at low numbers of fish) across stocks was the only plausible one given these data, which contrasted with previous unpublished findings. Our second-stage results revealed that maximum recruitment of chinook salmon had a strongly negative relationship with the percentage of surrounding subwatersheds categorized as predominantly containing U.S. Forest Service and private moderate-high impact managed forest. That is, our model predicted that average maximum recruitment of chinook salmon would decrease by at least 247 fish for every 33% increase in surrounding subwatersheds so categorized. Conversely, mean annual air temperature had a positive relationship with salmon maximum recruitment, with an average increase of at least 179 fish for every 2 °C increase in mean annual air temperature.
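The Ricker stock-recruitment curve underlying the first-stage model has a simple closed form; the parameter values below are illustrative, not the fitted values from the study:

```python
import math

def ricker_recruits(spawners, a, b):
    """Ricker stock-recruitment curve R = a * S * exp(-b * S).

    `a` is recruits-per-spawner at low spawner abundance (the quantity the
    first-stage analysis held constant across stocks); `b` sets density
    dependence.  Maximum recruitment occurs at S = 1/b, where R = a/(b*e).
    """
    return a * spawners * math.exp(-b * spawners)
```

At low abundance recruitment grows almost linearly with slope a; beyond S = 1/b, density dependence dominates and recruitment declines.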

  1. On the robustness of two-stage estimators

    KAUST Repository

    Zhelonkin, Mikhail

    2012-04-01

    The aim of this note is to provide a general framework for the analysis of the robustness properties of a broad class of two-stage models. We derive the influence function, the change-of-variance function, and the asymptotic variance of a general two-stage M-estimator, and provide their interpretations. We illustrate our results in the case of the two-stage maximum likelihood estimator and the two-stage least squares estimator. © 2011.

  2. A two-stage support-vector-regression optimization model for municipal solid waste management - a case study of Beijing, China.

    Science.gov (United States)

    Dai, C; Li, Y P; Huang, G H

    2011-12-01

    In this study, a two-stage support-vector-regression optimization model (TSOM) is developed for the planning of municipal solid waste (MSW) management in the urban districts of Beijing, China. It represents a new effort to enhance analysis accuracy in optimizing the MSW management system by coupling the support-vector-regression (SVR) model with an interval-parameter mixed integer linear programming (IMILP) model. The developed TSOM can not only predict the city's future waste generation amount, but also reflect the dynamic, interactive, and uncertain characteristics of the MSW management system. Four kernel functions, i.e. the linear kernel, polynomial kernel, radial basis function, and multi-layer perceptron kernel, are compared based on three quantitative simulation performance criteria [i.e. prediction accuracy (PA), fitting accuracy (FA) and overall accuracy (OA)]. The SVR with the polynomial kernel gives accurate predictions of the MSW generation rate, with all three quantitative simulation performance criteria being over 96%. Two cases are considered based on different waste management policies. The results are valuable for supporting the adjustment of the existing waste-allocation patterns to raise the city's waste diversion rate, as well as capacity planning of the waste management system to satisfy the city's increasing waste treatment/disposal demands.
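The four candidate kernels have standard functional forms, sketched below; the hyperparameters (degree, offsets, gamma) are illustrative assumptions, since the abstract does not report the tuned values:

```python
import math

def linear_kernel(x, z):
    """k(x, z) = <x, z>."""
    return sum(a * b for a, b in zip(x, z))

def polynomial_kernel(x, z, degree=3, coef0=1.0):
    """k(x, z) = (<x, z> + c)^d."""
    return (linear_kernel(x, z) + coef0) ** degree

def rbf_kernel(x, z, gamma=0.5):
    """Radial basis function: k(x, z) = exp(-gamma * ||x - z||^2)."""
    return math.exp(-gamma * sum((a - b) ** 2 for a, b in zip(x, z)))

def mlp_kernel(x, z, scale=1.0, offset=-1.0):
    """Multi-layer perceptron (sigmoid) kernel: tanh(scale*<x, z> + offset)."""
    return math.tanh(scale * linear_kernel(x, z) + offset)
```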

  3. A Parallel Programming Model With Sequential Semantics

    Science.gov (United States)

    1996-01-01

    Parallel programming is more difficult than sequential programming in part because of the complexity of reasoning, testing, and debugging in the context of concurrency. In the thesis, we present and investigate a parallel programming model that provides direct control of parallelism in a notation

  4. Two-stage model for flex-route transit scheduling

    Institute of Scientific and Technical Information of China (English)

    邱丰; 李文权; 沈金星

    2014-01-01

    A two-stage scheduling model is designed for flex-route transit to handle both predetermined requests and real-time requests. An optimization model for the vehicle routing problem, which minimizes passenger travel costs and vehicle operating costs, is built as the first-stage scheduling model to serve predetermined requests. A simulated annealing algorithm is developed to solve the first-stage model and obtain an initial vehicle routing plan. The second-stage scheduling model is established for real-time requests, and four types of passengers are inserted into the vehicle routing plan through a heuristic insertion algorithm. Simulation experiments based on a realistic case demonstrate the feasibility of the two-stage scheduling model, and the results indicate that system performance improves as more passengers choose to book the flex-route service in advance. There is approximately a 10% improvement in system performance when demand reaches 25 passengers/h and 70% of passengers make appointments, compared with the scenario of purely real-time passenger demand.
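The second-stage insertion step can be sketched as a cheapest-insertion search: try the new request at every position in the current route and keep the position with the smallest added distance. This minimal sketch ignores the capacity and time-window feasibility checks that a real flex-route scheduler would enforce:

```python
def best_insertion(route, stop, dist):
    """Cheapest insertion of a new request `stop` into `route`.

    route: ordered list of stops; dist(a, b): travel cost between stops.
    Returns (position, added cost) for the insertion point that increases
    the route length the least; endpoints of the route stay fixed.
    """
    best_pos, best_cost = None, float("inf")
    for i in range(1, len(route)):
        added = (dist(route[i - 1], stop) + dist(stop, route[i])
                 - dist(route[i - 1], route[i]))
        if added < best_cost:
            best_pos, best_cost = i, added
    return best_pos, best_cost
```

With stops modelled as points on a line and distance as absolute difference, a request lying between two consecutive stops inserts at zero added cost.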

  5. PDDP, A Data Parallel Programming Model

    Directory of Open Access Journals (Sweden)

    Karen H. Warren

    1996-01-01

    Full Text Available PDDP, the parallel data distribution preprocessor, is a data parallel programming model for distributed memory parallel computers. PDDP implements High Performance Fortran-compatible data distribution directives and parallelism expressed by the use of Fortran 90 array syntax, the FORALL statement, and the WHERE construct. Distributed data objects belong to a global name space; other data objects are treated as local and replicated on each processor. PDDP allows the user to program in a shared memory style and generates code that is portable to a variety of parallel machines. For interprocessor communication, PDDP uses the fastest communication primitives on each platform.

  6. A Topological Model for Parallel Algorithm Design

    Science.gov (United States)

    1991-09-01

    A TOPOLOGICAL MODEL FOR PARALLEL ALGORITHM DESIGN. AFIT dissertation AFIT/DS/ENG/91-02, by Jeffrey A. Simmers, Captain, USAF. Approved for public release; distribution unlimited. [Only fragmentary report front matter and bibliography entries survive in this source record, e.g. J. Dugundji, Topology, Allyn and Bacon, 1966; R. Duncan, A Survey of Parallel Computer Architectures, IEEE.]

  7. Immunohistochemical cellular distribution of proteins related to M phase regulation in early proliferative lesions induced by tumor promotion in rat two-stage carcinogenesis models.

    Science.gov (United States)

    Yafune, Atsunori; Taniai, Eriko; Morita, Reiko; Akane, Hirotoshi; Kimura, Masayuki; Mitsumori, Kunitoshi; Shibutani, Makoto

    2014-01-01

    We have previously reported that 28-day treatment with hepatocarcinogens increases liver cells expressing p21(Cip1), a G1/S checkpoint protein, and M phase proteins, i.e., nuclear Cdc2, Aurora B, phosphorylated-Histone H3 (p-Histone H3) and heterochromatin protein 1α (HP1α), in rats. To examine the roles of these markers in the early stages of carcinogenesis, we investigated their cellular distribution in several carcinogenic target organs using rat two-stage carcinogenesis models. Promoting agents targeting the liver (piperonyl butoxide and methapyrilene hydrochloride), thyroid (sulfadimethoxine), urinary bladder (phenylethyl isothiocyanate), and forestomach and glandular stomach (catechol) were administered to rats after initiation treatment for the liver with N-diethylnitrosamine, thyroid with N-bis(2-hydroxypropyl)nitrosamine, urinary bladder with N-butyl-N-(4-hydroxybutyl)nitrosamine, and forestomach and glandular stomach with N-methyl-N'-nitro-N-nitrosoguanidine. Numbers of cells positive for nuclear Cdc2, Aurora B, p-Histone H3 and HP1α increased within preneoplastic lesions as determined by glutathione S-transferase placental form in the liver or phosphorylated p44/42 mitogen-activated protein kinase in the thyroid, and within hyperplastic lesions having no known preneoplastic markers in the urinary bladder, forestomach and glandular stomach. Immunoreactive cells for p21(Cip1) were decreased within thyroid preneoplastic lesions; however, they were increased within liver preneoplastic lesions and hyperplastic lesions in other organs. These results suggest that M phase disruption commonly occurs during the formation of preneoplastic lesions and hyperplastic lesions. Differences in the expression patterns of p21(Cip1) between thyroid preneoplastic and proliferative lesions in other organs may reflect differences in cell cycle regulation involving G1/S checkpoint function between proliferative lesions in each organ.

  8. Inhibitory effect of α-lipoic acid on thioacetamide-induced tumor promotion through suppression of inflammatory cell responses in a two-stage hepatocarcinogenesis model in rats.

    Science.gov (United States)

    Fujii, Yuta; Segawa, Risa; Kimura, Masayuki; Wang, Liyun; Ishii, Yuji; Yamamoto, Ryuichi; Morita, Reiko; Mitsumori, Kunitoshi; Shibutani, Makoto

    2013-09-25

    To investigate the protective effect of α-lipoic acid (a-LA) on the hepatocarcinogenic process promoted by thioacetamide (TAA), we used a two-stage liver carcinogenesis model in N-diethylnitrosamine (DEN)-initiated and TAA-promoted rats. We examined the modifying effect of co-administered a-LA on the liver tissue environment surrounding preneoplastic hepatocellular lesions, with particular focus on hepatic macrophages and the mechanism behind the decrease in apoptosis of cells surrounding preneoplastic hepatocellular lesions during the early stages of hepatocellular tumor promotion. TAA increased the number and area of glutathione S-transferase placental form (GST-P)(+) liver cell foci and the numbers of proliferating and apoptotic cells in the liver. Co-administration with a-LA suppressed these effects. TAA also increased the numbers of ED2(+), cyclooxygenase-2(+), and heme oxygenase-1(+) hepatic macrophages as well as the number of CD3(+) lymphocytes. These effects were also suppressed by a-LA. Transcript levels of some inflammation-related genes were upregulated by TAA and downregulated by a-LA in real-time RT-PCR analysis. Outside the GST-P(+) foci, a-LA reduced the numbers of apoptotic cells, active caspase-8(+) cells and death receptor (DR)-5(+) cells. These results suggest that hepatic macrophages producing proinflammatory factors may be activated in TAA-induced tumor promotion. a-LA may suppress tumor-promoting activity by suppressing the activation of these macrophages and the subsequent inflammatory responses. Furthermore, a-LA may suppress tumor-promoting activity by suppressing the DR5-mediated extrinsic pathway of apoptosis and the subsequent regeneration of liver cells outside GST-P(+) foci.

  9. Reply to Aitchison and Ali: Reconciling Himalayan ophiolite and Asian magmatic arc records with a two-stage India-Asia collision model

    NARCIS (Netherlands)

    van Hinsbergen, D.J.J.; Lippert, P.C.; Dupont-Nivet, G.; McQuarrie, N.; Doubrovine, P.V.; Spakman, W.; Torsvik, T.H.

    2012-01-01

    We recently presented a compilation of paleomagnetic data arguing for Cretaceous extension within Greater India. These data imply that a Tibetan Himalayan (TH) microcontinent rifted away from India, opening an oceanic Greater India Basin (GIB) in its wake. Consequently, we postulated a two-stage India-Asia collision.

  10. Design, Modelling and Simulation of Two-Phase Two-Stage Electronic System with Orthogonal Output for Supplying of Two-Phase ASM

    Directory of Open Access Journals (Sweden)

    Michal Prazenica

    2011-01-01

    Full Text Available This paper deals with two-stage two-phase electronic systems with orthogonal output voltages and currents (DC/AC/AC). The design of a two-stage DC/AC/AC high-frequency converter with two-phase orthogonal output using a single-phase matrix converter is also introduced. Its output voltages are strongly non-harmonic, so they must be pulse-modulated to obtain the requested nearly sinusoidal currents with low total harmonic distortion. Simulation results of the matrix converter for both steady and transient states with IM motors are given in the paper, together with experimental verification under an R-L load. The simulation results confirm a very good time-waveform of the phase current, and the system seems suitable for low-cost applications in the automotive/aerospace industries and in applications with high-frequency voltage sources.

  11. Two-stage Supply Chain Optimal Model Considering Carbon Emissions

    Institute of Scientific and Technical Information of China (English)

    刘杰; 杨满; 李强

    2016-01-01

    With the increasing complexity of the global supply chain environment, more and more companies are working to solve the environmental problems they face. Based on the traditional EOQ model, a two-stage supply chain composed of a single manufacturer and a single retailer is established. Under decentralized decision-making, the manufacturer determines the product's wholesale price and carbon emissions, and the retailer determines the retail price and the order quantity. Using example data, this paper analyzes how changes in the consumers' low-carbon preference coefficient and in the ordering cost affect the decentralized and centralized decisions of the supply chain; the centralized decision is solved with a particle swarm algorithm. The results show that under centralized decision-making the supply chain achieves both higher market demand and more profit with lower carbon emissions, which benefits environmental protection.
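    The centralized decision in this record is solved with a particle swarm algorithm. A minimal PSO sketch over a hypothetical price/order-quantity profit function looks as follows; the linear demand curve, wholesale cost, and ordering cost are invented for illustration and are not the paper's model.

    ```python
    import random

    def profit(p, q, wholesale=5.0, base_demand=100.0, sensitivity=8.0, order_cost=50.0):
        # Hypothetical centralized profit: price-sensitive linear demand,
        # revenue minus wholesale cost, minus ordering cost spread over the lot size.
        demand = max(base_demand - sensitivity * p, 0.0)
        sold = min(demand, q)
        return sold * (p - wholesale) - order_cost * demand / max(q, 1.0)

    def pso(fn, bounds, n=20, iters=200, seed=1):
        rng = random.Random(seed)
        dim = len(bounds)
        pos = [[rng.uniform(*bounds[d]) for d in range(dim)] for _ in range(n)]
        vel = [[0.0] * dim for _ in range(n)]
        pbest = [p[:] for p in pos]          # personal bests
        pval = [fn(*p) for p in pos]
        g = max(range(n), key=lambda i: pval[i])
        gbest, gval = pbest[g][:], pval[g]   # global best
        for _ in range(iters):
            for i in range(n):
                for d in range(dim):
                    vel[i][d] = (0.7 * vel[i][d]
                                 + 1.5 * rng.random() * (pbest[i][d] - pos[i][d])
                                 + 1.5 * rng.random() * (gbest[d] - pos[i][d]))
                    pos[i][d] = min(max(pos[i][d] + vel[i][d], bounds[d][0]), bounds[d][1])
                v = fn(*pos[i])
                if v > pval[i]:
                    pbest[i], pval[i] = pos[i][:], v
                    if v > gval:
                        gbest, gval = pos[i][:], v
        return gbest, gval

    # Decision variables: retail price in [5, 12.5], order quantity in [1, 120]
    (best_price, best_qty), best_profit = pso(profit, [(5.0, 12.5), (1.0, 120.0)])
    ```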

  12. Parallel Computing of Ocean General Circulation Model

    Institute of Scientific and Technical Information of China (English)

    2001-01-01

    This paper discusses the parallel computing of the third-generation Ocean General Circulation Model (OGCM) from the State Key Laboratory of Numerical Modeling for Atmospheric Sciences and Geophysical Fluid Dynamics (LASG), Institute of Atmospheric Physics (IAP). Several optimization strategies for parallel computing of the OGCM (POGCM) on Scalable Shared Memory Multiprocessors (S2MP) are also presented. Using the Message Passing Interface (MPI), we obtain superlinear speedup on the SGI Origin 2000 for the parallel OGCM (POGCM) after optimization.

  13. Structured building model reduction toward parallel simulation

    Energy Technology Data Exchange (ETDEWEB)

    Dobbs, Justin R. [Cornell University; Hencey, Brondon M. [Cornell University

    2013-08-26

    Building energy model reduction exchanges accuracy for improved simulation speed by reducing the number of dynamical equations. Parallel computing aims to improve simulation times without loss of accuracy but is poorly utilized by contemporary simulators and is inherently limited by inter-processor communication. This paper bridges these disparate techniques to implement efficient parallel building thermal simulation. We begin with a survey of three structured reduction approaches that compares their performance to a leading unstructured method. We then use structured model reduction to find thermal clusters in the building energy model and allocate processing resources. Experimental results demonstrate faster simulation and low error without any interprocessor communication.

  14. Parallel models of associative memory

    CERN Document Server

    Hinton, Geoffrey E

    2014-01-01

    This update of the 1981 classic on neural networks includes new commentaries by the authors that show how the original ideas are related to subsequent developments. As researchers continue to uncover ways of applying the complex information processing abilities of neural networks, they give these models an exciting future which may well involve revolutionary developments in understanding the brain and the mind -- developments that may allow researchers to build adaptive intelligent machines. The original chapters show where the ideas came from and the new commentaries show where they are going

  15. Iteration schemes for parallelizing models of superconductivity

    Energy Technology Data Exchange (ETDEWEB)

    Gray, P.A. [Michigan State Univ., East Lansing, MI (United States)

    1996-12-31

    The time dependent Lawrence-Doniach model, valid for high fields and high values of the Ginzburg-Landau parameter, is often used for studying vortex dynamics in layered high-T{sub c} superconductors. When solving these equations numerically, the added degrees of complexity due to the coupling and nonlinearity of the model often warrant the use of high-performance computers for their solution. However, the interdependence between the layers can be manipulated so as to allow parallelization of the computations at an individual layer level. The reduced parallel tasks may then be solved independently using a heterogeneous cluster of networked workstations connected together with Parallel Virtual Machine (PVM) software. Here, this parallelization of the model is discussed and several computational implementations of varying degrees of parallelism are presented. Computational results are also given which contrast properties of convergence speed, stability, and consistency of these implementations. Included in these results are models involving the motion of vortices due to an applied current and pinning effects due to various material properties.
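    The layer-level decoupling this record exploits can be sketched in toy form: interlayer coupling terms are frozen from the previous iterate, so each layer can be relaxed independently (here with a thread pool standing in for the PVM workstation cluster). The update rule and numbers are illustrative, not the Lawrence-Doniach equations.

    ```python
    from concurrent.futures import ThreadPoolExecutor

    def relax_layer(layer, coupling, omega=0.8):
        # One relaxation sweep on a single layer's values, holding the
        # interlayer coupling term fixed (hypothetical toy update rule).
        return [(1 - omega) * u + omega * 0.5 * (u + c)
                for u, c in zip(layer, coupling)]

    def parallel_sweep(layers):
        # Freeze interlayer terms from the previous iterate, then update each
        # layer independently -- the manipulation that permits layer-level parallelism.
        n = len(layers)
        couplings = [layers[(k + 1) % n] for k in range(n)]
        with ThreadPoolExecutor() as pool:
            return list(pool.map(relax_layer, layers, couplings))

    # Three toy "layers", each holding three grid values
    state = [[1.0, 0.0, -1.0], [0.5, 0.5, 0.5], [0.0, 1.0, 0.0]]
    for _ in range(50):
        state = parallel_sweep(state)
    # The averaging update drives each column of values toward its cross-layer mean.
    ```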

  16. A hybrid two-stage flexible flowshop scheduling problem with m identical parallel machines and a burn-in processor separately

    Institute of Scientific and Technical Information of China (English)

    何龙敏; 孙世杰

    2007-01-01

    A hybrid two-stage flowshop scheduling problem was considered which involves m identical parallel machines at Stage 1 and a burn-in processor M at Stage 2, with the makespan taken as the minimization objective. This scheduling problem is NP-hard in general; we divide it into eight subcases. For all subcases except the following two: (1) b ≥ an, max{m, B} < n; (2) a1 ≤ b ≤ an, m ≤ B < n, NP-hardness was proved or pointed out, corresponding approximation algorithms were constructed, and their worst-case performances were estimated. In all these approximation algorithms, the Multifit and PTAS algorithms, respectively, were used when the jobs were scheduled on the m identical parallel machines.
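    The Multifit routine used for the Stage-1 machines can be sketched as a binary search over machine capacity with First-Fit-Decreasing packing; the job list below is arbitrary, and the iteration count and bounds are illustrative choices.

    ```python
    def ffd_fits(jobs, m, cap):
        """First-Fit-Decreasing: pack jobs onto m machines of capacity cap, or None."""
        loads = [0.0] * m
        for p in sorted(jobs, reverse=True):
            for i in range(m):
                if loads[i] + p <= cap:
                    loads[i] += p
                    break
            else:  # job fits on no machine at this capacity
                return None
        return loads

    def multifit(jobs, m, rounds=20):
        # Binary search on the machine capacity; the classic Multifit scheme
        # for makespan minimization on identical parallel machines (P||Cmax).
        lo = max(max(jobs), sum(jobs) / m)        # no schedule can beat this
        hi = max(max(jobs), 2.0 * sum(jobs) / m)  # generous upper bound (FFD fits here)
        for _ in range(rounds):
            mid = (lo + hi) / 2
            if ffd_fits(jobs, m, mid) is not None:
                hi = mid
            else:
                lo = mid
        return max(ffd_fits(jobs, m, hi))

    jobs = [7, 9, 4, 6, 3, 8, 5, 2]   # hypothetical processing times
    makespan = multifit(jobs, 3)      # schedule on m = 3 identical machines
    ```

    For this instance the search settles on a makespan of 15 (e.g. {9, 6}, {8, 5, 2}, {7, 4, 3}), which matches the lower bound ceil(44/3).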

  17. Two stage gear tooth dynamics program

    Science.gov (United States)

    Boyd, Linda S.

    1989-01-01

    The epicyclic gear dynamics program was expanded to add the option of evaluating the tooth pair dynamics for two epicyclic gear stages with peripheral components. This was a practical extension to the program as multiple gear stages are often used for speed reduction, space, weight, and/or auxiliary units. The option was developed for either stage to be a basic planetary, star, single external-external mesh, or single external-internal mesh. The two stage system allows for modeling of the peripherals with an input mass and shaft, an output mass and shaft, and a connecting shaft. Execution of the initial test case indicated an instability in the solution with the tooth pair loads growing to excessive magnitudes. A procedure to trace the instability is recommended, as well as a method of reducing the program's computation time by reducing the number of boundary condition iterations.

  18. A Scalable Prescriptive Parallel Debugging Model

    DEFF Research Database (Denmark)

    Jensen, Nicklas Bo; Quarfot Nielsen, Niklas; Lee, Gregory L.

    2015-01-01

    Debugging is a critical step in the development of any parallel program. However, the traditional interactive debugging model, where users manually step through code and inspect their application, does not scale well even for current supercomputers due to its centralized nature. While lightweight...

  19. Synthetic models of distributed memory parallel programs

    Energy Technology Data Exchange (ETDEWEB)

    Poplawski, D.A. (Michigan Technological Univ., Houghton, MI (USA). Dept. of Computer Science)

    1990-09-01

    This paper deals with the construction and use of simple synthetic programs that model the behavior of more complex, real parallel programs. Synthetic programs can be used in many ways: to construct an easily ported suite of benchmark programs, to experiment with alternate parallel implementations of a program without actually writing them, and to predict the behavior and performance of an algorithm on a new or hypothetical machine. Synthetic programs are constructed easily from scratch, from existing programs, and can even be constructed using nothing but information obtained from traces of the real program's execution.

  20. Electromagnetic Physics Models for Parallel Computing Architectures

    Science.gov (United States)

    Amadio, G.; Ananya, A.; Apostolakis, J.; Aurora, A.; Bandieramonte, M.; Bhattacharyya, A.; Bianchini, C.; Brun, R.; Canal, P.; Carminati, F.; Duhem, L.; Elvira, D.; Gheata, A.; Gheata, M.; Goulas, I.; Iope, R.; Jun, S. Y.; Lima, G.; Mohanty, A.; Nikitina, T.; Novak, M.; Pokorski, W.; Ribon, A.; Seghal, R.; Shadura, O.; Vallecorsa, S.; Wenzel, S.; Zhang, Y.

    2016-10-01

    The recent emergence of hardware architectures characterized by many-core or accelerated processors has opened new opportunities for concurrent programming models taking advantage of both SIMD and SIMT architectures. GeantV, a next generation detector simulation, has been designed to exploit both the vector capability of mainstream CPUs and multi-threading capabilities of coprocessors including NVidia GPUs and Intel Xeon Phi. The characteristics of these architectures are very different in terms of the vectorization depth and type of parallelization needed to achieve optimal performance. In this paper we describe implementation of electromagnetic physics models developed for parallel computing architectures as a part of the GeantV project. Results of preliminary performance evaluation and physics validation are presented as well.

  1. Perception of successive brief objects as a function of stimulus onset asynchrony: model experiments based on two-stage synchronization of neuronal oscillators.

    Science.gov (United States)

    Bachmann, Talis; Kirt, Toomas

    2013-12-01

    Recently we introduced a new version of the perceptual retouch model incorporating two interactive binding operations: binding features for objects, and binding the bound feature-objects with a large-scale oscillatory system that acts as an intermediary for the perceptual information to reach consciousness-level representation. The relative level of synchronized firing of the neurons representing the features of an object, obtained after the second-stage synchronizing modulation, is used as the equivalent of conscious perception of the corresponding object. Here, this model is used to simulate the interaction of two successive featured objects as a function of stimulus onset asynchrony (SOA). The model output reproduces typical results of mutual masking: with the shortest and longest SOAs, correct perception rates for the first and second objects are comparable, while with intermediate SOAs the second object dominates over the first. Additionally, with the shortest SOAs, misbinding of features to form illusory objects is simulated by the model.

  2. The role of outside-school factors in science education: a two-stage theoretical model linking Bourdieu and Sen, with a case study

    Science.gov (United States)

    Gokpinar, Tuba; Reiss, Michael

    2016-05-01

    The literature in science education highlights the potentially significant role of outside-school factors such as parents, cultural contexts and role models in students' formation of science attitudes and aspirations, and their attainment in science classes. In this paper, building on and linking Bourdieu's key concepts of habitus, cultural and social capital, and field with Sen's capability approach, we develop a model of students' science-related capability development. Our model proposes that the role of outside-school factors is twofold, first, in providing an initial set of science-related resources (i.e. habitus, cultural and social capital), and then in conversion of these resources to science-related capabilities. The model also highlights the distinction between science-related functionings (outcomes achieved by individuals) and science-related capabilities (ability to achieve desired functionings), and argues that it is necessary to consider science-related capability development in evaluating the effectiveness of science education. We then test our theoretical model with an account of three Turkish immigrant students' science-related capabilities and the role of outside-school factors in forming and extending these capabilities. We use student and parent interviews, student questionnaires and in-class observations to provide an analysis of how outside-school factors influence these students' attitudes, aspirations and attainment in science.

  3. A two-stage planning and control model toward Economically Adapted Power Distribution Systems using analytical hierarchy processes and fuzzy optimization

    Energy Technology Data Exchange (ETDEWEB)

    Schweickardt, Gustavo [Instituto de Economia Energetica, Fundacion Bariloche, Centro Atomico Bariloche - Pabellon 7, Av. Bustillo km 9500, 8400 Bariloche (Argentina); Miranda, Vladimiro [INESC Porto, Instituto de Engenharia de Sistemas e Computadores do Porto and FEUP, Faculdade de Engenharia da Universidade do Porto, R. Dr. Roberto Frias, 378, 4200-465 Porto (Portugal)

    2009-07-15

    This work presents a model to evaluate the Distribution System dynamic de-adaptation with respect to its planning for a given Tariff Control period. The starting point for the modeling is the set of results from a multi-criteria method based on Fuzzy Dynamic Programming and on Analytic Hierarchy Processes applied over a mid/short-term horizon (stage 1). Then, the decision-making activities using the Analytic Hierarchy Processes allow defining, for a Control of System De-adaptation (stage 2), a vector to evaluate the System Dynamic Adaptation. It is directly associated with an eventual series of imbalances that take place during the system's evolution. (author)

  4. The Role of Outside-School Factors in Science Education: A Two-Stage Theoretical Model Linking Bourdieu and Sen, with a Case Study

    Science.gov (United States)

    Gokpinar, Tuba; Reiss, Michael

    2016-01-01

    The literature in science education highlights the potentially significant role of outside-school factors such as parents, cultural contexts and role models in students' formation of science attitudes and aspirations, and their attainment in science classes. In this paper, building on and linking Bourdieu's key concepts of habitus, cultural and…

  6. A Two-Stage Algorithm for the Closed-Loop Location-Inventory Problem Model Considering Returns in E-Commerce

    OpenAIRE

    Yanhui Li; Mengmeng Lu; Bailing Liu

    2014-01-01

    Facility location and inventory control are critical and highly related problems in the design of a logistics system for e-commerce. Meanwhile, the return ratio in Internet sales is significantly higher than in traditional business. Focusing on this problem in e-commerce logistics systems, we formulate a closed-loop location-inventory problem model considering returned merchandise to minimize the total cost produced in both the forward and reverse logistics networks. To solve t...

  7. Methane and Environmental Change during the Paleocene-Eocene Thermal Maximum (PETM): Modeling the PETM Onset as a Two-stage Event

    Science.gov (United States)

    Carozza, David A.; Mysak, Lawrence A.; Schmidt, Gavin A.

    2011-01-01

    An atmospheric CH4 box model coupled to a global carbon cycle box model is used to constrain the carbon emission associated with the PETM and assess the role of CH4 during this event. A range of atmospheric and oceanic emission scenarios representing different amounts, rates, and isotopic signatures of emitted carbon are used to model the PETM onset. The first 3 kyr of the onset, a pre-isotope-excursion stage, is simulated by the atmospheric release of 900 to 1100 Pg C of CH4 with a delta C-13 of -22 to -30 per mil. For a global average warming of 3 deg C, a release of CO2 to the ocean and CH4 to the atmosphere totalling 900 to 1400 Pg C, with a delta C-13 of -50 to -60 per mil, simulates the subsequent 1-kyr isotope excursion stage. To explain the observations, the carbon must have been released over at most 500 years. The first-stage results cannot be associated with any known PETM hypothesis. However, the second-stage results are consistent with a methane hydrate source. More than a single source of carbon is required to explain the PETM onset.
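    The isotope mass balance behind such box-model estimates can be written in a few lines. The exogenic reservoir size and initial composition below are round hypothetical figures, not the paper's calibrated values; only the stage-2 release size and signature are taken from the abstract's range.

    ```python
    def mixed_delta(mass0, delta0, added_mass, added_delta):
        # Two-component isotope mass balance: d13C of a carbon reservoir
        # after adding isotopically light carbon (masses in Pg C, d13C in per mil).
        return (mass0 * delta0 + added_mass * added_delta) / (mass0 + added_mass)

    # Hypothetical exogenic carbon reservoir and its initial d13C
    reservoir, d13c = 40000.0, 0.0
    # A stage-2-style release: 1200 Pg C of hydrate-like carbon at -55 per mil
    after = mixed_delta(reservoir, d13c, 1200.0, -55.0)
    excursion = after - d13c   # negative carbon isotope excursion, per mil
    ```

    With these assumed numbers the excursion is roughly -1.6 per mil, showing why a very light (methane-like) source can produce a large excursion from a modest carbon mass.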

  8. Residential Two-Stage Gas Furnaces - Do They Save Energy?

    Energy Technology Data Exchange (ETDEWEB)

    Lekov, Alex; Franco, Victor; Lutz, James

    2006-05-12

    Residential two-stage gas furnaces account for almost a quarter of the total number of models listed in the March 2005 GAMA directory of equipment certified for sale in the United States. Two-stage furnaces are expanding their presence in the market mostly because they meet consumer expectations for improved comfort. Currently, the U.S. Department of Energy (DOE) test procedure serves as the method for reporting furnace total fuel and electricity consumption under laboratory conditions. In 2006, American Society of Heating Refrigeration and Air-conditioning Engineers (ASHRAE) proposed an update to its test procedure which corrects some of the discrepancies found in the DOE test procedure and provides an improved methodology for calculating the energy consumption of two-stage furnaces. The objectives of this paper are to explore the differences in the methods for calculating two-stage residential gas furnace energy consumption in the DOE test procedure and in the 2006 ASHRAE test procedure and to compare test results to research results from field tests. Overall, the DOE test procedure shows a reduction in the total site energy consumption of about 3 percent for two-stage compared to single-stage furnaces at the same efficiency level. In contrast, the 2006 ASHRAE test procedure shows almost no difference in the total site energy consumption. The 2006 ASHRAE test procedure appears to provide a better methodology for calculating the energy consumption of two-stage furnaces. The results indicate that, although two-stage technology by itself does not save site energy, the combination of two-stage furnaces with BPM motors provides electricity savings, which are confirmed by field studies.

  9. Two-stage model for time-varying effects of discrete longitudinal covariates with applications in analysis of daily process data.

    Science.gov (United States)

    Yang, Hanyu; Cranford, James A; Li, Runze; Buu, Anne

    2015-02-20

    This study proposes a generalized time-varying effect model that can be used to characterize a discrete longitudinal covariate process and its time-varying effect on a later outcome that may be discrete. The proposed method can be applied to examine two important research questions for daily process data: measurement reactivity and predictive validity. We demonstrate these applications using health risk behavior data collected from alcoholic couples through an interactive voice response system. The statistical analysis results show that the effect of measurement reactivity may only be evident in the first week of interactive voice response assessment. Moreover, the level of urge to drink before measurement reactivity takes effect may be more predictive of a later depression outcome. Our simulation study shows that the performance of the proposed method improves with larger sample sizes, more time points, and smaller proportions of zeros in the binary longitudinal covariate.

  10. Dynamic stiffness model of spherical parallel robots

    Science.gov (United States)

    Cammarata, Alessandro; Caliò, Ivo; D'Urso, Domenico; Greco, Annalisa; Lacagnina, Michele; Fichera, Gabriele

    2016-12-01

    A novel approach to study the elastodynamics of Spherical Parallel Robots is described through an exact dynamic model. Timoshenko arches are used to simulate flexible curved links while the base and mobile platforms are modelled as rigid bodies. Spatial joints are inherently included into the model without Lagrangian multipliers. At first, the equivalent dynamic stiffness matrix of each leg, made up of curved links joined by spatial joints, is derived; then these matrices are assembled to obtain the Global Dynamic Stiffness Matrix of the robot at a given pose. Actuator stiffness is also included into the model to verify its influence on vibrations and modes. The latter are found by applying the Wittrick-Williams algorithm. Finally, numerical simulations and direct comparison to commercial FE results are used to validate the proposed model.

  11. The construction of customized two-stage tests

    NARCIS (Netherlands)

    Adema, Jos J.

    1990-01-01

    In this paper mixed integer linear programming models for customizing two-stage tests are given. Model constraints are imposed with respect to test composition, administration time, inter-item dependencies, and other practical considerations. It is not difficult to modify the models to make them use
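    The kind of constrained selection these mixed integer programming models perform can be illustrated, for a toy item bank, by brute-force enumeration instead of an ILP solver. The item times, information values, and the "enemy pair" dependency constraint below are invented for illustration.

    ```python
    from itertools import combinations

    # Hypothetical item bank: (item_id, administration_time_min, information)
    bank = [(1, 4, 0.9), (2, 6, 1.4), (3, 3, 0.6),
            (4, 5, 1.1), (5, 7, 1.6), (6, 2, 0.4)]

    def best_testlet(bank, n_items, max_time, exclude_pairs=()):
        # Enumerate candidate testlets; keep the most informative one that
        # respects the time limit and inter-item dependency constraints.
        best, best_info = None, -1.0
        for combo in combinations(bank, n_items):
            ids = {item[0] for item in combo}
            if sum(item[1] for item in combo) > max_time:
                continue
            if any(a in ids and b in ids for a, b in exclude_pairs):
                continue
            info = sum(item[2] for item in combo)
            if info > best_info:
                best, best_info = combo, info
        return best, best_info

    # Pick 3 items, at most 15 minutes, items 2 and 5 may not appear together
    testlet, info = best_testlet(bank, n_items=3, max_time=15, exclude_pairs=[(2, 5)])
    ```

    An MILP formulation expresses the same constraints with 0/1 decision variables and scales to realistic banks, which is what the record's models do for two-stage tests.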

  12. A parallel-pipelining software process model

    Institute of Scientific and Technical Information of China (English)

    2007-01-01

    A software process is a framework for effective and timely delivery of software systems, and it plays a crucial role in software success. However, the development of large-scale software still faces the crisis of high risks, low quality, high costs and long cycle times. This paper proposes a three-phase parallel-pipelining software process model for improving speed and productivity, and for reducing software costs and risks, without sacrificing software quality. In this model, two strategies are presented. One strategy, based on subsystem-cost priority, is used to prevent wasted software development cost and to reduce software complexity as well; the other strategy, used for balancing subsystem complexity, is designed to reduce software complexity in the later development stages. Moreover, the proposed function-detailed and workload-simplified subsystem pipelining software process model presents much higher parallelism than the concurrent incremental model. Finally, the component-based product line technology not only ensures software quality and further reduces cycle time, software costs, and software risks, but also sufficiently and rationally utilizes previous software product resources and enhances the competitiveness of software development organizations.

  13. Parallelization of the Coupled Earthquake Model

    Science.gov (United States)

    Block, Gary; Li, P. Peggy; Song, Yuhe T.

    2007-01-01

    This Web-based tsunami simulation system allows users to remotely run a model on JPL's supercomputers for a given undersea earthquake. At the time of this reporting, tsunami prediction over the Internet had not been done before. The new code directly couples the earthquake model and the ocean model on parallel computers and improves simulation speed. Seismometers can only detect information from earthquakes; they cannot detect whether or not a tsunami may occur as a result. When earthquake-tsunami models are coupled with the improved computational speed of modern high-performance computers and constrained by remotely sensed data, they are able to provide early warnings for coastal regions at risk. The software is capable of testing NASA's satellite observations of tsunamis. It has been successfully tested for several historical tsunamis, has passed all alpha and beta testing, and is well documented for users.

  14. A Network Model for Parallel Line Balancing Problem

    OpenAIRE

    Recep Benzer; Hadi Gökçen; Tahsin Çetinyokus; Hakan Çerçioglu

    2007-01-01

    Gökçen et al. (2006) have proposed several procedures and a mathematical model on the single-model (product) assembly line balancing (ALB) problem with parallel lines. In the parallel ALB problem, the goal is to balance more than one assembly line together. In this paper, a network model for the parallel ALB problem has been proposed and illustrated on a numerical example. This model is a new approach for parallel ALB and it provides a different point of view for interested researchers.

  15. Two-Stage Fuzzy Portfolio Selection Problem with Transaction Costs

    Directory of Open Access Journals (Sweden)

    Yanju Chen

    2015-01-01

    Full Text Available This paper studies a two-period portfolio selection problem. The problem is formulated as a two-stage fuzzy portfolio selection model with transaction costs, in which the future returns of the risky security are characterized by possibility distributions. The objective of the proposed model is to achieve the maximum utility in terms of the expected value and variance of the final wealth. Given the first-stage decision vector and a realization of the fuzzy return, the optimal value expression of the second-stage programming problem is derived. As a result, the proposed two-stage model is equivalent to a single-stage model, and the analytical optimal solution of the two-stage model is obtained, which helps us to discuss the properties of the optimal solution. Finally, some numerical experiments are performed to demonstrate the new modeling idea and its effectiveness. The computational results show that the more risk-averse investor will invest more wealth in the risk-free security. They also show that the optimal invested amount in the risky security increases as the risk-free return decreases, that the optimal utility increases as the risk-free return increases, and that the optimal utility increases as the transaction costs decrease. In most instances the utilities provided by the proposed two-stage model are larger than those provided by the single-stage model.
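    The expected-value-variance trade-off described above can be sketched in closed form for a single risky security. This is only an illustration of the general idea, not the paper's fuzzy two-stage model; the function name, the quadratic-utility form and all numbers below are assumptions:

    ```python
    def optimal_risky_amount(mu, sigma2, rf, cost, lam):
        """Maximize E[wealth] - lam * Var[wealth] for one risky security
        with expected return mu, variance sigma2, risk-free return rf,
        and a proportional transaction cost (a hypothetical toy setup)."""
        edge = mu - rf - cost            # net excess return after costs
        return max(edge / (2.0 * lam * sigma2), 0.0)

    # Consistent with the abstract: the risky holding grows as rf falls.
    low_rf = optimal_risky_amount(0.08, 0.04, 0.01, 0.005, 1.0)
    high_rf = optimal_risky_amount(0.08, 0.04, 0.03, 0.005, 1.0)
    ```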

  16. Performance Analysis of a Two-stage Series-connected Inerter-spring-damper Suspension Based on a Half-car Model

    Institute of Scientific and Technical Information of China (English)

    陈龙; 张孝良; 聂佳梅; 汪若尘

    2012-01-01

    Based on the strict analogy between the inerter-spring-damper mechanical system and the capacitor-inductor-resistor electrical system, an inerter-spring-damper (ISD) vehicle suspension system with two stages connected in series is proposed according to the principle of cascaded filtering. In such a suspension system, the first stage is a conventional passive suspension, and the second stage is a parallel inerter-spring-damper. A half-car vehicle model is built to analyze the performance of the suspension system under random and pulse inputs, and to investigate the effects of the second-stage suspension stiffness on system performance. The results indicate that, in contrast to a conventional suspension, the proposed suspension has better dynamic performance: improvements of 81%, 81%, 79% and 82.8% are obtained for the low-frequency PSD peak values of the vertical acceleration of the centre of body mass, the pitch acceleration of the body, and the dynamic tire loads of the front and rear wheels, respectively. The results further show that the proposed suspension can effectively suppress body resonance, improve ride comfort, and improve the trade-off between ride and handling.

  17. Composite likelihood and two-stage estimation in family studies

    DEFF Research Database (Denmark)

    Andersen, Elisabeth Anne Wreford

    2004-01-01

    In this paper register based family studies provide the motivation for linking a two-stage estimation procedure in copula models for multivariate failure time data with a composite likelihood approach. The asymptotic properties of the estimators in both parametric and semi-parametric models are d...

  18. Parallel computing in atmospheric chemistry models

    Energy Technology Data Exchange (ETDEWEB)

    Rotman, D. [Lawrence Livermore National Lab., CA (United States). Atmospheric Sciences Div.

    1996-02-01

    Studies of atmospheric chemistry are of high scientific interest, involve computations that are complex and intense, and require enormous amounts of I/O. Current supercomputer computational capabilities are limiting the studies of stratospheric and tropospheric chemistry and will certainly not be able to handle the upcoming coupled chemistry/climate models. To enable such calculations, the authors have developed a computing framework that allows computations on a wide range of computational platforms, including massively parallel machines. Because of the fast paced changes in this field, the modeling framework and scientific modules have been developed to be highly portable and efficient. Here, the authors present the important features of the framework and focus on the atmospheric chemistry module, named IMPACT, and its capabilities. Applications of IMPACT to aircraft studies will be presented.

  19. A Parallel, High-Fidelity Radar Model

    Science.gov (United States)

    Horsley, M.; Fasenfest, B.

    2010-09-01

    Accurate modeling of Space Surveillance sensors is necessary for a variety of applications. Accurate models can be used to perform trade studies on sensor designs, locations, and scheduling. In addition, they can be used to predict system-level performance of the Space Surveillance Network in response to a collision or satellite break-up event. A high-fidelity physics-based radar simulator has been developed for Space Surveillance applications. This simulator is designed in a modular fashion, where each module describes a particular physical process or radar function (radio wave propagation & scattering, waveform generation, noise sources, etc.) involved in simulating the radar and its environment. For each of these modules, multiple versions are available in order to meet the end-user's needs and requirements. For instance, the radar simulator supports different atmospheric models in order to facilitate different methods of simulating refraction of the radar beam. The radar model also has the capability to use highly accurate radar cross sections generated by the method of moments, accelerated by the fast multipole method. To accelerate this computationally expensive model, it is parallelized using MPI. As a testing framework for the radar model, it is incorporated into the Testbed Environment for Space Situational Awareness (TESSA). TESSA is based on a flexible, scalable architecture, designed to exploit high-performance computing resources and allow physics-based simulation of the SSA enterprise. In addition to the radar models, TESSA includes hydrodynamic models of satellite intercept and debris generation, orbital propagation algorithms, optical brightness calculations, optical system models, object detection algorithms, orbit determination algorithms, and simulation analysis and visualization tools. Within this framework, observations and tracks generated by the new radar model are compared to results from a phenomenological radar model. In particular, the new model will be

  20. Econometric Modelling of the Variations of Norway’s Export Trade across Continents and over Time: The Two-Stage Non-Full Rank Hierarchical Linear Econometric Model Approach

    Directory of Open Access Journals (Sweden)

    Yohannes Yebabe Tesfay

    2015-01-01

    Full Text Available This paper applies the two-stage hierarchical non-full-rank linear econometric model to analyse in depth the revenue generated from key Norwegian export items across the world's continents. The model's ability to analyse the variation of Norway's export trade gives us the following interesting details: (1) the intra- and inter-variation of export items for each continent; (2) deep knowledge about the characteristics of Norway's export item revenues; (3) quantification of the economic importance and sustainability of export items within continents; and finally (4) comparison of a given export item's economic importance across continents. The results suggest the following important policy implications for Norway. First, Europe is the most important trade partner for Norway; in fact, 81.5% of Norwegian export items are transported to Europe. Second, there is a structural shift in Norwegian exports from North and Central America to Asia and Oceania. Third, the new importance of Asia and Oceania is also emphasized by the 85% increase in export revenues over the period 1988-2012. The trade pattern has changed and trade policy must change accordingly. The analysis has shown that in 2012 there are two important export continents for Norway: Europe, and Asia and Oceania.

  1. A Parallel Lattice Boltzmann Model of a Carotid Artery

    Science.gov (United States)

    Boyd, J.; Ryan, S. J.; Buick, J. M.

    2008-11-01

    A parallel implementation of the lattice Boltzmann model is considered for a three dimensional model of the carotid artery. The computational method and its parallel implementation are described. The performance of the parallel implementation on a Beowulf cluster is presented, as are preliminary hemodynamic results.

  2. Hierarchical Bulk Synchronous Parallel Model and Performance Optimization

    Institute of Scientific and Technical Information of China (English)

    HUANG Linpeng; SUN Yongqiang; YUAN Wei

    1999-01-01

    Based on the framework of BSP, a Hierarchical Bulk Synchronous Parallel (HBSP) performance model is introduced in this paper to capture the performance optimization problem for various stages in parallel program development and to accurately predict the performance of a parallel program by considering factors causing variance in local computation and global communication. The related methodology has been applied to several real applications and the results show that HBSP is a suitable model for optimizing parallel programs.
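    The BSP-style cost accounting that HBSP builds on can be illustrated with the standard superstep formula w + h·g + l. This sketch shows the generic BSP formula only, not HBSP's extended hierarchical version; the function name and machine parameters are assumptions:

    ```python
    def bsp_cost(supersteps, g, l):
        """Generic BSP cost: per superstep, max local work w plus
        max per-processor message volume h times the gap g, plus
        the barrier synchronization cost l."""
        return sum(w + h * g + l for w, h in supersteps)

    # Two supersteps on a machine with gap g=4 and barrier cost l=3.
    total = bsp_cost([(10, 2), (5, 1)], g=4, l=3)
    ```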

  3. Two-stage sampling for acceptance testing

    Energy Technology Data Exchange (ETDEWEB)

    Atwood, C.L.; Bryan, M.F.

    1992-09-01

    Sometimes a regulatory requirement or a quality-assurance procedure sets an allowed maximum on a confidence limit for a mean. If the sample mean of the measurements is below the allowed maximum, but the confidence limit is above it, a very widespread practice is to increase the sample size and recalculate the confidence bound. The confidence level of this two-stage procedure is rarely found correctly, but instead is typically taken to be the nominal confidence level, found as if the final sample size had been specified in advance. In typical settings, the correct nominal α should be between the desired P(Type I error) and half that value. This note gives tables for the correct α to use, some plots of power curves, and an example of correct two-stage sampling.
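    The inflation of the acceptance probability can be checked by simulation. A minimal sketch, assuming a known unit variance and a worst-case mean exactly at the allowed maximum; the function and parameter names are illustrative, not from the note:

    ```python
    import math
    import random
    import statistics

    def two_stage_accept_rate(mu, limit, n1, n2, z, trials=20000, seed=1):
        """Fraction of runs in which the upper confidence bound ends up
        below `limit`, when a failed first stage with a passing sample
        mean triggers n2 extra observations (sigma = 1 assumed known)."""
        random.seed(seed)
        accept = 0
        for _ in range(trials):
            x = [random.gauss(mu, 1.0) for _ in range(n1)]
            ucb = statistics.mean(x) + z / math.sqrt(len(x))
            if ucb > limit and statistics.mean(x) < limit:
                x += [random.gauss(mu, 1.0) for _ in range(n2)]
                ucb = statistics.mean(x) + z / math.sqrt(len(x))
            if ucb <= limit:
                accept += 1
        return accept / trials

    # With mu at the limit, a one-stage test accepts about 5% of the time
    # for z = 1.645; the two-stage practice accepts noticeably more often.
    rate = two_stage_accept_rate(0.0, 0.0, n1=10, n2=10, z=1.645)
    ```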

  5. Two Stage Gear Tooth Dynamics Program

    Science.gov (United States)

    1989-08-01

    conditions and associated iteration procedure become more complex. This is due to both the increased number of components and to the time for a...solved for each stage in the two-stage solution. There are (3 + number of planets) degrees of freedom for each stage plus two degrees of freedom...should be devised. It should be noted that this is not a minor task. In general, each stage plus an input or output shaft will have 2 times (4 + number

  6. Exploitation of Parallelism in Climate Models

    Energy Technology Data Exchange (ETDEWEB)

    Baer, F.; Tribbia, J.J.; Williamson, D.L.

    1999-03-01

    The US Department of Energy (DOE), through its CHAMMP initiative, hopes to develop the capability to make meaningful regional climate forecasts on time scales exceeding a decade, such capability to be based on numerical prediction type models. We propose research to contribute to each of the specific items enumerated in the CHAMMP announcement (Notice 91-3); i.e., to consider theoretical limits to prediction of climate and climate change on appropriate time scales, to develop new mathematical techniques to utilize massively parallel processors (MPP), to actually utilize MPPs as a research tool, and to develop improved representations of some processes essential to climate prediction. In particular, our goals are to: (1) reconfigure the prediction equations such that the time iteration process can be compressed by use of MPP architecture, and to develop appropriate algorithms; (2) develop local subgrid scale models which can provide time- and space-dependent parameterization for a state-of-the-art climate model to minimize the scale resolution necessary for a climate model, and to utilize MPP capability to simultaneously integrate those subgrid models and their statistics; (3) capitalize on the MPP architecture to study the inherent ensemble nature of the climate problem. By careful choice of initial states, many realizations of the climate system can be determined concurrently and more realistic assessments of the climate prediction can be made in a realistic time frame. To explore these initiatives, we will exploit all available computing technology, and in particular MPP machines. We anticipate that significant improvements in modeling of climate on decadal and longer time scales for regional space scales will result from our efforts.

  7. Shared Variable Oriented Parallel Precompiler for SPMD Model

    Institute of Scientific and Technical Information of China (English)

    1995-01-01

    At present, commercial parallel computer systems with distributed-memory architecture are usually provided with parallel FORTRAN or parallel C compilers, which are just traditional sequential FORTRAN or C compilers expanded with communication statements. Programmers suffer from writing parallel programs with communication statements. The Shared Variable Oriented Parallel Precompiler (SVOPP) proposed in this paper can automatically generate appropriate communication statements based on shared variables for the SPMD (Single Program Multiple Data) computation model and greatly ease parallel programming with high communication efficiency. The core function of the parallel C precompiler has been successfully verified on a transputer-based parallel computer. Its prominent performance shows that SVOPP is probably a breakthrough in parallel programming techniques.

  8. Two-Stage Optimization Model Based Coordinated Charging for EV Charging Station

    Institute of Scientific and Technical Information of China (English)

    张良; 严正; 冯冬涵; 许少伦; 李乃湖; 景雷

    2014-01-01

    Under the premise of satisfying the charging demand of electric vehicles (EVs) and complying with the restriction of distribution transformer capacity, a first-stage optimal EV charging model, which takes the maximized charging revenue of the charging station as its objective, is established. Considering maximization of the incentive given by the grid corporation for reducing the peak-valley difference, and taking a charging revenue no lower than the maximum obtained by the first-stage optimization as a constraint, the second-stage optimization model is built. Based on the driving habits of EV users, the charging demand of EV users is simulated by the Monte Carlo method, and the economic benefit of the charging station and the load condition of the distribution transformer under three situations, namely uncoordinated charging, charging under the first-stage optimization model, and charging under the two-stage optimization model, are simulated and analyzed. Research results show that using the first-stage and second-stage optimization models the economic benefit of the charging station can be evidently improved. However, under the current time-of-use (TOU) pricing mechanism, a new peak load will occur when only the first-stage optimization model is used to control the charging of many EVs, yet the improved two-stage optimization model can further increase the economic benefit of the charging station, reduce the peak-valley difference and smooth the load curves. Besides, the computational cost of the improved two-stage optimization model is still low, so it is suitable for practical application.
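    The intuition behind coordinated charging under TOU prices and a transformer cap can be sketched with a simple greedy schedule. This is a toy illustration only; the paper's actual models are two optimization stages, and the function name, slot structure and numbers below are made up:

    ```python
    def schedule_charging(demand_kwh, prices, capacity_kw, slot_hours=1.0):
        """Fill the cheapest time slots first, capped by the transformer
        capacity in each slot. Returns per-slot charging power (kW);
        assumes the total demand is feasible within the horizon."""
        order = sorted(range(len(prices)), key=lambda t: prices[t])
        plan = [0.0] * len(prices)
        remaining = demand_kwh
        for t in order:
            if remaining <= 0:
                break
            energy = min(capacity_kw * slot_hours, remaining)
            plan[t] = energy / slot_hours
            remaining -= energy
        return plan

    # 15 kWh of demand shifted into the two cheap (off-peak) slots.
    plan = schedule_charging(15.0, prices=[0.8, 0.3, 0.3, 0.9],
                             capacity_kw=10.0)
    ```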

  9. Two-stage approach to full Chinese parsing

    Institute of Scientific and Technical Information of China (English)

    Cao Hailong; Zhao Tiejun; Yang Muyun; Li Sheng

    2005-01-01

    Natural language parsing is a task of great importance and extreme difficulty. In this paper, we present a full Chinese parsing system based on a two-stage approach. Rather than identifying all phrases by a uniform model, we utilize a divide and conquer strategy. We propose an effective and fast method based on a Markov model to identify the base phrases. Then we make the first attempt to extend one of the best English parsing models, i.e. the head-driven model, to recognize Chinese complex phrases. Our two-stage approach is superior to the uniform approach in two aspects. First, it creates synergy between the Markov model and the head-driven model. Second, it reduces the complexity of full Chinese parsing and makes the parsing system space- and time-efficient. Evaluated in PARSEVAL measures on the open test set, the parsing system performs at 87.53% precision and 87.95% recall.
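    A Markov-model base-phrase identifier of the kind mentioned above typically reduces to Viterbi decoding over chunk tags. A generic sketch; the toy B/I tag set, probabilities and words below are invented for illustration and are not the paper's model:

    ```python
    def viterbi(obs, states, start, trans, emit):
        """Most probable tag path for a first-order HMM chunker."""
        V = [{s: start[s] * emit[s].get(obs[0], 1e-9) for s in states}]
        back = []
        for o in obs[1:]:
            col, ptr = {}, {}
            for s in states:
                p, prev = max((V[-1][r] * trans[r][s], r) for r in states)
                col[s] = p * emit[s].get(o, 1e-9)
                ptr[s] = prev
            V.append(col)
            back.append(ptr)
        best = max(states, key=lambda s: V[-1][s])
        path = [best]
        for ptr in reversed(back):
            path.append(ptr[path[-1]])
        return path[::-1]

    # Toy B(egin)/I(nside) chunk tagging of a two-word base phrase.
    tags = viterbi(
        ["the", "cat"], ["B", "I"],
        start={"B": 0.9, "I": 0.1},
        trans={"B": {"B": 0.2, "I": 0.8}, "I": {"B": 0.5, "I": 0.5}},
        emit={"B": {"the": 0.8, "cat": 0.1}, "I": {"cat": 0.8, "the": 0.1}},
    )
    ```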

  10. DYNAMIC TASK PARTITIONING MODEL IN PARALLEL COMPUTING

    Directory of Open Access Journals (Sweden)

    Javed Ali

    2012-04-01

    Full Text Available Parallel computing systems compose task partitioning strategies in a true multiprocessing manner. Such systems share the algorithm and processing units as computing resources, which leads to intensive interprocess communication. The main part of the proposed algorithm is the resource management unit, which performs task partitioning and co-scheduling. In this paper, we present a technique for integrated task partitioning and co-scheduling on a privately owned network, focusing on real-time and non-preemptive systems. A large variety of experiments have been conducted on the proposed algorithm using synthetic and real tasks. The goal of the computation model is to provide a realistic representation of the costs of programming. The results show the benefit of the task partitioning. The main characteristics of our method are optimal scheduling and a strong link between partitioning, scheduling and communication. Some important models for task partitioning are also discussed in the paper. We target an algorithm for task partitioning that improves interprocess communication between the tasks and uses the resources of the system in an efficient manner. The proposed algorithm contributes to minimizing the interprocess communication cost among the executing processes.

  11. The Modeling of the ERP Systems within Parallel Calculus

    Directory of Open Access Journals (Sweden)

    Loredana MOCEAN

    2011-01-01

    Full Text Available As has been known for some years, the basic characteristics of ERP systems are: modular design, a central common database, integration of the modules, automatic data transfer between modules, system complexity and flexible configuration. Because of this, a parallel approach to designing and implementing them with parallel algorithms, parallel calculus and distributed databases is an obvious choice. This paper aims to support these assertions and to provide a model, in summary, of what an ERP system based on parallel computing and algorithms could be.

  12. Condensate from a two-stage gasifier

    DEFF Research Database (Denmark)

    Bentzen, Jens Dall; Henriksen, Ulrik Birk; Hindsgaul, Claus

    2000-01-01

    Condensate, produced when gas from a downdraft biomass gasifier is cooled, contains organic compounds that inhibit nitrifiers. Treatment with activated carbon removes most of the organics and makes the condensate far less inhibitory. The condensate from an optimised two-stage gasifier is so clean that the organic compounds and the inhibition effect are very low even before treatment with activated carbon. The moderate inhibition effect relates to a high content of ammonia in the condensate. The nitrifiers become tolerant to the condensate after a few weeks of exposure. The level of organic compounds...

  13. Two Stage Sibling Cycle Compressor/Expander.

    Science.gov (United States)

    1994-02-01

    vol. 5, p. 424. 11. L. Bauwens and M.P. Mitchell, "Regenerator Analysis: Validation of the MS*2 Stirling Cycle Code," Proc. XVIIIth International... PL-TR-94-1051, TWO STAGE SIBLING CYCLE COMPRESSOR/EXPANDER, Matthew P. Mitchell, Mitchell/Stirling Machines/Systems, Inc. ... This final report was prepared by Mitchell/Stirling Machines/Systems, Inc., Berkeley, CA under Contract

  14. Income and Poverty across SMSAs: A Two-Stage Analysis

    OpenAIRE

    1993-01-01

    Two popular explanations of urban poverty are the "welfare-disincentive" and "urban-deindustrialization" theories. Using cross-sectional Census data, we develop a two-stage model to predict an SMSA's median family income and poverty rate. The model allows the city's welfare level and industrial structure to affect its median family income and poverty rate directly. It also allows welfare and industrial structure to affect income and poverty indirectly, through their effects on family structure...

  15. Model and Two-stage Algorithm for a Dynamic Vehicle Routing Problem

    Institute of Scientific and Technical Information of China (English)

    饶卫振; 金淳; 刘锋; 杨磊

    2015-01-01

    In order to effectively solve the dynamic vehicle routing problem (DVRP), this paper analyzes the substantial effect of four main categories of dynamic information on the classical vehicle routing problem, and transforms the DVRP into multiple static fleet size and mix open vehicle routing problems (FSMOVRP), which can be further converted into multiple capacitated vehicle routing problems (CVRP). A model based on the CVRP is built for the DVRP. A two-stage algorithm is then proposed to solve the DVRP model according to an analysis of DVRP characteristics. In the first stage, a fast construction algorithm with merely O(n log n) complexity is proposed on the basis of a delivery-region cutting strategy using k-d trees. In the second stage, a hybrid local search algorithm is designed by analyzing the structural principle of the algorithm's solution search space. Finally, for the purpose of verification, we design and solve 36 DVRP instances generated from 12 large-scale CVRP benchmark instances. The results demonstrate the effectiveness of the model and the two-stage solving algorithm.
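    The delivery-region cutting idea can be sketched as a recursive median split, the partitioning step behind a k-d tree. This is a simplified stand-in for the paper's strategy; the function and parameter names are ours:

    ```python
    def kd_regions(points, leaf_size, depth=0):
        """Recursively split delivery points (x, y) by the median along
        alternating axes until each region holds at most leaf_size."""
        if len(points) <= leaf_size:
            return [points]
        axis = depth % 2
        pts = sorted(points, key=lambda p: p[axis])
        mid = len(pts) // 2
        return (kd_regions(pts[:mid], leaf_size, depth + 1)
                + kd_regions(pts[mid:], leaf_size, depth + 1))

    # Four customers on a line split into two balanced regions.
    regions = kd_regions([(0, 0), (1, 0), (2, 0), (3, 0)], leaf_size=2)
    ```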

  16. A Network Model for Parallel Line Balancing Problem

    Directory of Open Access Journals (Sweden)

    Recep Benzer

    2007-01-01

    Full Text Available Gökçen et al. (2006) have proposed several procedures and a mathematical model on the single-model (product) assembly line balancing (ALB) problem with parallel lines. In the parallel ALB problem, the goal is to balance more than one assembly line together. In this paper, a network model for the parallel ALB problem has been proposed and illustrated on a numerical example. This model is a new approach for parallel ALB and it provides a different point of view for interested researchers.

  17. Harmony Theory: Problem Solving, Parallel Cognitive Models, and Thermal Physics.

    Science.gov (United States)

    Smolensky, Paul; Riley, Mary S.

    This document consists of three papers. The first, "A Parallel Model of (Sequential) Problem Solving," describes a parallel model designed to solve a class of relatively simple problems from elementary physics and discusses implications for models of problem-solving in general. It is shown that one of the most salient features of problem…

  18. Recursive algorithm for the two-stage EFOP estimation method

    Institute of Scientific and Technical Information of China (English)

    LUO GuiMing; HUANG Jian

    2008-01-01

    A recursive algorithm for the two-stage empirical frequency-domain optimal parameter (EFOP) estimation method is proposed. The EFOP method is a novel system identification method for black-box models that combines time-domain estimation and frequency-domain estimation. It has improved anti-disturbance performance and can precisely identify models with fewer samples. The two-stage EFOP method based on the bootstrap technique is generally suitable for black-box models, but it is an iterative method and takes too much computation to work well online. A recursive algorithm is therefore proposed for disturbed stochastic systems. Some simulation examples are included to demonstrate the validity of the new method.
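    The paper's recursion is specific to EFOP, but the flavour of such online updates can be seen in ordinary recursive least squares, shown here as a generic stand-in (this is not the EFOP recursion):

    ```python
    def rls_update(theta, P, phi, y, lam=1.0):
        """One recursive least-squares step for a scalar output:
        theta is the current estimate, P the covariance (lists of
        lists), phi the regressor, lam a forgetting factor."""
        n = len(phi)
        Pphi = [sum(P[i][j] * phi[j] for j in range(n)) for i in range(n)]
        denom = lam + sum(phi[i] * Pphi[i] for i in range(n))
        k = [v / denom for v in Pphi]                  # gain vector
        err = y - sum(theta[i] * phi[i] for i in range(n))
        theta = [theta[i] + k[i] * err for i in range(n)]
        P = [[(P[i][j] - k[i] * Pphi[j]) / lam for j in range(n)]
             for i in range(n)]
        return theta, P

    # Identify y = 2*x online from noise-free samples.
    theta, P = [0.0], [[100.0]]
    for x in [1.0, 2.0, 3.0, 4.0, 5.0]:
        theta, P = rls_update(theta, P, [x], 2.0 * x)
    ```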

  19. PDDP: A data parallel programming model. Revision 1

    Energy Technology Data Exchange (ETDEWEB)

    Warren, K.H.

    1995-06-01

    PDDP, the Parallel Data Distribution Preprocessor, is a data parallel programming model for distributed memory parallel computers. PDDP implements High Performance Fortran compatible data distribution directives and parallelism expressed by the use of Fortran 90 array syntax, the FORALL statement, and the WHERE construct. Distributed data objects belong to a global name space; other data objects are treated as local and replicated on each processor. PDDP allows the user to program in a shared-memory style and generates code that is portable to a variety of parallel machines. For interprocessor communication, PDDP uses the fastest communication primitives on each platform.
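    The effect of an HPF-style BLOCK distribution directive, of the sort PDDP implements, can be sketched as the contiguous index range each processor would own. This is a schematic of the mapping only, not PDDP's actual runtime:

    ```python
    def block_ranges(n, nprocs):
        """HPF-style BLOCK distribution: each processor owns one
        contiguous chunk; leftover elements go to the first few."""
        size, extra = divmod(n, nprocs)
        ranges, start = [], 0
        for p in range(nprocs):
            stop = start + size + (1 if p < extra else 0)
            ranges.append((start, stop))
            start = stop
        return ranges

    # A 10-element array distributed BLOCK-wise over 3 processors.
    owned = block_ranges(10, 3)
    ```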

  20. Mathematical model partitioning and packing for parallel computer calculation

    Science.gov (United States)

    Arpasi, Dale J.; Milner, Edward J.

    1986-01-01

    This paper deals with the development of multiprocessor simulations from a serial set of ordinary differential equations describing a physical system. The identification of computational parallelism within the model equations is discussed. A technique is presented for identifying this parallelism and for partitioning the equations for parallel solution on a multiprocessor. Next, an algorithm which packs the equations into a minimum number of processors is described. The results of applying the packing algorithm to a turboshaft engine model are presented.
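    The packing step described above, assigning equations to a minimum number of processors, is essentially bin packing. A first-fit-decreasing sketch under an assumed per-processor cycle-time budget; this is a textbook heuristic, not necessarily the paper's exact algorithm:

    ```python
    def pack_equations(costs, budget):
        """First-fit-decreasing: place each equation (by computation
        cost) into the first processor whose total stays within the
        cycle-time budget, opening a new processor when none fits."""
        bins = []
        for c in sorted(costs, reverse=True):
            for b in bins:
                if sum(b) + c <= budget:
                    b.append(c)
                    break
            else:
                bins.append([c])
        return bins

    # Four equations packed into two processors with budget 6.
    bins = pack_equations([4, 3, 3, 2], budget=6)
    ```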

  1. Classification in two-stage screening.

    Science.gov (United States)

    Longford, Nicholas T

    2015-11-10

    Decision theory is applied to the problem of setting thresholds in medical screening when it is organised in two stages. In the first stage, which involves a less expensive procedure that can be applied on a mass scale, an individual is classified as a negative or a likely positive. In the second stage, the likely positives are subjected to another test that classifies them as (definite) positives or negatives. The second-stage test is more accurate, but also more expensive and more involved, and so there are incentives to restrict its application. Robustness of the method with respect to the parameters, some of which have to be set by elicitation, is assessed by sensitivity analysis.
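    The decision-theoretic trade-off can be illustrated by an expected-cost calculation for given test characteristics. All names and numbers below are hypothetical and not from the paper:

    ```python
    def expected_cost(prev, sens1, cost1, sens2, cost2, fp1, cost_miss):
        """Expected per-person cost of a two-stage screen: everyone takes
        the cheap first test; only its 'likely positives' take the
        confirmatory test. fp1 = first test's false-positive rate."""
        p_second = prev * sens1 + (1.0 - prev) * fp1   # sent to stage 2
        p_miss = prev * (1.0 - sens1) + prev * sens1 * (1.0 - sens2)
        return cost1 + p_second * cost2 + p_miss * cost_miss

    # Hypothetical prevalence 10%, cheap test $1, confirmatory $10,
    # and a $100 penalty for each missed true positive.
    cost = expected_cost(prev=0.1, sens1=0.9, cost1=1.0,
                         sens2=0.95, cost2=10.0, fp1=0.2, cost_miss=100.0)
    ```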

  2. Linear and nonlinear integer models for constrained two-stage two-dimensional knapsack problems

    Directory of Open Access Journals (Sweden)

    Horacio Hideki Yanasse

    2013-01-01

    Full Text Available In this work we review some linear and nonlinear integer models to generate two-stage two-dimensional guillotine cutting patterns, including the constrained, unconstrained, exact and non-exact cases. These problems are particular cases of the two-dimensional knapsack problem. We also present new models to generate these cutting patterns, based on adaptations and extensions of models that generate one-group constrained two-dimensional cutting patterns. Two-stage patterns arise in different cutting processes, for instance in the furniture and wooden hardboard industries. The models are useful for the research and development of more efficient solution methods that exploit particular structures, model decomposition, model relaxations, and so on. They are also useful for evaluating the performance of heuristics, since they allow (at least for problems of moderate size) an estimate of the optimality gap of solutions obtained by heuristics. To illustrate the application of the models, we analyze the results of some computational experiments with instances from the literature and others generated randomly. The results were produced using a well-known commercial software package and show that the computational effort needed to solve the models can be quite different.

  3. Graph Partitioning Models for Parallel Computing

    Energy Technology Data Exchange (ETDEWEB)

    Hendrickson, B.; Kolda, T.G.

    1999-03-02

    Calculations can naturally be described as graphs in which vertices represent computation and edges reflect data dependencies. By partitioning the vertices of a graph, the calculation can be divided among processors of a parallel computer. However, the standard methodology for graph partitioning minimizes the wrong metric and lacks expressibility. We survey several recently proposed alternatives and discuss their relative merits.
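    The metric that the survey says standard partitioning minimizes is the edge cut, the number of edges crossing between parts, which only approximates the true communication volume. A minimal sketch of that metric:

    ```python
    def edge_cut(edges, part):
        """Count edges whose endpoints are assigned to different
        processors; `part` maps vertex -> processor id."""
        return sum(1 for u, v in edges if part[u] != part[v])

    # A path graph 0-1-2-3 split into halves cuts exactly one edge.
    cut = edge_cut([(0, 1), (1, 2), (2, 3)], {0: 0, 1: 0, 2: 1, 3: 1})
    ```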

  4. Parallelism and optimization of numerical ocean forecasting model

    Science.gov (United States)

    Xu, Jianliang; Pang, Renbo; Teng, Junhua; Liang, Hongtao; Yang, Dandan

    2016-10-01

    According to the characteristics of the Chinese marginal seas, the Marginal Sea Model of China (MSMC) has been developed independently in China. Because the model requires a long simulation time, parallelizing MSMC became necessary to make it usable as a routine forecasting model. However, some methods used in MSMC, such as the Successive Over-Relaxation (SOR) algorithm, are not directly suitable for parallelism. In this paper, methods are developed to solve the parallel problem of the SOR algorithm in the following steps. First, based on a 3D computing grid system, an automatic data partition method is implemented to dynamically divide the computing grid according to the available computing resources. Next, based on the characteristics of the numerical forecasting model, a parallel method is designed to solve the parallel problem of the SOR algorithm. Lastly, a communication optimization method is provided to reduce the cost of communication: the non-blocking communication of the Message Passing Interface (MPI) is used to implement the parallelism of MSMC with its complex physical equations, and communication is overlapped with computation to improve the performance of the parallel MSMC. The experiments show that the parallel MSMC runs 97.2 times faster than the serial MSMC, and the root mean square error between the parallel and serial MSMC is less than 0.01 for a 30-day simulation (172800 time steps), which meets the timeliness and accuracy requirements of numerical ocean forecasting products.
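    A standard way to expose parallelism in SOR, which the abstract's method presumably resembles in spirit (the sketch below is generic, not the MSMC code), is red-black ordering: grid points of one colour have no neighbours of the same colour, so each half-sweep can be updated concurrently.

    ```python
    # Generic red-black SOR sketch for the 2D Laplace equation (an assumption for
    # illustration; MSMC's actual parallel SOR scheme is not published here).
    import numpy as np

    def sor_redblack(u, omega=1.5, iters=200):
        """Red-black SOR: all 'red' points are updated together, then all 'black'
        points. Same-colour points are never adjacent, so each half-sweep is
        embarrassingly parallel, unlike the sequential natural ordering."""
        u = u.copy()
        for _ in range(iters):
            for color in (0, 1):
                for i in range(1, u.shape[0] - 1):
                    for j in range(1, u.shape[1] - 1):
                        if (i + j) % 2 == color:
                            gs = 0.25 * (u[i-1, j] + u[i+1, j] + u[i, j-1] + u[i, j+1])
                            u[i, j] = (1 - omega) * u[i, j] + omega * gs
        return u

    grid = np.ones((8, 8))
    grid[1:-1, 1:-1] = 0.0          # fixed boundary of 1s, cold interior
    print(sor_redblack(grid)[4, 4])  # interior relaxes toward the boundary value
    ```

    In a distributed implementation each processor would own a patch of the grid and exchange only halo rows between half-sweeps, which is what makes non-blocking MPI communication and overlap attractive.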

  5. Two-stage Bayesian model choice and analysis of screening experiments

    Institute of Scientific and Technical Information of China (English)

    汪建均; 马义中; 汪新

    2011-01-01

    For fractional factorial experiments with non-normal responses, a two-stage Bayesian model choice method is proposed for the case where the number of factors in the screening experiment is large. First, the MCMC method is used to simulate the Markov chain of each parameter's posterior distribution in a generalized linear model, and the significance of each factor is assessed from the Bayesian posterior probability that its parameter is greater than or less than zero; this yields an initial current model and a set of candidate models. Second, the current and candidate models are iteratively refined using a Bayesian model assessment criterion based on the deviance information criterion (DIC), identifying the significant factors and producing the model with the best short-term predictive performance. Finally, a practical industrial example shows that the proposed method can effectively identify significant factors in fractional factorial experiments with non-normal responses.
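    The first-stage screening rule — keep a factor when the posterior mass of its coefficient lies almost entirely on one side of zero — can be sketched as follows. Synthetic draws stand in for the MCMC output; `screen_factors` and the 0.95 threshold are illustrative assumptions, not values from the paper.

    ```python
    # Sketch of sign-probability screening from posterior draws (illustrative).
    import numpy as np

    def screen_factors(draws, threshold=0.95):
        """Flag factor j as active when P(beta_j > 0) or P(beta_j < 0) exceeds
        `threshold`. `draws` is an (n_samples, n_factors) array of MCMC draws."""
        p_pos = (draws > 0).mean(axis=0)
        return [j for j, p in enumerate(p_pos) if p >= threshold or p <= 1 - threshold]

    rng = np.random.default_rng(0)
    draws = np.column_stack([
        rng.normal(2.0, 0.5, 5000),   # clearly positive effect
        rng.normal(0.0, 0.5, 5000),   # null effect
        rng.normal(-1.5, 0.5, 5000),  # clearly negative effect
    ])
    print(screen_factors(draws))  # → [0, 2]
    ```

    The surviving factors would then feed the second, DIC-based stage of model refinement.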

  6. Deterministic Consistency: A Programming Model for Shared Memory Parallelism

    OpenAIRE

    Aviram, Amittai; Ford, Bryan

    2009-01-01

    The difficulty of developing reliable parallel software is generating interest in deterministic environments, where a given program and input can yield only one possible result. Languages or type systems can enforce determinism in new code, and runtime systems can impose synthetic schedules on legacy parallel code. To parallelize existing serial code, however, we would like a programming model that is naturally deterministic without language restrictions or artificial scheduling. We propose "...

  7. Parallel Evolutionary Modeling for Nonlinear Ordinary Differential Equations

    Institute of Scientific and Technical Information of China (English)

    2001-01-01

    We introduce a new parallel evolutionary algorithm for modeling dynamic systems by nonlinear higher-order ordinary differential equations (NHODEs). The NHODE models are much more universal than traditional linear models. In order to accelerate the modeling process, we propose and implement a parallel evolutionary algorithm using distributed CORBA objects on a heterogeneous network. Numerical experiments show that the new algorithm is feasible and efficient.

  8. An inexact two-stage stochastic model for water resource management under uncertainty

    Institute of Scientific and Technical Information of China (English)

    徐毅; 汤烨; 付殿峥; 解玉磊

    2012-01-01

    In order to solve the water allocation and water pollution problems arising from the production processes of different industries in a river basin, an inexact two-stage stochastic programming model integrated with water quality simulation was developed for water resources management and water quality improvement planning under uncertainty. The model couples an inexact two-stage stochastic program (ITSP) with an inexact Streeter-Phelps water quality model (IS-P). Taking maximization of system benefit as the objective function, the model optimizes the water quantity allocated to each industrial plant by simulating water quality trends under different inflow levels, while ensuring that river water quality meets the standard. Interactive algorithms are used to solve the ITSP-SP model, and the resulting interval solutions provide managers with multiple decision-making alternatives. Moreover, the model explicitly accounts for the impact of uncertain factors on system benefit, which helps avoid decision-making mistakes and the omission of viable alternatives.
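    The two-stage structure (commit an allocation now, pay recourse once the random inflow is revealed) can be illustrated with a deliberately tiny scenario model. The numbers, names, and benefit/penalty coefficients below are invented for illustration and are unrelated to the ITSP-SP formulation itself.

    ```python
    # Toy two-stage recourse sketch (illustrative assumptions throughout).
    def expected_profit(x, scenarios, benefit=10.0, penalty=15.0):
        """Commit allocation x in stage one; in each scenario the deliverable
        water is min(x, inflow), and any promised-but-undelivered amount
        incurs a recourse penalty in stage two."""
        total = 0.0
        for prob, inflow in scenarios:
            delivered = min(x, inflow)
            total += prob * (benefit * delivered - penalty * (x - delivered))
        return total

    scenarios = [(0.3, 2.0), (0.5, 5.0), (0.2, 8.0)]  # (probability, inflow)
    # Grid search over first-stage allocations.
    best_x = max((x / 10 for x in range(0, 101)),
                 key=lambda x: expected_profit(x, scenarios))
    print(best_x)  # → 5.0
    ```

    The optimal commitment sits where the expected marginal benefit of promising more water equals the expected marginal penalty of falling short — the basic trade-off that the interval two-stage model resolves under uncertainty.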

  9. A Two-stage Robust Optimization Model for Emergency Facility Location Problems under Uncertainties

    Institute of Scientific and Technical Information of China (English)

    杜博; 周泓

    2016-01-01

    For emergency logistics management, the location of supply distribution facilities is an important decision. To address the uncertainties inherent in emergencies, a two-stage robust optimization model for emergency facility location is proposed that coordinates "pre-location" and "re-location" decisions. In the first stage, when demand, cost, and facility disruption are uncertain, a robust pre-location model based on the p-center model is presented that accounts for the differing needs of pre-disaster planning, post-disaster response, and facility reconstruction. In the second stage, as post-disaster information becomes available, a re-location model for building new facilities is presented, based on reactive repair and adjustment of the first-stage strategy. A numerical study shows that the model is more effective for emergency facility location than the traditional p-center model.
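    The p-center model underlying the first stage chooses p sites minimizing the worst-case distance from any demand point to its nearest open facility; for small instances it can be solved by plain enumeration. The sketch below is a generic p-center solver, not the paper's robust formulation.

    ```python
    # Brute-force p-center sketch (generic; illustrative toy instance).
    from itertools import combinations

    def p_center(dist, p):
        """Choose p facility sites minimizing the maximum distance from any
        demand point to its nearest open facility. dist[i][j] is the distance
        from candidate site i to demand point j."""
        n = len(dist)
        best = (float("inf"), None)
        for sites in combinations(range(n), p):
            worst = max(min(dist[i][j] for i in sites) for j in range(len(dist[0])))
            best = min(best, (worst, sites))
        return best

    # Five points on a line at positions 0, 1, 2, 8, 9; sites = demand points.
    pos = [0, 1, 2, 8, 9]
    dist = [[abs(a - b) for b in pos] for a in pos]
    print(p_center(dist, 2))  # one optimal pair: sites at positions 1 and 8
    ```

    The robust variant in the paper would, roughly speaking, evaluate `worst` over an uncertainty set of demands and disruptions rather than a single known instance.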

  10. Two-Stage Over-the-Air (OTA) Test Method for LTE MIMO Device Performance Evaluation

    Directory of Open Access Journals (Sweden)

    Ya Jing

    2012-01-01

    Full Text Available With MIMO technology being adopted by the wireless communication standards LTE and HSPA+, MIMO OTA research has attracted wide interest from both industry and academia. Parallel studies are underway in COST2100, CTIA, and 3GPP RAN WG4. The major test challenge for MIMO OTA is how to create a repeatable scenario that accurately reflects MIMO antenna radiation performance in a realistic wireless propagation environment. Different MIMO OTA methods differ in the way they reproduce a specified MIMO channel model. This paper introduces a novel, flexible, and cost-effective method for measuring MIMO OTA using a two-stage approach. In the first stage, the antenna pattern is measured in an anechoic chamber using a nonintrusive approach, that is, without cabled connections or modification of the device. In the second stage, the antenna pattern is convolved with the chosen channel model in a channel emulator, and throughput is measured over a cabled connection.
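    The second stage — weighting each path of the channel model by the measured complex antenna gain in its direction of arrival — can be sketched in a toy single-antenna form. All names here are invented, and real MIMO OTA emulation operates on full per-antenna pattern matrices rather than a scalar gain.

    ```python
    # Toy sketch of applying a measured antenna pattern to channel-model paths.
    import math

    def apply_pattern(pattern_gains, path_angles, path_amps):
        """Weight each channel path by the antenna's complex gain in the path's
        direction of arrival, as a channel emulator does in stage two.
        `pattern_gains` is a coarsely sampled measured pattern over [0, 2*pi)."""
        n = len(pattern_gains)
        def gain(theta):
            return pattern_gains[round(theta / (2 * math.pi) * n) % n]  # nearest sample
        return [a * gain(t) for t, a in zip(path_angles, path_amps)]

    # With an isotropic pattern the channel taps pass through unchanged.
    taps = apply_pattern([1 + 0j] * 8, [0.0, math.pi / 2], [1 + 0j, 0.5j])
    print(taps)
    ```

    A directional pattern would instead attenuate paths arriving in the antenna's nulls, which is precisely the device-specific effect the two-stage method folds into the emulated channel.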

  11. Heuristic for Critical Machine Based a Lot Streaming for Two-Stage Hybrid Production Environment

    Science.gov (United States)

    Vivek, P.; Saravanan, R.; Chandrasekaran, M.; Pugazhenthi, R.

    2017-03-01

    Lot streaming in a hybrid flowshop (HFS) is encountered in many real-world problems. This paper presents a heuristic approach for lot streaming, based on critical-machine considerations, for a two-stage hybrid flowshop in which the first stage has two identical parallel machines and the second stage has a single machine. The second-stage machine is, for valid reasons, treated as critical; problems of this kind are known to be NP-hard. A mathematical model was developed for the selected problem, and simulation modelling and analysis were carried out in the Extend V6 software. A heuristic was developed for obtaining an optimal lot streaming schedule, and eleven lot streaming cases were considered. The proposed heuristic was verified and validated by real-time simulation experiments: all possible lot streaming strategies, and all possible sequences under each strategy, were simulated and examined. The heuristic yielded the optimal schedule in all eleven cases, and a procedure for selecting the best lot streaming strategy was suggested.
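    The benefit of lot streaming in this two-stage layout can be seen with a minimal makespan calculator (a generic sketch with invented parameters, not the paper's Extend V6 model): splitting a lot into sublots lets the critical second-stage machine start before the whole lot clears stage one.

    ```python
    # Minimal makespan sketch for the two-stage line described in the abstract.
    def makespan(sublots, p1, p2):
        """Stage 1: two identical parallel machines, greedy earliest-free
        assignment. Stage 2: the single critical machine, FIFO. p1/p2 are unit
        process times, so a sublot of size s takes s*p1 then s*p2."""
        m1 = [0.0, 0.0]          # next-free times of the two stage-1 machines
        stage2_free = 0.0
        for s in sublots:
            k = 0 if m1[0] <= m1[1] else 1
            done1 = m1[k] + s * p1                   # sublot leaves stage 1
            m1[k] = done1
            stage2_free = max(stage2_free, done1) + s * p2
        return stage2_free

    print(makespan([10, 10], 1.0, 1.0))  # → 30.0 (sublots overlap the stages)
    print(makespan([20], 1.0, 1.0))      # → 40.0 (no streaming: stage 2 waits)
    ```

    Enumerating sublot splits with a calculator of this kind is the brute-force counterpart of the heuristic's search over streaming strategies.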

  12. Development of a Massively Parallel NOGAPS Forecast Model

    Science.gov (United States)

    2016-06-07

    parallel computer architectures. These algorithms will be critical for inter-processor communication dependent and computationally intensive model...to exploit massively parallel processor (MPP), distributed memory computer architectures. Future increases in computer power from MPPs will allow...passing (MPI) is the paradigm chosen for communication between distributed memory processors. APPROACH Use integrations of the current operational

  13. Two-Stage Aggregate Formation via Streams in Myxobacteria

    Science.gov (United States)

    Alber, Mark; Kiskowski, Maria; Jiang, Yi

    2005-03-01

    In response to adverse conditions, myxobacteria form aggregates which develop into fruiting bodies. We model myxobacteria aggregation with a lattice cell model based entirely on short range (non-chemotactic) cell-cell interactions. Local rules result in a two-stage process of aggregation mediated by transient streams. Aggregates resemble those observed in experiment and are stable against even very large perturbations. Noise in individual cell behavior increases the effects of streams and results in larger, more stable aggregates. Phys. Rev. Lett. 93: 068301 (2004).

  14. Securities-based earnings management in banks: validation of a two-stage model

    Directory of Open Access Journals (Sweden)

    José Alves Dantas

    2013-04-01

    Studies investigating earnings management in banks have been particularly concerned with the use of Loan Loss Provisions (LLP) and mainly use two-stage models to identify discretionary management actions. Another type of record that has received attention from researchers in identifying discretionary management actions is the classification and measurement of the fair value of securities. In this case, however, one-stage models have prevailed. The present study aims to develop and validate a two-stage model for the identification of discretionary management actions using gains obtained from securities. Our model incorporates macroeconomic indicators and specific attributes of the securities portfolios into the traditional parameters used in models previously utilized in the literature. To validate the proposed model, the results are compared with the results from the estimation of a one-stage model, a methodology widely used in the literature. Tests conducted with the two models reveal evidence of income smoothing using securities, and of the classification of available-for-sale securities, among the actions taken by management. The consistency of the results across the two models validates the proposed model, thereby contributing to the development of research on the topic that is concerned not only with determining whether earnings management is practiced but also with whether it can be associated with other variables. We also find that securities-based earnings management is more significant in smaller banks and in banks controlled by private capital.

  15. Modeling and Adaptive Control of a Planar Parallel Mechanism

    Institute of Scientific and Technical Information of China (English)

    敖银辉; 陈新

    2004-01-01

    The dynamic modelling and control of parallel mechanisms have long been challenging problems in robotics research. In this paper, different dynamics formulation methods are discussed first. A model of a redundantly driven parallel mechanism is then constructed, using a planar parallel manipulator as an example, and a nonlinear adaptive control method is introduced. Matrix pseudo-inversion is used to obtain the desired actuator torques from a desired end-effector coordinate, while the feedback torque is calculated directly in actuator space. This treatment avoids the forward kinematics computation, which is very difficult for a parallel mechanism. Experiments with PID control and with the described adaptive control strategy were carried out on a planar parallel mechanism. The results show that the proposed adaptive controller outperforms conventional PID methods in tracking a desired input at high speed.

  16. Parallel community climate model: Description and user's guide

    Energy Technology Data Exchange (ETDEWEB)

    Drake, J.B.; Flanery, R.E.; Semeraro, B.D.; Worley, P.H. [and others]

    1996-07-15

    This report gives an overview of a parallel version of the NCAR Community Climate Model, CCM2, implemented for MIMD massively parallel computers using a message-passing programming paradigm. The parallel implementation was developed on an Intel iPSC/860 with 128 processors and on the Intel Delta with 512 processors, and the initial target platform for the production version of the code is the Intel Paragon with 2048 processors. Because the implementation uses standard, portable message-passing libraries, the code has been easily ported to other multiprocessors supporting a message-passing programming paradigm. The parallelization strategy is to decompose the problem domain into geographical patches and assign each processor the computation associated with a distinct subset of the patches. With this decomposition, the physics calculations involve only grid points and data local to a processor and are performed in parallel. Using parallel algorithms developed for the semi-Lagrangian transport, the fast Fourier transform and the Legendre transform, both physics and dynamics are computed in parallel with minimal data movement and modest changes to the original CCM2 source code. Sequential or parallel history tapes are written, and input files (in history tape format) are read sequentially by the parallel code to promote compatibility with production use of the model on other computer systems. A validation exercise has been performed with the parallel code and is detailed, along with performance numbers on the Intel Paragon and the IBM SP2. A discussion of the reproducibility of results is included. A user's guide for PCCM2 version 2.1 on the various parallel machines completes the report. Procedures for compilation, setup and execution are given, and a discussion of code internals is included for those who may wish to modify and use the program in their own research.

  17. Parallelization of the NASA Goddard Cumulus Ensemble Model for Massively Parallel Computing

    Directory of Open Access Journals (Sweden)

    Hann-Ming Henry Juang

    2007-01-01

    Full Text Available Massively parallel computing, using a message passing interface (MPI), has been implemented into a three-dimensional version of the Goddard Cumulus Ensemble (GCE) model. The implementation uses the domain-resemble concept to design a code structure for both the whole domain and the sub-domains after decomposition. Instead of inserting a group of MPI-related statements into the model routines, these statements are packed into a single routine; in other words, only a single call statement is inserted into the model code in each place, so there is minimal impact on the original code. The model can therefore be easily modified and/or managed by model developers and users who have little knowledge of massively parallel computing.

  18. Dynamic Distribution Model with Prime Granularity for Parallel Computing

    Institute of Scientific and Technical Information of China (English)

    2005-01-01

    The dynamic distribution model is one of the best schemes for parallel volume rendering. However, in a homogeneous cluster system where the granularity is traditionally identical, all processors communicate almost simultaneously and the computation load may become unbalanced. To address these problems, a dynamic distribution model with prime granularity for parallel computing is presented: the granularities of the processors are pairwise relatively prime, and the related theory is introduced. High parallel performance can be achieved by minimizing network contention and using a load-balancing strategy that ensures all processors finish almost simultaneously. Based on the Master-Slave-Gleaner (MSG) scheme, a parallel splatting algorithm for volume rendering is used to test the model on an IBM Cluster 1350 system. The experimental results show that the model brings a considerable improvement in performance, including computation efficiency, total execution time, speed, and load balancing.
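    The core idea — give each processor a chunk size coprime to every other so that work requests drift out of phase instead of colliding on the network — can be sketched in a few lines (the function name and the greedy selection rule are illustrative assumptions, not the paper's algorithm):

    ```python
    # Sketch: pick pairwise-coprime granularities at or above a base chunk size.
    from math import gcd

    def coprime_granularities(n_procs, base):
        """Greedily choose n_procs pairwise-coprime chunk sizes >= base, so that
        processors requesting new work desynchronize rather than all
        communicating simultaneously."""
        sizes = []
        g = base
        while len(sizes) < n_procs:
            if all(gcd(g, s) == 1 for s in sizes):
                sizes.append(g)
            g += 1
        return sizes

    print(coprime_granularities(4, 10))  # → [10, 11, 13, 17]
    ```

    Because no two sizes share a factor, the times at which processors exhaust their chunks coincide only rarely, which is the stated mechanism for reducing network contention.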

  19. Composite likelihood and two-stage estimation in family studies

    DEFF Research Database (Denmark)

    Andersen, Elisabeth Anne Wreford

    2002-01-01

    Composite likelihood; Two-stage estimation; Family studies; Copula; Optimal weights; All possible pairs

  20. Two-Stage Heuristic Algorithm for Aircraft Recovery Problem

    Directory of Open Access Journals (Sweden)

    Cheng Zhang

    2017-01-01

    Full Text Available This study focuses on the aircraft recovery problem (ARP). In real-life operations, disruptions always cause schedule failures and make airlines suffer great losses. Therefore, the main objective of the aircraft recovery problem is to minimize the total recovery cost and solve the problem within reasonable runtimes. An aircraft recovery model (ARM) is proposed herein to formulate the ARP, using feasible lines of flights as the basic variables in the model. We define a feasible line of flights (LOF) as a sequence of flights flown by an aircraft within one day. Because the number of LOFs grows exponentially with the number of flights, a two-stage heuristic is proposed to reduce the problem scale. The algorithm integrates a heuristic scoring procedure with an aggregated aircraft recovery model (AARM) to preselect LOFs. The approach is tested on five real-life scenarios. The computational results show that the proposed model formulates the problem well and can be solved within reasonable runtimes with the proposed methodology. The two-stage heuristic significantly reduces the number of LOFs after each stage, and thereby reduces the number of variables and constraints in the aircraft recovery model.

  1. Models of parallel computation :a survey and classification

    Institute of Scientific and Technical Information of China (English)

    ZHANG Yunquan; CHEN Guoliang; SUN Guangzhong; MIAO Qiankun

    2007-01-01

    In this paper, the state of the art in parallel computational model research is reviewed. We introduce various models that were developed during the past decades and, according to the features of their target architectures, especially memory organization, classify these parallel computational models into three generations. The models and their characteristics are discussed on the basis of this three-generation classification. We believe that with the ever-increasing speed gap between CPUs and memory systems, incorporating a non-uniform memory hierarchy into computational models will become unavoidable. With the emergence of multi-core CPUs, the parallelism hierarchy of current computing platforms is becoming more and more complicated, and describing this hierarchy in future computational models becomes correspondingly more important. A semi-automatic toolkit that can extract model parameters and their values on real computers would reduce the complexity of model analysis, allowing more complicated models with more parameters to be adopted. Hierarchical memory and hierarchical parallelism will be two very important features to consider in future model design and research.

  2. Parallel local approximation MCMC for expensive models

    OpenAIRE

    Conrad, Patrick; Davis, Andrew; Marzouk, Youssef; Pillai, Natesh; Smith, Aaron

    2016-01-01

    Performing Bayesian inference via Markov chain Monte Carlo (MCMC) can be exceedingly expensive when posterior evaluations invoke the evaluation of a computationally expensive model, such as a system of partial differential equations. In recent work [Conrad et al. JASA 2015, arXiv:1402.1694] we described a framework for constructing and refining local approximations of such models during an MCMC simulation. These posterior--adapted approximations harness regularity of the model to reduce the c...

  3. Two-Stage Model of Stereotype Activation Based on Face Perception

    Institute of Scientific and Technical Information of China (English)

    张晓斌; 佐斌

    2012-01-01

    previous research paradigm was not suitable for exploring the process of stereotype activation. The other reason was that, under the framework of social cognition, researchers have neglected the effects of perceptual processing on stereotype activation. Based on the above analysis, and from the more ecologically valid perspective of person construal, the authors proposed a two-stage model of stereotype activation and verified it through two experiments. In Experiment 1, 32 participants were randomly selected and assigned to the experimental treatments. The authors compared the reaction times for gender categorization, for the judgment of stereotype match in a priming paradigm, and for the judgment of stereotype match in a simultaneous presentation paradigm. In Experiment 2, 33 participants were randomly selected and assigned to the experimental treatments. Based on the simultaneous presentation paradigm, and manipulating the difficulty of extracting social category from faces by means of face deformation, the authors explored the effect of face perception on stereotype activation. The results of Experiment 1 showed that the reaction time for the judgment of stereotype match under the simultaneous presentation paradigm was significantly longer than the reaction times for gender categorization and for the judgment of stereotype match in the priming paradigm, and was equal to the sum of the two. The effect of face presentation (inversion) on stereotype activation differed between the priming paradigm and the simultaneous presentation paradigm. The results of Experiment 2 showed that, as the difficulty of extracting social category from faces increased, the reaction time of stereotype activation became longer. The results of the two experiments confirmed the two-stage model of stereotype activation proposed by the authors, that is, the stage of social category activation based on face perception and the stage of

  4. Parallel Dynamics of Continuous Hopfield Model Revisited

    Science.gov (United States)

    Mimura, Kazushi

    2009-03-01

    We have applied the generating functional analysis (GFA) to the continuous Hopfield model. We have also confirmed that the GFA predictions in some typical cases exhibit good consistency with computer simulation results. When a retarded self-interaction term is omitted, the GFA result becomes identical to that obtained using the statistical neurodynamics as well as the case of the sequential binary Hopfield model.

  5. Towards a streaming model for nested data parallelism

    DEFF Research Database (Denmark)

    Madsen, Frederik Meisner; Filinski, Andrzej

    2013-01-01

    -flattening execution strategy, comes at the price of potentially prohibitive space usage in the common case of computations with an excess of available parallelism, such as dense-matrix multiplication. We present a simple nested data-parallel functional language and associated cost semantics that retains NESL's intuitive work-depth model for time complexity, but also allows highly parallel computations to be expressed in a space-efficient way, in the sense that memory usage on a single (or a few) processors is of the same order as for a sequential formulation of the algorithm, and in general scales smoothly...-processable in a streaming fashion. This semantics is directly compatible with previously proposed piecewise execution models for nested data parallelism, but allows the expected space usage to be reasoned about directly at the source-language level. The language definition and implementation are still very much work...

  6. Optimisation of a parallel ocean general circulation model

    Directory of Open Access Journals (Sweden)

    M. I. Beare

    Full Text Available This paper presents the development of a general-purpose parallel ocean circulation model, for use on a wide range of computer platforms, from traditional scalar machines to workstation clusters and massively parallel processors. Parallelism is provided, as a modular option, via high-level message-passing routines, thus hiding the technical intricacies from the user. An initial implementation highlights that the parallel efficiency of the model is adversely affected by a number of factors, for which optimisations are discussed and implemented. The resulting ocean code is portable and, in particular, allows science to be achieved on local workstations that could otherwise only be undertaken on state-of-the-art supercomputers.

  7. Modeling and Control of Primary Parallel Isolated Boost Converter

    DEFF Research Database (Denmark)

    Mira Albert, Maria del Carmen; Hernandez Botella, Juan Carlos; Sen, Gökhan

    2012-01-01

    In this paper, state-space modeling and closed-loop controlled operation are presented for the primary parallel isolated boost converter (PPIBC) topology as a battery charging unit. Parasitic resistances have been included to obtain an accurate dynamic model. The accuracy of the model has been tes...

  8. Two stage treatment of dairy effluent using immobilized Chlorella pyrenoidosa.

    Science.gov (United States)

    Yadavalli, Rajasri; Heggers, Goutham Rao Venkata Naga

    2013-12-19

    Dairy effluents contain a high organic load, and the unscrupulous discharge of these effluents into aquatic bodies is a matter of serious concern, besides deteriorating water quality. Whilst physico-chemical treatment is the common mode of treatment, immobilized microalgae can potentially be employed to treat the high organic content, offering numerous benefits alongside wastewater treatment. A novel, low-cost, two-stage treatment was employed for the complete treatment of dairy effluent. The first stage consists of treating the dairy effluent in a photobioreactor (1 L) using immobilized Chlorella pyrenoidosa, while the second stage involves a two-column sand bed filtration technique. NH₄⁺-N was completely removed, and 98% removal of PO₄³⁻-P was achieved within 96 h of the two-stage purification process. The filtrate was tested for toxicity, and no mortality was observed in zebra fish, used as a model organism, at the end of a 96 h bioassay. Moreover, a significant decrease in biological oxygen demand and chemical oxygen demand was achieved by this novel method. The separated biomass was also tested as a biofertilizer on rice seeds, and a 30% increase in root and shoot length was observed after its addition to the rice plants. We conclude that the two-stage treatment of dairy effluent is highly effective in the removal of BOD and COD, as well as of nutrients such as nitrates and phosphates. The treatment also allows the treated wastewater to be discharged safely into receiving water bodies, since it is non-toxic to aquatic life. Further, the algal biomass separated after the first stage of treatment was highly capable of increasing the growth of rice plants, owing to the nitrogen-fixing ability of the green alga, and offers great potential as a biofertilizer.

  9. Modelling parallel programs and multiprocessor architectures with AXE

    Science.gov (United States)

    Yan, Jerry C.; Fineman, Charles E.

    1991-01-01

    AXE, An Experimental Environment for Parallel Systems, was designed to model and simulate parallel systems at the process level. It provides an integrated environment for specifying computation models, multiprocessor architectures, data collection, and performance visualization. AXE is being used at NASA-Ames for developing resource management strategies, parallel problem formulation, multiprocessor architectures, and operating system issues related to the High Performance Computing and Communications Program. AXE's simple, structured user interface enables the user to model parallel programs and machines precisely and efficiently, and its quick turn-around time keeps the user interested and productive. AXE models multicomputers: the user may easily modify various architectural parameters, including the number of sites, connection topologies, and overhead for operating system activities. Parallel computations in AXE are represented as collections of autonomous computing objects known as players, whose use and behavior are described. Performance data for the multiprocessor model can be observed on a color screen, including CPU and message-routing bottlenecks and the dynamic status of the software.

  10. Parallelizing the Cellular Potts Model on graphics processing units

    Science.gov (United States)

    Tapia, José Juan; D'Souza, Roshan M.

    2011-04-01

    The Cellular Potts Model (CPM) is a lattice based modeling technique used for simulating cellular structures in computational biology. The computational complexity of the model means that current serial implementations restrict the size of simulation to a level well below biological relevance. Parallelization on computing clusters enables scaling the size of the simulation but marginally addresses computational speed due to the limited memory bandwidth between nodes. In this paper we present new data-parallel algorithms and data structures for simulating the Cellular Potts Model on graphics processing units. Our implementations handle most terms in the Hamiltonian, including cell-cell adhesion constraint, cell volume constraint, cell surface area constraint, and cell haptotaxis. We use fine level checkerboards with lock mechanisms using atomic operations to enable consistent updates while maintaining a high level of parallelism. A new data-parallel memory allocation algorithm has been developed to handle cell division. Tests show that our implementation enables simulations of >10 cells with lattice sizes of up to 256³ on a single graphics card. Benchmarks show that our implementation runs ~80× faster than serial implementations, and ~5× faster than previous parallel implementations on computing clusters consisting of 25 nodes. The wide availability and economy of graphics cards mean that our techniques will enable simulation of realistically sized models at a fraction of the time and cost of previous implementations and are expected to greatly broaden the scope of CPM applications.
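    The checkerboard idea — sites of one colour share no neighbours, so they can all be flipped concurrently without locks — can be sketched on a toy Ising-like lattice. The CPM proper adds volume, adhesion, and haptotaxis terms to the Hamiltonian; the names below are invented and only the concurrency pattern is illustrated.

    ```python
    # Toy checkerboard Metropolis sweep on an Ising-like lattice (illustrative).
    import numpy as np

    def checkerboard_sweep(lattice, beta, rng):
        """One sweep in two half-sweeps: all 'red' sites, then all 'black' sites.
        Same-colour sites are never adjacent, so each half-sweep is a safe,
        fully parallel (here vectorised) update."""
        n = lattice.shape[0]
        ii, jj = np.meshgrid(np.arange(n), np.arange(n), indexing="ij")
        for color in (0, 1):
            mask = (ii + jj) % 2 == color
            nbr = (np.roll(lattice, 1, 0) + np.roll(lattice, -1, 0)
                   + np.roll(lattice, 1, 1) + np.roll(lattice, -1, 1))
            dE = 2 * lattice * nbr                     # energy change if flipped
            flip = (rng.random(lattice.shape) < np.exp(-beta * dE)) & mask
            lattice = np.where(flip, -lattice, lattice)
        return lattice

    lat = checkerboard_sweep(np.ones((8, 8), dtype=int), 0.5,
                             np.random.default_rng(1))
    print(lat.shape)
    ```

    On a GPU each half-sweep maps naturally onto one kernel launch, which is the structure the paper's fine-level checkerboards exploit.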

  11. Two-stage Security Controls Selection

    NARCIS (Netherlands)

    Yevseyeva, I.; Basto Fernandes, V.; Moorsel, van A.; Janicke, H.; Emmerich, Michael T. M.

    2016-01-01

    To protect a system from potential cyber security breaches and attacks, one needs to select efficient security controls, taking into account technical and institutional goals and constraints, such as available budget, enterprise activity, internal and external environment. Here we model the security

  12. Towards an Accurate Performance Modeling of Parallel Sparse Factorization

    Energy Technology Data Exchange (ETDEWEB)

    Grigori, Laura; Li, Xiaoye S.

    2006-05-26

    We present a performance model to analyze a parallel sparse LU factorization algorithm on modern cache-based, high-end parallel architectures. Our model characterizes the algorithmic behavior by taking into account the underlying processor speed, memory system performance, as well as the interconnect speed. The model is validated using the SuperLU_DIST linear system solver, sparse matrices from real applications, and an IBM POWER3 parallel machine. Our modeling methodology can be easily adapted to study the performance of other types of sparse factorizations, such as Cholesky or QR.
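
    As a hedged illustration of the kind of model the abstract describes, the predicted time of one factorization step can be written as the sum of a computation term, a memory-traffic term and a communication term. The function below is a generic sketch under that assumption; the parameter names and the simple additive form are not taken from the paper:

```python
def step_time(flops, words_moved, msgs, msg_words,
              flop_rate, mem_bw, latency, net_bw):
    """Estimated wall time of one factorization step as the sum of a
    computation term, a memory-traffic term and a communication term.
    All names and the additive form are illustrative assumptions."""
    compute = flops / flop_rate                  # arithmetic
    memory = words_moved / mem_bw                # cache/DRAM traffic
    comms = msgs * latency + msg_words / net_bw  # interconnect
    return compute + memory + comms
```

    Validating such a model means checking each term against measurements on a real machine, as the paper does with SuperLU_DIST on an IBM POWER3.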

  13. Advances in parallel computer technology for desktop atmospheric dispersion models

    Energy Technology Data Exchange (ETDEWEB)

    Bian, X.; Ionescu-Niscov, S.; Fast, J.D. [Pacific Northwest National Lab., Richland, WA (United States); Allwine, K.J. [Allwine Environmental Serv., Richland, WA (United States)

    1996-12-31

    Desktop models are those used by analysts with varied backgrounds to perform, for example, air quality assessment and emergency response activities. These models must be robust, well documented, have minimal and well controlled user inputs, and have clear outputs. Existing coarse-grained parallel computers can provide significant increases in computation speed in desktop atmospheric dispersion modeling without considerable increases in hardware cost. This increased speed will allow for significant improvements to be made in the scientific foundations of these applied models, in the form of more advanced diffusion schemes and better representation of the wind and turbulence fields. This is especially attractive for emergency response applications where speed and accuracy are of utmost importance. This paper describes one particular application of coarse-grained parallel computer technology to a desktop complex terrain atmospheric dispersion modeling system. By comparing performance characteristics of the coarse-grained parallel version of the model with the single-processor version, we will demonstrate that applying coarse-grained parallel computer technology to desktop atmospheric dispersion modeling systems will allow us to address critical issues facing future requirements of this class of dispersion models.

  14. DEVELOPMENT OF COLD CLIMATE HEAT PUMP USING TWO-STAGE COMPRESSION

    Energy Technology Data Exchange (ETDEWEB)

    Shen, Bo [ORNL; Rice, C Keith [ORNL; Abdelaziz, Omar [ORNL; Shrestha, Som S [ORNL

    2015-01-01

    This paper uses a well-regarded, hardware-based heat pump system model to investigate a two-stage economizing cycle for cold climate heat pump applications. The two-stage compression cycle has two variable-speed compressors. The high stage compressor was modelled using a compressor map, and the low stage compressor was experimentally studied using calorimeter testing. A single-stage heat pump system was modelled as the baseline. The system performance predictions are compared between the two-stage and single-stage systems. Special considerations for designing a cold climate heat pump are addressed at both the system and component levels.

  16. Performance of Air Pollution Models on Massively Parallel Computers

    DEFF Research Database (Denmark)

    Brown, John; Hansen, Per Christian; Wasniewski, Jerzy

    1996-01-01

    To compare the performance and use of three massively parallel SIMD computers, we implemented a large air pollution model on the computers. Using a realistic large-scale model, we gain detailed insight about the performance of the three computers when used to solve large-scale scientific problems...

  17. Term Structure Models with Parallel and Proportional Shifts

    DEFF Research Database (Denmark)

    Armerin, Frederik; Björk, Tomas; Astrup Jensen, Bjarne

    this general framework we show that there does indeed exist a large variety of nontrivial parallel shift term structure models, and we also describe these in detail. We also show that there exists no nontrivial flat term structure model. The same analysis is repeated for the similar case, where the yield curve...

  18. Modeling groundwater flow on massively parallel computers

    Energy Technology Data Exchange (ETDEWEB)

    Ashby, S.F.; Falgout, R.D.; Fogwell, T.W.; Tompson, A.F.B.

    1994-12-31

    The authors will explore the numerical simulation of groundwater flow in three-dimensional heterogeneous porous media. An interdisciplinary team of mathematicians, computer scientists, hydrologists, and environmental engineers is developing a sophisticated simulation code for use on workstation clusters and MPPs. To date, they have concentrated on modeling flow in the saturated zone (single phase), which requires the solution of a large linear system. They will discuss their implementation of preconditioned conjugate gradient solvers. The preconditioners under consideration include simple diagonal scaling, s-step Jacobi, adaptive Chebyshev polynomial preconditioning, and multigrid. They will present some preliminary numerical results, including simulations of groundwater flow at the LLNL site. They also will demonstrate the code's scalability.
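
    Of the preconditioners listed, simple diagonal (Jacobi) scaling is the easiest to sketch. The following pure-Python, dense-matrix version of preconditioned conjugate gradients is an illustrative stand-in for the parallel sparse solvers discussed; it assumes a symmetric positive-definite system and is not the authors' code:

```python
def pcg(A, b, tol=1e-10, max_iter=200):
    """Conjugate gradients with diagonal (Jacobi) preconditioning:
    M = diag(A), so applying M^{-1} is just a per-row division."""
    n = len(b)
    x = [0.0] * n

    def matvec(M, v):
        return [sum(M[i][j] * v[j] for j in range(n)) for i in range(n)]

    def dot(u, v):
        return sum(ui * vi for ui, vi in zip(u, v))

    r = [bi - ai for bi, ai in zip(b, matvec(A, x))]   # residual b - Ax
    z = [ri / A[i][i] for i, ri in enumerate(r)]       # z = M^{-1} r
    p = z[:]
    rz = dot(r, z)
    for _ in range(max_iter):
        Ap = matvec(A, p)
        alpha = rz / dot(p, Ap)
        x = [xi + alpha * pi for xi, pi in zip(x, p)]
        r = [ri - alpha * api for ri, api in zip(r, Ap)]
        if dot(r, r) ** 0.5 < tol:
            break
        z = [ri / A[i][i] for i, ri in enumerate(r)]
        rz_new = dot(r, z)
        p = [zi + (rz_new / rz) * pi for zi, pi in zip(z, p)]
        rz = rz_new
    return x
```

    In the parallel setting, the matrix-vector product and the dot products are what get distributed across processors; the Jacobi preconditioner needs no communication at all, which is why it is the natural baseline.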

  19. Vectorial Preisach-type model designed for parallel computing

    Energy Technology Data Exchange (ETDEWEB)

    Stancu, Alexandru [Department of Solid State and Theoretical Physics, Al. I. Cuza University, Blvd. Carol I, 11, 700506 Iasi (Romania)]. E-mail: alstancu@uaic.ro; Stoleriu, Laurentiu [Department of Solid State and Theoretical Physics, Al. I. Cuza University, Blvd. Carol I, 11, 700506 Iasi (Romania); Andrei, Petru [Electrical and Computer Engineering, Florida State University, Tallahassee, FL (United States); Electrical and Computer Engineering, Florida A and M University, Tallahassee, FL (United States)

    2007-09-15

    Most of the hysteresis phenomenological models are scalar, while all the magnetization processes are vectorial. The vector models, phenomenological or micromagnetic (physical), are time-consuming and sometimes difficult to implement. In this paper, we introduce a new vector Preisach-type model that uses micromagnetic results to simulate the magnetic response of a system of several tens of thousands of pseudo-particles. The model has a modular structure that allows easy implementation for parallel computing.
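
    The building block of any Preisach-type model is the rectangular hysteresis operator, and the sum over an ensemble of independent hysterons is what makes such models naturally modular for parallel computing. Below is a minimal scalar sketch; the thresholds, starting state and function names are arbitrary illustrative choices, not the authors' vector construction:

```python
def hysteron(state, h, alpha, beta):
    """Rectangular hysteresis operator: switches up at h >= alpha,
    down at h <= beta (with beta < alpha), otherwise keeps its state."""
    if h >= alpha:
        return 1
    if h <= beta:
        return -1
    return state

def preisach_response(field_history, thresholds):
    """Normalized magnetization of an ensemble of independent hysterons.
    Each hysteron's update depends only on its own state and the applied
    field, so the inner loop is trivially parallelizable."""
    states = [-1] * len(thresholds)  # start saturated in the -1 state
    out = []
    for h in field_history:
        states = [hysteron(s, h, a, b)
                  for s, (a, b) in zip(states, thresholds)]
        out.append(sum(states) / len(states))
    return out
```

    The test below shows the defining feature of hysteresis: after saturation, the response at zero field depends on the field history, not just the present field.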

  20. A hybrid parallel framework for the cellular Potts model simulations

    Energy Technology Data Exchange (ETDEWEB)

    Jiang, Yi [Los Alamos National Laboratory; He, Kejing [SOUTH CHINA UNIV; Dong, Shoubin [SOUTH CHINA UNIV

    2009-01-01

    The Cellular Potts Model (CPM) has been widely used for biological simulations. However, most current implementations are either sequential or approximated, which cannot be used for large-scale, complex 3D simulations. In this paper we present a hybrid parallel framework for CPM simulations. The time-consuming PDE solving, cell division, and cell reaction operations are distributed to clusters using the Message Passing Interface (MPI). The Monte Carlo lattice update is parallelized on shared-memory SMP systems using OpenMP. Because the Monte Carlo lattice update is much faster than the PDE solving and SMP systems are more and more common, this hybrid approach achieves good performance and high accuracy at the same time. Based on the parallel Cellular Potts Model, we studied avascular tumor growth using a multiscale model. The application and performance analysis show that the hybrid parallel framework is quite efficient. The hybrid parallel CPM can be used for the large-scale simulation (~10^8 sites) of the complex collective behavior of numerous cells (~10^6).

  1. Badlands: A parallel basin and landscape dynamics model

    Directory of Open Access Journals (Sweden)

    T. Salles

    2016-01-01

    Over more than three decades, a number of numerical landscape evolution models (LEMs have been developed to study the combined effects of climate, sea-level, tectonics and sediments on Earth surface dynamics. Most of them are written in efficient programming languages, but often cannot be used on parallel architectures. Here, I present a LEM which ports a common core of accepted physical principles governing landscape evolution into a distributed memory parallel environment. Badlands (acronym for BAsin anD LANdscape DynamicS is an open-source, flexible, TIN-based landscape evolution model, built to simulate topography development at various space and time scales.

  2. Genetic Algorithm Modeling with GPU Parallel Computing Technology

    CERN Document Server

    Cavuoti, Stefano; Brescia, Massimo; Pescapé, Antonio; Longo, Giuseppe; Ventre, Giorgio

    2012-01-01

    We present a multi-purpose genetic algorithm, designed and implemented with GPGPU/CUDA parallel computing technology. The model was derived from a multi-core CPU serial implementation, named GAME, already successfully tested and validated on massive astrophysical data classification problems, through a web application resource (DAMEWARE), specialized in data mining based on Machine Learning paradigms. Since genetic algorithms are inherently parallel, the GPGPU computing paradigm has allowed the internal training features of the model to be exploited, permitting a strong optimization in terms of processing performance and scalability.

  3. Inverse kinematics model of parallel macro-micro manipulator system

    Institute of Scientific and Technical Information of China (English)

    2000-01-01

    An improved design, which employs the integration of optic, mechanical and electronic technologies for the next generation large radio telescope, is presented in this note. The authors propose the concept of parallel macro-micro manipulator system from the feed support structure with a rough tuning subsystem based on a cable structure and a fine tuning subsystem based on the Stewart platform. According to the requirement of astronomical observation, the inverse kinematics model of this parallel macro-micro manipulator system is deduced. This inverse kinematics model is necessary for the computer-controlled motion of feed.

  4. Advanced parallel programming models research and development opportunities.

    Energy Technology Data Exchange (ETDEWEB)

    Wen, Zhaofang.; Brightwell, Ronald Brian

    2004-07-01

    There is currently a large research and development effort within the high-performance computing community on advanced parallel programming models. This research can potentially have an impact on parallel applications, system software, and computing architectures in the next several years. Given Sandia's expertise and unique perspective in these areas, particularly on very large-scale systems, there are many areas in which Sandia can contribute to this effort. This technical report provides a survey of past and present parallel programming model research projects and provides a detailed description of the Partitioned Global Address Space (PGAS) programming model. The PGAS model may offer several improvements over the traditional distributed memory message passing model, which is the dominant model currently being used at Sandia. This technical report discusses these potential benefits and outlines specific areas where Sandia's expertise could contribute to current research activities. In particular, we describe several projects in the areas of high-performance networking, operating systems and parallel runtime systems, compilers, application development, and performance evaluation.

  5. Financial Data Modeling by Using Asynchronous Parallel Evolutionary Algorithms

    Institute of Scientific and Technical Information of China (English)

    Wang Chun; Li Qiao-yun

    2003-01-01

    In this paper, the high-level knowledge of financial data modeled by ordinary differential equations (ODEs) is discovered in dynamic data by using an asynchronous parallel evolutionary modeling algorithm (APHEMA). A numerical example of Nasdaq index analysis is used to demonstrate the potential of APHEMA. The results show that the dynamic models automatically discovered in dynamic data by computer can be used to predict the financial trends.

  6. A two-stage method for inverse medium scattering

    KAUST Repository

    Ito, Kazufumi

    2013-03-01

    We present a novel numerical method to the time-harmonic inverse medium scattering problem of recovering the refractive index from noisy near-field scattered data. The approach consists of two stages, one pruning step of detecting the scatterer support, and one resolution enhancing step with nonsmooth mixed regularization. The first step is strictly direct and of sampling type, and it faithfully detects the scatterer support. The second step is an innovative application of nonsmooth mixed regularization, and it accurately resolves the scatterer size as well as intensities. The nonsmooth model can be efficiently solved by a semi-smooth Newton-type method. Numerical results for two- and three-dimensional examples indicate that the new approach is accurate, computationally efficient, and robust with respect to data noise. © 2012 Elsevier Inc.

  7. Parallel finite element modeling of earthquake ground response and liquefaction

    Institute of Scientific and Technical Information of China (English)

    Jinchi Lu(陆金池); Jun Peng(彭军); Ahmed Elgamal; Zhaohui Yang(杨朝晖); Kincho H. Law

    2004-01-01

    Parallel computing is a promising approach to alleviate the computational demand in conducting large-scale finite element analyses. This paper presents a numerical modeling approach for earthquake ground response and liquefaction using the parallel nonlinear finite element program, ParCYCLIC, designed for distributed-memory message-passing parallel computer systems. In ParCYCLIC, finite elements are employed within an incremental plasticity, coupled solid-fluid formulation. A constitutive model calibrated by physical tests represents the salient characteristics of sand liquefaction and associated accumulation of shear deformations. Key elements of the computational strategy employed in ParCYCLIC include the development of a parallel sparse direct solver, the deployment of an automatic domain decomposer, and the use of the Multilevel Nested Dissection algorithm for ordering of the finite element nodes. Simulation results of centrifuge test models using ParCYCLIC are presented. Performance results from grid models and geotechnical simulations show that ParCYCLIC is efficiently scalable to a large number of processors.

  8. The Extended Parallel Process Model: Illuminating the Gaps in Research

    Science.gov (United States)

    Popova, Lucy

    2012-01-01

    This article examines constructs, propositions, and assumptions of the extended parallel process model (EPPM). Review of the EPPM literature reveals that its theoretical concepts are thoroughly developed, but the theory lacks consistency in operational definitions of some of its constructs. Out of the 12 propositions of the EPPM, a few have not…

  9. Postscript: Parallel Distributed Processing in Localist Models without Thresholds

    Science.gov (United States)

    Plaut, David C.; McClelland, James L.

    2010-01-01

    The current authors reply to a response by Bowers on a comment by the current authors on the original article. Bowers (2010) mischaracterizes the goals of parallel distributed processing (PDP research)--explaining performance on cognitive tasks is the primary motivation. More important, his claim that localist models, such as the interactive…

  10. Methods and models for the construction of weakly parallel tests

    NARCIS (Netherlands)

    Adema, Jos J.

    1992-01-01

    Several methods are proposed for the construction of weakly parallel tests [i.e., tests with the same test information function (TIF)]. A mathematical programming model that constructs tests containing a prespecified TIF and a heuristic that assigns items to tests with information functions that are

  12. Modeling and optimization of parallel and distributed embedded systems

    CERN Document Server

    Munir, Arslan; Ranka, Sanjay

    2016-01-01

    This book introduces the state-of-the-art in research in parallel and distributed embedded systems, which have been enabled by developments in silicon technology, micro-electro-mechanical systems (MEMS), wireless communications, computer networking, and digital electronics. These systems have diverse applications in domains including military and defense, medical, automotive, and unmanned autonomous vehicles. The emphasis of the book is on the modeling and optimization of emerging parallel and distributed embedded systems in relation to the three key design metrics of performance, power and dependability.

  13. X: A Comprehensive Analytic Model for Parallel Machines

    Energy Technology Data Exchange (ETDEWEB)

    Li, Ang; Song, Shuaiwen; Brugel, Eric; Kumar, Akash; Chavarría-Miranda, Daniel; Corporaal, Henk

    2016-05-23

    To continuously comply with Moore’s Law, modern parallel machines become increasingly complex. Effectively tuning application performance for these machines therefore becomes a daunting task. Moreover, identifying performance bottlenecks at application and architecture level, as well as evaluating various optimization strategies, are becoming extremely difficult when the entanglement of numerous correlated factors is being presented. To tackle these challenges, we present a visual analytical model named “X”. It is intuitive and sufficiently flexible to track all the typical features of a parallel machine.

  14. Dynamic modeling of flexible-links planar parallel robots

    Institute of Scientific and Technical Information of China (English)

    2008-01-01

    This paper presents a finite element-based method for dynamic modeling of parallel robots with flexible links and a rigid moving platform. The elastic displacements of flexible links are investigated while considering the coupling effects between links due to the structural flexibility. The kinematic constraint conditions and dynamic constraint conditions for elastic displacements are presented. Considering the effects of distributed mass, lumped mass, shearing deformation, bending deformation, tensile deformation and lateral displacements, the Kineto-Elasto dynamics (KED) theory and Lagrange formula are used to derive the dynamic equations of planar flexible-links parallel robots. The dynamic behavior of the flexible-links planar parallel robot is well illustrated through numerical simulation of a planar 3-RRR parallel robot. Compared with the results of the finite element software SAMCEF, the numerical simulation results show good coherence of the proposed method. The flexibility of links is demonstrated to have a significant impact on the position error and orientation error of the flexible-links planar parallel robot.

  15. Parallelization of a hydrological model using the message passing interface

    Science.gov (United States)

    Wu, Yiping; Li, Tiejian; Sun, Liqun; Chen, Ji

    2013-01-01

    With the increasing knowledge about the natural processes, hydrological models such as the Soil and Water Assessment Tool (SWAT) are becoming larger and more complex with increasing computation time. Additionally, other procedures such as model calibration, which may require thousands of model iterations, can increase running time and thus further hinder rapid modeling and analysis. Using the widely-applied SWAT as an example, this study demonstrates how to parallelize a serial hydrological model in a Windows® environment using a parallel programming technology—Message Passing Interface (MPI). With a case study, we derived the optimal values for the two parameters (the number of processes and the corresponding percentage of work to be distributed to the master process) of the parallel SWAT (P-SWAT) on an ordinary personal computer and a work station. Our study indicates that model execution time can be reduced by 42%–70% (or a speedup of 1.74–3.36) using multiple processes (two to five) with a proper task-distribution scheme (between the master and slave processes). Although the computation time cost becomes lower with an increasing number of processes (from two to five), this enhancement becomes less due to the accompanied increase in demand for message passing procedures between the master and all slave processes. Our case study demonstrates that the P-SWAT with a five-process run may reach the maximum speedup, and the performance can be quite stable (fairly independent of a project size). Overall, the P-SWAT can help reduce the computation time substantially for an individual model run, manual and automatic calibration procedures, and optimization of best management practices. In particular, the parallelization method we used and the scheme for deriving the optimal parameters in this study can be valuable and easily applied to other hydrological or environmental models.
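
    The trade-off the study measures, where adding worker processes helps less and less as message-passing costs grow and the master's own share of the work becomes the bottleneck, can be captured by a toy runtime model. The following function is an illustrative sketch under those assumptions, not the P-SWAT cost model:

```python
def predicted_speedup(p, master_share, comm_cost):
    """Toy runtime model for a master/worker split: the master keeps
    `master_share` of the normalized work, the rest is divided among
    the p-1 workers, and each worker pays a fixed per-run
    message-passing cost. Speedup is serial time (1.0) over the
    slower of the master and a worker."""
    if p == 1:
        return 1.0
    worker_time = (1.0 - master_share) / (p - 1) + comm_cost
    t_parallel = max(master_share, worker_time)
    return 1.0 / t_parallel
```

    For example, with the master keeping 20% of the work the speedup can never exceed 5 no matter how many processes are added, and any per-worker communication cost lowers it further; this mirrors the flattening gains the abstract reports between two and five processes.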

  16. Performance of Air Pollution Models on Massively Parallel Computers

    DEFF Research Database (Denmark)

    Brown, John; Hansen, Per Christian; Wasniewski, Jerzy

    1996-01-01

    To compare the performance and use of three massively parallel SIMD computers, we implemented a large air pollution model on the computers. Using a realistic large-scale model, we gain detailed insight about the performance of the three computers when used to solve large-scale scientific problems that involve several types of numerical computations. The computers considered in our study are the Connection Machines CM-200 and CM-5, and the MasPar MP-2216...

  17. Parallel Computation of the Regional Ocean Modeling System (ROMS)

    Energy Technology Data Exchange (ETDEWEB)

    Wang, P; Song, Y T; Chao, Y; Zhang, H

    2005-04-05

    The Regional Ocean Modeling System (ROMS) is a regional ocean general circulation modeling system solving the free surface, hydrostatic, primitive equations over varying topography. It is free software distributed world-wide for studying both complex coastal ocean problems and the basin-to-global scale ocean circulation. The original ROMS code could only be run on shared-memory systems. With the increasing need to simulate larger model domains with finer resolutions and on a variety of computer platforms, there is a need in the ocean-modeling community to have a ROMS code that can be run on any parallel computer ranging from 10 to hundreds of processors. Recently, we have explored parallelization for ROMS using the MPI programming model. In this paper, an efficient parallelization strategy for such a large-scale scientific software package, based on an existing shared-memory computing model, is presented. In addition, scientific applications and data-performance issues on a couple of SGI systems, including Columbia, the world's third-fastest supercomputer, are discussed.
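
    The core communication pattern in a domain-decomposed code like the MPI version of ROMS is the halo (ghost-cell) exchange between neighbouring subdomains. The sketch below performs the decomposition and exchange in-process on a 1-D grid purely for illustration; a real MPI code does the same with sends and receives across ranks, and the function names here are assumptions:

```python
def decompose(grid, n_ranks):
    """Split a 1-D grid into contiguous subdomains, one per rank."""
    n = len(grid)
    chunk = (n + n_ranks - 1) // n_ranks  # ceil division
    return [grid[r * chunk:(r + 1) * chunk] for r in range(n_ranks)]

def exchange_halos(subdomains):
    """Give each subdomain one ghost cell from each neighbour; this is
    the communication step a domain-decomposed ocean model performs
    every time step (done in-process here for illustration)."""
    padded = []
    last = len(subdomains) - 1
    for r, sub in enumerate(subdomains):
        left = [subdomains[r - 1][-1]] if r > 0 else []
        right = [subdomains[r + 1][0]] if r < last else []
        padded.append(left + sub + right)
    return padded
```

    With the ghost cells in place, each rank can apply a local stencil to its interior points independently, which is what makes the time-stepping loop scale.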

  18. Model-driven product line engineering for mapping parallel algorithms to parallel computing platforms

    NARCIS (Netherlands)

    Arkin, Ethem; Tekinerdogan, Bedir

    2016-01-01

    Mapping parallel algorithms to parallel computing platforms requires several activities such as the analysis of the parallel algorithm, the definition of the logical configuration of the platform, the mapping of the algorithm to the logical configuration platform and the implementation of the sou

  19. Parallelization of MATLAB for Euro50 integrated modeling

    Science.gov (United States)

    Browne, Michael; Andersen, Torben E.; Enmark, Anita; Moraru, Dan; Shearer, Andrew

    2004-09-01

    MATLAB and its companion product Simulink are commonly used tools in systems modelling and other scientific disciplines. A cross-disciplinary integrated MATLAB model is used to study the overall performance of the proposed 50m optical and infrared telescope, Euro50. However, the computational requirements of this kind of end-to-end simulation of the telescope's behaviour exceed the capability of an individual contemporary Personal Computer. By parallelizing the model, primarily on a functional basis, it can be implemented across a Beowulf cluster of generic PCs. This requires MATLAB to distribute data and calculations to the cluster nodes in some way and combine completed results. There have been a number of attempts to produce toolkits to allow MATLAB to be used in a parallel fashion. They have used a variety of techniques. Here we present findings from using some of these toolkits and proposed advances.

  20. Numerical modeling of parallel-plate based AMR

    DEFF Research Database (Denmark)

    In this work we present an improved 2-dimensional numerical model of a parallel-plate based AMR. The model includes heat transfer in fluid and magnetocaloric domains respectively. The domains are coupled via inner thermal boundaries. The MCE is modeled either as an instantaneous change between high and low field or as a magnetic field profile including the actual physical movement of the regenerator block in and out of field, i.e. as a source term in the thermal equation for the magnetocaloric material (MCM). The model is further developed to include parasitic thermal losses throughout the bed...

  1. Treatment of cadmium dust with two-stage leaching process

    Institute of Scientific and Technical Information of China (English)

    2000-01-01

    The treatment of cadmium dust with a two-stage leaching process was investigated to replace the existing sulphation roast-leaching processes. The process parameters in the first-stage leaching were basically similar to the neutral leaching in zinc hydrometallurgy. The effects of process parameters in the second-stage leaching on the extraction of zinc and cadmium were mainly studied. The experimental results indicated that zinc and cadmium could be efficiently recovered from the cadmium dust by the two-stage leaching process. The extraction percentages of zinc and cadmium in two-stage leaching reached 95% and 88% respectively under the optimum conditions. The total extraction percentage of Zn and Cd reached 94%.

  2. Exploration Of Deep Learning Algorithms Using Openacc Parallel Programming Model

    KAUST Repository

    Hamam, Alwaleed A.

    2017-03-13

    Deep learning is based on a set of algorithms that attempt to model high-level abstractions in data. Specifically, the RBM is the deep learning algorithm used in this project; its runtime performance is improved through an efficient parallel implementation with the OpenACC tool, applying the best possible optimizations to the RBM to harness the massively parallel power of NVIDIA GPUs. GPU development in the last few years has contributed to the growth of the deep learning concept. OpenACC is a directive-based approach to computing in which directives provide compiler hints to accelerate code. The traditional Restricted Boltzmann Machine is a stochastic neural network that essentially performs a binary version of factor analysis. The RBM is a useful neural network basis for larger modern deep learning models, such as the Deep Belief Network. RBM parameters are estimated using an efficient training method called Contrastive Divergence. Parallel implementations of the RBM are available using different models such as OpenMP and CUDA, but this project has been the first attempt to apply the OpenACC model to the RBM.
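
    A minimal pure-Python Restricted Boltzmann Machine trained with CD-1 shows the structure of the algorithm being accelerated. This is a sketch only: the layer sizes, learning rate, and the common shortcut of using reconstruction probabilities rather than samples are illustrative choices, not details from the project:

```python
import math
import random

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

class TinyRBM:
    """Minimal binary RBM trained with one step of Contrastive
    Divergence (CD-1). Pure-Python sketch of the algorithm the
    abstract accelerates with OpenACC."""

    def __init__(self, n_vis, n_hid, rng):
        self.rng = rng
        self.W = [[rng.gauss(0.0, 0.1) for _ in range(n_hid)]
                  for _ in range(n_vis)]
        self.vb = [0.0] * n_vis  # visible biases
        self.hb = [0.0] * n_hid  # hidden biases

    def hidden_probs(self, v):
        return [sigmoid(self.hb[j] +
                        sum(v[i] * self.W[i][j] for i in range(len(v))))
                for j in range(len(self.hb))]

    def visible_probs(self, h):
        return [sigmoid(self.vb[i] +
                        sum(h[j] * self.W[i][j] for j in range(len(h))))
                for i in range(len(self.vb))]

    def sample(self, probs):
        return [1 if self.rng.random() < p else 0 for p in probs]

    def cd1_update(self, v0, lr=0.1):
        ph0 = self.hidden_probs(v0)        # positive phase
        h0 = self.sample(ph0)
        v1 = self.visible_probs(h0)        # one Gibbs reconstruction
        ph1 = self.hidden_probs(v1)
        # gradient approximation: <v h>_data - <v h>_reconstruction
        for i in range(len(v0)):
            for j in range(len(ph0)):
                self.W[i][j] += lr * (v0[i] * ph0[j] - v1[i] * ph1[j])
        for i in range(len(v0)):
            self.vb[i] += lr * (v0[i] - v1[i])
        for j in range(len(ph0)):
            self.hb[j] += lr * (ph0[j] - ph1[j])
```

    The doubly nested weight-update loop is the hot spot that directive-based parallelization (OpenACC) or CUDA kernels target in practice.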

  3. Distributed parallel computing in stochastic modeling of groundwater systems.

    Science.gov (United States)

    Dong, Yanhui; Li, Guomin; Xu, Haizhen

    2013-03-01

    Stochastic modeling is a rapidly evolving, popular approach to the study of the uncertainty and heterogeneity of groundwater systems. However, the use of Monte Carlo-type simulations to solve practical groundwater problems often encounters computational bottlenecks that hinder the acquisition of meaningful results. To improve the computational efficiency, a system that combines stochastic model generation with MODFLOW-related programs and distributed parallel processing is investigated. The distributed computing framework, called the Java Parallel Processing Framework, is integrated into the system to allow the batch processing of stochastic models in distributed and parallel systems. As an example, the system is applied to the stochastic delineation of well capture zones in the Pinggu Basin in Beijing. Through the use of 50 processing threads on a cluster with 10 multicore nodes, the execution times of 500 realizations are reduced to 3% compared with those of a serial execution. Through this application, the system demonstrates its potential in solving difficult computational problems in practical stochastic modeling. © 2012, The Author(s). Groundwater © 2012, National Ground Water Association.
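
    The batch processing such a system performs is embarrassingly parallel: each stochastic realization runs independently and only its result is collected. Below is a sketch using Python's standard library as a stand-in for the Java Parallel Processing Framework; the simulated quantity is synthetic, not a MODFLOW run, and the function names are assumptions:

```python
import random
from concurrent.futures import ThreadPoolExecutor

def run_realization(seed):
    """Stand-in for one stochastic model run (e.g. one random
    conductivity field fed to a flow solver). It draws a synthetic
    summary statistic from a seeded generator, so each realization
    is reproducible from its seed alone."""
    rng = random.Random(seed)
    return sum(rng.random() for _ in range(1000)) / 1000.0

def run_ensemble(n_realizations, n_workers=4):
    """Dispatch independent realizations to a worker pool. No run
    communicates with any other, which is why Monte Carlo ensembles
    scale so well when distributed across cluster nodes."""
    with ThreadPoolExecutor(max_workers=n_workers) as pool:
        return list(pool.map(run_realization, range(n_realizations)))
```

    Because `map` preserves input order and each run is seeded, the parallel ensemble returns exactly the same results as a serial loop, only faster when the per-run cost dominates.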

  4. Accuracy Improvement for Stiffness Modeling of Parallel Manipulators

    CERN Document Server

    Pashkevich, Anatoly; Chablat, Damien; Wenger, Philippe

    2009-01-01

    The paper focuses on the accuracy improvement of stiffness models for parallel manipulators, which are employed in high-speed precision machining. It is based on an integrated methodology that combines analytical and numerical techniques and deals with multidimensional lumped-parameter models of the links. The latter replace the link flexibility by localized 6-dof virtual springs describing both translational/rotational compliance and the coupling between them. A detailed accuracy analysis of the stiffness identification procedures employed in commercial CAD systems is presented, including a statistical analysis of round-off errors and an evaluation of the confidence intervals for the stiffness matrices. The efficiency of the developed technique is confirmed by application examples, which deal with the stiffness analysis of translational parallel manipulators.

  5. Center for Programming Models for Scalable Parallel Computing

    Energy Technology Data Exchange (ETDEWEB)

    John Mellor-Crummey

    2008-02-29

    Rice University's achievements as part of the Center for Programming Models for Scalable Parallel Computing include: (1) design and implementation of cafc, the first multi-platform CAF compiler for distributed and shared-memory machines, (2) performance studies of the efficiency of programs written using the CAF and UPC programming models, (3) a novel technique to analyze explicitly-parallel SPMD programs that facilitates optimization, (4) design, implementation, and evaluation of new language features for CAF, including communication topologies, multi-version variables, and distributed multithreading to simplify development of high-performance codes in CAF, and (5) a synchronization strength reduction transformation for automatically replacing barrier-based synchronization with more efficient point-to-point synchronization. The prototype Co-array Fortran compiler cafc developed in this project is available as open source software from http://www.hipersoft.rice.edu/caf.

  6. Final Report: Center for Programming Models for Scalable Parallel Computing

    Energy Technology Data Exchange (ETDEWEB)

    Mellor-Crummey, John [William Marsh Rice University

    2011-09-13

    As part of the Center for Programming Models for Scalable Parallel Computing, Rice University collaborated with project partners in the design, development and deployment of language, compiler, and runtime support for parallel programming models to support application development for the “leadership-class” computer systems at DOE national laboratories. Work over the course of this project has focused on the design, implementation, and evaluation of a second-generation version of Coarray Fortran. Research and development efforts of the project have focused on the CAF 2.0 language, compiler, runtime system, and supporting infrastructure. This has involved working with the teams that provide infrastructure for CAF that we rely on, implementing new language and runtime features, producing an open source compiler that enabled us to evaluate our ideas, and evaluating our design and implementation through the use of benchmarks. The report details the research, development, findings, and conclusions from this work.

  7. Load-balancing algorithms for the parallel community climate model

    Energy Technology Data Exchange (ETDEWEB)

    Foster, I.T.; Toonen, B.R.

    1995-01-01

    Implementations of climate models on scalable parallel computer systems can suffer from load imbalances resulting from temporal and spatial variations in the amount of computation required for physical parameterizations such as solar radiation and convective adjustment. We have developed specialized techniques for correcting such imbalances. These techniques are incorporated in a general-purpose, programmable load-balancing library that allows the mapping of computation to processors to be specified as a series of maps generated by a programmer-supplied load-balancing module. The communication required to move from one map to another is performed automatically by the library, without programmer intervention. In this paper, we describe the load-balancing problem and the techniques that we have developed to solve it. We also describe specific load-balancing algorithms that we have developed for PCCM2, a scalable parallel implementation of the Community Climate Model, and present experimental results that demonstrate the effectiveness of these algorithms on parallel computers. The load-balancing library developed in this work is available for use in other climate models.
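
    The core of such a programmable load-balancing map can be sketched as a greedy assignment of grid columns to processors; the function name and interface below are illustrative only, not the library's actual API.

```python
# Hedged sketch of map-based load balancing: given per-column costs (e.g.
# measured physics time from the previous timestep), build a new map that
# assigns columns to processors so summed cost is roughly even.
def make_map(costs, n_procs):
    """Greedy longest-processing-time assignment of columns to processors."""
    order = sorted(range(len(costs)), key=lambda i: -costs[i])
    load = [0.0] * n_procs
    assignment = [None] * len(costs)
    for col in order:
        p = min(range(n_procs), key=lambda q: load[q])  # least-loaded processor
        assignment[col] = p
        load[p] += costs[col]
    return assignment, load
```

    In the library described above, moving data between two successive maps is then handled automatically; this sketch only covers map generation.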

  8. LOGISTICS SCHEDULING: ANALYSIS OF TWO-STAGE PROBLEMS

    Institute of Scientific and Technical Information of China (English)

    Yung-Chia CHANG; Chung-Yee LEE

    2003-01-01

    This paper studies the coordination effects between stages for scheduling problems where decision-making is a two-stage process. Two stages are considered as one system. The system can be a supply chain that links two stages, one stage representing a manufacturer and the other a distributor. It also can represent a single manufacturer, while each stage represents a different department responsible for a part of operations. A problem that jointly considers both stages in order to achieve ideal overall system performance is defined as a system problem. In practice, at times, it might not be feasible for the two stages to make coordinated decisions due to (i) the lack of channels that allow decision makers at the two stages to cooperate, and/or (ii) the optimal solution to the system problem being too difficult (or costly) to achieve. Two practical approaches are applied to solve a variant of two-stage logistic scheduling problems. The Forward Approach is defined as a solution procedure by which the first stage of the system problem is solved first, followed by the second stage. Similarly, the Backward Approach is defined as a solution procedure by which the second stage of the system problem is solved prior to solving the first stage. In each approach, the two stages are solved sequentially and the solution generated is treated as a heuristic solution with respect to the corresponding system problem. When decision makers at the two stages make decisions locally without considering consequences to the entire system, ineffectiveness may result, even when each stage optimally solves its own problem. The trade-off between time complexity and solution quality is the main concern. This paper provides the worst-case performance analysis for each approach.
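
    The Forward Approach can be illustrated on a toy two-stage system (single-machine sequencing in stage 1, delivery batching in stage 2). The stage heuristics below are illustrative stand-ins, not the paper's actual problem instances.

```python
# Hedged sketch of the Forward Approach: solve stage 1 first, then feed its
# solution to stage 2 as given. The stage subproblems here are toy choices.
def solve_stage1(jobs):
    """Stage 1 heuristic: shortest-processing-time order (good for flow time)."""
    return sorted(jobs)

def solve_stage2(completion_times, batch_size):
    """Stage 2 heuristic: ship in batches; a batch leaves when its last job is done."""
    departures = []
    for i in range(0, len(completion_times), batch_size):
        departures.append(max(completion_times[i:i + batch_size]))
    return departures

def forward_approach(jobs, batch_size):
    seq = solve_stage1(jobs)               # stage 1 decided first...
    done, t = [], 0
    for p in seq:
        t += p
        done.append(t)
    return solve_stage2(done, batch_size)  # ...then stage 2 takes it as given
```

    The Backward Approach would reverse the order, fixing delivery batches first and sequencing production to meet them; either way the result is a heuristic for the joint system problem, which is what motivates the paper's worst-case analysis.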

  9. Exploitation of parallelism in climate models. Final report

    Energy Technology Data Exchange (ETDEWEB)

    Baer, Ferdinand; Tribbia, Joseph J.; Williamson, David L.

    2001-02-05

    This final report includes details on the research accomplished by the grant entitled 'Exploitation of Parallelism in Climate Models' to the University of Maryland. The purpose of the grant was to shed light on (a) how to reconfigure the atmospheric prediction equations such that the time iteration process could be compressed by use of MPP architecture; (b) how to develop local subgrid scale models which can provide time and space dependent parameterization for a state-of-the-art climate model to minimize the scale resolution necessary for a climate model, and to utilize MPP capability to simultaneously integrate those subgrid models and their statistics; and (c) how to capitalize on the MPP architecture to study the inherent ensemble nature of the climate problem. In the process of addressing these issues, we created parallel algorithms with spectral accuracy; we developed a process for concurrent climate simulations; we established suitable model reconstructions to speed up computation; we identified and tested optimum realization statistics; we undertook a number of parameterization studies to better understand model physics; and we studied the impact of subgrid scale motions and their parameterization in atmospheric models.

  10. Parallel Optimization of 3D Cardiac Electrophysiological Model Using GPU

    Directory of Open Access Journals (Sweden)

    Yong Xia

    2015-01-01

    Large-scale 3D virtual heart model simulations are highly demanding in computational resources. This imposes a big challenge to traditional CPU-based computing resources, which either cannot meet the demands of whole-heart computation or are not easily available due to expensive costs. GPU as a parallel computing environment therefore provides an alternative to solve the large-scale computational problems of whole heart modeling. In this study, using a 3D sheep atrial model as a test bed, we developed a GPU-based simulation algorithm to simulate the conduction of electrical excitation waves in the 3D atria. In the GPU algorithm, a multicellular tissue model was split into two components: one is the single cell model (ordinary differential equation) and the other is the diffusion term of the monodomain model (partial differential equation). Such a decoupling enabled realization of the GPU parallel algorithm. Furthermore, several optimization strategies were proposed based on the features of the virtual heart model, which enabled a 200-fold speedup as compared to a CPU implementation. In conclusion, an optimized GPU algorithm has been developed that provides an economic and powerful platform for 3D whole heart simulations.
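
    The ODE/PDE decoupling that makes this parallelization possible can be sketched in one dimension. The cell model below is a toy linear relaxation, not the sheep atrial cell model, and the stencil is a minimal explicit scheme.

```python
# Hedged sketch of the operator splitting used for GPU parallelization: each
# timestep applies (1) an independent per-cell ODE update, which maps to one
# GPU thread per cell, and (2) a diffusion (PDE) update coupling neighbors.
def step(v, dt, d, v_rest=0.0, tau=10.0):
    n = len(v)
    # ODE part: every cell updates independently (embarrassingly parallel)
    v = [vi + dt * (v_rest - vi) / tau for vi in v]
    # PDE part: explicit 1D diffusion with no-flux boundaries
    out = []
    for i in range(n):
        left = v[i - 1] if i > 0 else v[i]
        right = v[i + 1] if i < n - 1 else v[i]
        out.append(v[i] + dt * d * (left - 2 * v[i] + right))
    return out
```

    On a GPU both loops become kernels over cells; only the diffusion kernel reads neighboring cells, which is why separating the two terms simplifies the parallel implementation.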

  11. Error Modeling and Design Optimization of Parallel Manipulators

    DEFF Research Database (Denmark)

    Wu, Guanglei

    challenges due to their highly nonlinear behaviors, thus, the parameter and performance analysis, especially the accuracy and stiffness, are particularly important. Toward the requirements of robotic technology such as light weight, compactness, high accuracy and low energy consumption, utilizing optimization...... technique in the design procedure is a suitable approach to handle these complex tasks. As there is no unified design guideline for the parallel manipulators, the study described in this thesis aims to provide a systematic analysis for this type of mechanisms in the early design stage, focusing on accuracy...... analysis and design optimization. The proposed approach is illustrated with the planar and spherical parallel manipulators. The geometric design, kinematic and dynamic analysis, kinetostatic modeling and stiffness analysis are also presented. Firstly, the study on the geometric architecture and kinematic...

  12. Calibration of parallel kinematics machine using generalized distance error model

    Institute of Scientific and Technical Information of China (English)

    2007-01-01

    This paper focuses on the accuracy enhancement of parallel kinematics machines through kinematic calibration. In the calibration process, construction of a well-structured identification Jacobian matrix and measurement of the end-effector position and orientation are two main difficulties. In this paper, the identification Jacobian matrix is constructed easily by numerical calculation utilizing the unit virtual velocity method. A generalized distance error model is presented to avoid directly measuring the position and orientation, which are difficult to measure. At last, a measurement tool is given for acquiring the data points in the calibration process. Experimental studies confirmed the effectiveness of the method. It is also shown in the paper that the proposed approach can be applied to other types of parallel manipulators.

  13. A continuous two stage solar coal gasification system

    Science.gov (United States)

    Mathur, V. K.; Breault, R. W.; Lakshmanan, S.; Manasse, F. K.; Venkataramanan, V.

    The characteristics of a two-stage fluidized-bed hybrid coal gasification system to produce syngas from coal, lignite, and peat are described. Devolatilization heat of 823 K is supplied by recirculating gas heated by a solar receiver/coal heater. A second-stage gasifier maintained at 1227 K serves to crack remaining tar and light oil to yield a product free from tar and other condensables, and sulfur can be removed by hot clean-up processes. CO is minimized because the coal is not burned with oxygen, and the product gas contains 50% H2. Bench scale reactors consist of a stage I unit 0.1 m in diam which is fed coal 200 microns in size. A stage II reactor has an inner diam of 0.36 m and serves to gasify the char from stage I. A solar power source of 10 kWt is required for the bench model, and will be obtained from a central receiver with quartz or heat pipe configurations for heat transfer.

  14. cellGPU: Massively parallel simulations of dynamic vertex models

    Science.gov (United States)

    Sussman, Daniel M.

    2017-10-01

    Vertex models represent confluent tissue by polygonal or polyhedral tilings of space, with the individual cells interacting via force laws that depend on both the geometry of the cells and the topology of the tessellation. This dependence on the connectivity of the cellular network introduces several complications to performing molecular-dynamics-like simulations of vertex models, and in particular makes parallelizing the simulations difficult. cellGPU addresses this difficulty and lays the foundation for massively parallelized, GPU-based simulations of these models. This article discusses its implementation for a pair of two-dimensional models, and compares the typical performance that can be expected between running cellGPU entirely on the CPU versus its performance when running on a range of commercial and server-grade graphics cards. By implementing the calculation of topological changes and forces on cells in a highly parallelizable fashion, cellGPU enables researchers to simulate time- and length-scales previously inaccessible via existing single-threaded CPU implementations.
    Program Files doi: http://dx.doi.org/10.17632/6j2cj29t3r.1
    Licensing provisions: MIT
    Programming language: CUDA/C++
    Nature of problem: Simulations of off-lattice "vertex models" of cells, in which the interaction forces depend on both the geometry and the topology of the cellular aggregate.
    Solution method: Highly parallelized GPU-accelerated dynamical simulations in which the force calculations and the topological features can be handled on either the CPU or GPU.
    Additional comments: The code is hosted at https://gitlab.com/dmsussman/cellGPU, with documentation additionally maintained at http://dmsussman.gitlab.io/cellGPUdocumentation

  15. Efficient Parallel Statistical Model Checking of Biochemical Networks

    Directory of Open Access Journals (Sweden)

    Paolo Ballarini

    2009-12-01

    We consider the problem of verifying stochastic models of biochemical networks against behavioral properties expressed in temporal logic terms. Exact probabilistic verification approaches such as, for example, CSL/PCTL model checking, are undermined by a huge computational demand which rules them out for most real case studies. Less demanding approaches, such as statistical model checking, estimate the likelihood that a property is satisfied by sampling executions out of the stochastic model. We propose a methodology for efficiently estimating the likelihood that an LTL property P holds for a stochastic model of a biochemical network. As with other statistical verification techniques, the methodology we propose uses a stochastic simulation algorithm for generating execution samples; however, there are three key aspects that improve the efficiency. First, the sample generation is driven by on-the-fly verification of P, which results in optimal overall simulation time. Second, the confidence interval estimation for the probability of P to hold is based on an efficient variant of the Wilson method, which ensures faster convergence. Third, the whole methodology is designed in a parallel fashion, and a prototype software tool has been implemented that performs the sampling/verification process in parallel over an HPC architecture.
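
    The plain Wilson score interval underlying the estimation step can be sketched as follows; note the paper uses an efficient variant, so this is only the textbook form.

```python
# Hedged sketch: Wilson score interval for the probability that sampled
# executions satisfy a property, given `successes` satisfying runs out of n.
import math

def wilson_interval(successes, n, z=1.96):
    """95% (z=1.96) Wilson score interval for a binomial proportion."""
    if n == 0:
        return (0.0, 1.0)
    p = successes / n
    denom = 1 + z * z / n
    center = (p + z * z / (2 * n)) / denom
    half = (z / denom) * math.sqrt(p * (1 - p) / n + z * z / (4 * n * n))
    return (center - half, center + half)
```

    A statistical model checker keeps drawing simulation runs until the interval is narrow enough, which is where the faster convergence claimed above pays off.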

  16. A systemic approach for modeling biological evolution using Parallel DEVS.

    Science.gov (United States)

    Heredia, Daniel; Sanz, Victorino; Urquia, Alfonso; Sandín, Máximo

    2015-08-01

    A new model for studying the evolution of living organisms is proposed in this manuscript. The proposed model is based on a non-neodarwinian systemic approach. The model is focused on considering several controversies and open discussions about modern evolutionary biology. Additionally, a simplification of the proposed model, named EvoDEVS, has been mathematically described using the Parallel DEVS formalism and implemented as a computer program using the DEVSLib Modelica library. EvoDEVS serves as an experimental platform to study different conditions and scenarios by means of computer simulations. Two preliminary case studies are presented to illustrate the behavior of the model and validate its results. EvoDEVS is freely available at http://www.euclides.dia.uned.es.

  17. Solar Impulsive Hard X-Ray Emission and Two-Stage Electron Acceleration

    Institute of Scientific and Technical Information of China (English)

    Tian-Xi Zhang; Arjun Tan; Shi Tsan Wu

    2006-01-01

    Heating and acceleration of electrons in solar impulsive hard X-ray (HXR) flares are studied according to the two-stage acceleration model developed by Zhang for solar 3He-rich events. It is shown that electrostatic H-cyclotron waves can be excited at a parallel phase velocity less than about the electron thermal velocity and thus can significantly heat the electrons (up to 40 MK) through Landau resonance. The preheated electrons with velocities above a threshold are further accelerated to high energies in the flare-acceleration process. The flare-produced electron spectrum is obtained and shown to be thermal at low energies and power law at high energies. In the non-thermal energy range, the spectrum can be double power law if the spectral power index is energy dependent or related. The electron energy spectrum obtained by this study agrees quantitatively with the result derived from the Reuven Ramaty High Energy Solar Spectroscopic Imager (RHESSI) HXR observations of the flare of 2002 July 23. The total flux and energy flux of electrons accelerated in the solar flare also agree with the measurements.

  18. Parallel algorithms for interactive manipulation of digital terrain models

    Science.gov (United States)

    Davis, E. W.; Mcallister, D. F.; Nagaraj, V.

    1988-01-01

    Interactive three-dimensional graphics applications, such as terrain data representation and manipulation, require extensive arithmetic processing. Massively parallel machines are attractive for this application since they offer high computational rates, and grid connected architectures provide a natural mapping for grid based terrain models. Presented here are algorithms for data movement on the massive parallel processor (MPP) in support of pan and zoom functions over large data grids. It is an extension of earlier work that demonstrated real-time performance of graphics functions on grids that were equal in size to the physical dimensions of the MPP. When the dimensions of a data grid exceed the processing array size, data is packed in the array memory. Windows of the total data grid are interactively selected for processing. Movement of packed data is needed to distribute items across the array for efficient parallel processing. Execution time for data movement was found to exceed that for arithmetic aspects of graphics functions. Performance figures are given for routines written in MPP Pascal.

  19. Ski Control Model for Parallel Turn Using Multibody System

    Science.gov (United States)

    Kawai, Shigehiro; Yamaguchi, Keishi; Sakata, Toshiyuki

    Now, it is possible to discuss qualitatively the effects of skis, skier’s ski control and slope on a ski turn by simulation. The reliability of a simulation depends on the accuracy of the models used in the simulation. In the present study, we attempt to develop a new ski control model for a “parallel turn” using a computer graphics technique. The “ski control” necessary for the simulation is the relative motion of the skier’s center of gravity to the ski and the force acting on the ski from the skier. The developed procedure is as follows. First, the skier is modeled using a multibody system consisting of body parts. Second, various postures of the skier during the “parallel turn” are drawn using a 3D-CAD (three dimensional computer aided design) system referring to the pictures videotaped on a slope. The position of the skier’s center of gravity is estimated from the produced posture. Third, the skier’s ski control is obtained by arranging these postures in a time schedule. One can watch the ski control on a TV. Last, the three types of forces acting on the ski from the skier are estimated from the gravity force and the three relative types of inertia forces acting on the skier. Consequently, one can obtain accurate ski control for the simulation of the “parallel turn”, that is, the relative motion of the skier’s center of gravity to the ski and the force acting on the ski from the skier. Furthermore, it follows that one can numerically estimate the edging angle from the ski control model.

  20. Hybrid fluid/kinetic model for parallel heat conduction

    Energy Technology Data Exchange (ETDEWEB)

    Callen, J.D.; Hegna, C.C.; Held, E.D. [Univ. of Wisconsin, Madison, WI (United States)

    1998-12-31

    It is argued that in order to use fluid-like equations to model low frequency (ω < ν) phenomena such as neoclassical tearing modes in low collisionality (ν < ω_b) tokamak plasmas, a Chapman-Enskog-like approach is most appropriate for developing an equation for the kinetic distortion (F) of the distribution function whose velocity-space moments lead to the needed fluid moment closure relations. Further, parallel heat conduction in a long collision mean free path regime can be described through a combination of a reduced phase space Chapman-Enskog-like approach for the kinetics and a multiple-time-scale analysis for the fluid and kinetic equations.

  1. Phase dynamics modeling of parallel stacks of Josephson junctions

    Science.gov (United States)

    Rahmonov, I. R.; Shukrinov, Yu. M.

    2014-11-01

    The phase dynamics of two parallel connected stacks of intrinsic Josephson junctions (JJs) in high temperature superconductors is numerically investigated. The calculations are based on the system of nonlinear differential equations obtained within the CCJJ + DC model, which allows one to determine the general current-voltage characteristic of the system, as well as each individual stack. The processes with increasing and decreasing base currents are studied. The features in the behavior of the current in each stack of the system due to the switching between the states with rotating and oscillating phases are analyzed.

  2. PKind: A parallel k-induction based model checker

    CERN Document Server

    Kahsai, Temesghen; 10.4204/EPTCS.72.6

    2011-01-01

    PKind is a novel parallel k-induction-based model checker of invariant properties for finite- or infinite-state Lustre programs. Its architecture, which is strictly message-based, is designed to minimize synchronization delays and easily accommodate the incorporation of incremental invariant generators to enhance basic k-induction. We describe PKind's functionality and main features, and present experimental evidence that PKind significantly speeds up the verification of safety properties and, due to incremental invariant generation, also considerably increases the number of provable ones.
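
    Basic k-induction, which PKind parallelizes and strengthens with incremental invariant generation, can be sketched on a toy finite transition system. Exhaustive path enumeration stands in here for the SMT solver queries a real checker would issue.

```python
# Hedged sketch of k-induction for a deterministic finite transition system.
# Base case: the property holds on the first k states reachable from init.
# Inductive step: k consecutive property-states force the property next.
def k_induction(init, step_fn, prop, states, k):
    frontier = {init}
    for _ in range(k):                      # base case
        if not all(prop(s) for s in frontier):
            return False
        frontier = {step_fn(s) for s in frontier}
    paths = [[s] for s in states]           # inductive step over all
    for _ in range(k):                      # length-(k+1) paths
        paths = [p + [step_fn(p[-1])] for p in paths]
    for p in paths:
        if all(prop(s) for s in p[:-1]) and not prop(p[-1]):
            return False
    return True
```

    For infinite-state Lustre programs the path enumeration is replaced by satisfiability queries, and increasing k makes more properties provable, which is what the incremental invariants in PKind accelerate.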

  3. STARS A Two Stage High Gain Harmonic Generation FEL Demonstrator

    Energy Technology Data Exchange (ETDEWEB)

    M. Abo-Bakr; W. Anders; J. Bahrdt; P. Budz; K.B. Buerkmann-Gehrlein; O. Dressler; H.A. Duerr; V. Duerr; W. Eberhardt; S. Eisebitt; J. Feikes; R. Follath; A. Gaupp; R. Goergen; K. Goldammer; S.C. Hessler; K. Holldack; E. Jaeschke; Thorsten Kamps; S. Klauke; J. Knobloch; O. Kugeler; B.C. Kuske; P. Kuske; A. Meseck; R. Mitzner; R. Mueller; M. Neeb; A. Neumann; K. Ott; D. Pfluckhahn; T. Quast; M. Scheer; Th. Schroeter; M. Schuster; F. Senf; G. Wuestefeld; D. Kramer; Frank Marhauser

    2007-08-01

    BESSY is proposing a demonstration facility, called STARS, for a two-stage high-gain harmonic generation free electron laser (HGHG FEL). STARS is planned for lasing in the wavelength range 40 to 70 nm, requiring a beam energy of 325 MeV. The facility consists of a normal conducting gun, three superconducting TESLA-type acceleration modules modified for CW operation, a single stage bunch compressor and finally a two-stage HGHG cascaded FEL. This paper describes the facility layout and the rationale behind the operation parameters.

  4. Parallel tempering and 3D spin glass models

    Science.gov (United States)

    Papakonstantinou, T.; Malakis, A.

    2014-03-01

    We review parallel tempering schemes and examine their main ingredients for accuracy and efficiency. We discuss two selection methods of temperatures and some alternatives for the exchange of replicas, including all-pair exchange methods. We measure specific heat errors and round-trip efficiency using the two-dimensional (2D) Ising model, and also test the efficiency for the ground state production in 3D spin glass models. We find that the optimization of the GS problem is highly influenced by the choice of the temperature range of the PT process. Finally, we present numerical evidence concerning the universality aspects of an anisotropic case of the 3D spin-glass model.
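
    The replica-exchange move reviewed here can be sketched with the standard adjacent-pair Metropolis acceptance rule; the energies and random source below are toy stand-ins for real spin-glass configurations.

```python
# Hedged sketch of one parallel-tempering sweep over adjacent temperature
# slots: replicas a (slot i) and b (slot i+1) swap with probability
# min(1, exp((beta[i+1]-beta[i]) * (E_b - E_a))).
import math, random

def try_swaps(energies, betas, rng):
    """Attempt swaps between adjacent slots; return which replica sits where."""
    order = list(range(len(betas)))  # order[i] = replica currently at slot i
    for i in range(len(betas) - 1):
        a, b = order[i], order[i + 1]
        delta = (betas[i + 1] - betas[i]) * (energies[b] - energies[a])
        if delta >= 0 or rng.random() < math.exp(delta):
            order[i], order[i + 1] = b, a
        return order
    return order
```

    Moving the low-energy configuration toward the colder slot is always accepted, which is how round trips between temperature extremes speed up ground-state production in the 3D spin-glass runs discussed above.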

  5. The parallel network dynamic DEA model with interval data

    Directory of Open Access Journals (Sweden)

    S. Keikha-Javan

    2014-09-01

    In original DEA models, precise data are used to measure relative efficiency, whereas in reality we do not always deal with precise data; when the data are imprecise, the resulting efficiency scores are expected to be imprecise as well. In this article, we apply the parallel network dynamic DEA model to imprecise data, in which the carry-overs between periods are treated as desirable and undesirable. Upper and lower bounds are then obtained for the overall, period, and divisional efficiencies, the latter computed with respect to the subunits of the DMU under evaluation. Finally, applying this model to a data set of branches of several banks in Iran, we compute the efficiency intervals.

  6. Methods to model-check parallel systems software.

    Energy Technology Data Exchange (ETDEWEB)

    Matlin, O. S.; McCune, W.; Lusk, E.

    2003-12-15

    We report on an effort to develop methodologies for formal verification of parts of the Multi-Purpose Daemon (MPD) parallel process management system. MPD is a distributed collection of communicating processes. While the individual components of the collection execute simple algorithms, their interaction leads to unexpected errors that are difficult to uncover by conventional means. Two verification approaches are discussed here: the standard model checking approach using the software model checker SPIN and the nonstandard use of a general-purpose first-order resolution-style theorem prover OTTER to conduct the traditional state space exploration. We compare modeling methodology and analyze performance and scalability of the two methods with respect to verification of MPD.

  7. A Reconfigurable Architecture for Rotation Invariant Multi-View Face Detection Based on a Novel Two-Stage Boosting Method

    Directory of Open Access Journals (Sweden)

    Zhengbin Pang

    2009-01-01

    We present a reconfigurable architecture model for rotation invariant multi-view face detection based on a novel two-stage boosting method. A tree-structured detector hierarchy is designed to organize multiple detector nodes identifying pose ranges of faces. We propose a boosting algorithm for training the detector nodes. The strong classifier in each detector node is composed of multiple novel two-stage weak classifiers. With a shared output space of multicomponent vectors, each detector node deals with multidimensional binary classification problems. The design of the hardware architecture, which fully exploits spatial and temporal parallelism, is introduced in detail. We also study the reconfiguration of the architecture for finding an appropriate tradeoff among hardware implementation cost, detection accuracy, and speed. Experiments on FPGA show that high accuracy and remarkable speed are achieved compared with previous related works. The execution time speedups range from 14.68 to 20.86 for images with sizes from 160×120 up to 800×600 when our FPGA design (98 MHz) is compared with a software solution on a PC (Pentium 4, 2.8 GHz).

  8. Two-stage bargaining with coverage extension in a dual labour market

    DEFF Research Database (Denmark)

    Roberts, Mark A.; Stæhr, Karsten; Tranæs, Torben

    2000-01-01

    This paper studies coverage extension in a simple general equilibrium model with a dual labour market. The union sector is characterized by two-stage bargaining whereas the firms set wages in the non-union sector. In this model firms and unions of the union sector have a commonality of interest...

  9. Energy consumption model over parallel programs implemented on multicore architectures

    Directory of Open Access Journals (Sweden)

    Ricardo Isidro-Ramirez

    2015-06-01

    In High Performance Computing, energy consumption is becoming an important aspect to consider. Due to the high cost of energy production in all countries, it plays an important role, and ways to save energy are sought. This is reflected in efforts to reduce the energy requirements of hardware components and applications. Some options have been appearing in order to scale down energy use and, consequently, scale up energy efficiency. One of these strategies is the multithread programming paradigm, whose purpose is to produce parallel programs able to use the full amount of computing resources available in a microprocessor. That energy saving strategy focuses on the efficient use of the multicore processors found in various computing devices, such as mobile devices. As a growing trend, multicore processors have been part of various special-purpose computers since 2003, from High Performance Computing servers to mobile devices. However, it is not clear how multiprogramming affects energy efficiency. This paper presents an analysis of different types of multicore-based architectures used in computing, and then a valid model is presented. Based on Amdahl's Law, a model that considers different scenarios of energy use in multicore architectures is proposed. Interesting results were found from experiments with the developed algorithm, which was executed in both parallel and sequential ways. A lower limit of energy consumption was found in one type of multicore architecture, and this behavior was observed experimentally.
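
    The Amdahl's-law-style reasoning can be sketched as follows. The idle-power parameter is an assumption of this illustration, not a value from the paper, which distinguishes more detailed architecture scenarios.

```python
# Hedged sketch: Amdahl speedup for parallel fraction f on n cores, plus a
# simple energy estimate in which idle cores draw a fraction `idle` of full
# power during the serial phase (illustrative assumption).
def amdahl_speedup(f, n):
    return 1.0 / ((1.0 - f) + f / n)

def energy_ratio(f, n, idle=0.3):
    """Energy on n cores relative to 1 core: during the serial phase the other
    n-1 cores draw `idle` power; during the parallel phase all n draw full power."""
    serial = (1.0 - f) * (1.0 + (n - 1) * idle)
    parallel = (f / n) * n
    return serial + parallel
```

    With idle=0, energy is constant in n under perfect scaling; any nonzero idle power makes energy grow with core count for mostly serial programs, which is one way a lower limit of consumption can appear at an intermediate architecture choice.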

  10. Efficient Two-Stage Group Testing Algorithms for DNA Screening

    CERN Document Server

    Huber, Michael

    2011-01-01

    Group testing algorithms are very useful tools for DNA library screening. Building on recent work by Levenshtein (2003) and Tonchev (2008), we construct in this paper new infinite classes of combinatorial structures, the existence of which are essential for attaining the minimum number of individual tests at the second stage of a two-stage disjunctive testing algorithm.
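
    A minimal two-stage screening scheme shows the test-count bookkeeping; the combinatorial designs the paper constructs replace the naive disjoint-pool partition used here, which is only an illustration.

```python
# Hedged sketch of two-stage group testing: stage 1 tests disjoint pools;
# stage 2 individually tests every item from a positive pool.
def two_stage_screen(items, defectives, pool_size):
    """Return the identified defective set and the total number of tests."""
    tests = 0
    candidates = []
    for i in range(0, len(items), pool_size):
        pool = items[i:i + pool_size]
        tests += 1                       # one test per pool in stage 1
        if any(x in defectives for x in pool):
            candidates.extend(pool)      # positive pool -> resolve in stage 2
    found = set()
    for x in candidates:
        tests += 1                       # individual confirmatory test
        if x in defectives:
            found.add(x)
    return found, tests
```

    With 100 items, one defective, and pools of 10, this uses 20 tests instead of 100 individual ones; the structures studied in the paper aim to minimize the stage-2 test count further.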

  11. High Performance Gasification with the Two-Stage Gasifier

    DEFF Research Database (Denmark)

    Gøbel, Benny; Hindsgaul, Claus; Henriksen, Ulrik Birk

    2002-01-01

    Based on more than 15 years of research and practical experience, the Technical University of Denmark (DTU) and COWI Consulting Engineers and Planners AS present the two-stage gasification process, a concept for high efficiency gasification of biomass producing negligible amounts of tars. In the two-stage gasification concept, the pyrolysis and the gasification processes are physically separated. The volatiles from the pyrolysis are partially oxidized, and the hot gases are used as gasification medium to gasify the char. Hot gases from the gasifier and a combustion unit can be used for drying... a cold gas efficiency exceeding 90% is obtained. In the original design of the two-stage gasification process, the pyrolysis unit consists of a screw conveyor with external heating, and the char unit is a fixed bed gasifier. This design is well proven during more than 1000 hours of testing with various...

  12. FREE GRAFT TWO-STAGE URETHROPLASTY FOR HYPOSPADIAS REPAIR

    Institute of Scientific and Technical Information of China (English)

    Zhong-jin Yue; Ling-jun Zuo; Jia-ji Wang; Gan-ping Zhong; Jian-ming Duan; Zhi-ping Wang; Da-shan Qin

    2005-01-01

    Objective To evaluate the effectiveness of free graft transplantation two-stage urethroplasty for hypospadias repair. Methods Fifty-eight cases with different types of hypospadias, including 10 subcoronal, 36 penile shaft, 9 scrotal, and 3 perineal, were treated with free full-thickness skin graft or (and) buccal mucosal graft transplantation two-stage urethroplasty. Of the 58 cases, 45 were new cases and 13 had a history of previous failed surgeries. The operative procedure included two stages: the first stage is to correct the penile curvature (chordee), prepare the transplanting bed, harvest and prepare the full-thickness skin graft or buccal mucosal graft, and perform graft transplantation. The second stage is to complete urethroplasty and glanuloplasty. Results After the first-stage operation, 56 of 58 cases (96.6%) were successful with grafts healing well; the other 2 foreskin grafts became gangrenous. After the second-stage operation on 56 cases, 5 cases failed with the newly formed urethras opened due to infection, 8 cases had fistulas, and 43 (76.8%) healed well. Conclusions Free graft transplantation two-stage urethroplasty for hypospadias repair is an effective treatment with broad indications, a comparatively high success rate, fewer complications, and good cosmetic results, and is suitable for repair of various types of hypospadias.

  13. A two-stage rank test using density estimation

    NARCIS (Netherlands)

    Albers, Willem/Wim

    1995-01-01

    For the one-sample problem, a two-stage rank test is derived which realizes a required power against a given local alternative, for all sufficiently smooth underlying distributions. This is achieved using asymptotic expansions resulting in a precision of order m⁻¹, where m is the size of the first

  14. Parallel multiscale modeling of biopolymer dynamics with hydrodynamic correlations

    CERN Document Server

    Fyta, Maria; Kaxiras, Efthimios; Melchionna, Simone; Bernaschi, Massimo; Succi, Sauro

    2007-01-01

    We employ a multiscale approach to model the translocation of biopolymers through nanometer size pores. Our computational scheme combines microscopic Molecular Dynamics (MD) with a mesoscopic Lattice Boltzmann (LB) method for the solvent dynamics, explicitly taking into account the interactions of the molecule with the surrounding fluid. We describe an efficient parallel implementation of the method which exhibits excellent scalability on the Blue Gene platform. We investigate both dynamical and statistical aspects of the translocation process by simulating polymers of various initial configurations and lengths. For a representative molecule size, we explore the effects of important parameters that enter in the simulation, paying particular attention to the strength of the molecule-solvent coupling and of the external electric field which drives the translocation process. Finally, we explore the connection between the generic polymers modeled in the simulation and DNA, for which interesting recent experimenta...

  15. Applying the Extended Parallel Process Model to workplace safety messages.

    Science.gov (United States)

    Basil, Michael; Basil, Debra; Deshpande, Sameer; Lavack, Anne M

    2013-01-01

    The extended parallel process model (EPPM) proposes that fear appeals are most effective when they combine threat and efficacy. Three studies conducted in the workplace safety context examine the use of various EPPM factors and their effects, especially multiplicative effects. Study 1 was a content analysis examining the use of EPPM factors in actual workplace safety messages. Study 2 experimentally tested these messages with 212 construction trainees. Study 3 replicated this experiment with 1,802 men across four English-speaking countries: Australia, Canada, the United Kingdom, and the United States. The results of these three studies (1) demonstrate the inconsistent use of EPPM components in real-world work safety communications, (2) support the necessity of self-efficacy for the effective use of threat, (3) show a multiplicative effect where communication effectiveness is maximized when all model components are present (severity, susceptibility, and efficacy), and (4) validate these findings with gory appeals across four English-speaking countries.

  16. Parallel Application Development Using Architecture View Driven Model Transformations

    NARCIS (Netherlands)

    Arkin, E.; Tekinerdogan, B.

    2015-01-01

    To realize the increased need for computing performance, the current trend is towards applying parallel computing, in which the tasks are run in parallel on multiple nodes. In turn, we can observe the rapid increase of the scale of parallel computing platforms. This situation has led to a complexity

  17. Method of oxygen-enriched two-stage underground coal gasification

    Institute of Scientific and Technical Information of China (English)

    Liu Hongtao; Chen Feng; Pan Xia; Yao Kai; Liu Shuqin

    2011-01-01

    Two-stage underground coal gasification was studied to improve the caloric value of the syngas and to extend gas production times. A model test using the oxygen-enriched two-stage coal gasification method was carried out. The composition of the gas produced, the time ratio of the two stages, and the role of the temperature field were analysed. The results show that oxygen-enriched two-stage gasification shortens the time of the first stage and prolongs the time of the second stage. Feed oxygen concentrations of 30%, 35%, 40%, 45%, 60%, or 80% gave time ratios (first stage to second stage) of 1:0.12, 1:0.21, 1:0.51, 1:0.64, 1:0.90, and 1:4.0, respectively. Cooling rates of the temperature field after steam injection decreased with time from about 19.1-27.4 ℃/min to 2.3-6.8 ℃/min, but this rate increased with increasing oxygen concentrations in the first stage. The caloric value of the syngas improves with increased oxygen concentration in the first stage. Injection of 80% oxygen-enriched air gave gas with the highest caloric value and also gave the longest production time. The caloric value of the gas obtained from the oxygen-enriched two-stage gasification method lies in the range from 5.31 MJ/Nm3 to 10.54 MJ/Nm3.

  18. Accuracy of the One-Stage and Two-Stage Impression Techniques: A Comparative Analysis

    Directory of Open Access Journals (Sweden)

    Ladan Jamshidy

    2016-01-01

    Full Text Available Introduction. One of the main steps of impression making is the selection and preparation of an appropriate tray. Hence, the present study aimed to analyze and compare the accuracy of the one- and two-stage impression techniques. Materials and Methods. A laboratory-made resin model of a first molar was prepared by a standard method for full crowns, with a processed preparation finish line of 1 mm depth and a convergence angle of 3-4°. Impressions were made 20 times with the one-stage technique and 20 times with the two-stage technique using an appropriate tray. To measure the marginal gap, the distance between the restoration margin and the preparation finish line of the plaster dies was determined vertically at the mid-mesial, distal, buccal, and lingual (MDBL) regions by a stereomicroscope using a standard method. Results. The results of the independent test showed that the mean marginal gap obtained by the one-stage impression technique was higher than that of the two-stage impression technique. Further, there was no significant difference between the one- and two-stage impression techniques in the mid-buccal region, but a significant difference was found between the two impression techniques in the MDL regions and in general. Conclusion. The findings of the present study indicate higher accuracy for the two-stage impression technique than for the one-stage impression technique.
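The comparison of mean marginal gaps rests on an independent two-sample test. A minimal sketch of Welch's t statistic, a common choice when group variances may differ (the gap values in the usage note are hypothetical, not the study's data):

```python
from statistics import mean, variance

def welch_t(a, b):
    """Welch's independent two-sample t statistic for comparing two group
    means (e.g. marginal gaps from two impression techniques)."""
    va, vb = variance(a), variance(b)          # sample variances (n-1)
    se = (va / len(a) + vb / len(b)) ** 0.5    # standard error of the difference
    return (mean(a) - mean(b)) / se
```

For example, `welch_t([10, 12, 11, 13], [8, 9, 8, 9])` gives a large positive t, consistent with the first group's mean gap being higher; the statistic would then be compared against a t distribution with Welch-Satterthwaite degrees of freedom.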

  19. Analysis of a Delayed Epidemic Model with Two Stage-Structure and Saturation Incidence

    Institute of Scientific and Technical Information of China (English)

    曹瑾; 唐蕾; 武佳; 崔然

    2013-01-01

    An SIS epidemic model with saturation incidence and two-stage structure is discussed in this paper. Using the discrete dynamical system determined by the stroboscopic map, the threshold is obtained. If the threshold is less than one, a sufficient condition for the global asymptotic stability of the infection-free equilibrium is obtained. Moreover, we show that the endemic equilibrium is locally asymptotically stable and the disease is permanent if the threshold is larger than one.
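The threshold behaviour described above can be illustrated numerically with a stripped-down SIS model with saturated incidence (no delay or stage structure; parameter values are illustrative, not the paper's):

```python
def simulate_sis(beta, gamma, alpha, s0, i0, dt=0.01, steps=100000):
    """Euler integration of a basic SIS model with saturated incidence
    beta*S*I/(1 + alpha*I); a toy stand-in for the paper's delayed
    two-stage-structure system."""
    s, i = s0, i0
    for _ in range(steps):
        new_inf = beta * s * i / (1.0 + alpha * i)
        ds = -new_inf + gamma * i   # susceptibles lost to infection, regained on recovery
        di = new_inf - gamma * i
        s, i = s + dt * ds, i + dt * di
    return s, i
```

With the basic reproduction number beta/gamma below one the infected fraction decays to zero (infection-free equilibrium); above one it settles at a positive endemic level.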

  20. Tarmo: A Framework for Parallelized Bounded Model Checking

    CERN Document Server

    Wieringa, Siert; Heljanko, Keijo; 10.4204/EPTCS.14.5

    2009-01-01

    This paper investigates approaches to parallelizing Bounded Model Checking (BMC) for shared memory environments as well as for clusters of workstations. We present a generic framework for parallelized BMC named Tarmo. Our framework can be used with any incremental SAT encoding for BMC but for the results in this paper we use only the current state-of-the-art encoding for full PLTL. Using this encoding allows us to check both safety and liveness properties, contrary to an earlier work on distributing BMC that is limited to safety properties only. Despite our focus on BMC after it has been translated to SAT, existing distributed SAT solvers are not well suited for our application. This is because solving a BMC problem is not solving a set of independent SAT instances but rather involves solving multiple related SAT instances, encoded incrementally, where the satisfiability of each instance corresponds to the existence of a counterexample of a specific length. Our framework includes a generic architecture for a ...

  1. Unified Singularity Modeling and Reconfiguration of 3rTPS Metamorphic Parallel Mechanisms with Parallel Constraint Screws

    Directory of Open Access Journals (Sweden)

    Yufeng Zhuang

    2015-01-01

    Full Text Available This paper presents a unified singularity modeling and reconfiguration analysis of variable topologies of a class of metamorphic parallel mechanisms with parallel constraint screws. The new parallel mechanisms consist of three reconfigurable rTPS limbs that have two working phases stemming from the reconfigurable Hooke (rT) joint. While one phase has full mobility, the other supplies a constraint force to the platform. Based on these, the platform constraint screw systems show that the new metamorphic parallel mechanisms have four topologies by altering the limb phases, with mobility change among 1R2T (one rotation with two translations), 2R2T, and 3R2T, and mobility 6. Geometric conditions of the mechanism design are investigated, with some special topologies illustrated considering the limb arrangement. Following this and the actuation scheme analysis, a unified Jacobian matrix is formed using screw theory to include the change between geometric constraints and actuation constraints in the topology reconfiguration. Various singular configurations are identified by analyzing screw dependency in the Jacobian matrix. The work in this paper provides a basis for singularity-free workspace analysis and optimal design of this class of metamorphic parallel mechanisms with parallel constraint screws, which shows simple geometric constraints with potentially simple kinematics and dynamics properties.

  2. Parallel programming practical aspects, models and current limitations

    CERN Document Server

    Tarkov, Mikhail S

    2014-01-01

    Parallel programming is designed for the use of parallel computer systems for solving time-consuming problems that cannot be solved on a sequential computer in a reasonable time. These problems can be divided into two classes: (1) processing large data arrays (including processing images and signals in real time); (2) simulation of complex physical processes and chemical reactions. For each of these classes, prospective methods are designed for solving problems. For data processing, one of the most promising technologies is the use of artificial neural networks. The particle-in-cell method and cellular automata are very useful for simulation. Problems of the scalability of parallel algorithms and the transfer of existing parallel programs to future parallel computers are very acute now. An important task is to optimize the use of the equipment (including the CPU cache) of parallel computers. Along with parallelizing information processing, it is essential to ensure the processing reliability by the relevant organization ...

  3. Requirements and Problems in Parallel Model Development at DWD

    Directory of Open Access Journals (Sweden)

    Ulrich Schättler

    2000-01-01

    Full Text Available Nearly 30 years after introducing the first computer model for weather forecasting, the Deutscher Wetterdienst (DWD) is developing the 4th generation of its numerical weather prediction (NWP) system. It consists of a global grid point model (GME) based on a triangular grid and a non-hydrostatic Lokal Modell (LM). The operational demand for running this new system is immense and can only be met by parallel computers. From the experience gained in developing earlier NWP models, several new problems had to be taken into account during the design phase of the system. Most important were portability (including efficiency of the programs on several computer architectures) and ease of code maintainability. Also, the organization and administration of the work done by developers from different teams and institutions is more complex than it used to be. This paper describes the models and gives some performance results. The modular approach used for the design of the LM is explained and the effects on the development are discussed.

  4. Dynamic modeling of Tampa Bay urban development using parallel computing

    Science.gov (United States)

    Xian, G.; Crane, M.; Steinwand, D.

    2005-01-01

    Urban land use and land cover has changed significantly in the environs of Tampa Bay, Florida, over the past 50 years. Extensive urbanization has created substantial change to the region's landscape and ecosystems. This paper uses a dynamic urban-growth model, SLEUTH, which applies six geospatial data themes (slope, land use, exclusion, urban extent, transportation, hillside), to study the process of urbanization and associated land use and land cover change in the Tampa Bay area. To reduce processing time and complete the modeling process within an acceptable period, the model is recoded and ported to a Beowulf cluster. The parallel-processing computer system accomplishes the massive amount of computation the modeling simulation requires; the SLEUTH calibration process for the Tampa Bay urban growth simulation takes only 10 h of CPU time. The model predicts future land use/cover change trends for Tampa Bay from 1992 to 2025. Urban extent is predicted to double in the Tampa Bay watershed between 1992 and 2025. Results show an upward trend of urbanization at the expense of declines of 58% and 80% in agricultural and forested lands, respectively. © 2005 Elsevier Ltd. All rights reserved.

  5. "Let's Move" campaign: applying the extended parallel process model.

    Science.gov (United States)

    Batchelder, Alicia; Matusitz, Jonathan

    2014-01-01

    This article examines Michelle Obama's health campaign, "Let's Move," through the lens of the extended parallel process model (EPPM). "Let's Move" aims to reduce the childhood obesity epidemic in the United States. Developed by Kim Witte, EPPM rests on the premise that people's attitudes can be changed when fear is exploited as a factor of persuasion. Fear appeals work best (a) when a person feels a concern about the issue or situation, and (b) when he or she believes to have the capability of dealing with that issue or situation. Overall, the analysis found that "Let's Move" is based on past health campaigns that have been successful. An important element of the campaign is the use of fear appeals (as it is postulated by EPPM). For example, part of the campaign's strategies is to explain the severity of the diseases associated with obesity. By looking at the steps of EPPM, readers can also understand the strengths and weaknesses of "Let's Move."

  6. Parallel imaging enhanced MR colonography using a phantom model.

    LENUS (Irish Health Repository)

    Morrin, Martina M

    2008-09-01

    To compare various Array Spatial and Sensitivity Encoding Technique (ASSET)-enhanced T2W SSFSE (single shot fast spin echo) and T1-weighted (T1W) 3D SPGR (spoiled gradient recalled echo) sequences for polyp detection and image quality at MR colonography (MRC) in a phantom model. Limitations of MRC using standard 3D SPGR T1W imaging include the long breath-hold required to cover the entire colon within one acquisition and the relatively low spatial resolution due to the long acquisition time. Parallel imaging using ASSET-enhanced T2W SSFSE and 3D T1W SPGR imaging results in much shorter imaging times, which allows for increased spatial resolution.

  7. Parallel Semi-Implicit Spectral Element Atmospheric Model

    Science.gov (United States)

    Fournier, A.; Thomas, S.; Loft, R.

    2001-05-01

    The shallow-water equations (SWE) have long been used to test atmospheric-modeling numerical methods. The SWE contain essential wave-propagation and nonlinear effects of more complete models. We present a semi-implicit (SI) improvement of the Spectral Element Atmospheric Model to solve the SWE (SEAM, Taylor et al. 1997, Fournier et al. 2000, Thomas & Loft 2000). SE methods are h-p finite element methods combining the geometric flexibility of size-h finite elements with the accuracy of degree-p spectral methods. Our work suggests that exceptional parallel-computation performance is achievable by a General-Circulation-Model (GCM) dynamical core, even at modest climate-simulation resolutions (>1°). The code derivation involves weak variational formulation of the SWE, Gauss(-Lobatto) quadrature over the collocation points, and Legendre cardinal interpolators. Appropriate weak variation yields a symmetric positive-definite Helmholtz operator. To meet the Ladyzhenskaya-Babuska-Brezzi inf-sup condition and avoid spurious modes, we use a staggered grid. The SI scheme combines leapfrog and Crank-Nicolson schemes for the nonlinear and linear terms, respectively. The localization of operations to elements ideally fits the method to cache-based microprocessor computer architectures: derivatives are computed as collections of small (8x8), naturally cache-blocked matrix-vector products. SEAM also has desirable boundary-exchange communication, like finite-difference models. Timings on the IBM SP and Compaq ES40 supercomputers indicate that the SI code (20-min timestep) requires 1/3 the CPU time of the explicit code (2-min timestep) for T42 resolutions. Both codes scale nearly linearly out to 400 processors. We achieved single-processor performance up to 30% of peak for both codes on the 375-MHz IBM Power-3 processors. Fast computation and linear scaling lead to a useful climate-simulation dycore only if enough model time is computed per unit wall-clock time. An efficient SI

  8. Square Kilometre Array station configuration using two-stage beamforming

    CERN Document Server

    Jiwani, Aziz; Razavi-Ghods, Nima; Hall, Peter J; Padhi, Shantanu; de Vaate, Jan Geralt bij

    2012-01-01

    The lowest frequency band (70 - 450 MHz) of the Square Kilometre Array will consist of sparse aperture arrays grouped into geographically-localised patches, or stations. Signals from thousands of antennas in each station will be beamformed to produce station beams which form the inputs for the central correlator. Two-stage beamforming within stations can reduce SKA-low signal processing load and costs, but has not been previously explored for the irregular station layouts now favoured in radio astronomy arrays. This paper illustrates the effects of two-stage beamforming on sidelobes and effective area, for two representative station layouts (regular and irregular gridded tile on an irregular station). The performance is compared with a single-stage, irregular station. The inner sidelobe levels do not change significantly between layouts, but the more distant sidelobes are affected by the tile layouts; regular tile creates diffuse, but regular, grating lobes. With very sparse arrays, the station effective area...

  9. Two stage sorption type cryogenic refrigerator including heat regeneration system

    Science.gov (United States)

    Jones, Jack A.; Wen, Liang-Chi; Bard, Steven

    1989-01-01

    A lower stage chemisorption refrigeration system physically and functionally coupled to an upper stage physical adsorption refrigeration system is disclosed. Waste heat generated by the lower stage cycle is regenerated to fuel the upper stage cycle thereby greatly improving the energy efficiency of a two-stage sorption refrigerator. The two stages are joined by disposing a first pressurization chamber providing a high pressure flow of a first refrigerant for the lower stage refrigeration cycle within a second pressurization chamber providing a high pressure flow of a second refrigerant for the upper stage refrigeration cycle. The first pressurization chamber is separated from the second pressurization chamber by a gas-gap thermal switch which at times is filled with a thermoconductive fluid to allow conduction of heat from the first pressurization chamber to the second pressurization chamber.

  10. Two-Stage MAS Technique for Analysis of DRA Elements and Arrays on Finite Ground Planes

    DEFF Research Database (Denmark)

    Larsen, Niels Vesterdal; Breinbjerg, Olav

    2007-01-01

    A two-stage Method of Auxiliary Sources (MAS) technique is proposed for analysis of dielectric resonator antenna (DRA) elements and arrays on finite ground planes (FGPs). The problem is solved by first analysing the DRA on an infinite ground plane (IGP) and then using this solution to model the FGP problem.

  11. Development of a heavy-duty diesel engine with two-stage turbocharging

    NARCIS (Netherlands)

    Sturm, L.; Kruithof, J.

    2001-01-01

    A mean value model was developed using the Matrixx/SystemBuild simulation tool for designing real-time control algorithms for the two-stage engine. All desired characteristics are achieved, apart from a lower A/F ratio at lower engine speeds, and the turbocharger match calculations. The CAN bus is used to

  12. Two-stage, dilute sulfuric acid hydrolysis of wood : an investigation of fundamentals

    Science.gov (United States)

    John F. Harris; Andrew J. Baker; Anthony H. Conner; Thomas W. Jeffries; James L. Minor; Roger C. Pettersen; Ralph W. Scott; Edward L Springer; Theodore H. Wegner; John I. Zerbe

    1985-01-01

    This paper presents a fundamental analysis of the processing steps in the production of methanol from southern red oak (Quercus falcata Michx.) by two-stage dilute sulfuric acid hydrolysis. Data for hemicellulose and cellulose hydrolysis are correlated using models. This information is used to develop and evaluate a process design.

  13. Multiphysics & Parallel Kinematics Modeling of a 3DOF MEMS Mirror

    Directory of Open Access Journals (Sweden)

    Mamat N.

    2015-01-01

    Full Text Available This paper presents a model for a 3DoF electrothermally actuated micro-electro-mechanical (MEMS) mirror used to achieve scanning for optical coherence tomography (OCT) imaging. The device is integrated into an OCT endoscopic probe; it is desired that the optical scanner have a small footprint for minimum invasiveness, a large and flat optical aperture for a large scanning range, and low driving voltage and low power consumption for safety reasons. With a footprint of 2 mm × 2 mm, the MEMS scanner, also called a tip-tilt-piston micro-mirror, can perform two rotations around the x- and y-axes and a vertical translation along the z-axis. This work develops a complete model and experimental characterization. The modeling is divided into two parts: multiphysics characterization of the actuators and parallel kinematics studies of the overall system. With proper experimental procedures, we are able to validate the model via the Visual Servoing Platform (ViSP). The results give a detailed overview of the performance of the mirror platform while varying the applied voltage at a stable working frequency. The paper also presents a discussion of the MEMS control system based on several scanning trajectories.
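The parallel-kinematics part of such a model maps actuator displacements to the platform pose. A small-angle sketch under assumed geometry (hypothetical actuator positions, not the paper's device parameters): fit the plane z = z0 + a·x + b·y through three actuator tips, giving the piston z0 and the two tilt slopes.

```python
def tip_tilt_piston(pts, heights):
    """Small-angle forward kinematics of a 3-actuator tip-tilt-piston stage:
    given actuator (x, y) positions and tip heights z, solve the 3x3 system
    for the plane z = z0 + a*x + b*y by Cramer's rule.
    Returns (piston z0, tilt slope a about y, tilt slope b about x)."""
    def det3(m):
        return (m[0][0] * (m[1][1] * m[2][2] - m[1][2] * m[2][1])
              - m[0][1] * (m[1][0] * m[2][2] - m[1][2] * m[2][0])
              + m[0][2] * (m[1][0] * m[2][1] - m[1][1] * m[2][0]))
    A = [[1.0, x, y] for (x, y) in pts]
    d = det3(A)
    sol = []
    for j in range(3):
        Aj = [row[:] for row in A]          # replace column j with the heights
        for i in range(3):
            Aj[i][j] = heights[i]
        sol.append(det3(Aj) / d)
    return tuple(sol)
```

Inverting the same relation gives the per-actuator height targets for a commanded (piston, tilt, tilt) pose, which is the direction needed for trajectory control.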

  14. A two-stage model for blog feed search

    NARCIS (Netherlands)

    Weerkamp, W.; Balog, K.; de Rijke, M.

    2010-01-01

    We consider blog feed search: identifying relevant blogs for a given topic. An individual's search behavior often involves a combination of exploratory behavior triggered by salient features of the information objects being examined plus goal-directed in-depth information seeking behavior. We presen

  15. Optimizing electricity distribution using two-stage integer recourse models

    NARCIS (Netherlands)

    Klein Haneveld, W.K.; van der Vlerk, Maarten H.

    2000-01-01

    We consider two planning problems faced by an electricity distributor. Electricity can be obtained both from power plants and small generators such as hospitals and greenhouses, whereas the future demand for electricity is uncertain. The price of electricity obtained from the power plants depends o

  16. Optimizing electricity distribution using two-stage integer recourse models

    NARCIS (Netherlands)

    Klein Haneveld, W.K.; van der Vlerk, M.H.; Uryasev, SP; Pardalos, PM

    2001-01-01

    We consider two planning problems faced by an electricity distributor. Electricity can be obtained both from power plants and small generators such as hospitals and greenhouses, whereas the future demand for electricity is uncertain. The price of electricity obtained from the power plants depends on

  17. A two-stage model for blog feed search

    NARCIS (Netherlands)

    Weerkamp, W.; Balog, K.; de Rijke, M.

    2010-01-01

    We consider blog feed search: identifying relevant blogs for a given topic. An individual's search behavior often involves a combination of exploratory behavior triggered by salient features of the information objects being examined plus goal-directed in-depth information seeking behavior. We

  18. Optimizing electricity distribution using two-stage integer recourse models

    Energy Technology Data Exchange (ETDEWEB)

    Klein Haneveld, W.K.; Van der Vlerk, M.H.

    2000-05-01

    We consider two planning problems faced by an electricity distributor. Electricity can be obtained both from power plants and small generators such as hospitals and greenhouses, whereas the future demand for electricity is uncertain. The price of electricity obtained from the power plants depends on quota that are to be determined in a yearly contract, whereas the (given) contracts with small generators contain various constraints on switching them on or off. 14 refs.
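The structure of such a two-stage recourse problem can be sketched in a toy continuous form (illustrative prices and demand scenarios, and without the paper's integer switching constraints on the small generators): commit to a contract quota now, then buy any scenario shortfall at a higher spot price.

```python
def expected_cost(quota, scenarios, c_contract, c_spot):
    """Two-stage recourse objective: first-stage contract cost plus the
    expected second-stage cost of buying each scenario's shortfall at
    the spot price."""
    return c_contract * quota + sum(
        p * c_spot * max(demand - quota, 0) for demand, p in scenarios)

# Hypothetical data: (demand, probability) scenarios, contract price 1.0,
# spot price 3.0. Enumerate quotas to find the expected-cost minimizer.
scenarios = [(80, 0.3), (100, 0.5), (130, 0.2)]
best_q = min(range(0, 151),
             key=lambda q: expected_cost(q, scenarios, 1.0, 3.0))
```

The optimum is a newsvendor-style quantile: raise the quota while the spot price times the shortfall probability exceeds the contract price. Real models of this kind are solved with stochastic (integer) programming rather than enumeration.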

  19. A model for dealing with parallel processes in supervision

    Directory of Open Access Journals (Sweden)

    Lilja Cajvert

    2011-03-01

    Supervision in social work is essential for successful outcomes when working with clients. In social work, unconscious difficulties may arise, and similar difficulties may occur in supervision as parallel processes. In this article, the development of a practice-based model of supervision to deal with parallel processes in supervision is described. The model has six phases. In the first phase, the focus is on the supervisor's inner world, his/her own reflections and observations. In the second phase, the supervision situation is "frozen", and the supervisees are invited to join the supervisor in taking a meta-perspective on the current situation of supervision. The focus in the third phase is on the inner world of all the group members as well as the visualization and identification of reflections and feelings that arose during the supervision process. Phase four focuses on the supervisee who presented a case, and in phase five the focus shifts to the common understanding and theorization of the supervision process as well as the definition and identification of possible parallel processes. In the final phase, the supervisee, with the assistance of the supervisor and other members of the group, develops a solution and determines how to proceed with the client in treatment. This article uses phenomenological concepts to provide a theoretical framework for the supervision model. Phenomenological reduction is an important approach to examine, externalize, and visualize the inner worlds of the supervisor and supervisees.

  20. Measuring the Learning from Two-Stage Collaborative Group Exams

    CERN Document Server

    Ives, Joss

    2014-01-01

    A two-stage collaborative exam is one in which students first complete the exam individually, and then complete the same or similar exam in collaborative groups immediately afterward. To quantify the learning effect from the group component of these two-stage exams in an introductory Physics course, a randomized crossover design was used where each student participated in both the treatment and control groups. For each of the two two-stage collaborative group midterm exams, questions were designed to form matched near-transfer pairs with questions on an end-of-term diagnostic which was used as a learning test. For learning test questions paired with questions from the first midterm, which took place six to seven weeks before the learning test, an analysis using a mixed-effects logistic regression found no significant differences in learning-test performance between the control and treatment group. For learning test questions paired with questions from the second midterm, which took place one to two weeks prio...

  1. Parallel processing for efficient 3D slope stability modelling

    Science.gov (United States)

    Marchesini, Ivan; Mergili, Martin; Alvioli, Massimiliano; Metz, Markus; Schneider-Muntau, Barbara; Rossi, Mauro; Guzzetti, Fausto

    2014-05-01

    We test the performance of the GIS-based, three-dimensional slope stability model r.slope.stability. The model was developed as a C- and python-based raster module of the GRASS GIS software. It considers the three-dimensional geometry of the sliding surface, adopting a modification of the model proposed by Hovland (1977), and revised and extended by Xie and co-workers (2006). Given a terrain elevation map and a set of relevant thematic layers, the model evaluates the stability of slopes for a large number of randomly selected potential slip surfaces, ellipsoidal or truncated in shape. Any single raster cell may be intersected by multiple sliding surfaces, each associated with a value of the factor of safety, FS. For each pixel, the minimum value of FS and the depth of the associated slip surface are stored. This information is used to obtain a spatial overview of the potentially unstable slopes in the study area. We test the model in the Collazzone area, Umbria, central Italy, an area known to be susceptible to landslides of different type and size. Availability of a comprehensive and detailed landslide inventory map allowed for a critical evaluation of the model results. The r.slope.stability code automatically splits the study area into a defined number of tiles, with proper overlap in order to provide the same statistical significance for the entire study area. The tiles are then processed in parallel by a given number of processors, exploiting a multi-purpose computing environment at CNR IRPI, Perugia. The map of the FS is obtained collecting the individual results, taking the minimum values on the overlapping cells. This procedure significantly reduces the processing time. We show how the gain in terms of processing time depends on the tile dimensions and on the number of cores.
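The tile-split-and-merge scheme described above can be sketched as follows. This is a hedged illustration using a thread pool and a dummy per-tile factor-of-safety function, not the actual r.slope.stability code: split the raster into overlapping row bands, process bands in parallel, and keep the minimum FS wherever bands overlap.

```python
from concurrent.futures import ThreadPoolExecutor

def factor_of_safety(tile):
    """Stand-in for the per-tile slope-stability computation: here just a
    toy function of elevation; the real model scans many ellipsoidal slip
    surfaces per cell and records the minimum FS."""
    return [[1.0 + z / 100.0 for z in row] for row in tile]

def min_fs_map(dem, tile_rows, overlap):
    """Split the DEM into overlapping row bands, process the bands in
    parallel, then merge by taking the minimum FS on overlapping cells."""
    n = len(dem)
    bands = []
    for top in range(0, n, tile_rows):
        lo = max(0, top - overlap)
        hi = min(n, top + tile_rows + overlap)
        bands.append((lo, dem[lo:hi]))
    with ThreadPoolExecutor() as pool:
        results = list(pool.map(lambda b: (b[0], factor_of_safety(b[1])), bands))
    out = [[float("inf")] * len(dem[0]) for _ in range(n)]
    for lo, fs in results:
        for i, row in enumerate(fs):
            out[lo + i] = [min(a, b) for a, b in zip(out[lo + i], row)]
    return out
```

The overlap ensures slip surfaces near tile edges are evaluated in both neighbouring tiles, so the min-merge reproduces the single-tile result; tile size then trades memory per worker against scheduling overhead.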

  2. Tarmo: A Framework for Parallelized Bounded Model Checking

    Directory of Open Access Journals (Sweden)

    Siert Wieringa

    2009-12-01

Full Text Available This paper investigates approaches to parallelizing Bounded Model Checking (BMC) for shared memory environments as well as for clusters of workstations. We present a generic framework for parallelized BMC named Tarmo. Our framework can be used with any incremental SAT encoding for BMC but for the results in this paper we use only the current state-of-the-art encoding for full PLTL. Using this encoding allows us to check both safety and liveness properties, contrary to an earlier work on distributing BMC that is limited to safety properties only. Despite our focus on BMC after it has been translated to SAT, existing distributed SAT solvers are not well suited for our application. This is because solving a BMC problem is not solving a set of independent SAT instances but rather involves solving multiple related SAT instances, encoded incrementally, where the satisfiability of each instance corresponds to the existence of a counterexample of a specific length. Our framework includes a generic architecture for a shared clause database that allows easy clause sharing between SAT solver threads solving various such instances. We present extensive experimental results obtained with multiple variants of our Tarmo implementation. Our shared memory variants have a significantly better performance than conventional single threaded approaches, which is a result that many users can benefit from as multi-core and multi-processor technology is widely available. Furthermore we demonstrate that our framework can be deployed in a typical cluster of workstations, where several multi-core machines are connected by a network.
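
The shared clause database idea can be illustrated with a toy, thread-safe store; Tarmo's real architecture is considerably more elaborate (incremental encodings, per-solver filtering), so treat this purely as a sketch of the interface:

```python
import threading

class SharedClauseDB:
    """Toy shared clause database: SAT-solver threads publish clauses they
    learn and pull clauses published by others. Illustrative only."""

    def __init__(self):
        self._lock = threading.Lock()
        self._clauses = []              # ordered list of frozensets of literals

    def publish(self, clause):
        c = frozenset(clause)
        with self._lock:
            if c not in self._clauses:  # deduplicate up to literal order
                self._clauses.append(c)

    def pull(self, known):
        """Return the clauses the calling solver has not seen yet."""
        with self._lock:
            return [c for c in self._clauses if c not in known]
```

Each solver thread would keep its own `known` set and periodically import `db.pull(known)` between incremental solving calls.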

  3. Forty-five-degree two-stage venous cannula: advantages over standard two-stage venous cannulation.

    Science.gov (United States)

    Lawrence, D R; Desai, J B

    1997-01-01

    We present a 45-degree two-stage venous cannula that confers advantage to the surgeon using cardiopulmonary bypass. This cannula exits the mediastinum under the transverse bar of the sternal retractor, leaving the rostral end of the sternal incision free of apparatus. It allows for lifting of the heart with minimal effect on venous return and does not interfere with the radially laid out sutures of an aortic valve replacement using an interrupted suture technique.

  4. An intracooling system for a novel two-stage sliding-vane air compressor

    Science.gov (United States)

    Murgia, Stefano; Valenti, Gianluca; Costanzo, Ida; Colletta, Daniele; Contaldi, Giulio

    2017-08-01

Lube-oil injection is used in positive-displacement compressors and, among them, in sliding-vane machines to guarantee the correct lubrication of the moving parts and as a sealant to prevent air leakage. Furthermore, lube-oil injection allows the lubricant to serve also as a thermal ballast with a great thermal capacity, minimizing the temperature increase during compression. This study presents the design of a two-stage sliding-vane rotary compressor in which the air cooling is operated by high-pressure cold oil injection into a connection duct between the two stages. The heat exchange between the atomized oil jet and the air results in a decrease of the air temperature before the second stage, improving the overall system efficiency. This cooling system is named here intracooling, as opposed to intercooling. The oil injection is realized via pressure-swirl nozzles, both within the compressors and inside the intracooling duct. The design of the two-stage sliding-vane compressor is accomplished by way of a lumped parameter model. The model predicts an input power reduction as large as 10% for intercooled and intracooled two-stage compressors, the latter being slightly better, with respect to a conventional single-stage compressor for compressed air applications. An experimental campaign is conducted on a first prototype that comprises the low-pressure compressor and the intracooling duct, indicating that a significant temperature reduction is achieved in the duct.

  5. Parallelized CCHE2D flow model with CUDA Fortran on Graphics Process Units

    Science.gov (United States)

    This paper presents the CCHE2D implicit flow model parallelized using CUDA Fortran programming technique on Graphics Processing Units (GPUs). A parallelized implicit Alternating Direction Implicit (ADI) solver using Parallel Cyclic Reduction (PCR) algorithm on GPU is developed and tested. This solve...
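
As a sketch of the Parallel Cyclic Reduction algorithm named above, here is a serial emulation for a tridiagonal system; on a GPU every iteration of the inner loop would run in its own thread. It illustrates the method only and is unrelated to the CCHE2D source code:

```python
def pcr_tridiag(a, b, c, d):
    """Solve a tridiagonal system by Parallel Cyclic Reduction (serial
    emulation). a: sub-diagonal (a[0] ignored), b: diagonal,
    c: super-diagonal (c[-1] ignored), d: right-hand side."""
    n = len(b)
    a = [0.0] + list(a[1:])
    b = list(b)
    c = list(c[:-1]) + [0.0]
    d = list(d)
    s = 1
    while s < n:                        # ceil(log2 n) reduction steps
        na, nb, nc, nd = a[:], b[:], c[:], d[:]
        for i in range(n):              # fully parallel on a GPU
            if i - s >= 0:              # eliminate x[i-s] using equation i-s
                alpha = -a[i] / b[i - s]
                na[i] = alpha * a[i - s]
                nb[i] = b[i] + alpha * c[i - s]
                nd[i] = d[i] + alpha * d[i - s]
            else:
                na[i], nb[i], nd[i] = 0.0, b[i], d[i]
            if i + s < n:               # eliminate x[i+s] using equation i+s
                beta = -c[i] / b[i + s]
                nc[i] = beta * c[i + s]
                nb[i] += beta * a[i + s]
                nd[i] += beta * d[i + s]
            else:
                nc[i] = 0.0
        a, b, c, d = na, nb, nc, nd
        s *= 2
    return [d[i] / b[i] for i in range(n)]  # systems are now fully decoupled
```

After each step the stride between coupled unknowns doubles, so all equations decouple after about log2(n) steps; this is what makes the method attractive on GPUs compared with the inherently sequential Thomas algorithm.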

  6. Parallel family trees for transfer matrices in the Potts model

    CERN Document Server

    Navarro, Cristobal A; Kahler, Nancy Hitschfeld; Navarro, Gonzalo

    2013-01-01

The computational cost of transfer matrix methods for the Potts model is directly related to the problem of \\textit{in how many ways can two adjacent blocks of a lattice be connected}. Answering this question leads to the generation of a combinatorial set of lattice configurations. This set defines the \\textit{configuration space} of the problem, and the smaller it is, the faster the transfer matrix method can be. The configuration space of generic transfer matrix methods for strip lattices in the Potts model is in the order of the Catalan numbers, leading to an asymptotic cost of $O(4^m)$ with $m$ being the width of the strip. Transfer matrix methods with a smaller configuration space indeed exist but they make assumptions on the temperature, number of spin states, or restrict the topology of the lattice in order to work. In this paper we propose a general and parallel transfer matrix method, based on family trees, that uses a sub-Catalan configuration space of size $O(3^m)$. The improvement is achieved by...
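
A quick numeric check of the growth rates quoted above. Note the advantage of the $O(3^m)$ space is asymptotic: the Catalan numbers grow like $4^m / (m^{3/2}\sqrt{\pi})$, so for small strip widths $3^m$ can actually exceed them:

```python
from math import comb

def catalan(m):
    # closed form C_m = binom(2m, m) / (m + 1)
    return comb(2 * m, m) // (m + 1)

# compare the Catalan-sized configuration space with a 3^m-sized one
for m in (8, 16, 24, 32):
    print(m, catalan(m), 3 ** m)
```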

  7. Time efficient 3-D electromagnetic modeling on massively parallel computers

    Energy Technology Data Exchange (ETDEWEB)

    Alumbaugh, D.L.; Newman, G.A.

    1995-08-01

A numerical modeling algorithm has been developed to simulate the electromagnetic response of a three dimensional earth to a dipole source for frequencies ranging from 100 to 100MHz. The numerical problem is formulated in terms of a frequency domain--modified vector Helmholtz equation for the scattered electric fields. The resulting differential equation is approximated using a staggered finite difference grid which results in a linear system of equations for which the matrix is sparse and complex symmetric. The system of equations is solved using a preconditioned quasi-minimum-residual method. Dirichlet boundary conditions are employed at the edges of the mesh by setting the tangential electric fields equal to zero. At frequencies less than 1MHz, normal grid stretching is employed to mitigate unwanted reflections off the grid boundaries. For frequencies greater than this, absorbing boundary conditions must be employed by making the stretching parameters of the modified vector Helmholtz equation complex which introduces loss at the boundaries. To allow for faster calculation of realistic models, the original serial version of the code has been modified to run on a massively parallel architecture. This modification involves three distinct tasks: (1) mapping the finite difference stencil to a processor stencil which allows for the necessary information to be exchanged between processors that contain adjacent nodes in the model, (2) determining the most efficient method to input the model which is accomplished by dividing the input into "global" and "local" data and then reading the two sets in differently, and (3) deciding how to output the data which is an inherently nonparallel process.

  8. On Two-stage Seamless Adaptive Design in Clinical Trials

    Directory of Open Access Journals (Sweden)

    Shein-Chung Chow

    2008-12-01

Full Text Available In recent years, the use of adaptive design methods in clinical research and development based on accrued data has become very popular because of its efficiency and flexibility in modifying trial and/or statistical procedures of ongoing clinical trials. One of the most commonly considered adaptive designs is probably a two-stage seamless adaptive trial design that combines two separate studies into one single study. In many cases, study endpoints considered in a two-stage seamless adaptive design may be similar but different (e.g. a biomarker versus a regular clinical endpoint), or the same study endpoint with different treatment durations. In this case, it is important to determine how the data collected from both stages should be combined for the final analysis. It is also of interest to know how the sample size calculation/allocation should be done for achieving the study objectives originally set for the two stages (separate studies). In this article, formulas for sample size calculation/allocation are derived for cases in which the study endpoints are continuous, discrete (e.g. binary responses), or time-to-event data, assuming that there is a well-established relationship between the study endpoints at different stages, and that the study objectives at different stages are the same. In cases in which the study objectives at different stages are different (e.g. dose finding at the first stage and efficacy confirmation at the second stage), and when there is a shift in patient population caused by protocol amendments, the derived test statistics and formulas for sample size calculation and allocation are necessarily modified for controlling the overall type I error at the prespecified level.
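
As background, the classical fixed-design two-sample sample-size formula for a continuous endpoint — the textbook building block that such two-stage formulas generalise, not the article's own derivation — can be sketched as:

```python
from math import ceil
from statistics import NormalDist

def n_per_group(delta, sigma, alpha=0.05, power=0.80):
    """Per-group sample size for a two-sample z-test of mean difference
    delta with common standard deviation sigma:
        n = 2 * sigma^2 * (z_{1-alpha/2} + z_{power})^2 / delta^2
    (fixed design; two-stage designs adjust this to control type I error)."""
    z = NormalDist().inv_cdf
    return ceil(2 * ((z(1 - alpha / 2) + z(power)) * sigma / delta) ** 2)

# e.g. detecting a difference of half a standard deviation at 80% power:
print(n_per_group(delta=0.5, sigma=1.0))   # -> 63 per group
```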

  9. The Rochester Checkers Player: Multi-Model Parallel Programming for Animate Vision

    Science.gov (United States)

    1991-06-01

    parallel programming is likely to serve for all tasks, however. Early vision algorithms are intensely data parallel, often utilizing fine-grain parallel computations that share an image, while cognition algorithms decompose naturally by function, often consisting of loosely-coupled, coarse-grain parallel units. A typical animate vision application will likely consist of many tasks, each of which may require a different parallel programming model, and all of which must cooperate to achieve the desired behavior. These multi-model programs require an

  10. Two-stage series array SQUID amplifier for space applications

    Science.gov (United States)

    Tuttle, J. G.; DiPirro, M. J.; Shirron, P. J.; Welty, R. P.; Radparvar, M.

We present test results for a two-stage integrated SQUID amplifier which uses a series array of d.c. SQUIDs to amplify the signal from a single input SQUID. The device was developed by Welty and Martinis at NIST and recent versions have been manufactured by HYPRES, Inc. Shielding and filtering techniques were employed during the testing to minimize the external noise. Energy resolution of 300 ħ was demonstrated using a d.c. excitation at frequencies above 1 kHz, and better than 500 ħ resolution was typical down to 300 Hz.

  11. A Two Stage Classification Approach for Handwritten Devanagari Characters

    CERN Document Server

    Arora, Sandhya; Nasipuri, Mita; Malik, Latesh

    2010-01-01

The paper presents a two-stage classification approach for handwritten Devanagari characters. The first stage uses structural properties such as the shirorekha and the spine of a character; the second stage exploits intersection features of characters, which are fed to a feedforward neural network. A simple histogram-based method does not work for finding the shirorekha or vertical bar (spine) in handwritten Devanagari characters, so we designed a differential-distance-based technique to find a near-straight line for the shirorekha and spine. This approach has been tested on 50000 samples, achieving an 89.12% success rate.

  12. Straw Gasification in a Two-Stage Gasifier

    DEFF Research Database (Denmark)

    Bentzen, Jens Dall; Hindsgaul, Claus; Henriksen, Ulrik Birk

    2002-01-01

    Additive-prepared straw pellets were gasified in the 100 kW two-stage gasifier at The Department of Mechanical Engineering of the Technical University of Denmark (DTU). The fixed bed temperature range was 800-1000°C. In order to avoid bed sintering, as observed earlier with straw gasification...... residues were examined after the test. No agglomeration or sintering was observed in the ash residues. The tar content was measured both by solid phase amino adsorption (SPA) method and cold trapping (Petersen method). Both showed low tar contents (~42 mg/Nm3 without gas cleaning). The particle content...

  13. Two-Stage Fan I: Aerodynamic and Mechanical Design

    Science.gov (United States)

    Messenger, H. E.; Kennedy, E. E.

    1972-01-01

    A two-stage, highly-loaded fan was designed to deliver an overall pressure ratio of 2.8 with an adiabatic efficiency of 83.9 percent. At the first rotor inlet, design flow per unit annulus area is 42 lbm/sec/sq ft (205 kg/sec/sq m), hub/tip ratio is 0.4 with a tip diameter of 31 inches (0.787 m), and design tip speed is 1450 ft/sec (441.96 m/sec). Other features include use of multiple-circular-arc airfoils, resettable stators, and split casings over the rotor tip sections for casing treatment tests.

  14. Two-Stage Eagle Strategy with Differential Evolution

    CERN Document Server

    Yang, Xin-She

    2012-01-01

Efficiency of an optimization process is largely determined by the search algorithm and its fundamental characteristics. In a given optimization, a single type of algorithm is used in most applications. In this paper, we will investigate the Eagle Strategy recently developed for global optimization, which uses a two-stage strategy by combining two different algorithms to improve the overall search efficiency. We will discuss this strategy with differential evolution and then evaluate their performance by solving real-world optimization problems such as pressure vessel and speed reducer design. Results suggest that we can reduce the computing effort by a factor of up to 10 in many applications.
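
The two-stage idea can be illustrated with a bare-bones sketch: coarse random global exploration followed by differential-evolution refinement. The paper's first stage actually uses Lévy flights, and the population size and DE parameters below are arbitrary choices, not the paper's:

```python
import random

def eagle_strategy(f, bounds, n_explore=200, de_iters=100, seed=1):
    """Stage 1: random global exploration; keep the best 20 points.
    Stage 2: DE/rand/1-style refinement of that population.
    A generic illustration of two-stage search, not the paper's algorithm."""
    rng = random.Random(seed)
    dim = len(bounds)
    rand_point = lambda: [rng.uniform(lo, hi) for lo, hi in bounds]
    pop = sorted((rand_point() for _ in range(n_explore)), key=f)[:20]
    F, CR = 0.7, 0.9                      # DE mutation / crossover settings
    for _ in range(de_iters):
        for i in range(len(pop)):
            r1, r2, r3 = rng.sample([p for j, p in enumerate(pop) if j != i], 3)
            trial = [r1[k] + F * (r2[k] - r3[k]) if rng.random() < CR
                     else pop[i][k] for k in range(dim)]
            trial = [min(max(t, lo), hi) for t, (lo, hi) in zip(trial, bounds)]
            if f(trial) <= f(pop[i]):     # greedy selection
                pop[i] = trial
    return min(pop, key=f)

sphere = lambda x: sum(v * v for v in x)
best = eagle_strategy(sphere, [(-5.0, 5.0)] * 2)
```

On this smooth test function the cheap exploration stage mostly saves DE from wasting evaluations far from the basin; on multimodal problems it is what keeps the local stage from converging to a poor optimum.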

  15. Selecting Simulation Models when Predicting Parallel Program Behaviour

    OpenAIRE

    Broberg, Magnus; Lundberg, Lars; Grahn, Håkan

    2002-01-01

The use of multiprocessors is an important way to increase the performance of a supercomputing program. This means that the program has to be parallelized to make use of the multiple processors. The parallelization is unfortunately not an easy task. Development tools supporting parallel programs are important. Further, it is the customer that decides the number of processors in the target machine, and as a result the developer has to make sure that the program runs efficiently on any numbe...

  16. Thermal design of two-stage evaporative cooler based on thermal comfort criterion

    Science.gov (United States)

    Gilani, Neda; Poshtiri, Amin Haghighi

    2017-04-01

Performance of two-stage evaporative coolers at various outdoor air conditions was numerically studied, and its geometric and physical characteristics were obtained based on thermal comfort criteria. For this purpose, a mathematical model was developed based on conservation equations of mass, momentum and energy to determine heat and mass transfer characteristics of the system. The results showed that a two-stage indirect/direct cooler can provide the thermal comfort condition when outdoor air temperature and relative humidity are located in the range of 34-54 °C and 10-60 %, respectively. Moreover, as the relative humidity of the ambient air rises, a two-stage evaporative cooler with a smaller direct and a larger indirect cooler will be needed. In buildings with high cooling demand, thermal comfort may be achieved at a greater air change per hour number, and thus an expensive two-stage evaporative cooler with a higher electricity consumption would be required. Finally, a design guideline was proposed to determine the size of required plate heat exchangers at various operating conditions.
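
An effectiveness-based back-of-envelope model of the indirect/direct arrangement can be sketched as follows. The effectiveness values are assumed for illustration (not from the paper), and the wet-bulb temperature is treated as unchanged across the indirect stage; it actually drops a little there, so this slightly overestimates the achievable supply temperature:

```python
def two_stage_supply_temp(t_dry, t_wet, eff_indirect=0.6, eff_direct=0.9):
    """Supply air temperature of an indirect stage (sensible cooling only,
    humidity ratio unchanged) followed by a direct, adiabatic stage.
    Effectiveness is defined against the wet-bulb depression."""
    t_after_indirect = t_dry - eff_indirect * (t_dry - t_wet)
    return t_after_indirect - eff_direct * (t_after_indirect - t_wet)

# hot, dry ambient: 40 degC dry-bulb, 20 degC wet-bulb
print(two_stage_supply_temp(40.0, 20.0))   # -> 20.8 degC
```

The indirect stage matters precisely because it cools without adding moisture, which is why humid ambients call for a larger indirect and smaller direct stage, as the abstract notes.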

  18. Optimisation of two-stage screw expanders for waste heat recovery applications

    Science.gov (United States)

    Read, M. G.; Smith, I. K.; Stosic, N.

    2015-08-01

It has previously been shown that the use of two-phase screw expanders in power generation cycles can achieve an increase in the utilisation of available energy from a low temperature heat source when compared with more conventional single-phase turbines. However, screw expander efficiencies are more sensitive to expansion volume ratio than turbines, and this increases as the expander inlet vapour dryness fraction decreases. For single-stage screw machines with low inlet dryness, this can lead to under expansion of the working fluid and low isentropic efficiency for the expansion process. The performance of the cycle can potentially be improved by using a two-stage expander, consisting of a low pressure machine and a smaller high pressure machine connected in series. By expanding the working fluid over two stages, the built-in volume ratios of the two machines can be selected to provide a better match with the overall expansion process, thereby increasing efficiency for particular inlet and discharge conditions. The mass flow rate through both stages must however be matched, and the compromise between increasing efficiency and maximising power output must also be considered. This research uses a rigorous thermodynamic screw machine model to compare the performance of single and two-stage expanders over a range of operating conditions. The model allows optimisation of the required intermediate pressure in the two-stage expander, along with the rotational speed and built-in volume ratio of both screw machine stages. The results allow the two-stage machine to be fully specified in order to achieve maximum efficiency for a required power output.
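
A textbook rule of thumb for splitting a two-stage expansion — equal stage pressure ratios via a geometric-mean intermediate pressure, with the built-in volume ratio implied by a polytropic relation — gives a starting point for the kind of optimisation the abstract describes. The polytropic exponent here is an assumed value, and the paper optimises the intermediate pressure numerically rather than using this shortcut:

```python
def two_stage_split(p_in, p_out, n_poly=1.15):
    """Equal-pressure-ratio split of an expansion from p_in to p_out and
    the per-stage built-in volume ratio from p * v**n = const.
    Illustrative sizing sketch only (n_poly is an assumption)."""
    p_mid = (p_in * p_out) ** 0.5        # geometric-mean intermediate pressure
    stage_pr = p_in / p_mid              # identical for both stages
    vol_ratio = stage_pr ** (1.0 / n_poly)
    return p_mid, vol_ratio
```

Splitting the overall ratio this way keeps each stage's built-in volume ratio modest, which is exactly the under-expansion problem the two-stage layout is meant to relieve.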

  19. Two-stage perceptual learning to break visual crowding.

    Science.gov (United States)

    Zhu, Ziyun; Fan, Zhenzhi; Fang, Fang

    2016-01-01

    When a target is presented with nearby flankers in the peripheral visual field, it becomes harder to identify, which is referred to as crowding. Crowding sets a fundamental limit of object recognition in peripheral vision, preventing us from fully appreciating cluttered visual scenes. We trained adult human subjects on a crowded orientation discrimination task and investigated whether crowding could be completely eliminated by training. We discovered a two-stage learning process with this training task. In the early stage, when the target and flankers were separated beyond a certain distance, subjects acquired a relatively general ability to break crowding, as evidenced by the fact that the breaking of crowding could transfer to another crowded orientation, even a crowded motion stimulus, although the transfer to the opposite visual hemi-field was weak. In the late stage, like many classical perceptual learning effects, subjects' performance gradually improved and showed specificity to the trained orientation. We also found that, when the target and flankers were spaced too finely, training could only reduce, rather than completely eliminate, the crowding effect. This two-stage learning process illustrates a learning strategy for our brain to deal with the notoriously difficult problem of identifying peripheral objects in clutter. The brain first learned to solve the "easy and general" part of the problem (i.e., improving the processing resolution and segmenting the target and flankers) and then tackle the "difficult and specific" part (i.e., refining the representation of the target).

  20. Runway Operations Planning: A Two-Stage Heuristic Algorithm

    Science.gov (United States)

    Anagnostakis, Ioannis; Clarke, John-Paul

    2003-01-01

The airport runway is a scarce resource that must be shared by different runway operations (arrivals, departures and runway crossings). Given the possible sequences of runway events, careful Runway Operations Planning (ROP) is required if runway utilization is to be maximized. From the perspective of departures, ROP solutions are aircraft departure schedules developed by optimally allocating runway time for departures given the time required for arrivals and crossings. In addition to the obvious objective of maximizing throughput, other objectives, such as guaranteeing fairness and minimizing environmental impact, can also be incorporated into the ROP solution subject to constraints introduced by Air Traffic Control (ATC) procedures. This paper introduces a two-stage heuristic algorithm for solving the Runway Operations Planning (ROP) problem. In the first stage, sequences of departure class slots and runway crossing slots are generated and ranked based on departure runway throughput under stochastic conditions. In the second stage, the departure class slots are populated with specific flights from the pool of available aircraft, by solving an integer program with a Branch & Bound algorithm implementation. Preliminary results from this implementation of the two-stage algorithm on real-world traffic data are presented.
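
A toy version of the two-stage decomposition: rank class orderings by total separation time, then fill the slots with flights. The separation values are illustrative (not real ATC standards), and a greedy fill stands in for the paper's branch-and-bound integer program:

```python
from itertools import permutations

# minimum time (s) the runway is blocked behind a departure of each wake
# class -- assumed illustrative numbers only
SEP = {"Heavy": 120, "Medium": 90, "Light": 60}

def rank_class_sequences(classes):
    """Stage 1: rank orderings of departure classes by total separation
    time (lower total = higher throughput proxy)."""
    cost = lambda seq: sum(SEP[leader] for leader in seq[:-1])
    return sorted(set(permutations(classes)), key=cost)

def fill_slots(seq, flights):
    """Stage 2 (simplified): put the earliest-ready flight of the right
    class into each slot."""
    ready = {}
    for cls, flight in flights:
        ready.setdefault(cls, []).append(flight)
    for cls in ready:
        ready[cls].sort()
    return [ready[cls].pop(0) for cls in seq]

best = rank_class_sequences(["Heavy", "Light", "Light"])[0]
schedule = fill_slots(best, [("Heavy", "BA1"), ("Light", "EZ2"), ("Light", "EZ3")])
```

Separating the two stages keeps the combinatorics manageable: sequencing works on a handful of classes, not individual aircraft, and the assignment step only has to respect an already-fixed class pattern.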

  1. Improved modelling of a parallel plate active magnetic regenerator

    DEFF Research Database (Denmark)

    Engelbrecht, Kurt; Tušek, J.; Nielsen, Kaspar Kirstein;

    2013-01-01

    flow maldistribution in the regenerator. This paper studies the effects of these loss mechanisms and compares theoretical results with experimental results obtained on an experimental AMR device. Three parallel plate regenerators were tested, each having different demagnetizing field characteristics...

  2. Parallel implementation of approximate atomistic models of the AMOEBA polarizable model

    Science.gov (United States)

    Demerdash, Omar; Head-Gordon, Teresa

    2016-11-01

In this work we present a replicated data hybrid OpenMP/MPI implementation of a hierarchical progression of approximate classical polarizable models that yields speedups of up to ∼10 compared to the standard OpenMP implementation of the exact parent AMOEBA polarizable model. In addition, our parallel implementation exhibits reasonable weak and strong scaling. The resulting parallel software will prove useful for those who are interested in how molecular properties converge in the condensed phase with respect to the many-body expansion (MBE); it also provides a fruitful test bed for exploring different electrostatic embedding schemes, and offers an interesting possibility for future exascale computing paradigms.

  3. Parallel Implementations of Modeling Dynamical Systems by Using System of Ordinary Differential Equations

    Institute of Scientific and Technical Information of China (English)

    Cao Hong-qing; Kang Li-shan; Yu Jing-xian

    2003-01-01

First, an asynchronous distributed parallel evolutionary modeling algorithm (PEMA) for building models of systems of ordinary differential equations for dynamical systems is proposed in this paper. Then a series of parallel experiments was conducted to systematically test the influence of some important parallel control parameters on the performance of the algorithm. The many experimental results obtained are analysed and explained.

  4. Two-Stage Part-Based Pedestrian Detection

    DEFF Research Database (Denmark)

    Møgelmose, Andreas; Prioletti, Antonio; Trivedi, Mohan M.

    2012-01-01

Detecting pedestrians is still a challenging task for automotive vision systems due to the extreme variability of targets, lighting conditions, occlusions, and high speed vehicle motion. A lot of research has been focused on this problem in the last 10 years and detectors based on classifiers have...... gained a special place among the different approaches presented. This work presents a state-of-the-art pedestrian detection system based on a two-stage classifier. Candidates are extracted with a Haar cascade classifier trained with the DaimlerDB dataset and then validated through part-based HOG...... of several metrics, such as detection rate, false positives per hour, and frame rate. The novelty of this system lies in the combination of a part-based HOG approach, tracking based on specific optimized features, and porting to a real prototype....

  5. Laparoscopic management of a two staged gall bladder torsion

    Institute of Scientific and Technical Information of China (English)

    2015-01-01

Gall bladder torsion (GBT) is a relatively uncommon entity and rarely diagnosed preoperatively. A constant factor in all occurrences of GBT is a freely mobile gall bladder due to congenital or acquired anomalies. GBT is commonly observed in elderly white females. We report a 77-year-old, Caucasian lady who was originally diagnosed with gall bladder perforation but was eventually found to have a two staged torsion of the gall bladder with twisting of the Riedel's lobe (part of a tongue-like projection of liver segment 4A). This combination, to the best of our knowledge, has not been reported in the literature. We performed laparoscopic cholecystectomy and she had an uneventful postoperative period. GBT may create a diagnostic dilemma in the context of acute cholecystitis. Timely diagnosis and intervention is necessary, with extra care while operating as the anatomy is generally distorted. The fundus first approach can be useful due to altered anatomy in the region of Calot's triangle. Laparoscopic cholecystectomy has the benefit of early recovery.

  6. Lightweight Concrete Produced Using a Two-Stage Casting Process

    Directory of Open Access Journals (Sweden)

    Jin Young Yoon

    2015-03-01

Full Text Available The type of lightweight aggregate and its volume fraction in a mix determine the density of lightweight concrete. Minimizing the density obviously requires a higher volume fraction, but this usually causes aggregate segregation in a conventional mixing process. This paper proposes a two-stage casting process to produce lightweight concrete. This process involves placing lightweight aggregates in a frame and then filling in the remaining interstitial voids with cementitious grout. The casting process results in the lowest density of lightweight concrete, which consequently has low compressive strength. The irregularly shaped aggregates compensate for the weak point in terms of strength, while the round-shaped aggregates provide a strength of 20 MPa. Therefore, the proposed casting process can be applied for manufacturing non-structural elements and structural composites requiring a very low density and a strength of at most 20 MPa.

  7. TWO-STAGE OCCLUDED OBJECT RECOGNITION METHOD FOR MICROASSEMBLY

    Institute of Scientific and Technical Information of China (English)

    WANG Huaming; ZHU Jianying

    2007-01-01

A two-stage object recognition algorithm robust to occlusion is presented for microassembly. Coarse localization determines whether the template is present in the image and approximately where it is; fine localization then gives its accurate position. In coarse localization, a local feature that is invariant to translation, rotation and occlusion is used to form signatures. By comparing the signature of the template with that of the image, an approximate transformation parameter from template to image is obtained, which is used as the initial parameter value for fine localization. In fine localization, an objective function of the transformation parameter is constructed and minimized to achieve sub-pixel localization accuracy. Occluded pixels are not taken into account in the objective function, so the localization accuracy is not influenced by the occlusion.
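
A minimal sketch of an occlusion-ignoring objective for the fine-localization step. It is translation-only for brevity (the method above also handles rotation), and a fixed residual threshold stands in for the paper's occlusion handling; all names and values here are illustrative:

```python
def occlusion_aware_ssd(image, template, dx, dy, occ_thresh=50):
    """Sum of squared differences at offset (dx, dy), skipping pixels whose
    residual exceeds occ_thresh on the assumption they are occluded."""
    total = 0
    for i, row in enumerate(template):
        for j, t in enumerate(row):
            r = image[dy + i][dx + j] - t
            if abs(r) <= occ_thresh:     # ignore probably-occluded pixels
                total += r * r
    return total

def best_offset(image, template):
    """Brute-force minimisation of the objective over integer offsets."""
    h, w = len(template), len(template[0])
    offsets = [(dx, dy) for dy in range(len(image) - h + 1)
                        for dx in range(len(image[0]) - w + 1)]
    return min(offsets, key=lambda o: occlusion_aware_ssd(image, template, *o))
```

Because large residuals are excluded from the sum, a partially covered target still scores near zero at the correct offset, which is the point of leaving occluded pixels out of the objective.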

  8. Two-stage designs for cross-over bioequivalence trials.

    Science.gov (United States)

    Kieser, Meinhard; Rauch, Geraldine

    2015-07-20

    The topic of applying two-stage designs in the field of bioequivalence studies has recently gained attention in the literature and in regulatory guidelines. While there exists some methodological research on the application of group sequential designs in bioequivalence studies, implementation of adaptive approaches has focused up to now on superiority and non-inferiority trials. Especially, no comparison of the features and performance characteristics of these designs has been performed, and therefore, the question of which design to employ in this setting remains open. In this paper, we discuss and compare 'classical' group sequential designs and three types of adaptive designs that offer the option of mid-course sample size recalculation. A comprehensive simulation study demonstrates that group sequential designs can be identified, which show power characteristics that are similar to those of the adaptive designs but require a lower average sample size. The methods are illustrated with a real bioequivalence study example.
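
Why such designs need adjusted decision boundaries can be shown by simulation: naively testing at an unadjusted 5% level at both an interim look and the final look inflates the overall type I error to roughly 8%. The sketch below uses a simple one-sample z-test under the null, not the two-one-sided-tests bioequivalence setting of the paper:

```python
import random
import statistics

def naive_two_look_type1(n1=50, n2=50, alpha=0.05, sims=5000, seed=7):
    """Monte Carlo type I error when a zero-mean unit-variance sample is
    tested at an interim look (n1) and a final look (n1 + n2), each at the
    unadjusted two-sided level alpha."""
    rng = random.Random(seed)
    z_crit = statistics.NormalDist().inv_cdf(1 - alpha / 2)
    rejections = 0
    for _ in range(sims):
        x1 = [rng.gauss(0, 1) for _ in range(n1)]
        z1 = statistics.fmean(x1) * n1 ** 0.5      # z statistic at the interim
        if abs(z1) > z_crit:
            rejections += 1
            continue
        x = x1 + [rng.gauss(0, 1) for _ in range(n2)]
        z2 = statistics.fmean(x) * len(x) ** 0.5   # z statistic at the final look
        if abs(z2) > z_crit:
            rejections += 1
    return rejections / sims
```

The inflation (about 0.083 for two equally spaced looks) is exactly what group sequential boundaries and the article's modified test statistics are designed to remove.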

  9. The hybrid two stage anticlockwise cycle for ecological energy conversion

    Directory of Open Access Journals (Sweden)

    Cyklis Piotr

    2016-01-01

Full Text Available The anticlockwise cycle is commonly used for refrigeration, air conditioning and heat pump applications. The application of a refrigerant in the compression cycle is limited to temperatures between the triple point and the critical point. New refrigerants such as 1234yf or 1234ze have many disadvantages, therefore the application of natural refrigerants is favourable. Carbon dioxide and water can be applied only in the hybrid two-stage cycle. The possibilities of this solution are shown for refrigerating applications, along with some experimental results of the two-stage adsorption-compression cycle powered by solar collectors. The adsorption system is applied as the high-temperature cycle; the low-temperature cycle is the compression stage with carbon dioxide as the working fluid. This allows a relatively high COP to be achieved for the low-temperature cycle and for the whole system.

  10. Coupled Models and Parallel Simulations for Three-Dimensional Full-Stokes Ice Sheet Modeling

    Energy Technology Data Exchange (ETDEWEB)

    Zhang, Huai; Ju, Lili

    2011-01-01

A three-dimensional full-Stokes computational model is considered for determining the dynamics, temperature, and thickness of ice sheets. The governing thermomechanical equations consist of the three-dimensional full-Stokes system with nonlinear rheology for the momentum, an advective-diffusion energy equation for temperature evolution, and a mass conservation equation for ice-thickness changes. Here, we discuss the variable resolution meshes, the finite element discretizations, and the parallel algorithms employed by the model components. The solvers are integrated through a well-designed coupler for the exchange of parametric data between components. The discretization utilizes high-quality, variable-resolution centroidal Voronoi Delaunay triangulation meshing and existing parallel solvers. We demonstrate the gridding technology, discretization schemes, and the efficiency and scalability of the parallel solvers through computational experiments using both simplified geometries arising from benchmark test problems and a realistic Greenland ice sheet geometry.

  11. Two Stage Assessment of Thermal Hazard in An Underground Mine

    Science.gov (United States)

    Drenda, Jan; Sułkowski, Józef; Pach, Grzegorz; Różański, Zenon; Wrona, Paweł

    2016-06-01

    The results of research into the application of selected thermal indices of men's work and climate indices in a two-stage assessment of climatic work conditions in underground mines are presented in this article. The difference between these two kinds of indices was pointed out during the project entitled "The recruiting requirements for miners working in hot underground mine environments", coordinated by the Institute of Mining Technologies at the Silesian University of Technology as part of the Polish strategic project "Improvement of safety in mines" financed by the National Centre of Research and Development. Climate indices are based only on the physical parameters of the air and their measurements. Thermal indices include additional factors strictly connected with the work itself, e.g. the thermal resistance of clothing and the kind of work. Special emphasis has been put on the substitute Silesian temperature (TS), which is considered a climate index, and the thermal discomfort index (δ), which belongs to the thermal indices group. The possibility of applying these indices in two stages (preliminary and detailed estimation) has been taken into consideration. Based on the examples, it was shown that detailed estimation of the thermal hazard can avoid the additional technical solutions that the climate index alone would require to reduce the thermal hazard at particular workplaces. A threshold limit value for TS has been set on the basis of these results: below TS = 24°C it is not necessary to perform a detailed estimation.
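    The two-stage screening logic described in the abstract can be sketched as a small decision function. The 24°C threshold on TS comes from the abstract; the acceptance limit on δ and the function names are illustrative assumptions, not values from the paper.

    ```python
    def assess_thermal_hazard(ts_celsius, delta_index=None, ts_threshold=24.0):
        """Two-stage screening: a preliminary check on the substitute Silesian
        temperature TS, then a detailed check on the thermal discomfort index
        (delta) only when TS reaches the threshold. The delta limit of 1.0 is
        a placeholder, not a value from the paper."""
        if ts_celsius < ts_threshold:
            return "no detailed estimation required"   # preliminary stage suffices
        if delta_index is None:
            return "detailed estimation required"      # stage two must be performed
        return "acceptable" if delta_index <= 1.0 else "thermal hazard present"
    ```

    A workplace at TS = 20°C is cleared at stage one; a hotter workplace triggers the detailed (thermal-index) estimation before any technical countermeasures are mandated.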

  12. Matching tutor to student: rules and mechanisms for efficient two-stage learning in neural circuits

    CERN Document Server

    Tesileanu, Tiberiu; Balasubramanian, Vijay

    2016-01-01

    Existing models of birdsong learning assume that brain area LMAN introduces variability into song for trial-and-error learning. Recent data suggest that LMAN also encodes a corrective bias driving short-term improvements in song. These later consolidate in area RA, a motor cortex analogue downstream of LMAN. We develop a new model of such two-stage learning. Using a stochastic gradient descent approach, we derive how 'tutor' circuits should match plasticity mechanisms in 'student' circuits for efficient learning. We further describe a reinforcement learning framework with which the tutor can build its teaching signal. We show that mismatching the tutor signal and plasticity mechanism can impair or abolish learning. Applied to birdsong, our results predict the temporal structure of the corrective bias from LMAN given a plasticity rule in RA. Our framework can be applied predictively to other paired brain areas showing two-stage learning.

  13. Hybrid parallel execution model for logic-based specification languages

    CERN Document Server

    Tsai, Jeffrey J P

    2001-01-01

    Parallel processing is a very important technique for improving the performance of various software development and maintenance activities. The purpose of this book is to introduce important techniques for the parallel execution of high-level specifications of software systems. These techniques are very useful for the construction, analysis, and transformation of reliable large-scale and complex software systems. Contents: Current Approaches; Overview of the New Approach; FRORL Requirements Specification Language and Its Decomposition; Rewriting and Data Dependency, Control Flow Analysis of a Lo

  14. Mobile Parallel Manipulators, Modelling and Data-Driven Motion Planning

    Directory of Open Access Journals (Sweden)

    Amar Khoukhi

    2013-11-01

    Full Text Available This paper provides a kinematic and dynamic analysis of mobile parallel manipulators (MPM). The study is conducted on a multi-degree-of-freedom (DOF) parallel robot carried by a wheeled mobile platform. Both the positional and differential kinematics problems for the hybrid structure are solved, and the redundancy problem is resolved using a generalized pseudo-inverse with a joint-limit secondary criterion. A minimum-time trajectory parameterization is obtained via a cycloidal profile to initialize multi-objective trajectory planning of the MPM. The considered objectives include time-energy minimization, redundancy resolution and singularity avoidance. Simulation results illustrating the effectiveness of the proposed approach are presented and discussed.
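    The pseudo-inverse redundancy resolution mentioned above has a standard textbook form: the joint velocity is the minimum-norm solution plus a secondary term projected into the null space of the Jacobian. A minimal sketch, assuming a simple "stay away from joint limits" secondary criterion (the gain and criterion are illustrative, not the paper's):

    ```python
    import numpy as np

    def redundancy_resolution(J, x_dot, q, q_min, q_max, k=1.0):
        """Generalized pseudo-inverse velocity solution with a joint-limit
        secondary criterion projected into the null space of J, so the
        secondary motion does not disturb the task velocity x_dot."""
        J_pinv = np.linalg.pinv(J)
        # Secondary criterion (assumed): push each joint toward mid-range.
        q_mid = 0.5 * (q_min + q_max)
        z = -k * (q - q_mid) / (q_max - q_min) ** 2
        # Null-space projector of the Jacobian.
        N = np.eye(J.shape[1]) - J_pinv @ J
        return J_pinv @ x_dot + N @ z
    ```

    Because J @ N = 0 by the pseudo-inverse identities, the task-space velocity J @ q_dot still equals x_dot whatever the secondary term does.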

  15. Hybrid staging of a Lysholm positive displacement engine with two Westinghouse two stage impulse Curtis turbines

    Energy Technology Data Exchange (ETDEWEB)

    Parker, D.A.

    1982-06-01

    The University of California at Berkeley has tested and satisfactorily modeled a hybrid-staged Lysholm engine (positive displacement) with a two-stage Curtis wheel turbine. The system operates in a stable manner over its operating range (0/1-3/1 water ratio, 120 psia input). Proposals are made for controlling the interstage pressure with a partial-admission turbine and for using volume expansion to control the mass flow and pressure ratio of the Lysholm engine.

  16. Noncausal two-stage image filtration at presence of observations with anomalous errors

    Directory of Open Access Journals (Sweden)

    S. V. Vishnevyy

    2013-04-01

    Full Text Available Introduction. For the filtration of images containing regions with anomalous errors, it is necessary to develop adaptive algorithms that can detect such regions and apply a filter with appropriate parameters to suppress the anomalous noise. Development of an adaptive algorithm for noncausal two-stage image filtration in the presence of observations with anomalous errors. An adaptive algorithm for noncausal two-stage filtration is developed. In the first stage, an adaptive one-dimensional causal filtration algorithm processes the rows and columns of the image independently. In the second stage, the obtained data are combined and a posteriori estimations are calculated. Results of experimental investigations. The developed adaptive algorithm is investigated on a model sample by means of statistical modeling on a PC. The image is modeled as a realization of a Gaussian-Markov random field and corrupted with uncorrelated Gaussian noise. Regions of the image with anomalous errors are corrupted with uncorrelated Gaussian noise of higher power than the normal noise on the rest of the image. Conclusions. The adaptive algorithm for noncausal two-stage filtration is analyzed, and the accuracy characteristics of the computed estimations are shown. The first and second stages of the developed algorithm are compared, and the adaptive algorithm is compared with a known uniform two-stage image filtration algorithm. According to the obtained results, the uniform algorithm does not suppress the anomalous noise, while the adaptive algorithm shows good results.
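    The two-stage structure (independent causal 1-D passes along rows and columns, then fusion of the two estimates) can be sketched as follows. The exponential filter and plain averaging are stand-ins for the paper's adaptive 1-D algorithm and a posteriori estimation; they only illustrate the data flow.

    ```python
    import numpy as np

    def causal_filter_1d(x, alpha=0.5):
        """Simple causal exponential filter along the last axis (a stand-in
        for the adaptive one-dimensional algorithm in the paper)."""
        y = np.empty_like(x, dtype=float)
        y[..., 0] = x[..., 0]
        for k in range(1, x.shape[-1]):
            y[..., k] = alpha * x[..., k] + (1.0 - alpha) * y[..., k - 1]
        return y

    def two_stage_filter(image, alpha=0.5):
        """Stage one: independent causal filtering along rows and columns.
        Stage two: fuse the two one-dimensional estimates (plain averaging
        here; the paper computes a posteriori estimations)."""
        rows = causal_filter_1d(image, alpha)        # along each row
        cols = causal_filter_1d(image.T, alpha).T    # along each column
        return 0.5 * (rows + cols)
    ```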

  17. Stiffness Model of a 3-DOF Parallel Manipulator with Two Additional Legs

    Directory of Open Access Journals (Sweden)

    Guang Yu

    2014-10-01

    Full Text Available This paper investigates the stiffness modelling of a 3-DOF parallel manipulator with two additional legs. The stiffness model in six directions of the 3-DOF parallel manipulator with two additional legs is derived by performing condensation of DOFs for the joint connections and treatment of the fixed-end connections. Moreover, this modelling method is used to derive the stiffness models of the manipulator with zero or one additional leg. Two performance indices are given to compare the stiffness of the manipulator with two additional legs against that of the manipulators with zero or one additional leg. The method can be used not only to derive the stiffness model of a redundant parallel manipulator, but also to model the stiffness of non-redundant parallel manipulators.

  18. Pi: A Parallel Architecture Interface for Multi-Model Execution

    Science.gov (United States)

    1990-07-01

    Directory Schemes for Cache Coherence. In The 15th Annual International Symposium on Computer Architecture. IEEE Computer Society and ACM, June 1988. [3] …Annual International Symposium on Computer Architecture. IEEE Computer Society and ACM, June 1986. [5] Arvind and Rishiyur S. Nikhil. A Dataflow…Overview, 1987. [9] Roberto Bisiani and Alessandro Forin. Multilanguage Parallel Programming of Heterogeneous Machines. IEEE Transactions on Computers

  19. Differentiating the persistency and permanency of some two stages DNA splicing language via Yusof-Goode (Y-G) approach

    Science.gov (United States)

    Mudaber, M. H.; Yusof, Y.; Mohamad, M. S.

    2017-09-01

    Predicting, via a mathematical approach, the existence of restriction enzyme recognition sequences on recombinant DNA fragments after the manipulation reaction is complete is considered a convenient tool for DNA recombination. In mathematical terms, this characteristic of recombinant DNA strands that still contain the recognition sites of restriction enzymes is called persistence and permanence. Normally, differentiating the persistency and permanency of two-stage recombinant DNA strands by wet-lab experiment is expensive and time-consuming, since the experiment must be run in two stages and more restriction enzymes must be added to the reaction. Therefore, in this research, the difference between persistent and permanent splicing languages of some two-stage systems is investigated using the Yusof-Goode (Y-G) model. Two theorems are provided, which show the persistency and non-permanency of two-stage DNA splicing languages.

  20. A model for optimizing file access patterns using spatio-temporal parallelism

    Energy Technology Data Exchange (ETDEWEB)

    Boonthanome, Nouanesengsy [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Patchett, John [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Geveci, Berk [Kitware Inc., Clifton Park, NY (United States); Ahrens, James [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Bauer, Andy [Kitware Inc., Clifton Park, NY (United States); Chaudhary, Aashish [Kitware Inc., Clifton Park, NY (United States); Miller, Ross G. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Shipman, Galen M. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Williams, Dean N. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)

    2013-01-01

    For many years now, I/O read time has been recognized as the primary bottleneck for parallel visualization and analysis of large-scale data. In this paper, we introduce a model that can estimate the read time for a file stored in a parallel filesystem when given the file access pattern. Read times ultimately depend on how the file is stored and the access pattern used to read the file. The file access pattern will be dictated by the type of parallel decomposition used. We employ spatio-temporal parallelism, which combines both spatial and temporal parallelism, to provide greater flexibility to possible file access patterns. Using our model, we were able to configure the spatio-temporal parallelism to design optimized read access patterns that resulted in a speedup factor of approximately 400 over traditional file access patterns.
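    A read-time model of the kind described above can be reduced, in its simplest form, to charging each contiguous request a fixed latency plus a bandwidth term; the access pattern then determines how many requests are issued. The parameter values below are placeholders, not measurements from the paper.

    ```python
    def estimated_read_time(chunks, latency_s=0.01, bandwidth_Bps=1e9):
        """Toy read-time model: each contiguous chunk pays a fixed request
        latency plus its size divided by sustained bandwidth. Latency and
        bandwidth values are illustrative assumptions."""
        return sum(latency_s + size / bandwidth_Bps for size in chunks)

    # Same total bytes, different access patterns:
    many_small = [1e6] * 400   # 400 requests of 1 MB each
    few_large  = [1e8] * 4     # 4 requests of 100 MB each
    ```

    Even this toy model reproduces the qualitative conclusion: for a fixed data volume, a decomposition that yields fewer, larger contiguous reads is predicted to be much faster than one that scatters the same bytes across many small requests.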

  1. Power Frequency Oscillation Suppression Using Two-Stage Optimized Fuzzy Logic Controller for Multigeneration System

    Directory of Open Access Journals (Sweden)

    Y. K. Bhateshvar

    2016-01-01

    Full Text Available This paper develops a linearized model of automatic generation control (AGC) for an interconnected two-area reheat-type thermal power system in a deregulated environment. A comparison between a genetic-algorithm-optimized PID controller (GA-PID), a particle-swarm-optimized PID controller (PSO-PID), and the proposed two-stage PSO-optimized fuzzy logic controller (TSO-FLC) is presented. The proposed fuzzy controller is optimized in two stages: first the rule base, then the scaling and gain factors. The TSO-FLC shows the best dynamic response following a step load change for different cases of bilateral contracts in the deregulated environment. In addition, the performance of the proposed TSO-FLC is examined for ±30% changes in system parameters with different types of contractual demands between control areas and compared with GA-PID and PSO-PID. MATLAB/Simulink® is used for all simulations.

  2. Direct Torque Control of Sensorless Induction Machine Drives: A Two-Stage Kalman Filter Approach

    Directory of Open Access Journals (Sweden)

    Jinliang Zhang

    2015-01-01

    Full Text Available The extended Kalman filter (EKF) has been widely applied for sensorless direct torque control (DTC) of induction machines (IMs). One key problem with the EKF is that the estimator suffers from a computational burden and numerical problems resulting from high-order mathematical models. To reduce the computational cost, this paper presents a two-stage extended Kalman filter (TEKF) based solution for closed-loop stator flux, speed, and torque estimation of an IM to achieve sensorless DTC-SVM operation. The novel observer can be derived similarly to the optimal two-stage Kalman filter (TKF) that has been proposed by several researchers. Compared to a straightforward implementation of a conventional EKF, the TEKF estimator reduces the number of arithmetic operations. Simulation and experimental results verify the performance of the proposed TEKF estimator for DTC of IMs.

  3. Experiment and surge analysis of centrifugal two-stage turbocharging system

    Institute of Scientific and Technical Information of China (English)

    Yituan HE; Chaochen MA

    2008-01-01

    To study the surge of a centrifugal two-stage turbocharging system and its influencing factors, a special test bench was set up and the system surge test was performed. The test results indicate that measured parameters such as the air mass flow and rotation speed of the high-pressure (HP) stage compressor can be converted into corrected parameters under a standard condition according to the Mach number similarity criterion, because the air flow in the HP stage compressor has entered the Reynolds number (Re) auto-modeling range. Accordingly, the causes of a two-stage turbocharging system's surge can be analyzed from the corrected mass flow characteristic maps and the actual operating conditions of the HP and low-pressure (LP) stage compressors.
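    The conversion to corrected parameters invoked above follows the standard compressor-map similarity relations: mass flow is scaled by the square root of the temperature ratio over the pressure ratio, and speed by the inverse square root of the temperature ratio. A minimal sketch (reference conditions of 288.15 K and 101325 Pa are the common ISA values, assumed here rather than taken from the paper):

    ```python
    import math

    def corrected_flow(m_dot, T, p, T_ref=288.15, p_ref=101325.0):
        """Corrected mass flow per Mach-number similarity:
        m_corr = m_dot * sqrt(T/T_ref) / (p/p_ref)."""
        return m_dot * math.sqrt(T / T_ref) / (p / p_ref)

    def corrected_speed(N, T, T_ref=288.15):
        """Corrected rotational speed: N_corr = N / sqrt(T/T_ref)."""
        return N / math.sqrt(T / T_ref)
    ```

    At reference inlet conditions the corrected and measured values coincide, so points measured off-reference can be placed on a single standard-condition map.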

  4. Effect of Silica Fume on two-stage Concrete Strength

    Science.gov (United States)

    Abdelgader, H. S.; El-Baden, A. S.

    2015-11-01

    Two-stage concrete (TSC) is an innovative concrete that does not require vibration for placing and compaction. TSC is a simple concept; it is made using the same basic constituents as traditional concrete: cement, coarse aggregate, sand and water, as well as mineral and chemical admixtures. As its name suggests, it is produced through a two-stage process. First, washed coarse aggregate is placed into the formwork in situ. A specifically designed self-compacting grout is then introduced into the form from the lowest point under gravity pressure to fill the voids, cementing the aggregate into a monolith. The hardened concrete is dense and homogeneous, and in general has improved engineering properties and durability. This paper presents the results of an experimental study of the effect of silica fume (SF) and superplasticizer admixtures (SP) on the compressive and tensile strength of TSC using various combinations of water-to-cement ratio (w/c) and cement-to-sand ratio (c/s). Thirty-six concrete mixes with different grout constituents were tested. From each mix, twenty-four standard cylinder samples of size 150 mm × 300 mm of concrete containing crushed aggregate were produced. The tested samples were made from combinations of w/c equal to 0.45, 0.55 and 0.85, and three c/s values: 0.5, 1 and 1.5. Silica fume was added at a dosage of 6% of the weight of cement, and superplasticizer at a dosage of 2% of the cement weight. Results indicated that both the tensile and compressive strength of TSC can be statistically derived as functions of w/c and c/s with good correlation coefficients. The basic principle of traditional concrete, that an increase in the water/cement ratio leads to a reduction in compressive strength, was shown to hold true for the TSC specimens tested. Using a combination of silica fume and superplasticizer caused a significant increase in strength relative to the control mixes.
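    Deriving strength "statistically as a function of w/c and c/s" amounts to fitting a regression over the tested mix combinations. A minimal least-squares sketch with a simple linear model; the strength numbers below are hypothetical illustrations, not the paper's measurements.

    ```python
    import numpy as np

    # Hypothetical compressive strengths (MPa) for combinations of w/c and c/s.
    wc = np.array([0.45, 0.45, 0.55, 0.55, 0.85, 0.85])
    cs = np.array([0.5,  1.5,  0.5,  1.5,  0.5,  1.5])
    fc = np.array([32.0, 28.5, 27.0, 24.0, 18.0, 15.5])

    # Linear model fc ~ b0 + b1*(w/c) + b2*(c/s), fitted by least squares.
    A = np.column_stack([np.ones_like(wc), wc, cs])
    coef, *_ = np.linalg.lstsq(A, fc, rcond=None)
    b0, b1, b2 = coef
    ```

    With data following the trend reported in the paper, the fitted slope on w/c comes out negative, matching the classical rule that higher water/cement ratio lowers compressive strength.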

  5. Dynamics of installation way for the actuator of a two-stage active vibration-isolator

    Institute of Scientific and Technical Information of China (English)

    HU Li; HUANG Qi-bai; HE Xue-song; YUAN Ji-xuan

    2008-01-01

    We investigated the behaviors of an active control system of two-stage vibration isolation with the actuator installed in parallel with either the upper passive mount or the lower passive isolation mount. We revealed the relationships between the active control force of the actuator and the parameters of the passive isolators by studying the dynamics of two-stage active vibration isolation for the actuator at the foregoing two positions in turn. With the actuator installed beside the upper mount, a small active force can achieve a very good isolating effect when the frequency of the stimulating force is much larger than the natural frequency of the upper mount; a larger active force is required in the low-frequency domain; and the active force equals the stimulating force when the upper mount works within the resonance region, suggesting an approach to reducing wobble and ensuring desirable installation accuracy by increasing the upper-mount stiffness. In either the low or the high frequency region far away from the resonance region, the active force is smaller when the actuator is beside the lower mount than beside the upper mount.

  6. A two-stage Stirling-type pulse tube cryocooler with a cold inertance tube

    Science.gov (United States)

    Gan, Z. H.; Fan, B. Y.; Wu, Y. Z.; Qiu, L. M.; Zhang, X. J.; Chen, G. B.

    2010-06-01

    A thermally coupled two-stage Stirling-type pulse tube cryocooler (PTC) with inertance tubes as phase shifters has been designed, manufactured and tested. In order to obtain a larger phase shift at the low acoustic power of about 2.0 W, a cold inertance tube and a cold reservoir for the second stage, precooled by the cold end of the first stage, were introduced into the system. The transmission line model was used to calculate the phase shift produced by the cold inertance tube. The effect of regenerator material, geometry and charging pressure on the performance of the second stage of the two-stage PTC was investigated based on the well-known regenerator model REGEN. Experiments on the two-stage PTC were carried out with an emphasis on the performance of the second stage. A lowest cooling temperature of 23.7 K, and 0.50 W of cooling power at 33.9 K, were obtained with an input electric power of 150.0 W and an operating frequency of 40 Hz.

  7. Characterization of component interactions in two-stage axial turbine

    Directory of Open Access Journals (Sweden)

    Adel Ghenaiet

    2016-08-01

    Full Text Available This study concerns the characterization of both the steady and unsteady flows and the analysis of stator/rotor interactions of a two-stage axial turbine. The predicted aerodynamic performances show noticeable differences when simulating the turbine stages simultaneously or separately. By considering the multi-blade per row and the scaling technique, the Computational fluid dynamics (CFD produced better results concerning the effect of pitchwise positions between vanes and blades. The recorded pressure fluctuations exhibit a high unsteadiness characterized by a space–time periodicity described by a double Fourier decomposition. The Fast Fourier Transform FFT analysis of the static pressure fluctuations recorded at different interfaces reveals the existence of principal harmonics and their multiples, and each lobed structure of pressure wave corresponds to the number of vane/blade count. The potential effect is seen to propagate both upstream and downstream of each blade row and becomes accentuated at low mass flow rates. Between vanes and blades, the potential effect is seen to dominate the quasi totality of blade span, while downstream the blades this effect seems to dominate from hub to mid span. Near the shroud the prevailing effect is rather linked to the blade tip flow structure.

  8. Characterization of component interactions in two-stage axial turbine

    Institute of Scientific and Technical Information of China (English)

    Adel Ghenaiet; Kaddour Touil

    2016-01-01

    This study concerns the characterization of both the steady and unsteady flows and the analysis of stator/rotor interactions of a two-stage axial turbine. The predicted aerodynamic performances show noticeable differences when simulating the turbine stages simultaneously or separately. By considering the multi-blade per row and the scaling technique, the Computational fluid dynamics (CFD) produced better results concerning the effect of pitchwise positions between vanes and blades. The recorded pressure fluctuations exhibit a high unsteadiness characterized by a space–time periodicity described by a double Fourier decomposition. The Fast Fourier Transform FFT analysis of the static pressure fluctuations recorded at different interfaces reveals the existence of principal harmonics and their multiples, and each lobed structure of pressure wave corresponds to the number of vane/blade count. The potential effect is seen to propagate both upstream and downstream of each blade row and becomes accentuated at low mass flow rates. Between vanes and blades, the potential effect is seen to dominate the quasi totality of blade span, while downstream the blades this effect seems to dominate from hub to mid span. Near the shroud the prevailing effect is rather linked to the blade tip flow structure.

  9. Two stages kinetics of municipal solid waste inoculation composting processes

    Institute of Scientific and Technical Information of China (English)

    XI Bei-dou1; HUANG Guo-he; QIN Xiao-sheng; LIU Hong-liang

    2004-01-01

    In order to understand the key mechanisms of the composting processes, the municipal solid waste (MSW) composting processes were divided into two stages, and the characteristics of typical experimental scenarios were analyzed from the viewpoint of microbial kinetics. Through experimentation with an advanced composting reactor under controlled composting conditions, several equations were worked out to simulate the degradation rate of the substrate. The equations showed that the degradation rate was controlled by the concentration of microbes in the first stage. The substrate degradation rates of the inoculation Runs A, B and C and of the Control composting system were 13.61 g/(kg·h), 13.08 g/(kg·h), 15.671 g/(kg·h), and 10.5 g/(kg·h), respectively; the value of Run C is around 1.5 times that of the Control system. The decomposition rate of the second stage is controlled by the concentration of substrate. Although the organic matter decomposition rates were similar for all Runs, inoculation reduced the values of the half-velocity coefficient and made the composting stabilize more efficiently. In particular, for Run C the decomposition rate is high in the first stage and low in the second stage. The results indicated that the inoculation was efficient for the composting processes.
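    The two regimes described above are the classic pair of biomass-limited and substrate-limited kinetics. A minimal sketch, assuming a first-order-in-biomass rate for stage one and a Monod-type rate with half-velocity coefficient Ks for stage two (the rate constants below are illustrative, not the paper's fitted values):

    ```python
    def stage1_rate(biomass, mu=0.1):
        """First stage: degradation rate proportional to the microbial
        concentration (first order in biomass)."""
        return mu * biomass

    def stage2_rate(substrate, k=15.0, Ks=50.0):
        """Second stage: substrate-limited Monod-type rate. A smaller
        half-velocity coefficient Ks (as reported for inoculation) keeps
        the rate high as the substrate declines."""
        return k * substrate / (Ks + substrate)
    ```

    The Monod form makes the reported inoculation effect concrete: at the same substrate level, lowering Ks raises the stage-two rate, so the compost stabilizes faster.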

  10. Gas loading system for LANL two-stage gas guns

    Science.gov (United States)

    Gibson, Lee; Bartram, Brian; Dattelbaum, Dana; Lang, John; Morris, John

    2015-06-01

    A novel gas loading system was designed for the specific application of remotely loading high-purity gases into targets for gas-gun-driven plate impact experiments. The high-purity gases are loaded into well-defined target configurations to obtain Hugoniot states in the gas phase at greater than ambient pressures. The small volume of the gas samples is challenging, as slight changes in the ambient temperature result in measurable pressure changes. Therefore, the ability to load a gas-gun target and continually monitor the sample pressure prior to firing provides the most stable and reliable target fielding approach. We present the design and evaluation of a gas loading system built for the LANL 50 mm bore two-stage light gas gun. Targets for the gun are made of 6061 Al or OFHC Cu, and assembled to form a gas containment cell with a volume of approximately 1.38 cc. The compatibility of materials was a major consideration in the design of the system, particularly for its use with corrosive gases. Piping and valves are stainless steel with wetted seals made from Kalrez and Teflon. Preliminary testing was completed to ensure a proper flow rate and that the proper safety controls were in place. The system has been used to successfully load Ar, Kr, Xe, and anhydrous ammonia with purities of up to 99.999 percent. The design of the system and example data from the plate impact experiments will be shown. LA-UR-15-20521

  11. Parallel Nonnegative Least Squares Solvers for Model Order Reduction

    Science.gov (United States)

    2016-03-01

    …not for the PQN method. For the latter method the size of the active set is controlled to promote sparse solutions. This is described in Section 3.2.1… Parallel nonnegative least squares (NNLS) solvers are developed specifically for…

  12. Generalized Yule-walker and two-stage identification algorithms for dual-rate systems

    Institute of Scientific and Technical Information of China (English)

    Feng DING

    2006-01-01

    In this paper, two approaches are developed for directly identifying single-rate models of dual-rate stochastic systems in which the input updating frequency is an integer multiple of the output sampling frequency. The first is the generalized Yule-Walker algorithm and the second is a two-stage algorithm based on the correlation technique. The basic idea is to directly identify the parameters of underlying single-rate models instead of the lifted models of dual-rate systems from the dual-rate input-output data, assuming that the measurement data are stationary and ergodic. An example is given.
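    The generalized Yule-Walker approach builds on the classical one: AR parameters are recovered from sample autocovariances by solving a Toeplitz linear system. A minimal single-rate sketch of that classical building block (the paper's generalized, dual-rate variant differs in detail):

    ```python
    import numpy as np

    def yule_walker_ar(x, order):
        """Estimate AR(order) coefficients from sample autocovariances by
        solving the Yule-Walker equations (textbook single-rate form)."""
        x = np.asarray(x, dtype=float) - np.mean(x)
        n = len(x)
        r = np.array([np.dot(x[:n - k], x[k:]) / n for k in range(order + 1)])
        R = np.array([[r[abs(i - j)] for j in range(order)] for i in range(order)])
        return np.linalg.solve(R, r[1:order + 1])

    # Usage: recover a known AR(1) process x_t = 0.7 x_{t-1} + e_t.
    rng = np.random.default_rng(0)
    e = rng.standard_normal(5000)
    x = np.zeros(5000)
    for t in range(1, 5000):
        x[t] = 0.7 * x[t - 1] + e[t]
    a = yule_walker_ar(x, 1)
    ```

    With 5000 stationary samples the estimate lands close to the true coefficient 0.7, illustrating the stationarity and ergodicity assumptions the paper relies on.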

  13. Development of Two-Stage Stirling Cooler for ASTRO-F

    Science.gov (United States)

    Narasaki, K.; Tsunematsu, S.; Ootsuka, K.; Kyoya, M.; Matsumoto, T.; Murakami, H.; Nakagawa, T.

    2004-06-01

    A two-stage small Stirling cooler has been developed and tested for the infrared astronomical satellite ASTRO-F that is planned to be launched by Japanese M-V rocket in 2005. ASTRO-F has a hybrid cryogenic system that is a combination of superfluid liquid helium (HeII) and two-stage Stirling coolers. The mechanical cooler has a two-stage displacer driven by a linear motor in a cold head and a new linear-ball-bearing system for the piston-supporting structure in a compressor. The linear-ball-bearing supporting system achieves the piston clearance seal, the long piston-stroke operation and the low frequency operation. The typical cooling power is 200 mW at 20 K and the total input power to the compressor and the cold head is below 90 W without driver electronics. The engineering, the prototype and the flight models of the cooler have been fabricated and evaluated to verify the capability for ASTRO-F. This paper describes the design of the cooler and the results from verification tests including cooler performance test, thermal vacuum test, vibration test and lifetime test.

  14. A simple and efficient parallel FFT algorithm using the BSP model

    NARCIS (Netherlands)

    Bisseling, R.H.; Inda, M.A.

    2000-01-01

    In this paper we present a new parallel radix FFT algorithm based on the BSP model. Our parallel algorithm uses the group-cyclic distribution family, which makes it simple to understand and easy to implement. We show how to reduce the communication cost of the algorithm by a factor of three in the case

  15. Toward a Model Framework of Generalized Parallel Componential Processing of Multi-Symbol Numbers

    Science.gov (United States)

    Huber, Stefan; Cornelsen, Sonja; Moeller, Korbinian; Nuerk, Hans-Christoph

    2015-01-01

    In this article, we propose and evaluate a new model framework of parallel componential multi-symbol number processing, generalizing the idea of parallel componential processing of multi-digit numbers to the case of negative numbers by considering the polarity signs similar to single digits. In a first step, we evaluated this account by defining…

  16. PERFORMANCE STUDY OF A TWO STAGE SOLAR ADSORPTION REFRIGERATION SYSTEM

    Directory of Open Access Journals (Sweden)

    BAIJU. V

    2011-07-01

    Full Text Available The present study deals with the performance of a two-stage solar adsorption refrigeration system with an activated carbon-methanol pair, investigated experimentally. Such a system was fabricated and tested under the conditions of the National Institute of Technology Calicut, Kerala, India. The system consists of a parabolic solar concentrator, two water tanks, two adsorbent beds, a condenser, an expansion device, an evaporator and an accumulator. In this particular system the second water tank acts as a sensible heat storage device so that the system can also be used during the night. The system has been designed for heating 50 litres of water from 25°C to 90°C as well as cooling 10 litres of water from 30°C to 10°C within one hour. The performance parameters such as the specific cooling power (SCP), coefficient of performance (COP), solar COP and exergetic efficiency are studied, as is the dependence of the exergetic efficiency and cycle COP on the driving heat source temperature. The optimum heat source temperature for this system is determined to be 72.4°C. The results show that the system has better performance during the night than during the day, with a mean cycle COP of 0.196 during the day and 0.335 at night. The mean SCP values during day time and night time are 47.83 and 68.2, respectively. The experimental results also demonstrate that the refrigerator has a cooling capacity of 47 to 78 W during the day and 57.6 to 104.4 W during the night.
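    The two headline performance metrics above have simple definitions: cycle COP is useful cooling divided by driving heat, and SCP normalizes the cooling output by adsorbent mass and cycle time. A minimal sketch (the function names and the sample numbers are illustrative, not the paper's data):

    ```python
    def cycle_cop(q_cooling_J, q_driving_heat_J):
        """Cycle COP of an adsorption refrigerator: useful cooling energy
        divided by driving heat input over one cycle."""
        return q_cooling_J / q_driving_heat_J

    def specific_cooling_power(q_cooling_J, adsorbent_mass_kg, cycle_time_s):
        """SCP: cooling energy per kilogram of adsorbent per unit time (W/kg)."""
        return q_cooling_J / (adsorbent_mass_kg * cycle_time_s)
    ```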

  17. Two-Stage Orthogonal Least Squares Methods for Neural Network Construction.

    Science.gov (United States)

    Zhang, Long; Li, Kang; Bai, Er-Wei; Irwin, George W

    2015-08-01

    A number of neural networks can be formulated as the linear-in-the-parameters models. Training such networks can be transformed to a model selection problem where a compact model is selected from all the candidates using subset selection algorithms. Forward selection methods are popular fast subset selection approaches. However, they may only produce suboptimal models and can be trapped into a local minimum. More recently, a two-stage fast recursive algorithm (TSFRA) combining forward selection and backward model refinement has been proposed to improve the compactness and generalization performance of the model. This paper proposes unified two-stage orthogonal least squares methods instead of the fast recursive-based methods. In contrast to the TSFRA, this paper derives a new simplified relationship between the forward and the backward stages to avoid repetitive computations using the inherent orthogonal properties of the least squares methods. Furthermore, a new term exchanging scheme for backward model refinement is introduced to reduce computational demand. Finally, given the error reduction ratio criterion, effective and efficient forward and backward subset selection procedures are proposed. Extensive examples are presented to demonstrate the improved model compactness constructed by the proposed technique in comparison with some popular methods.
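    The forward stage driven by the error reduction ratio (ERR) can be sketched as a greedy orthogonal search: pick the candidate regressor explaining the largest share of the output energy, then Gram-Schmidt-deflate the remaining candidates. This is only the forward half under simplified bookkeeping; the paper's contribution is the unified two-stage method with backward refinement.

    ```python
    import numpy as np

    def forward_ols_err(P, y, n_terms):
        """Greedy forward subset selection by the error reduction ratio.
        P: candidate regressor matrix (columns), y: target vector."""
        P = np.array(P, dtype=float)
        y = np.asarray(y, dtype=float)
        yy = y @ y
        selected = []
        for _ in range(n_terms):
            best, best_err = None, -1.0
            for j in range(P.shape[1]):
                if j in selected:
                    continue
                w = P[:, j]
                denom = w @ w
                if denom < 1e-12:              # column already fully explained
                    continue
                err = (w @ y) ** 2 / (denom * yy)   # error reduction ratio
                if err > best_err:
                    best, best_err = j, err
            selected.append(best)
            w = P[:, best] / np.linalg.norm(P[:, best])
            for j in range(P.shape[1]):        # orthogonalize the rest
                if j not in selected:
                    P[:, j] -= (w @ P[:, j]) * w
        return selected
    ```

    Orthogonalizing the surviving candidates after each pick is what lets the ERR of later terms be computed independently of terms already chosen, which is the property the two-stage methods exploit.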

  18. A Model of Parallel Kinematics for Machine Calibration

    DEFF Research Database (Denmark)

    Pedersen, David Bue; Bæk Nielsen, Morten; Kløve Christensen, Simon

    2016-01-01

    Parallel kinematics have been adopted by more than 25 manufacturers of high-end desktop 3D printers [Wohlers Report (2015), p.118] as well as by research projects such as the WASP project [WASP (2015)], a 12 meter tall linear delta robot for Additive Manufacture of large-scale components...... developed in order to decompose the different types of geometrical errors into 6 elementary cases. Deliberate introduction of errors to the virtual machine has subsequently allowed for the generation of deviation plots that can be used as a strong tool for the identification and correction of geometrical...... errors on a physical machine tool....

  19. Parallel-Batch Scheduling with Two Models of Deterioration to Minimize the Makespan

    Directory of Open Access Journals (Sweden)

    Cuixia Miao

    2014-01-01

    Full Text Available We consider bounded parallel-batch scheduling with two models of deterioration, in which the processing time of a job is p_j = a_j + αt under the first model and p_j = a + α_j t under the second. The objective is to minimize the makespan. We present O(n log n)-time algorithms for the single-machine problem under each model. We also propose fully polynomial-time approximation schemes to solve the identical-parallel-machine problem and the uniform-parallel-machine problem, respectively.
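Under the first deterioration model above, a batch started at time t finishes at t + max_j a_j + αt, so the makespan of any batch sequence is easy to evaluate. The sketch below packs jobs into capacity-bounded batches by a largest-a_j-first greedy rule; the batching rule and all names are illustrative assumptions, not the paper's O(n log n) algorithm itself.

```python
def batch_makespan(basic_times, alpha, capacity):
    """Makespan on a single bounded parallel-batch machine under the
    linear-deterioration model p_j = a_j + alpha * t (the first model above).

    Jobs are sorted by basic processing time a_j in non-increasing order and
    packed greedily into batches of size `capacity`; a batch started at time t
    finishes at t + max_j a_j + alpha * t.  This greedy batching is an
    illustrative heuristic, not necessarily the paper's algorithm.
    """
    jobs = sorted(basic_times, reverse=True)
    batches = [jobs[i:i + capacity] for i in range(0, len(jobs), capacity)]
    t = 0.0
    for batch in batches:
        t = t + max(batch) + alpha * t   # batch time = max a_j + alpha * start
    return t

# Example: four jobs, batches of two, deterioration rate 0.1
print(batch_makespan([5, 4, 3, 2], alpha=0.1, capacity=2))  # → 8.5
```

With alpha = 0 this reduces to ordinary batch scheduling, where only the largest a_j in each batch matters.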

  20. Two-stage bargaining with coverage extension in a dual labour market

    DEFF Research Database (Denmark)

    Roberts, Mark A.; Stæhr, Karsten; Tranæs, Torben

    2000-01-01

    This paper studies coverage extension in a simple general equilibrium model with a dual labour market. The union sector is characterized by two-stage bargaining whereas the firms set wages in the non-union sector. In this model firms and unions of the union sector have a commonality of interest...... in extending coverage of a minimum wage to the non-union sector. Furthermore, the union sector does not seek to increase the non-union wage to a level above the market-clearing wage. In fact, it is optimal for the union sector to impose a market-clearing wage on the non-union sector. Finally, coverage......

  1. The global stability of a delayed predator-prey system with two stage-structure

    Energy Technology Data Exchange (ETDEWEB)

    Wang Fengyan [College of Science, Jimei University, Xiamen Fujian 361021 (China)], E-mail: wangfy68@163.com; Pang Guoping [Department of Mathematics and Computer Science, Yulin Normal University, Yulin Guangxi 537000 (China)

    2009-04-30

    Based on the classical delayed stage-structured model and the Lotka-Volterra predator-prey model, we introduce and study a delayed predator-prey system in which both prey and predator have two stages, an immature stage and a mature stage. The time delays are the lengths of time from birth to maturity of the prey and predator species. Results on the global asymptotic stability of nonnegative equilibria of the delay system are given, which generalize earlier results and suggest that good continuity exists between the predator-prey system and its corresponding stage-structured system.

  2. Comparison of Nonmem 7.2 estimation methods and parallel processing efficiency on a target-mediated drug disposition model.

    Science.gov (United States)

    Gibiansky, Leonid; Gibiansky, Ekaterina; Bauer, Robert

    2012-02-01

    The paper compares performance of Nonmem estimation methods--first order conditional estimation with interaction (FOCEI), iterative two stage (ITS), Monte Carlo importance sampling (IMP), importance sampling assisted by mode a posteriori (IMPMAP), stochastic approximation expectation-maximization (SAEM), and Markov chain Monte Carlo Bayesian (BAYES), on the simulated examples of a monoclonal antibody with target-mediated drug disposition (TMDD), demonstrates how optimization of the estimation options improves performance, and compares standard errors of Nonmem parameter estimates with those predicted by PFIM 3.2 optimal design software. In the examples of the one- and two-target quasi-steady-state TMDD models with rich sampling, the parameter estimates and standard errors of the new Nonmem 7.2.0 ITS, IMP, IMPMAP, SAEM and BAYES estimation methods were similar to the FOCEI method, although larger deviation from the true parameter values (those used to simulate the data) was observed using the BAYES method for poorly identifiable parameters. Standard errors of the parameter estimates were in general agreement with the PFIM 3.2 predictions. The ITS, IMP, and IMPMAP methods with the convergence tester were the fastest methods, reducing the computation time by about ten times relative to the FOCEI method. Use of lower computational precision requirements for the FOCEI method reduced the estimation time by 3-5 times without compromising the quality of the parameter estimates, and equaled or exceeded the speed of the SAEM and BAYES methods. Use of parallel computations with 4-12 processors running on the same computer improved the speed proportionally to the number of processors with the efficiency (for 12 processor run) in the range of 85-95% for all methods except BAYES, which had parallelization efficiency of about 70%.

  3. Efficiently parallelized modeling of tightly focused, large bandwidth laser pulses

    CERN Document Server

    Dumont, Joey; Lefebvre, Catherine; Gagnon, Denis; MacLean, Steve

    2016-01-01

    The Stratton-Chu integral representation of electromagnetic fields is used to study the spatio-temporal properties of large bandwidth laser pulses focused by high numerical aperture mirrors. We review the formal aspects of the derivation of diffraction integrals from the Stratton-Chu representation and discuss the use of the Hadamard finite part in the derivation of the physical optics approximation. By analyzing the formulation we show that, for the specific case of a parabolic mirror, the integrands involved in the description of the reflected field near the focal spot do not possess the strong oscillations characteristic of diffraction integrals. Consequently, the integrals can be evaluated with simple and efficient quadrature methods rather than with specialized, more costly approaches. We report on the development of an efficiently parallelized algorithm that evaluates the Stratton-Chu diffraction integrals for incident fields of arbitrary temporal and spatial dependence. We use our method to show that t...

  4. Experimental and modelling results of a parallel-plate based active magnetic regenerator

    DEFF Research Database (Denmark)

    Tura, A.; Nielsen, Kaspar Kirstein; Rowe, A.

    2012-01-01

    The performance of a permanent magnet magnetic refrigerator (PMMR) using gadolinium parallel plates is described. The configuration and operating parameters are described in detail. Experimental results are compared to simulations using an established two-dimensional model of an active magnetic...

  5. Mathematical Model of Thyristor Inverter Including a Series-parallel Resonant Circuit

    Directory of Open Access Journals (Sweden)

    Miroslaw Luft

    2008-01-01

    Full Text Available The article presents a mathematical model of a thyristor inverter including a series-parallel resonant circuit with the aid of the state-variable method. Maple procedures are used to compute current and voltage waveforms in the inverter.

  6. Right Axillary Sweating After Left Thoracoscopic Sympathectomy in Two-Stage Surgery

    Directory of Open Access Journals (Sweden)

    Berkant Ozpolat

    2013-06-01

    Full Text Available One-stage bilateral or two-stage unilateral video-assisted thoracoscopic sympathectomy can be performed in the treatment of primary focal hyperhidrosis. Here we present a case with compensatory sweating of the contralateral side after a two-stage operation.

  7. The Two-stage Constrained Equal Awards and Losses Rules for Multi-Issue Allocation Situation

    NARCIS (Netherlands)

    Lorenzo-Freire, S.; Casas-Mendez, B.; Hendrickx, R.L.P.

    2005-01-01

    This paper considers two-stage solutions for multi-issue allocation situations. Characterisations are provided for the two-stage constrained equal awards and constrained equal losses rules, based on the properties of composition and path independence.

  8. Metamodeling and Optimization of a Blister Copper Two-Stage Production Process

    Science.gov (United States)

    Jarosz, Piotr; Kusiak, Jan; Małecki, Stanisław; Morkisz, Paweł; Oprocha, Piotr; Pietrucha, Wojciech; Sztangret, Łukasz

    2016-06-01

    It is often difficult to estimate parameters for a two-stage production process of blister copper (containing 99.4 wt.% of Cu metal) as well as those for most industrial processes with high accuracy, which leads to problems related to process modeling and control. The first objective of this study was to model flash smelting and converting of Cu matte stages using three different techniques: artificial neural networks, support vector machines, and random forests, which utilized noisy technological data. Subsequently, more advanced models were applied to optimize the entire process (which was the second goal of this research). The obtained optimal solution was a Pareto-optimal one because the process consisted of two stages, making the optimization problem a multi-criteria one. A sequential optimization strategy was employed, which aimed for optimal control parameters consecutively for both stages. The obtained optimal output parameters for the first smelting stage were used as input parameters for the second converting stage. Finally, a search for another optimal set of control parameters for the second stage of a Kennecott-Outokumpu process was performed. The optimization process was modeled using a Monte-Carlo method, and both modeling parameters and computed optimal solutions are discussed.
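The sequential optimization strategy described above can be illustrated with a toy Monte-Carlo search: optimize the first stage, then feed its optimal output into the second stage as a fixed input. The quadratic stage "metamodels" and all names below are illustrative assumptions standing in for the paper's trained surrogates.

```python
import random

random.seed(0)

# Toy stand-ins for the stage metamodels (the paper trains ANN, SVM and
# random-forest surrogates of the smelting and converting stages; these
# quadratic functions are purely illustrative assumptions).
def stage1_output(u1):          # smelting: control u1 -> intermediate product
    return 1.0 - (u1 - 0.3) ** 2

def stage2_quality(u2, feed):   # converting: control u2 plus stage-1 feed
    return feed - (u2 - 0.6) ** 2

def monte_carlo_opt(f, lo, hi, n=20000):
    """Crude Monte-Carlo (random-search) maximization over [lo, hi]."""
    best_x, best_v = None, float("-inf")
    for _ in range(n):
        x = random.uniform(lo, hi)
        v = f(x)
        if v > best_v:
            best_x, best_v = x, v
    return best_x, best_v

# Sequential strategy: optimize stage 1, feed its optimal output into stage 2.
u1_opt, feed = monte_carlo_opt(stage1_output, 0.0, 1.0)
u2_opt, quality = monte_carlo_opt(lambda u2: stage2_quality(u2, feed), 0.0, 1.0)
```

Repeating the stage-2 search for different trade-off weights between the stages would trace out the Pareto front the abstract refers to.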

  9. Two-stage estimation for multivariate recurrent event data with a dependent terminal event.

    Science.gov (United States)

    Chen, Chyong-Mei; Chuang, Ya-Wen; Shen, Pao-Sheng

    2015-03-01

    Recurrent event data arise in longitudinal follow-up studies, where each subject may experience the same type of event repeatedly. The work in this article is motivated by data from a study of repeated peritonitis in patients on peritoneal dialysis. For medical and cost reasons, the peritonitis cases were classified into two types: Gram-positive and non-Gram-positive peritonitis. Further, since death and hemodialysis therapy preclude the occurrence of recurrent events, we face multivariate recurrent event data with a dependent terminal event. We propose a flexible marginal model with three characteristics: first, we assume marginal proportional hazards and proportional rates models for the terminal event time and the recurrent event processes, respectively; second, the inter-recurrence dependence and the correlation between the multivariate recurrent event processes and the terminal event time are modeled through three multiplicative frailties corresponding to the specified marginal models; third, the rate model with frailties for recurrent events is specified only on the time before the terminal event. We propose a two-stage estimation procedure for estimating the unknown parameters and establish the consistency of the two-stage estimator. Simulation studies show that the proposed approach is appropriate for practical use. The methodology is applied to the peritonitis cohort data that motivated this study. © 2014 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  10. Two-stage image segmentation based on edge and region information

    Institute of Scientific and Technical Information of China (English)

    2007-01-01

    A two-stage method for image segmentation based on edge and region information is proposed. Different deformation schemes are used at the two stages to segment the object correctly in the image plane. At the first stage, the contour of the model is divided hierarchically into several segments that deform using affine transformations. After the contour is deformed to the approximate boundary of the object, a fine-match mechanism using statistical information of the local region redefines the external energy of the model to make the contour fit the object's boundary exactly. The algorithm is effective: the hierarchical segmental deformation makes use of the global and local information of the image, the affine transformation keeps the consistency of the model, and the reformulated approaches to computing the internal and external energy reduce the algorithm's complexity. The adaptive method of defining the search area at the second stage makes the model converge quickly. The experimental results indicate that the proposed model is effective, robust to local minima, and able to search for concave objects.

  11. Efficient parallel Levenberg-Marquardt model fitting towards real-time automated parametric imaging microscopy.

    Science.gov (United States)

    Zhu, Xiang; Zhang, Dianwen

    2013-01-01

    We present a fast, accurate and robust parallel Levenberg-Marquardt minimization optimizer, GPU-LMFit, implemented on a graphics processing unit (GPU) for high-performance, scalable parallel model fitting. GPU-LMFit can provide a dramatic speed-up in massive model fitting analyses to enable real-time automated pixel-wise parametric imaging microscopy. We demonstrate the performance of GPU-LMFit in applications to superresolution localization microscopy and fluorescence lifetime imaging microscopy.
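The core of such an optimizer is the standard damped Gauss-Newton update of Levenberg-Marquardt, which a GPU implementation evaluates independently for every pixel. A serial NumPy sketch of that update, fitting a mono-exponential decay as in a fluorescence-lifetime fit, might look as follows; the function names are illustrative, not GPU-LMFit's API.

```python
import numpy as np

def levenberg_marquardt(model, jac, p0, t, y, n_iter=100, lam=1e-3):
    """Serial Levenberg-Marquardt loop (NumPy sketch).  GPU-LMFit runs one
    such fit per pixel in parallel; this version only illustrates the
    damped normal-equation update."""
    p = np.asarray(p0, dtype=float)
    cost = np.sum((model(p, t) - y) ** 2)
    for _ in range(n_iter):
        r = model(p, t) - y                 # residuals
        J = jac(p, t)                       # Jacobian, shape (n_points, n_params)
        JTJ, g = J.T @ J, J.T @ r
        # Damped normal equations: (J^T J + lam * diag(J^T J)) step = J^T r
        step = np.linalg.solve(JTJ + lam * np.diag(np.diag(JTJ)), g)
        trial = p - step
        trial_cost = np.sum((model(trial, t) - y) ** 2)
        if trial_cost < cost:               # accept step: relax damping
            p, cost, lam = trial, trial_cost, lam / 10
        else:                               # reject step: increase damping
            lam *= 10
    return p

# Mono-exponential decay, as in fluorescence lifetime imaging: y = A * exp(-t/tau)
decay = lambda p, t: p[0] * np.exp(-t / p[1])
decay_jac = lambda p, t: np.column_stack([
    np.exp(-t / p[1]),                           # d/dA
    p[0] * np.exp(-t / p[1]) * t / p[1] ** 2,    # d/dtau
])

t = np.linspace(0.0, 5.0, 50)
y = decay([2.0, 1.5], t)                 # noise-free synthetic data
A_fit, tau_fit = levenberg_marquardt(decay, decay_jac, [1.0, 1.0], t, y)
```

Because each pixel's fit is independent, the loop above is embarrassingly parallel across pixels, which is what makes the GPU mapping effective.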

  12. Efficient Parallel Levenberg-Marquardt Model Fitting towards Real-Time Automated Parametric Imaging Microscopy

    OpenAIRE

    Xiang Zhu; Dianwen Zhang

    2013-01-01

    We present a fast, accurate and robust parallel Levenberg-Marquardt minimization optimizer, GPU-LMFit, which is implemented on graphics processing unit for high performance scalable parallel model fitting processing. GPU-LMFit can provide a dramatic speed-up in massive model fitting analyses to enable real-time automated pixel-wise parametric imaging microscopy. We demonstrate the performance of GPU-LMFit for the applications in superresolution localization microscopy and fluorescence lifetim...

  13. Describing, using 'recognition cones'. [parallel-series model with English-like computer program

    Science.gov (United States)

    Uhr, L.

    1973-01-01

    A parallel-serial 'recognition cone' model is examined, taking into account the model's ability to describe scenes of objects. An actual program is presented in an English-like language. The concept of a 'description' is discussed together with possible types of descriptive information. Questions regarding the level and the variety of detail are considered along with approaches for improving the serial representations of parallel systems.

  14. Partitioning and packing mathematical simulation models for calculation on parallel computers

    Science.gov (United States)

    Arpasi, D. J.; Milner, E. J.

    1986-01-01

    The development of multiprocessor simulations from a serial set of ordinary differential equations describing a physical system is described. Degrees of parallelism (i.e., coupling between the equations) and their impact on parallel processing are discussed. The problem of identifying computational parallelism within sets of closely coupled equations that require the exchange of current values of variables is described. A technique is presented for identifying this parallelism and for partitioning the equations for parallel solution on a multiprocessor. An algorithm which packs the equations into a minimum number of processors is also described. The results of the packing algorithm when applied to a turbojet engine model are presented in terms of processor utilization.

  15. Two-Stage Maximum Likelihood Estimation (TSMLE) for MT-CDMA Signals in the Indoor Environment

    Directory of Open Access Journals (Sweden)

    Sesay Abu B

    2004-01-01

    Full Text Available This paper proposes a two-stage maximum likelihood estimation (TSMLE) technique suited for multitone code division multiple access (MT-CDMA) systems. Here, an analytical framework is presented in the indoor environment for determining the average bit error rate (BER) of the system over Rayleigh and Ricean fading channels. The analytical model is derived for the quadrature phase shift keying (QPSK) modulation technique by taking into account the number of tones, signal bandwidth (BW), bit rate, and transmission power. Numerical results are presented to validate the analysis and to justify the approximations made therein. Moreover, these results are shown to agree completely with those obtained by simulation.

  16. Accuracy of the One-Stage and Two-Stage Impression Techniques: A Comparative Analysis

    OpenAIRE

    Ladan Jamshidy; Hamid Reza Mozaffari; Payam Faraji; Roohollah Sharifi

    2016-01-01

    Introduction. One of the main steps of impression is the selection and preparation of an appropriate tray. Hence, the present study aimed to analyze and compare the accuracy of one- and two-stage impression techniques. Materials and Methods. A resin laboratory-made model, as the first molar, was prepared by a standard method for full crowns with a processed preparation finish line of 1 mm depth and a convergence angle of 3-4°. Impressions were made 20 times with the one-stage technique and 20 times with ...

  17. Parallelization of fine-scale computation in Agile Multiscale Modelling Methodology

    Science.gov (United States)

    Macioł, Piotr; Michalik, Kazimierz

    2016-10-01

    Nowadays, multiscale modelling of material behavior is an extensively developed area. An important obstacle to its wide application is its high computational demands. Among other solutions, the parallelization of multiscale computations is promising. Heterogeneous multiscale models are good candidates for parallelization, since communication between sub-models is limited. In this paper, the possibility of parallelizing multiscale models based on the Agile Multiscale Methodology framework is discussed. A sequential, FEM-based macroscopic model has been combined with concurrently computed fine-scale models employing the MatCalc thermodynamic simulator. The main issues investigated in this work are: (i) the speed-up of multiscale models, with special focus on fine-scale computations, and (ii) the decrease in the quality of computations enforced by parallel execution. Speed-up has been evaluated on the basis of Amdahl's law. The problem of 'delay error', arising from the parallel execution of fine-scale sub-models controlled by the sequential macroscopic sub-model, is discussed. Some technical aspects of combining third-party commercial modelling software with an in-house multiscale framework and an MPI library are also discussed.
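Amdahl's law, used above to evaluate speed-up, bounds the gain from parallelizing only the fine-scale fraction of the runtime. A minimal sketch, with an assumed 90% parallel fraction chosen purely for illustration:

```python
def amdahl_speedup(parallel_fraction, n_processors):
    """Amdahl's law: S(n) = 1 / ((1 - f) + f / n), where f is the fraction
    of the runtime that parallelizes (here, the concurrently computed
    fine-scale sub-models)."""
    f = parallel_fraction
    return 1.0 / ((1.0 - f) + f / n_processors)

# If 90% of a multiscale run is concurrent fine-scale computation,
# 16 workers give a speed-up well short of the 16x ideal.
print(round(amdahl_speedup(0.9, 16), 2))  # → 6.4
```

The sequential macroscopic sub-model is the (1 - f) term that caps the achievable speed-up no matter how many processors are added.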

  18. Two-Stage Exams Improve Student Learning in an Introductory Geology Course: Logistics, Attendance, and Grades

    Science.gov (United States)

    Knierim, Katherine; Turner, Henry; Davis, Ralph K.

    2015-01-01

    Two-stage exams--where students complete part one of an exam closed book and independently and part two is completed open book and independently (two-stage independent, or TS-I) or collaboratively (two-stage collaborative, or TS-C)--provide a means to include collaborative learning in summative assessments. Collaborative learning has been shown to…

  19. Efficiently parallelized modeling of tightly focused, large bandwidth laser pulses

    Science.gov (United States)

    Dumont, Joey; Fillion-Gourdeau, François; Lefebvre, Catherine; Gagnon, Denis; MacLean, Steve

    2017-02-01

    The Stratton-Chu integral representation of electromagnetic fields is used to study the spatio-temporal properties of large bandwidth laser pulses focused by high numerical aperture mirrors. We review the formal aspects of the derivation of diffraction integrals from the Stratton-Chu representation and discuss the use of the Hadamard finite part in the derivation of the physical optics approximation. By analyzing the formulation we show that, for the specific case of a parabolic mirror, the integrands involved in the description of the reflected field near the focal spot do not possess the strong oscillations characteristic of diffraction integrals. Consequently, the integrals can be evaluated with simple and efficient quadrature methods rather than with specialized, more costly approaches. We report on the development of an efficiently parallelized algorithm that evaluates the Stratton-Chu diffraction integrals for incident fields of arbitrary temporal and spatial dependence. This method has the advantage that its input is the unfocused field coming from the laser chain, which is experimentally known with high accuracy. We use our method to show that the reflection of a linearly polarized Gaussian beam of femtosecond duration off a high numerical aperture parabolic mirror induces ellipticity in the dominant field components and generates strong longitudinal components. We also estimate that future high-power laser facilities may reach intensities of 10^24 W cm^-2.

  20. Parallel Lagrangian models for turbulent transport and chemistry

    NARCIS (Netherlands)

    Crone, Gilia Cornelia

    1997-01-01

    In this thesis we give an overview of recent stochastic Lagrangian models and present a new particle model for turbulent dispersion and chemical reactions. Our purpose is to investigate and assess the feasibility of the Lagrangian approach for modelling the turbulent dispersion and chemistry

  1. Parallel Development of Products and New Business Models

    DEFF Research Database (Denmark)

    Lund, Morten; Hansen, Poul H. Kyvsgård

    2014-01-01

    The perception of product development and the practical execution of product development in professional organizations have undergone dramatic changes in recent years. Many of these changes relate to introduction of broader and more cross-disciplinary views that involve new organizational functi...... and innovation management the 4th generation models are increasingly including the concept business models and business model innovation....

  2. [Parallel PLS algorithm using MapReduce and its application in spectral modeling].

    Science.gov (United States)

    Yang, Hui-Hua; Du, Ling-Ling; Li, Ling-Qiao; Tang, Tian-Biao; Guo, Tuo; Liang, Qiong-Lin; Wang, Yi-Ming; Luo, Guo-An

    2012-09-01

    Partial least squares (PLS) has been widely used in spectral analysis and modeling, and it is computation-intensive and time-demanding when dealing with massive data. To solve this problem effectively, a novel parallel PLS using MapReduce is proposed, which consists of two procedures: the parallelization of data standardizing and the parallelization of principal component computing. Using NIR spectral modeling as an example, experiments were conducted on a Hadoop cluster, a collection of ordinary computers. The experimental results demonstrate that the proposed parallel PLS algorithm can handle massive spectra, significantly cuts down the modeling time, gains a basically linear speedup, and can be easily scaled up.
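The first procedure, parallel data standardization, fits MapReduce naturally because per-chunk sums and sums of squares are sufficient statistics that combine associatively. A minimal sketch, emulating the map and reduce phases sequentially rather than on an actual Hadoop cluster; the function names are illustrative assumptions:

```python
import numpy as np
from functools import reduce

def map_stats(chunk):
    """Map phase: per-chunk sufficient statistics
    (row count, column sums, column sums of squares)."""
    return chunk.shape[0], chunk.sum(axis=0), (chunk ** 2).sum(axis=0)

def reduce_stats(a, b):
    """Reduce phase: combine the statistics of two chunks."""
    return a[0] + b[0], a[1] + b[1], a[2] + b[2]

def standardize_mapreduce(chunks):
    """Column-standardize spectra split into row chunks, MapReduce-style.
    The two phases are emulated sequentially here; on a Hadoop cluster
    each map_stats call would run on a different node."""
    n, s, ss = reduce(reduce_stats, map(map_stats, chunks))
    mean = s / n
    std = np.sqrt(ss / n - mean ** 2)        # population std (ddof = 0)
    return [(c - mean) / std for c in chunks]

rng = np.random.default_rng(42)
X = rng.normal(size=(90, 5))                 # 90 "spectra", 5 wavelengths
chunks = np.array_split(X, 3)                # three map tasks
Z = np.vstack(standardize_mapreduce(chunks))
```

The principal-component phase of parallel PLS parallelizes the same way, by accumulating per-chunk cross-product matrices in the map phase.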

  3. Parallel Motion Simulation of Large-Scale Real-Time Crowd in a Hierarchical Environmental Model

    Directory of Open Access Journals (Sweden)

    Xin Wang

    2012-01-01

    Full Text Available This paper presents a parallel real-time crowd simulation method based on a hierarchical environmental model. A dynamical model of the complex environment should be constructed to simulate the state transition and propagation of individual motions. By modeling a virtual environment where virtual crowds reside, we employ different parallel methods on a topological layer, a path layer and a perceptual layer. We propose a parallel motion path matching method based on the path layer and a parallel crowd simulation method based on the perceptual layer. Large-scale real-time crowd simulation becomes possible with these methods. Numerical experiments are carried out to demonstrate the methods and results.

  4. LARGE SIGNAL DISCRETE-TIME MODEL FOR PARALLELED BUCK CONVERTERS

    Institute of Scientific and Technical Information of China (English)

    2002-01-01

    As a number of switch combinations are involved in the operation of a multi-converter system, conventional methods for obtaining discrete-time large-signal models of these converter systems result in a very complex solution. A simple sampled-data technique for modeling a distributed dc-dc PWM converters system (DCS) was proposed. The resulting model is nonlinear and can be linearized for the analysis and design of a DCS. These models are also suitable for fast simulation of these networks. As the input and output of dc-dc converters are slowly varying, a suitable model for the DCS was obtained in terms of a finite-order input/output approximation.

  5. Efficiency Analysis of Agricultural Listed Companies with Concept of Land Circulation Based on DEA-Tobit Two-stage Model

    Institute of Scientific and Technical Information of China (English)

    吴未双; 刘文

    2015-01-01

    We analyzed the operational efficiency and its influencing factors of ten agricultural listed companies with the concept of land circulation during 2010-2013 based on the DEA-Tobit two-stage model, and found that: the overall average comprehensive efficiency value of these companies was relatively high, but there were differences in efficiency among the companies; the main business growth rate and total asset turnover had significantly positive influences on the operational efficiency of companies; the degree of diversification and the per capita cost had significantly negative influences on the operational efficiency, but the asset-liability ratio had no significant influence on it. On the basis of the above results, the authors give several related suggestions for reference to the agricultural listed companies involved in land circulation.

  6. The Parallelism of Traditional Transaction Model

    Institute of Scientific and Technical Information of China (English)

    张志强; 李建中; 周立柱

    2001-01-01

    The transaction is a very important concept in DBMSs, with properties of consistency, atomicity, durability and isolation. In this paper, we first analyze the parallelism of the traditional transaction model. Next we point out that more parallelism can be exploited with a highly parallel processing manner on multi-processor parallel architectures. We then compare the influence of two different software architectures on database system parallelism.

  7. Double-layer parallelization for hydrological model calibration on HPC systems

    Science.gov (United States)

    Zhang, Ang; Li, Tiejian; Si, Yuan; Liu, Ronghua; Shi, Haiyun; Li, Xiang; Li, Jiaye; Wu, Xia

    2016-04-01

    Large-scale problems that demand high precision have remarkably increased the computational time of numerical simulation models. Therefore, the parallelization of models has been widely implemented in recent years. However, computing time remains a major challenge when a large model is calibrated using optimization techniques. To overcome this difficulty, we proposed a double-layer parallel system for hydrological model calibration using high-performance computing (HPC) systems. The lower-layer parallelism is achieved using a hydrological model, the Digital Yellow River Integrated Model, which was parallelized by decomposing river basins. The upper-layer parallelism is achieved by simultaneous hydrological simulations with different parameter combinations in the same generation of the genetic algorithm and is implemented using the job scheduling functions of an HPC system. The proposed system was applied to the upstream of the Qingjian River basin, a sub-basin of the middle Yellow River, to calibrate the model effectively by making full use of the computing resources in the HPC system and to investigate the model's behavior under various parameter combinations. This approach is applicable to most of the existing hydrology models for many applications.
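The upper layer of such a system amounts to concurrent evaluation of one genetic-algorithm generation's parameter combinations. The toy below uses a thread pool in place of the HPC job scheduler and a quadratic error in place of a Digital Yellow River Integrated Model run; all names and numbers are illustrative assumptions:

```python
import random
from concurrent.futures import ThreadPoolExecutor

random.seed(1)

TRUE_PARAMS = (0.4, 2.0)   # hypothetical "true" runoff parameters

def calibration_error(params):
    """Stand-in for one full hydrological simulation plus its error measure;
    in the paper each such evaluation is itself parallelized across
    sub-basins (the lower layer)."""
    return sum((p - t) ** 2 for p, t in zip(params, TRUE_PARAMS))

def evolve(population, sigma=0.1, keep=5):
    """One GA generation: evaluate all candidates concurrently (upper layer),
    keep the best `keep`, refill by Gaussian mutation of survivors."""
    with ThreadPoolExecutor(max_workers=4) as pool:  # stands in for HPC job slots
        scored = sorted(zip(pool.map(calibration_error, population), population))
    elite = [p for _, p in scored[:keep]]
    children = [tuple(x + random.gauss(0.0, sigma) for x in random.choice(elite))
                for _ in range(len(population) - keep)]
    return elite + children, scored[0][0]

population = [(random.uniform(0, 1), random.uniform(0, 4)) for _ in range(20)]
best_error = float("inf")
for _ in range(30):
    population, generation_best = evolve(population)
    best_error = min(best_error, generation_best)
```

Because all candidates in a generation are independent, the upper layer scales with the number of simultaneously schedulable model runs.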

  8. Generating the option of a two-stage nuclear renaissance.

    Science.gov (United States)

    Grimes, Robin W; Nuttall, William J

    2010-08-13

    Concerns about climate change, security of supply, and depleting fossil fuel reserves have spurred a revival of interest in nuclear power generation in Europe and North America, while other regions continue or initiate an expansion. We suggest that the first stage of this process will include replacing or extending the life of existing nuclear power plants, with continued incremental improvements in efficiency and reliability. After 2030, a large-scale second period of construction would allow nuclear energy to contribute substantially to the decarbonization of electricity generation. For nuclear energy to be sustainable, new large-scale fuel cycles will be required that may include fuel reprocessing. Here, we explore the opportunities and constraints in both time periods and suggest ways in which measures taken today might, at modest cost, provide more options in the decades to come. Careful long-term planning, along with parallel efforts aimed at containing waste products and avoiding diversion of material into weapons production, can ensure that nuclear power generation remains a carbon-neutral option.

  9. Multiobjective Two-Stage Stochastic Programming Problems with Interval Discrete Random Variables

    Directory of Open Access Journals (Sweden)

    S. K. Barik

    2012-01-01

    Full Text Available Most real-life decision-making problems have more than one conflicting and incommensurable objective function. In this paper, we present a multiobjective two-stage stochastic linear programming problem, considering some parameters of the linear constraints as interval-type discrete random variables with known probability distributions. Randomness of the discrete intervals is considered for the model parameters. Further, the concepts of best optimum and worst optimum solutions are analyzed in two-stage stochastic programming. To solve the stated problem, we first remove the randomness of the problem and formulate an equivalent deterministic linear programming model with multiobjective interval coefficients. Then the deterministic multiobjective model is solved using the weighting method, where we apply the solution procedure of the interval linear programming technique. We obtain the upper and lower bounds of the objective function as the best and the worst values, respectively. This highlights the possible risk involved in decision making. A numerical example is presented to demonstrate the proposed solution procedure.
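The weighting method mentioned above scalarizes the two objectives into a single one; each weight in (0, 1) yields one Pareto-optimal solution. A minimal sketch over a hypothetical discrete set of first-stage decisions (all names and numbers are illustrative, not from the paper's example):

```python
# Candidate first-stage decisions with their two conflicting objective values,
# e.g. (expected cost, expected risk); the numbers are illustrative only.
candidates = {
    "plan_a": (1.0, 9.0),
    "plan_b": (3.0, 4.0),
    "plan_c": (6.0, 2.0),
    "plan_d": (7.0, 7.0),   # dominated by plan_b, never optimal
}

def weighted_optimum(weight):
    """Weighting method: minimize the scalarized objective w*f1 + (1-w)*f2.
    Each weight in (0, 1) yields a Pareto-optimal solution."""
    return min(candidates, key=lambda k: weight * candidates[k][0]
                                         + (1 - weight) * candidates[k][1])

# Sweeping the weight traces out (part of) the Pareto front.
pareto_picks = {weighted_optimum(w) for w in (0.2, 0.5, 0.8)}
```

Note that a dominated alternative such as plan_d can never minimize the weighted sum for any positive weights, which is exactly why the weighting method returns only Pareto-optimal points.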

  10. An integrated two-stage support vector machine approach to forecast inundation maps during typhoons

    Science.gov (United States)

    Jhong, Bing-Chen; Wang, Jhih-Huang; Lin, Gwo-Fong

    2017-04-01

    During typhoons, accurate forecasts of hourly inundation depths are essential for inundation warning and mitigation. Because observed inundation maps are scarce, sufficient data are not available for developing inundation forecasting models. In this paper, inundation depths simulated and validated by a physically based two-dimensional model (FLO-2D) are used as a database for inundation forecasting. A two-stage inundation forecasting approach based on Support Vector Machine (SVM) is proposed to yield 1- to 6-h lead-time inundation maps during typhoons. In the first stage (point forecasting), the proposed approach considers not only the rainfall intensity and inundation depth as model input but also cumulative rainfall and forecasted inundation depths. In the second stage (spatial expansion), the geographic information of inundation grids and the inundation forecasts of reference points are used to yield inundation maps. The results clearly indicate that the proposed approach effectively improves the forecasting performance and decreases the negative impact of increasing forecast lead time. Moreover, the proposed approach is capable of providing accurate inundation maps for 1- to 6-h lead times. In conclusion, the proposed two-stage forecasting approach is suitable and useful for improving inundation forecasting during typhoons, especially for long lead times.

  11. Dynamic Modeling and Analysis of Power Coupling System with Two-stage Planetary Gear Trains for Hybrid System

    Institute of Scientific and Technical Information of China (English)

    罗玉涛; 陈营生

    2012-01-01

    A power coupling system with two-stage planetary gear trains is considered, which is part of a novel hybrid power train system based on a double-rotor motor. Fundamental factors are taken into account, such as the mesh stiffness of the gear pairs of the two-stage planetary gears, the torsional support stiffness of the central members, the torsional coupling stiffness of the connecting section, and the inertia of each component. A purely torsional dynamic model of the coupling system is derived and developed in detail. The eigenvalue problem is solved using the relevant parameters of the two-stage planetary gear trains to obtain the natural characteristics of the overall system model. According to the mode shapes, the vibration of the system is classified into three modes: an overall rotational mode, a front-row planet mode, and a rear-row planet mode. In the overall mode, each natural frequency is a simple root and every component of the system undergoes torsional vibration to some extent; in the front- and rear-row planet modes, the natural frequencies are double roots and no component vibrates except the planet gears themselves. The mode characteristics obtained from this analysis agree with previous conclusions in the literature. The influence of the coupling stiffness of the connecting section on the vibration characteristics of the system is also pointed out and analyzed preliminarily.

  12. Design and Performance Analysis of a Massively Parallel Atmospheric General Circulation Model

    Directory of Open Access Journals (Sweden)

    Daniel S. Schaffer

    2000-01-01

    Full Text Available In the 1990's, computer manufacturers are increasingly turning to the development of parallel processor machines to meet the high performance needs of their customers. Simultaneously, atmospheric scientists studying weather and climate phenomena ranging from hurricanes to El Niño to global warming require increasingly fine resolution models. Here, the implementation of a parallel atmospheric general circulation model (GCM) which exploits the power of massively parallel machines is described. Using the horizontal data domain decomposition methodology, this FORTRAN 90 model is able to integrate a 0.6° longitude by 0.5° latitude problem at a rate of 19 Gigaflops on 512 processors of a Cray T3E 600, corresponding to 280 seconds of wall-clock time per simulated model day. At this resolution, the model has 64 times as many degrees of freedom and performs 400 times as many floating point operations per simulated day as the model it replaces.
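
    The horizontal data domain decomposition mentioned above amounts to assigning each processor a rectangular longitude-latitude subdomain. A minimal sketch, assuming a 600 x 360 grid (consistent with the 0.6° x 0.5° resolution) and a hypothetical 32 x 16 logical processor layout for the 512 processors:

```python
def decompose(nlon, nlat, px, py):
    """Assign each of px*py processors a rectangular longitude-latitude
    subdomain of an nlon x nlat grid, spreading remainder cells over the
    leading ranks (the horizontal data domain decomposition pattern)."""
    def split(n, p, i):
        base, rem = divmod(n, p)
        start = i * base + min(i, rem)
        return start, start + base + (1 if i < rem else 0)
    domains = {}
    for j in range(py):
        for i in range(px):
            lon0, lon1 = split(nlon, px, i)
            lat0, lat1 = split(nlat, py, j)
            domains[j * px + i] = (lon0, lon1, lat0, lat1)
    return domains
```

    The subdomains tile the grid exactly: every cell belongs to one rank, and load imbalance is at most one row or column of cells.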

  13. Parallel computation for biological sequence comparison: comparing a portable model to the native model for the Intel Hypercube.

    Science.gov (United States)

    Nadkarni, P M; Miller, P L

    1991-01-01

    A parallel program for inter-database sequence comparison was developed on the Intel Hypercube using two models of parallel programming. One version was built using machine-specific Hypercube parallel programming commands. The other version was built using Linda, a machine-independent parallel programming language. The two versions of the program provide a case study comparing these two approaches to parallelization in an important biological application area. Benchmark tests with both programs gave comparable results with a small number of processors. As the number of processors was increased, the Linda version was somewhat less efficient. The Linda version was also run without change on Network Linda, a virtual parallel machine running on a network of desktop workstations.

  14. Running Large-Scale Air Pollution Models on Parallel Computers

    DEFF Research Database (Denmark)

    Georgiev, K.; Zlatev, Z.

    2000-01-01

    Proceedings of the 23rd NATO/CCMS International Technical Meeting on Air Pollution Modeling and Its Application, held 28 September - 2 October 1998, in Varna, Bulgaria.

  15. Application of Parallel Algorithms in an Air Pollution Model

    DEFF Research Database (Denmark)

    Georgiev, K.; Zlatev, Z.

    1999-01-01

    Proceedings of the NATO Advanced Research Workshop on Large Scale Computations in Air Pollution Modelling, Sofia, Bulgaria, 6-10 July 1998.

  16. F-Nets and Software Cabling: Deriving a Formal Model and Language for Portable Parallel Programming

    Science.gov (United States)

    DiNucci, David C.; Saini, Subhash (Technical Monitor)

    1998-01-01

    Parallel programming is still being based upon antiquated sequence-based definitions of the terms "algorithm" and "computation", resulting in programs which are architecture dependent and difficult to design and analyze. By focusing on obstacles inherent in existing practice, a more portable model is derived here, which is then formalized into a model called Soviets which utilizes a combination of imperative and functional styles. This formalization suggests more general notions of algorithm and computation, as well as insights into the meaning of structured programming in a parallel setting. To illustrate how these principles can be applied, a very-high-level graphical architecture-independent parallel language, called Software Cabling, is described, with many of the features normally expected from today's computer languages (e.g. data abstraction, data parallelism, and object-based programming constructs).

  17. Modeling parallelization and flexibility improvements in skill acquisition: From dual tasks to complex dynamic skills

    NARCIS (Netherlands)

    Taatgen, N

    2005-01-01

    Emerging parallel processing and increased flexibility during the acquisition of cognitive skills form a combination that is hard to reconcile with rule-based models that often produce brittle behavior. Rule-based models can exhibit these properties by adhering to 2 principles: that the model gradua

  18. Parallelized Genetic Identification of the Thermal-Electrochemical Model for Lithium-Ion Battery

    Directory of Open Access Journals (Sweden)

    Liqiang Zhang

    2013-01-01

    Full Text Available The parameters of a well-predicted model can be used as health characteristics for a Lithium-ion battery. This article reports a parallelized parameter identification of the thermal-electrochemical model, which significantly reduces the time consumed by parameter identification. Since the P2D model has the most predictability, it is chosen for further research and expanded into a thermal-electrochemical model by coupling the thermal effect and temperature-dependent parameters. A Genetic Algorithm is then used for parameter identification, but it takes too much time because of the long simulation time of the model. For this reason, a computer cluster is built from surplus computing resources in our laboratory based on the Parallel Computing Toolbox and Distributed Computing Server in MATLAB. The performance of two parallelized methods, namely Single Program Multiple Data (SPMD) and the parallel FOR loop (PARFOR), is investigated, and a parallelized GA identification is proposed. With this method, model simulations run in parallel and the parameter identification is sped up more than a dozen times, and the identification result is better than that obtained from the serial GA. This conclusion is validated by model parameter identification of a real LiFePO4 battery.
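
    The PARFOR pattern, where each candidate parameter set's fitness is evaluated independently and in parallel, can be sketched as follows. This is a hedged stand-in: a cheap quadratic surrogate replaces the expensive thermal-electrochemical (P2D) simulation, and a thread pool illustrates the structure (true CPU parallelism, as in MATLAB's Distributed Computing Server, would use worker processes across machines).

```python
import random
from concurrent.futures import ThreadPoolExecutor

# Hypothetical stand-in objective: a cheap surrogate replacing the expensive
# P2D simulation; `target` plays the role of the "true" battery parameters.
def simulate_error(params, target=(0.5, 0.2, 0.8)):
    return sum((p - t) ** 2 for p, t in zip(params, target))

def parallel_ga(pop_size=40, dims=3, gens=60, workers=4, seed=1):
    rng = random.Random(seed)
    pop = [[rng.random() for _ in range(dims)] for _ in range(pop_size)]
    with ThreadPoolExecutor(max_workers=workers) as pool:
        for _ in range(gens):
            # PARFOR-style step: each individual's fitness is independent,
            # so all evaluations are dispatched to the worker pool at once.
            fits = list(pool.map(simulate_error, pop))
            ranked = [p for _, p in sorted(zip(fits, pop))]
            elite = ranked[: pop_size // 2]          # elitist selection
            children = []
            while len(elite) + len(children) < pop_size:
                a, b = rng.sample(elite, 2)
                cut = rng.randrange(dims)
                child = a[:cut] + b[cut:]            # one-point crossover
                k = rng.randrange(dims)              # Gaussian mutation
                child[k] = min(1.0, max(0.0, child[k] + rng.gauss(0, 0.05)))
                children.append(child)
            pop = elite + children
    fits = [simulate_error(p) for p in pop]
    return min(zip(fits, pop))
```

    Because the fitness evaluations dominate the run time in the real application, the speed-up from parallelizing this loop approaches the number of workers.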

  19. Forward and backward models for fault diagnosis based on parallel genetic algorithms

    Institute of Scientific and Technical Information of China (English)

    Yi LIU; Ying LI; Yi-jia CAO; Chuang-xin GUO

    2008-01-01

    In this paper, a mathematical model consisting of forward and backward models is built on parallel genetic algorithms (PGAs) for fault diagnosis in a transmission power system. A new method to reduce the scale of fault sections is developed in the forward model, and the message passing interface (MPI) approach is chosen to parallelize the genetic algorithms by the global single-population master-slave method (GPGAs). The proposed approach is applied to a sample system consisting of 28 sections, 84 protective relays and 40 circuit breakers. Simulation results show that the new model based on GPGAs can achieve very fast computation in online applications of large-scale power systems.

  20. A cause and effect two-stage BSC-DEA method for measuring the relative efficiency of organizations

    Directory of Open Access Journals (Sweden)

    Seyed Esmaeel Najafi

    2011-01-01

    Full Text Available This paper presents an integration of the balanced scorecard (BSC) with two-stage data envelopment analysis (DEA). The proposed model uses different financial and non-financial perspectives to evaluate the performance of decision making units in different BSC stages. At each stage, a two-stage DEA method is implemented to measure the relative efficiency of the decision making units, and the results are monitored using the cause and effect relationships. An empirical study for a banking sector is also performed using the method developed in this paper, and the results are briefly analyzed.

  1. Block and parallel modelling of broad domain nonlinear continuous mapping based on NN

    Institute of Scientific and Technical Information of China (English)

    Yang Guowei; Tu Xuyan; Wang Shoujue

    2006-01-01

    The necessity of block and parallel modeling of nonlinear continuous mappings with neural networks (NN) is first expounded quantitatively. Then, a practical approach for the block and parallel modeling of nonlinear continuous mappings with NN is proposed. Finally, an example is given indicating that the method presented in this paper can be implemented with suitable existing software. The results of the experiment on the 3-D Mexican straw hat indicate that block and parallel modeling based on NN is more precise and computationally faster than direct modeling, and that it serves as a concrete example of, and a development of, the large-scale general model established by Tu Xuyan.

  2. Complex Dynamical Behavior of a Two-Stage Colpitts Oscillator with Magnetically Coupled Inductors

    Directory of Open Access Journals (Sweden)

    V. Kamdoum Tamba

    2014-01-01

    Full Text Available A five-dimensional (5D) controlled two-stage Colpitts oscillator is introduced and analyzed. This new electronic oscillator is constructed by extending the well-known two-stage Colpitts oscillator with two further elements (coupled inductors and a variable resistor). In contrast to current approaches based on a piecewise linear (PWL) model, we propose a smooth mathematical model (with exponential nonlinearity) to investigate the dynamics of the oscillator. Several issues, such as the basic dynamical behaviour, bifurcation diagrams, Lyapunov exponents, and frequency spectra of the oscillator, are investigated theoretically and numerically by varying a single control resistor. It is found that the oscillator moves from a fixed-point state to chaos via the usual period-doubling and interior-crisis routes as the single control resistor is varied. Furthermore, an experimental study of the controlled Colpitts oscillator is carried out. An appropriate electronic circuit is proposed for investigating the complex dynamical behaviour of the system. Very good qualitative agreement is obtained between the theoretical/numerical and experimental results.
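
    One of the numerical tools mentioned above, the largest Lyapunov exponent, can be estimated from two nearby trajectories via Benettin's renormalization method. The sketch below applies it to the classic Lorenz system as a stand-in smooth chaotic oscillator, since the paper's 5D Colpitts equations are not reproduced here:

```python
import math

# The classic Lorenz system stands in for a smooth chaotic oscillator
# (the paper's 5D Colpitts model is not reproduced here).
def lorenz(s, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    x, y, z = s
    return (sigma * (y - x), x * (rho - z) - y, x * y - beta * z)

def rk4_step(f, s, h):
    k1 = f(s)
    k2 = f(tuple(si + 0.5 * h * ki for si, ki in zip(s, k1)))
    k3 = f(tuple(si + 0.5 * h * ki for si, ki in zip(s, k2)))
    k4 = f(tuple(si + h * ki for si, ki in zip(s, k3)))
    return tuple(si + h / 6.0 * (a + 2 * b + 2 * c + d)
                 for si, a, b, c, d in zip(s, k1, k2, k3, k4))

def largest_lyapunov(f, s0, h=0.01, steps=20000, d0=1e-8):
    """Benettin's method: integrate a reference and a perturbed trajectory,
    renormalizing their separation back to d0 each step and averaging the
    accumulated log stretch factors."""
    a, b = s0, (s0[0] + d0,) + tuple(s0[1:])
    acc = 0.0
    for _ in range(steps):
        a = rk4_step(f, a, h)
        b = rk4_step(f, b, h)
        d = math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
        acc += math.log(d / d0)
        b = tuple(x + d0 * (y - x) / d for x, y in zip(a, b))
    return acc / (steps * h)
```

    For the Lorenz parameters used here the estimate lands near the literature value of roughly 0.9; a positive result is the numerical signature of chaos that the bifurcation analysis above looks for.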

  3. On bi-criteria two-stage transportation problem: a case study

    Directory of Open Access Journals (Sweden)

    Ahmad MURAD

    2010-01-01

    Full Text Available The study of the optimum distribution of goods between sources and destinations is one of the important topics in project economics. This importance comes as a result of minimizing transportation cost, deterioration, time, etc. The classical transportation problem constitutes one of the major areas of application for linear programming. The aim of this problem is to obtain the optimum distribution of goods from different sources to different destinations which minimizes the total transportation cost. From the practical point of view, transportation problems may differ from the classical form. They may contain one or more objective functions, one or more stages of transport, and one or more types of commodity with one or more means of transport. The aim of this paper is to construct an optimization model of the transportation problem for a millstone company. The model is formulated as a bi-criteria two-stage transportation problem with a special structure depending on the capacities of the suppliers and warehouses and the requirements of the destinations. A solution algorithm is introduced to solve this class of bi-criteria two-stage transportation problems, obtaining the set of non-dominated extreme points and the efficient solutions accompanying each one, which enables the decision maker to choose the best one. The solution algorithm is mainly based on the fruitful application of methods for treating transportation problems, the theory of duality in linear programming, and methods for solving bi-criteria linear programming problems.
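
    The notion of non-dominated points in a bi-criteria two-stage problem can be illustrated on a toy instance. The data below are hypothetical, and brute-force enumeration replaces the paper's duality-based algorithm; each route carries a (cost, time) pair per unit shipped:

```python
from itertools import product

# Toy instance: one supplier, two warehouses (W1, W2), two destinations
# (D1, D2); each arc has a (cost, time) pair per unit. All data hypothetical.
stage1 = {"W1": (2, 5), "W2": (3, 2)}                    # supplier -> warehouse
stage2 = {("W1", "D1"): (1, 4), ("W1", "D2"): (4, 1),
          ("W2", "D1"): (2, 2), ("W2", "D2"): (1, 3)}
SUPPLY, DEMAND = 4, {"D1": 2, "D2": 2}

def objectives(x_w1, a11, a21):
    """x_w1 units go supplier->W1 (rest to W2); a11 = W1->D1, a21 = W2->D1.
    Returns (total cost, total time) or None if the flow is infeasible."""
    x_w2 = SUPPLY - x_w1
    flows = {("W1", "D1"): a11, ("W1", "D2"): x_w1 - a11,
             ("W2", "D1"): a21, ("W2", "D2"): x_w2 - a21}
    if any(v < 0 for v in flows.values()):
        return None
    if a11 + a21 != DEMAND["D1"] or sum(flows.values()) != SUPPLY:
        return None
    cost = x_w1 * stage1["W1"][0] + x_w2 * stage1["W2"][0] + \
           sum(f * stage2[r][0] for r, f in flows.items())
    time = x_w1 * stage1["W1"][1] + x_w2 * stage1["W2"][1] + \
           sum(f * stage2[r][1] for r, f in flows.items())
    return cost, time

def pareto_front():
    pts = {p for p in (objectives(x, a, b)
                       for x, a, b in product(range(5), repeat=3))
           if p is not None}
    return sorted(p for p in pts
                  if not any(q != p and q[0] <= p[0] and q[1] <= p[1]
                             for q in pts))
```

    Sorting the surviving points by cost yields the cost/time trade-off curve from which the decision maker picks a compromise.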

  4. Two-stage effects of awareness cascade on epidemic spreading in multiplex networks

    Science.gov (United States)

    Guo, Quantong; Jiang, Xin; Lei, Yanjun; Li, Meng; Ma, Yifang; Zheng, Zhiming

    2015-01-01

    Human awareness plays an important role in the spread of infectious diseases and the control of propagation patterns. The dynamic process with human awareness is called an awareness cascade, during which individuals exhibit herd-like behavior because they make decisions based on the actions of other individuals [Borge-Holthoefer et al., J. Complex Networks 1, 3 (2013), 10.1093/comnet/cnt006]. In this paper, to investigate epidemic spreading with an awareness cascade, we propose a local-awareness-controlled contagion spreading model on multiplex networks. By theoretical analysis using a microscopic Markov chain approach and by numerical simulations, we find the emergence of an abrupt transition of the epidemic threshold βc as the local awareness ratio α approaches 0.5, which induces two-stage effects on the epidemic threshold and the final epidemic size. These findings indicate that an increase of α can accelerate the outbreak of epidemics. Furthermore, a simple 1D lattice model is investigated to illustrate the two-stage-like sharp transition at αc ≈ 0.5. The results give a better understanding of why some epidemics cannot break out in reality and also provide a potential means of suppressing and controlling awareness cascading systems.
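
    The qualitative effect of the local awareness ratio α can be reproduced with a homogeneous mean-field recursion in the spirit of the microscopic Markov chain analysis. This is a simplified stand-in, not the authors' multiplex formulation: infected nodes are assumed aware, a susceptible joins the cascade when at least ⌈αk⌉ of its k neighbours are aware, and awareness rescales the infection rate by a factor γ.

```python
import math
from math import comb

def bin_below(n, p, m):
    """P(Binomial(n, p) < m)."""
    return sum(comb(n, j) * p ** j * (1 - p) ** (n - j) for j in range(m))

def awareness_sis(alpha, beta=0.3, mu=0.4, gamma=0.1, k=10, steps=200):
    """Mean-field recursion on a k-regular contact layer: infected nodes are
    aware; a susceptible becomes aware when at least ceil(alpha*k) of its k
    neighbours are aware; awareness reduces the infection rate to gamma*beta."""
    rho, aware = 0.05, 0.0
    m = max(1, math.ceil(alpha * k))
    for _ in range(steps):
        aware = 1.0 - (1.0 - rho) * bin_below(k, aware, m)
        beta_eff = beta * (1.0 - aware * (1.0 - gamma))
        rho = rho * (1.0 - mu) + (1.0 - rho) * (1.0 - (1.0 - beta_eff * rho) ** k)
    return rho
```

    With a low α the cascade triggers easily, awareness saturates and the epidemic dies out; with a high α the cascade never fires and the same β sustains a large endemic state, illustrating how α shifts the effective threshold.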

  5. Configuration Consideration for Expander in Transcritical Carbon Dioxide Two-Stage Compression Cycle

    Institute of Scientific and Technical Information of China (English)

    MA Yitai; YANG Junlan; GUAN Haiqing; LI Minxia

    2005-01-01

    To investigate the expander configuration in the transcritical carbon dioxide two-stage compression cycle, the best place in the cycle to reinvest the recovered work should be identified so as to improve the system efficiency. The expander and the compressor are connected to the same shaft and integrated into one unit, with the latter driven by the former, so transfer loss and leakage loss can be greatly decreased. In these systems, the expander can be connected with either the first-stage compressor (the DCDL cycle) or the second-stage compressor (the DCDH cycle), but the two configurations yield different performances. Theoretical models are set up for the two expander configurations in the transcritical carbon dioxide two-stage compression cycle, and the first and second laws of thermodynamics are used to analyze the coefficient of performance, exergy efficiency, inter-stage pressure, discharge temperature and exergy losses of each component for the two cycles. The model results show that the performance of the DCDH cycle is better than that of the DCDL cycle. The analysis provides a theoretical basis for practical design and operation.

  6. Two staged incentive contract focused on efficiency and innovation matching in critical chain project management

    Directory of Open Access Journals (Sweden)

    Min Zhang

    2014-09-01

    Full Text Available Purpose: The purpose of this paper is to define the relative optimal incentive contract to effectively encourage employees to improve work efficiency while actively implementing innovative behavior. Design/methodology/approach: This paper analyzes a two-staged incentive contract coordinating efficiency and innovation in Critical Chain Project Management using learning real options, based on principal-agent theory. A situational experiment is used to analyze the validity of the basic model. Findings: The two-staged incentive scheme is more suitable for encouraging employees to create and implement learning real options, so that they engage efficiently in the innovation process in Critical Chain Project Management. We prove that the combination of tolerance for early failure and reward for long-term success is effective in motivating innovation. Research limitations/implications: We do not include individual characteristics of uncertainty perception, which might affect the consistency of external validity. The basic model and the experiment design need improvement. Practical implications: Project managers should pay closer attention to early innovation behavior and to monitoring feedback on competition time in the implementation of Critical Chain Project Management. Originality/value: The central contribution of this paper is the theoretical and experimental analysis of incentive schemes for innovation in Critical Chain Project Management using principal-agent theory, to encourage the completion of CCPM methods as well as to limit imitative free-riding on the creative ideas of other members of the team.

  7. Modeling the Fracture of Ice Sheets on Parallel Computers

    Energy Technology Data Exchange (ETDEWEB)

    Waisman, Haim [Columbia University; Tuminaro, Ray [Sandia National Labs

    2013-10-10

    The objective of this project was to investigate the complex fracture of ice and understand its role within larger ice sheet simulations and global climate change. This objective was achieved by developing novel physics-based models for ice, novel numerical tools to enable the modeling of the physics, and by collaboration with ice community experts. At present, ice fracture is not explicitly considered within ice sheet models, due in part to the large computational costs associated with accurately modeling this complex phenomenon. However, fracture not only plays an extremely important role in regional behavior but also influences ice dynamics over much larger zones in ways that are currently not well understood. To this end, our research findings through this project offer a significant advancement to the field and close a large gap in knowledge in understanding and modeling the fracture of ice sheets in the polar regions. Thus, we believe that our objective has been achieved and our research accomplishments are significant. This is corroborated by a set of published papers, posters and presentations at technical conferences in the field. In particular, significant progress has been made in the mechanics of ice, the fracture of ice sheets and ice shelves in polar regions, and sophisticated numerical methods that enable efficient solution of the physics.

  8. Parallelization and Performance of the NIM Weather Model Running on GPUs

    Science.gov (United States)

    Govett, Mark; Middlecoff, Jacques; Henderson, Tom; Rosinski, James

    2014-05-01

    The Non-hydrostatic Icosahedral Model (NIM) is a global weather prediction model being developed to run on GPU and MIC fine-grain architectures. The model dynamics, written in Fortran, was initially parallelized for GPUs in 2009 using the F2C-ACC compiler and demonstrated good results running on a single GPU. Subsequent efforts have focused on (1) running efficiently on multiple GPUs, (2) parallelization of NIM for the Intel MIC using OpenMP, (3) assessing commercial Fortran GPU compilers now available from Cray, PGI and CAPS, (4) keeping the model up to date with the latest scientific developments while maintaining a single-source, performance-portable code, and (5) parallelization of two physics packages used in the NIM: the Global Forecast System (GFS) physics used operationally, and the widely used Weather Research and Forecasting (WRF) model physics. The presentation will touch on each of these efforts, but will highlight improvements in parallel performance of the NIM running on the Titan GPU cluster at ORNL, the ongoing parallelization of model physics, and a recent evaluation of commercial GPU compilers using the F2C-ACC compiler as the baseline.

  9. International Comparison of Innovation Performance Between China and Other Innovation Economies: An Analysis Based on Two-Stage Innovation Model

    Institute of Scientific and Technical Information of China (English)

    崔维军; 陈凤; 罗玉

    2014-01-01

    Based on a two-stage innovation model, this paper applied DEA to analyze the innovation performance of 58 countries across the two stages of research and development and commercialization, and compared China with other major innovation economies, including America, Japan, the European Union and the other BRICS countries (Russia, South Africa and Brazil). The study showed that: (1) there were obvious gaps between China and America, Japan and the European Union in terms of innovation input and output, and obvious differences between China and the other three BRICS countries; (2) China's innovation performance had a clear advantage in the first stage of the innovation model, being stronger than that of America, Japan, the European Union and the other three BRICS countries; (3) China's innovation performance in the second stage was well below that of America, Japan and the European Union, and also significantly lower than that of Russia and Brazil. This research is meaningful for understanding the current status of China's construction as an innovation-oriented country.

  10. Study on technological efficiency of listed companies in coal industry based on two-stage DEA model

    Institute of Scientific and Technical Information of China (English)

    颜伟

    2015-01-01

    The process of technology innovation in the coal industry is decomposed into two sub-processes, a technology research & development (R&D) process and a technology transformation process, in order to build a two-stage tandem data envelopment analysis (DEA) model. Specific input and output indicators are set up for each sub-process, and 2013 data are collected from the annual reports of listed coal companies to calculate the overall technology innovation efficiency as well as the stage-wise technology R&D efficiency and technology transformation efficiency. The results are compared with those of the classical CCR DEA model. The comparison shows that although the technology transformation efficiencies of the listed companies are generally high, the use of technology resources is still inefficient due to inefficient technology R&D inputs. As a result, the overall technology efficiency of listed companies in the coal industry is relatively low.

  11. Parallel direct solver for finite element modeling of manufacturing processes

    DEFF Research Database (Denmark)

    Nielsen, Chris Valentin; Martins, P.A.F.

    2017-01-01

    The central processing unit (CPU) time is of paramount importance in finite element modeling of manufacturing processes. Because the most significant part of the CPU time is consumed in solving the main system of equations resulting from finite element assemblies, different approaches have been...

  12. THE IMPROVEMENT OF THE COMPUTATIONAL PERFORMANCE OF THE ZONAL MODEL POMA USING PARALLEL TECHNIQUES

    Directory of Open Access Journals (Sweden)

    Yao Yu

    2014-01-01

    Full Text Available The zonal modeling approach is a simplified computational method used to predict temperature distribution, energy use in multi-zone buildings, and the thermal behavior of indoor airflow. Although this approach is known to use fewer computer resources than CFD models, computational time is still an issue, especially when buildings are characterized by complicated geometry and indoor layouts of furnishings. Therefore, applying a new computing technique to current zonal models in order to reduce the computational time is a promising way to further improve model performance and promote the wide application of zonal models. Parallel computing techniques provide a way to accomplish these purposes. Unlike the serial computations commonly used in current zonal models, these parallel techniques decompose the serial program into several discrete instructions which can be executed simultaneously on different processors/threads. As a result, the computational time of the parallelized program can be significantly reduced compared to that of the traditional serial program. In this article, a parallel computing technique, Open Multi-Processing (OpenMP), is applied to the zonal model POMA (Pressurized zOnal Model with Air diffuser) in order to improve its computational performance, including reducing the computational time and investigating the model's scalability.
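
    The decomposition OpenMP performs on a zonal model's loops can be shown structurally in a few lines. This sketch relaxes zone temperatures with a Jacobi sweep (a simple stand-in for POMA's pressure-based zonal equations); every chunk reads only the previous iterate, which is exactly the independence an OpenMP parallel-for relies on. Python threads illustrate the structure only; real speedup needs OpenMP in C/Fortran or process-based parallelism.

```python
from concurrent.futures import ThreadPoolExecutor

def jacobi_sweep(T, chunks, pool):
    """One relaxation sweep over a 1-D chain of thermal zones. Every zone
    update reads only the previous iterate T, so the chunks are mutually
    independent -- the property an OpenMP 'parallel for' relies on."""
    def update(rng):
        lo, hi = rng
        return [0.5 * (T[max(i - 1, 0)] + T[min(i + 1, len(T) - 1)])
                for i in range(lo, hi)]
    out = []
    for part in pool.map(update, chunks):
        out.extend(part)
    return out

def relax_zones(T, sweeps=200, nthreads=4):
    # Static scheduling: split the zone index range into nthreads chunks.
    n = len(T)
    step = (n + nthreads - 1) // nthreads
    chunks = [(i, min(i + step, n)) for i in range(0, n, step)]
    with ThreadPoolExecutor(max_workers=nthreads) as pool:
        for _ in range(sweeps):
            T = jacobi_sweep(T, chunks, pool)
            T[0], T[-1] = 20.0, 30.0  # fixed boundary-zone temperatures
    return T
```

    Because each sweep writes to a fresh array, the chunked version computes exactly the same temperatures as the serial loop, only with the iterations partitioned among workers.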

  13. A Two-Stage Multi-Agent Based Assessment Approach to Enhance Students' Learning Motivation through Negotiated Skills Assessment

    Science.gov (United States)

    Chadli, Abdelhafid; Bendella, Fatima; Tranvouez, Erwan

    2015-01-01

    In this paper we present an agent-based evaluation approach in the context of multi-agent simulation learning systems. Our evaluation model is based on a two-stage assessment approach: (1) a distributed skill evaluation combining agents and fuzzy set theory; and (2) a negotiation-based evaluation of students' performance during a training…

  14. Advancing the extended parallel process model through the inclusion of response cost measures.

    Science.gov (United States)

    Rintamaki, Lance S; Yang, Z Janet

    2014-01-01

    This study advances the Extended Parallel Process Model through the inclusion of response cost measures, which are drawbacks associated with a proposed response to a health threat. A sample of 502 college students completed a questionnaire on perceptions regarding sexually transmitted infections and condom use after reading information from the Centers for Disease Control and Prevention on the health risks of sexually transmitted infections and the utility of latex condoms in preventing sexually transmitted infection transmission. The questionnaire included standard Extended Parallel Process Model assessments of perceived threat and efficacy, as well as questions pertaining to response costs associated with condom use. Results from hierarchical ordinary least squares regression demonstrated how the addition of response cost measures improved the predictive power of the Extended Parallel Process Model, supporting the inclusion of this variable in the model.
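
    The hierarchical regression logic above, fitting the standard threat and efficacy predictors first and then checking how much adding response-cost measures raises the explained variance, can be sketched with ordinary least squares via the normal equations. The data below are synthetic and purely illustrative, not the study's sample:

```python
def solve_linear(A, b):
    """Gaussian elimination with partial pivoting for the normal equations."""
    n = len(A)
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    for i in range(n):
        piv = max(range(i, n), key=lambda r: abs(M[r][i]))
        M[i], M[piv] = M[piv], M[i]
        for r in range(i + 1, n):
            f = M[r][i] / M[i][i]
            for c in range(i, n + 1):
                M[r][c] -= f * M[i][c]
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        x[i] = (M[i][n] - sum(M[i][c] * x[c] for c in range(i + 1, n))) / M[i][i]
    return x

def ols_r2(columns, y):
    """R^2 of an ordinary least squares fit with intercept; `columns` is a
    list of predictor columns (each a list of observations)."""
    rows = [[1.0] + [col[i] for col in columns] for i in range(len(y))]
    k = len(rows[0])
    XtX = [[sum(r[i] * r[j] for r in rows) for j in range(k)] for i in range(k)]
    Xty = [sum(r[i] * yi for r, yi in zip(rows, y)) for i in range(k)]
    beta = solve_linear(XtX, Xty)
    resid = [yi - sum(b * v for b, v in zip(beta, r)) for r, yi in zip(rows, y)]
    ybar = sum(y) / len(y)
    return 1.0 - sum(e * e for e in resid) / sum((yi - ybar) ** 2 for yi in y)
```

    Comparing the R² of the base model (threat, efficacy) with the R² after adding the response-cost column is the "improved predictive power" check the abstract describes.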

  15. A Parallel and Distributed Surrogate Model Implementation for Computational Steering

    KAUST Repository

    Butnaru, Daniel

    2012-06-01

    Understanding the influence of multiple parameters in a complex simulation setting is a difficult task. In the ideal case, the scientist can freely steer such a simulation and is immediately presented with the results for a certain configuration of the input parameters. Such an exploration process is however not possible if the simulation is computationally too expensive. For these cases we present in this paper a scalable computational steering approach utilizing a fast surrogate model as substitute for the time-consuming simulation. The surrogate model we propose is based on the sparse grid technique, and we identify the main computational tasks associated with its evaluation and its extension. We further show how distributed data management combined with the specific use of accelerators allows us to approximate and deliver simulation results to a high-resolution visualization system in real-time. This significantly enhances the steering workflow and facilitates the interactive exploration of large datasets. © 2012 IEEE.

  16. Modelling and analysis of fringing and metal thickness effects in MEMS parallel plate capacitors

    Science.gov (United States)

    Shah, Kriyang; Singh, Jugdutt; Zayegh, Aladin

    2005-12-01

    This paper presents a detailed design and analysis of fringing and metal thickness effects in a Micro Electro Mechanical System (MEMS) parallel plate capacitor. The MEMS capacitor is one of the most widely deployed components in various applications such as pressure sensors, accelerometers, voltage-controlled oscillators (VCOs) and other tuning circuits. The advantages of MEMS capacitors are miniaturisation, integration with optics, low power consumption and high quality factor for RF circuits. Parallel plate capacitor models found in the literature are discussed and the model best suited to MEMS capacitors is presented. From the equations presented it is found that the fringing field and metal thickness have logarithmic effects on capacitance and depend on the width of the parallel plates, the distance between them and the thickness of the metal plates. From this analysis a precise model of a MEMS parallel plate capacitor is developed which incorporates the effects of fringing fields and metal thickness. A parallel plate MEMS capacitor has been implemented using the Coventor design suite. Finite Element Method (FEM) analysis in the Coventorware design suite has been performed to verify the accuracy of the proposed model over a suitable range of dimensions for MEMS capacitors. Simulations and analysis show that the error between the designed and simulated values of the MEMS capacitor is significantly reduced. Application of the modified model to computing the capacitance of a combed device shows that the designed values differ noticeably from the simulated results, from 1.0339 pF to 1.3171 pF, in the case of fringed devices.
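
    The logarithmic character of the fringing correction can be seen in a classical closed-form approximation. The sketch below uses a Palmer-style first-order term (the exact constants vary between references, and this is not the model derived in the paper, which additionally accounts for metal thickness):

```python
import math

EPS0 = 8.854e-12  # vacuum permittivity, F/m

def c_ideal(w, l, d, eps_r=1.0):
    """Ideal parallel-plate capacitance (no fringing), dimensions in metres."""
    return EPS0 * eps_r * w * l / d

def c_fringing(w, l, d, eps_r=1.0):
    """Palmer-style first-order fringing correction applied to both lateral
    dimensions; note the logarithmic dependence on the plate-width-to-gap
    ratio, matching the behaviour described in the abstract."""
    fw = 1.0 + d / (math.pi * w) * (1.0 + math.log(2.0 * math.pi * w / d))
    fl = 1.0 + d / (math.pi * l) * (1.0 + math.log(2.0 * math.pi * l / d))
    return c_ideal(w, l, d, eps_r) * fw * fl
```

    For a 100 µm x 100 µm plate with a 2 µm air gap, the ideal value is about 44 fF and the fringing-corrected value is on the order of 9% higher, which is why such corrections matter at MEMS scales.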

  17. Parallel computer processing and modeling: applications for the ICU

    Science.gov (United States)

    Baxter, Grant; Pranger, L. Alex; Draghic, Nicole; Sims, Nathaniel M.; Wiesmann, William P.

    2003-07-01

    Current patient monitoring procedures in hospital intensive care units (ICUs) generate vast quantities of medical data, much of which is considered extemporaneous and not evaluated. Although sophisticated monitors to analyze individual types of patient data are routinely used in the hospital setting, this equipment lacks high-order signal analysis tools for detecting long-term trends and correlations between different signals within a patient data set. Without the ability to continuously analyze disjoint sets of patient data, it is difficult to detect slow-forming complications. As a result, the early onset of conditions such as pneumonia or sepsis may not be apparent until the advanced stages. We report here on the development of a distributed software architecture test bed and software medical models to analyze both asynchronous and continuous patient data in real time. Hardware and software have been developed to support a multi-node distributed computer cluster capable of amassing data from multiple patient monitors and projecting near- and long-term outcomes based upon the application of physiologic models to the incoming patient data stream. One computer acts as a central coordinating node; additional computers accommodate processing needs. A simple, non-clinical model for sepsis detection was implemented on the system for demonstration purposes. This work shows exceptional promise as a highly effective means to rapidly predict and thereby mitigate the effect of nosocomial infections.

  18. A numerical model for thermoelectric generator with the parallel-plate heat exchanger

    Science.gov (United States)

    Yu, Jianlin; Zhao, Hua

    This paper presents a numerical model to predict the performance of a thermoelectric generator with a parallel-plate heat exchanger. The model is based on an elemental approach and is distinctive in analyzing the temperature change along the thermoelectric generator and, concomitantly, its performance under operating conditions. Numerically simulated examples are demonstrated for thermoelectric generators of parallel-flow and counter-flow type. Simulation results show that the variations in temperature of the fluids in the thermoelectric generator are linear. The numerical model developed in this paper may also be applied to further optimization studies of thermoelectric generators.
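
    A minimal sketch of the elemental approach, under strong simplifying assumptions (constant fluid properties, a fixed per-element conductance `ua_element`, and no thermoelectric back-effect on the heat flow; all parameter values are illustrative): marching element by element reproduces the near-linear temperature profiles reported above for the parallel-flow case.

```python
def parallel_flow_profile(th_in, tc_in, ua_element, mcp_hot, mcp_cold, n_elements):
    """March along the exchanger element by element (parallel flow):
    each element transfers q = UA*(Th - Tc) from the hot to the cold stream."""
    th, tc = th_in, tc_in
    profile = [(th, tc)]
    for _ in range(n_elements):
        q = ua_element * (th - tc)
        th -= q / mcp_hot   # hot stream cools
        tc += q / mcp_cold  # cold stream warms
        profile.append((th, tc))
    return profile

prof = parallel_flow_profile(th_in=500.0, tc_in=300.0,
                             ua_element=2.0, mcp_hot=100.0,
                             mcp_cold=100.0, n_elements=20)
print(prof[0], prof[-1])
```

    With equal heat-capacity rates the hot-side drop mirrors the cold-side rise exactly, and the per-element changes are nearly constant, i.e. the profiles are close to linear.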

  19. Grid Service Framework: Supporting Multi-Model Parallel Grid Programming

    Institute of Scientific and Technical Information of China (English)

    邓倩妮; 陆鑫达

    2004-01-01

    Web service is a grid computing technology that promises greater ease of use and interoperability than previous distributed computing technologies. This paper proposes the Grid Service Framework, a grid computing platform based on Microsoft .NET that uses web services to: (1) locate and harness volunteer computing resources for different applications; (2) support multiple parallel programming paradigms, such as Master/Slave, Divide and Conquer, and Phase Parallel, in a Grid environment; and (3) allocate data and balance load dynamically and transparently for grid computing applications. The framework was used to implement several simple parallel computing applications. The results show that the proposed Grid Service Framework is suitable for generic parallel numerical computing.

  20. Study on a high capacity two-stage free piston Stirling cryocooler working around 30 K

    Science.gov (United States)

    Wang, Xiaotao; Zhu, Jian; Chen, Shuai; Dai, Wei; Li, Ke; Pang, Xiaomin; Yu, Guoyao; Luo, Ercang

    2016-12-01

    This paper presents a two-stage high-capacity free-piston Stirling cryocooler driven by a linear compressor to meet the requirement of the high temperature superconductor (HTS) motor applications. The cryocooler system comprises a single piston linear compressor, a two-stage free piston Stirling cryocooler and a passive oscillator. A single stepped displacer configuration was adopted. A numerical model based on the thermoacoustic theory was used to optimize the system operating and structure parameters. Distributions of pressure wave, phase differences between the pressure wave and the volume flow rate and different energy flows are presented for a better understanding of the system. Some characterizing experimental results are presented. Thus far, the cryocooler has reached a lowest cold-head temperature of 27.6 K and achieved a cooling power of 78 W at 40 K with an input electric power of 3.2 kW, which indicates a relative Carnot efficiency of 14.8%. When the cold-head temperature increased to 77 K, the cooling power reached 284 W with a relative Carnot efficiency of 25.9%. The influences of different parameters such as mean pressure, input electric power and cold-head temperature are also investigated.

  1. Numerical simulation of municipal solid waste combustion in a novel two-stage reciprocating incinerator.

    Science.gov (United States)

    Huai, X L; Xu, W L; Qu, Z Y; Li, Z G; Zhang, F P; Xiang, G M; Zhu, S Y; Chen, G

    2008-01-01

    A mathematical model was presented in this paper for the combustion of municipal solid waste in a novel two-stage reciprocating grate furnace. Numerical simulations were performed to predict the temperature, flow and species distributions in the furnace, with practical operational conditions taken into account. The calculated results agree well with the test data, and the burning behavior of municipal solid waste in the novel two-stage reciprocating incinerator is demonstrated well. The thickness of the waste bed, the initial moisture content, the excess air coefficient and the secondary air are the major factors that influence the combustion process. If the initial moisture content of the waste is high, both the heating value of the waste and the temperature inside the incinerator are low, and less oxygen is necessary for combustion. The air supply rate and the primary air distribution along the grate should be adjusted according to the initial moisture content of the waste. A reasonable bed thickness and an adequate excess air coefficient can maintain a higher temperature, promote the burnout of combustibles, and consequently reduce the emission of dioxin pollutants. When the total air supply is constant, reducing primary air and introducing secondary air properly can enhance turbulence and mixing, prolong the residence time of flue gas, and promote the complete combustion of combustibles. This study provides an important reference for optimizing the design and operation of municipal solid waste furnaces.

  2. Rules and mechanisms for efficient two-stage learning in neural circuits

    Science.gov (United States)

    Teşileanu, Tiberiu; Ölveczky, Bence; Balasubramanian, Vijay

    2017-01-01

    Trial-and-error learning requires evaluating variable actions and reinforcing successful variants. In songbirds, vocal exploration is induced by LMAN, the output of a basal ganglia-related circuit that also contributes a corrective bias to the vocal output. This bias is gradually consolidated in RA, a motor cortex analogue downstream of LMAN. We develop a new model of such two-stage learning. Using stochastic gradient descent, we derive how the activity in ‘tutor’ circuits (e.g., LMAN) should match plasticity mechanisms in ‘student’ circuits (e.g., RA) to achieve efficient learning. We further describe a reinforcement learning framework through which the tutor can build its teaching signal. We show that mismatches between the tutor signal and the plasticity mechanism can impair learning. Applied to birdsong, our results predict the temporal structure of the corrective bias from LMAN given a plasticity rule in RA. Our framework can be applied predictively to other paired brain areas showing two-stage learning. DOI: http://dx.doi.org/10.7554/eLife.20944.001 PMID:28374674

  3. Two-stage numerical simulation for temperature profile in furnace of tangentially fired pulverized coal boiler

    Institute of Scientific and Technical Information of China (English)

    ZHOU Nai-jun; XU Qiong-hui; ZHOU Ping

    2005-01-01

    Considering that the temperature distribution in the furnace of a tangentially fired pulverized coal boiler is difficult to measure and monitor, a two-stage numerical simulation method was put forward. First, multi-field coupling simulations of typical working conditions were carried out off-line with the software CFX-4.3, and an expression for the temperature profile as a function of the operating parameters was obtained. According to real-time operating parameters, the temperature at an arbitrary point of the furnace can then be calculated with this expression; thus the temperature profile can be shown on-line and the combustion state in the furnace can be monitored. The simulation model was checked against parameters measured in an operating boiler, DG130-9.8/540. The maximum relative error is less than 12% and the absolute error is less than 120 ℃, which shows that the proposed two-stage simulation method is reliable and able to satisfy the requirements of industrial application.
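
    The off-line/on-line split can be sketched as follows. The load values and temperatures are purely illustrative, and a simple least-squares line stands in for whatever expression form the authors actually fitted to their CFD results.

```python
def fit_line(xs, ys):
    """Least-squares line y = a*x + b via the normal equations."""
    n = len(xs)
    sx, sy = sum(xs), sum(ys)
    sxx = sum(x * x for x in xs)
    sxy = sum(x * y for x, y in zip(xs, ys))
    a = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    b = (sy - a * sx) / n
    return a, b

# Off-line stage: CFD temperatures (K) at one furnace point for
# several boiler loads (t/h). Numbers are illustrative only.
loads = [90.0, 100.0, 110.0, 120.0, 130.0]
temps = [1395.0, 1430.0, 1468.0, 1502.0, 1541.0]
a, b = fit_line(loads, temps)

# On-line stage: estimate the temperature at the current load.
def temperature_at(load):
    return a * load + b

print(round(temperature_at(115.0), 1))
```

    The on-line evaluation is a cheap closed-form expression, which is what makes real-time monitoring feasible even though the underlying CFD runs take hours.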

  4. Two-stage high temperature sludge gasification using the waste heat from hot blast furnace slags.

    Science.gov (United States)

    Sun, Yongqi; Zhang, Zuotai; Liu, Lili; Wang, Xidong

    2015-12-01

    Disposal of sewage sludge from wastewater treatment plants and recovery of waste heat from the steel industry are two important environmental issues. To integrate these two problems, a two-stage high temperature sludge gasification approach using the waste heat in hot slags was investigated herein. The whole process was divided into two stages, i.e., low temperature sludge pyrolysis at ⩽900 °C in an argon agent and high temperature char gasification at ⩾900 °C in a CO2 agent, during which the heat required was supplied by hot slags in different temperature ranges. Both the thermodynamic and kinetic mechanisms were identified, and it was indicated that an Avrami-Erofeev model could best interpret the stage of char gasification. Furthermore, a schematic concept of this strategy was portrayed, based on which the potential CO yield and CO2 emission reduction achieved in China could be ~1.92×10⁹ m³ and ~1.93×10⁶ t, respectively.
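
    The Avrami-Erofeev model mentioned above has the standard conversion form x(t) = 1 - exp(-(k*t)**n); a minimal sketch with illustrative parameter values (not fitted to the paper's data):

```python
import math

def avrami_erofeev(t, k, n):
    """Avrami-Erofeev conversion law: x(t) = 1 - exp(-(k*t)**n)."""
    return 1.0 - math.exp(-((k * t) ** n))

k, n = 0.05, 2.0  # illustrative rate constant (1/s) and exponent
for t in (10, 30, 60, 90):
    print(t, round(avrami_erofeev(t, k, n), 3))
```

    For n > 1 the curve is sigmoidal: slow at first (nucleation), then accelerating, then levelling off as the char is consumed, which is the qualitative behavior this model family is used to capture.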

  5. A two-stage method to determine optimal product sampling considering dynamic potential market.

    Science.gov (United States)

    Hu, Zhineng; Lu, Wei; Han, Bing

    2015-01-01

    This paper develops an optimization model for the diffusion effects of free samples under dynamic changes in the potential market, based on the characteristics of an independent product, and presents a two-stage method to determine the sampling level. The impact analysis of the key factors on the sampling level shows that an increase in the external coefficient or internal coefficient has a negative influence on the sampling level. The changing rate of the potential market has no significant influence on the sampling level, whereas repeat purchase has a positive one. Using logistic analysis and regression analysis, the global sensitivity analysis gives a complete picture of the interaction of all parameters, which provides a two-stage method to estimate the impact of the relevant parameters when they are known only inaccurately and to construct a 95% confidence interval for the predicted sampling level. Finally, the paper provides the operational steps to improve the accuracy of the parameter estimation and an innovative way to estimate the sampling level.
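
    The external and internal coefficients mentioned above are the hallmark of Bass-type diffusion models. A minimal sketch (not the paper's model; all parameter values are hypothetical) in which free samples simply seed the initial adopter pool:

```python
def bass_adoption(p, q, m, periods, seeded=0):
    """Discrete Bass diffusion: new adopters per period are
    (p + q*N/m) * (m - N), where p is the external (innovation)
    coefficient and q the internal (imitation) coefficient.
    'seeded' mimics free samples handed out at launch."""
    n = float(seeded)
    path = [n]
    for _ in range(periods):
        n += (p + q * n / m) * (m - n)
        path.append(n)
    return path

no_samples = bass_adoption(p=0.03, q=0.38, m=10000, periods=10)
with_samples = bass_adoption(p=0.03, q=0.38, m=10000, periods=10, seeded=500)
print(round(no_samples[-1]), round(with_samples[-1]))
```

    Seeding accelerates early word-of-mouth (the q*N/m term), which is the mechanism the sampling-level optimization trades off against the cost of the free samples.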

  6. Durable Goods Monopolist’s Business Strategy Choice: Development of a Two-Stage Game Model

    Institute of Scientific and Technical Information of China (English)

    王志刚; 钱成济; 杨胤轩

    2014-01-01

    By using a two-stage game model, this paper analyses a durable goods monopolist's business strategy choice and offers a reasonable explanation of the realistic question. It turns out that, compared with selling, the monopolist prefers to lease. In reality, however, three factors work against leasing. The first is moral hazard, which makes the monopolist bear additional costs when leasing durable goods and greatly reduces the attractiveness of leasing: when this cost exceeds a critical moral-hazard cost, the monopolist will choose to sell, and vice versa. The second is the threat of potential entrants, which leads the monopolist to sell durable goods in order to capture market share and prevent potential competitors from entering the market. The last is credible commitment: if the monopolist is able to make a credible commitment not to cut its prices, it can improve the profit from selling durable goods and accelerate cash flow. The analysis extends the Shi Lei-Kou Zonglai model, which has certain shortcomings, by incorporating the conditions of moral hazard, entry threat and credible commitment into the monopolist's optimal choice between leasing and selling durable goods.

  7. Study on Trademark Right Infringement in the Network Market: Based on a Two-Stage Dynamic Trading Model

    Institute of Scientific and Technical Information of China (English)

    韩沈超; 吴雯

    2014-01-01

    A two-stage dynamic trading model is built in this paper to analyze how the three main parties in the network market (online business platforms, consumers and sellers) maximize their utilities in two scenarios (selling or not selling goods that infringe a trademark right). A case study of "the 1st Shop" is also used for deeper analysis of trademark right infringement in the network market. The results show that keeping the online platform's regulation intensity and penalty costs within a certain range helps to restrain infringing activities; that the higher the brand value of the trademark, the more likely infringement is to occur; and that an improvement in sellers' business ethics also helps to reduce infringing activities. On this basis, the paper puts forward policy suggestions on how to build a harmonious network market and conduct honest trading online.

  8. Modelling and simulation of multiple single - phase induction motor in parallel connection

    Directory of Open Access Journals (Sweden)

    Sujitjorn, S.

    2006-11-01

    Full Text Available A mathematical model of n multiple single-phase induction motors connected in parallel, in generalized state-space form, is proposed in this paper. The motor group draws electric power from one inverter. The model is developed with the dq-frame theory and was tested against four loading scenarios, in which satisfactory results were obtained.

  9. A one-dimensional heat transfer model for parallel-plate thermoacoustic heat exchangers

    NARCIS (Netherlands)

    de Jong, Anne; Wijnant, Ysbrand H.; de Boer, Andries

    2014-01-01

    A one-dimensional (1D) laminar oscillating flow heat transfer model is derived and applied to parallel-plate thermoacoustic heat exchangers. The model can be used to estimate the heat transfer from the solid wall to the acoustic medium, which is required for the heat input/output of thermoacoustic systems.

  10. A Tool for Performance Modeling of Parallel Programs

    Directory of Open Access Journals (Sweden)

    J.A. González

    2003-01-01

    Full Text Available Current performance prediction analytical models try to characterize the performance behavior of actual machines through a small set of parameters. In practice, substantial deviations are observed. These differences are due to factors such as memory hierarchies or network latency. A natural approach is to associate a different proportionality constant with each basic block and, analogously, to associate different latencies and bandwidths with each "communication block". Unfortunately, using this approach implies that the parameters must be evaluated for each algorithm. This is a heavy task, involving experiment design, timing, statistics, pattern recognition and multi-parameter fitting algorithms, so software support is required. We present a compiler that takes as source a C program annotated with complexity formulas and produces as output an instrumented code. The trace files obtained from the execution of the resulting code are analyzed with an interactive interpreter, giving us, among other information, the values of those parameters.
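
    Characterizing a "communication block" by a latency and a bandwidth is commonly done with a linear fit t = L + s/B over measured (size, time) pairs. A minimal sketch with synthetic timings (the tool's actual multi-parameter fitting machinery is more elaborate):

```python
def fit_line(sizes, times):
    """Least-squares fit of t = intercept + slope*s:
    intercept estimates latency, slope estimates 1/bandwidth."""
    n = len(sizes)
    sx, sy = sum(sizes), sum(times)
    sxx = sum(x * x for x in sizes)
    sxy = sum(x * y for x, y in zip(sizes, times))
    slope = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    intercept = (sy - slope * sx) / n
    return intercept, slope

# Synthetic timings for a link with 50 us latency and 100 MB/s bandwidth.
sizes = [1e3, 1e4, 1e5, 1e6]                 # message sizes, bytes
times = [50e-6 + s / 100e6 for s in sizes]   # measured times, seconds
latency, inv_bw = fit_line(sizes, times)
print(f"latency ~ {latency*1e6:.1f} us, bandwidth ~ {1/inv_bw/1e6:.0f} MB/s")
```

    On real trace data the residuals of such fits are exactly where the "substantial deviations" show up, which motivates fitting per-block parameters rather than one global pair.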

  11. Autothermal two-stage gasification of low-density waste-derived fuels

    Energy Technology Data Exchange (ETDEWEB)

    Hamel, Stefan [Universitaet Siegen, Institut fuer Energietechnik, Paul-Bonatz-Str. 9-11, D-57068 Siegen (Germany); Hasselbach, Holger [Universitaet Siegen, Institut fuer Energietechnik, Paul-Bonatz-Str. 9-11, D-57068 Siegen (Germany); Weil, Steffen [Universitaet Siegen, Institut fuer Energietechnik, Paul-Bonatz-Str. 9-11, D-57068 Siegen (Germany); Krumm, Wolfgang [Universitaet Siegen, Institut fuer Energietechnik, Paul-Bonatz-Str. 9-11, D-57068 Siegen (Germany)]. E-mail: w.krumm@et.mb.uni-siegen.de

    2007-02-15

    In order to increase the efficiency of waste utilization in thermal conversion processes, pre-treatment is advantageous. With the Herhof Stabilat® process, residual domestic waste is upgraded to waste-derived fuel by means of biological drying and mechanical separation of inerts and metals. The dried and homogenized waste-derived Stabilat® fuel has a relatively high calorific value and contains high volatile matter which makes it suitable for gasification. As a result of extensive mechanical treatment, the Stabilat® produced is of a fluffy appearance with a low density. A two-stage gasifier, based on a parallel-arranged bubbling fluidized bed and a fixed bed reactor, has been developed to convert Stabilat® into hydrogen-rich product gas. This paper focuses on the design and construction of the configured laboratory-scale gasifier and experience with its operation. The processing of low-density fluffy waste-derived fuel using small-scale equipment demands special technical solutions for the core components as well as for the peripheral equipment. These are discussed here. The operating results of Stabilat® gasification are also presented.

  12. The Effect of Effluent Recirculation in a Semi-Continuous Two-Stage Anaerobic Digestion System

    Directory of Open Access Journals (Sweden)

    Karthik Rajendran

    2013-06-01

    Full Text Available The effect of recirculation in increasing the organic loading rate (OLR) and decreasing the hydraulic retention time (HRT) in a semi-continuous two-stage anaerobic digestion system using a continuously stirred tank reactor (CSTR) and an upflow anaerobic sludge bed (UASB) was evaluated. Two parallel processes were in operation for 100 days, one with recirculation (closed system) and the other without recirculation (open system). For this purpose, two structurally different carbohydrate-based substrates were used: starch and cotton. The digestion of starch and cotton in the closed system resulted in production of 91% and 80% of the theoretical methane yield during the first 60 days. In contrast, in the open system the methane yield decreased to 82% and 56% of the theoretical value for starch and cotton, respectively. The OLR could successfully be increased to 4 gVS/L/day for cotton and 10 gVS/L/day for starch. It is concluded that the recirculation supports the microorganisms for effective hydrolysis of polyhydrocarbons in the CSTR and preserves the nutrients in the system at higher OLRs, thereby improving the overall performance and stability of the process.

  13. Parallel Machine Scheduling Models with Fuzzy Parameters and Precedence Constraints: A Credibility Approach

    Institute of Scientific and Technical Information of China (English)

    HOU Fu-jun; WU Qi-zong

    2007-01-01

    A method for modeling parallel machine scheduling problems with fuzzy parameters and precedence constraints based on credibility measure is provided. For the given n jobs to be processed on m machines, it is assumed that the processing times and the due dates are nonnegative fuzzy numbers and all the weights are positive, crisp numbers. Based on credibility measure, three parallel machine scheduling problems and a goal-programming model are formulated. Feasible schedules are evaluated not only by their objective values but also by the credibility degree of satisfaction with their precedence constraints. The genetic algorithm is utilized to find the best solutions in a short period of time. An illustrative numerical example is also given. Simulation results show that the proposed models are effective and can deal with parallel machine scheduling problems with fuzzy parameters and precedence constraints based on credibility measure.
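
    For readers unfamiliar with the credibility measure used above: for a triangular fuzzy number (a, b, c), the credibility of the event {xi <= x} is defined as the average of its possibility and necessity. A minimal sketch of this standard definition (the example numbers are hypothetical):

```python
def credibility_le(x, a, b, c):
    """Cr{xi <= x} for a triangular fuzzy number (a, b, c):
    the average of possibility and necessity, written piecewise."""
    if x <= a:
        return 0.0
    if x <= b:
        return (x - a) / (2.0 * (b - a))
    if x <= c:
        return (x - 2.0 * b + c) / (2.0 * (c - b))
    return 1.0

# Fuzzy processing time "about 5 hours", modeled as (4, 5, 7):
print(credibility_le(5, 4, 5, 7))  # exactly 0.5 at the core value
print(credibility_le(6, 4, 5, 7))
```

    Unlike possibility alone, credibility is self-dual (Cr{A} + Cr{not A} = 1), which is why schedule feasibility can be graded by a single credibility degree as in the abstract.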

  14. Flood predictions using the parallel version of distributed numerical physical rainfall-runoff model TOPKAPI

    Science.gov (United States)

    Boyko, Oleksiy; Zheleznyak, Mark

    2015-04-01

    The original numerical code TOPKAPI-IMMS of the distributed rainfall-runoff model TOPKAPI (Todini et al., 1996-2014) has been developed and implemented in Ukraine. A parallel version of the code has recently been developed for use on multiprocessor systems: multicore PCs and clusters. The algorithm is based on a binary-tree decomposition of the watershed to balance the amount of computation across processors/cores. The Message Passing Interface (MPI) protocol is used as the parallel computing framework. The numerical efficiency of the parallelization algorithm is demonstrated in case studies of flood prediction for mountain watersheds in the Ukrainian Carpathian region. The modeling results are compared with predictions based on lumped-parameter models.
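
    The binary-tree load balancing idea can be sketched as a recursive split of contiguous per-cell workloads at the most balanced cut point. This is illustrative only: the production code partitions an actual watershed and distributes the pieces with MPI, and the weights below are made up.

```python
def split_balanced(weights):
    """Split a contiguous run of cell workloads at the index that
    best balances the two halves (one binary-tree level)."""
    total = sum(weights)
    best_i, best_gap = 1, float("inf")
    running = 0
    for i in range(1, len(weights)):
        running += weights[i - 1]
        gap = abs(total - 2 * running)
        if gap < best_gap:
            best_i, best_gap = i, gap
    return weights[:best_i], weights[best_i:]

def decompose(weights, levels):
    """Binary-tree decomposition: 2**levels leaves with balanced load."""
    if levels == 0:
        return [weights]
    left, right = split_balanced(weights)
    return decompose(left, levels - 1) + decompose(right, levels - 1)

parts = decompose([5, 1, 1, 5, 4, 4, 2, 2], levels=2)
print([sum(p) for p in parts])  # per-processor workloads
```

    Each leaf of the tree becomes one MPI rank's share; splitting top-down keeps the partitions contiguous, which limits the communication needed between neighboring subdomains.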

  15. Parallelization of the TRIGRS model for rainfall-induced landslides using the message passing interface

    Science.gov (United States)

    Alvioli, M.; Baum, R.L.

    2016-01-01

    We describe a parallel implementation of TRIGRS, the Transient Rainfall Infiltration and Grid-Based Regional Slope-Stability Model for the timing and distribution of rainfall-induced shallow landslides. We have parallelized the four time-demanding execution modes of TRIGRS, namely both the saturated and unsaturated model with finite and infinite soil depth options, within the Message Passing Interface framework. In addition to new features of the code, we outline details of the parallel implementation and show the performance gain with respect to the serial code. Results are obtained both on commercial hardware and on a high-performance multi-node machine, showing the different limits of applicability of the new code. We also discuss the implications for the application of the model on large-scale areas and as a tool for real-time landslide hazard monitoring.

  16. Error modelling and experimental validation of a planar 3-PPR parallel manipulator with joint clearances

    DEFF Research Database (Denmark)

    Wu, Guanglei; Bai, Shaoping; Kepler, Jørgen Asbøl

    2012-01-01

    This paper deals with the error modelling and analysis of a 3-PPR planar parallel manipulator with joint clearances. The kinematics and the Cartesian workspace of the manipulator are analyzed. An error model is established with considerations of both configuration errors and joint clearances. Using this model, the upper bounds and distributions of the pose errors for this manipulator are established. The results are compared with experimental measurements and show the effectiveness of the error prediction model.

  17. Two Stage Battery System for the ROSETTA Lander

    Science.gov (United States)

    Debus, André

    2002-01-01

    The ROSETTA mission, led by ESA, will be launched by Ariane V from Kourou in January 2003 and, after a long trip, the spacecraft will reach the comet Wirtanen 46P in 2011. The mission includes a lander, built under the leadership of DLR, in which CNES has a large participation, providing part of the payload and some lander systems. Among these, CNES delivers a specific battery system designed to comply with the mission environment and the mission scenario, in particular avoiding the radio-isotopic heaters and radio-isotopic electrical generators usually used for such missions far from the Sun. The battery system includes:
    - a pack of primary batteries of lithium/thionyl chloride cells;
    - a secondary stage, including rechargeable lithium-ion cells, used as redundancy for the primary stage;
    - a specific electronic system dedicated to battery handling and to secondary battery management;
    - mechanical and thermal structures (insulation and heating devices).
    The complete battery system has been designed, built and qualified to comply with the trip and mission requirements, keeping within low mass and low volume limits. This battery system is presently integrated into the Rosetta Lander flight model and will leave the Earth at the beginning of next year. Such development experience could be re-used in the frame of cometary and planetary missions.

  18. Efficient Parallel Implementation of Active Appearance Model Fitting Algorithm on GPU

    Directory of Open Access Journals (Sweden)

    Jinwei Wang

    2014-01-01

    Full Text Available The active appearance model (AAM is one of the most powerful model-based object detecting and tracking methods which has been widely used in various situations. However, the high-dimensional texture representation causes very time-consuming computations, which makes the AAM difficult to apply to real-time systems. The emergence of modern graphics processing units (GPUs that feature a many-core, fine-grained parallel architecture provides new and promising solutions to overcome the computational challenge. In this paper, we propose an efficient parallel implementation of the AAM fitting algorithm on GPUs. Our design idea is fine grain parallelism in which we distribute the texture data of the AAM, in pixels, to thousands of parallel GPU threads for processing, which makes the algorithm fit better into the GPU architecture. We implement our algorithm using the compute unified device architecture (CUDA on the Nvidia’s GTX 650 GPU, which has the latest Kepler architecture. To compare the performance of our algorithm with different data sizes, we built sixteen face AAM models of different dimensional textures. The experiment results show that our parallel AAM fitting algorithm can achieve real-time performance for videos even on very high-dimensional textures.

  19. Efficient parallel implementation of active appearance model fitting algorithm on GPU.

    Science.gov (United States)

    Wang, Jinwei; Ma, Xirong; Zhu, Yuanping; Sun, Jizhou

    2014-01-01

    The active appearance model (AAM) is one of the most powerful model-based object detecting and tracking methods which has been widely used in various situations. However, the high-dimensional texture representation causes very time-consuming computations, which makes the AAM difficult to apply to real-time systems. The emergence of modern graphics processing units (GPUs) that feature a many-core, fine-grained parallel architecture provides new and promising solutions to overcome the computational challenge. In this paper, we propose an efficient parallel implementation of the AAM fitting algorithm on GPUs. Our design idea is fine grain parallelism in which we distribute the texture data of the AAM, in pixels, to thousands of parallel GPU threads for processing, which makes the algorithm fit better into the GPU architecture. We implement our algorithm using the compute unified device architecture (CUDA) on the Nvidia's GTX 650 GPU, which has the latest Kepler architecture. To compare the performance of our algorithm with different data sizes, we built sixteen face AAM models of different dimensional textures. The experiment results show that our parallel AAM fitting algorithm can achieve real-time performance for videos even on very high-dimensional textures.

  20. Co-simulation of dynamic systems in parallel and serial model configurations

    Energy Technology Data Exchange (ETDEWEB)

    Sweafford, Trevor [General Motors, Milford (United States); Yoon, Hwan Sik [The University of Alabama, Tuscaloosa (United States)

    2013-12-15

    Recent advancements in simulation software and computation hardware make it feasible to simulate complex dynamic systems comprised of multiple submodels developed in different modeling languages. This so-called co-simulation enables one to study various aspects of a complex dynamic system with heterogeneous submodels in a cost-effective manner. Among the different model configurations for co-simulation, the synchronized parallel configuration is expected to expedite the simulation process by simulating multiple submodels concurrently on a multicore processor. In this paper, computational accuracy as well as computation time are studied for three different co-simulation frameworks: integrated, serial, and parallel. For this purpose, analytical evaluations of the three different methods are made using the explicit Euler method and then applied to two-DOF mass-spring systems. The results show that while the parallel simulation configuration produces the same accurate results as the integrated configuration, results of the serial configuration show a slight deviation. It is also shown that the computation time can be reduced by running the simulation in the parallel configuration. Therefore, it can be concluded that the synchronized parallel simulation methodology is the best for both simulation accuracy and time efficiency.
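
    The reported finding, namely that the parallel configuration matches the integrated one while the serial configuration deviates slightly, can be reproduced with a toy explicit-Euler co-simulation of a two-DOF mass-spring system (illustrative parameters, not the paper's models):

```python
def simulate(mode, steps=1000, dt=1e-3):
    """Explicit-Euler co-simulation of a two-DOF mass-spring system
    split into two submodels coupled through spring kc.
    'integrated' and 'parallel' both advance from old-step coupling
    values; 'serial' lets submodel 2 see submodel 1's updated state."""
    m1 = m2 = 1.0
    k1, kc = 50.0, 20.0
    x1, v1, x2, v2 = 1.0, 0.0, 0.0, 0.0
    for _ in range(steps):
        # submodel 1 update (always from old-step values)
        a1 = (-k1 * x1 + kc * (x2 - x1)) / m1
        x1_new, v1_new = x1 + dt * v1, v1 + dt * a1
        # coupling displacement handed to submodel 2
        x1_seen = x1_new if mode == "serial" else x1
        a2 = -kc * (x2 - x1_seen) / m2
        x2, v2 = x2 + dt * v2, v2 + dt * a2
        x1, v1 = x1_new, v1_new
    return x1, x2

print(simulate("parallel") == simulate("integrated"))
print(simulate("serial") == simulate("integrated"))
```

    With explicit Euler, the integrated and synchronized-parallel schemes coincide because both use previous-step coupling values, whereas the serial scheme injects the already-updated displacement and so drifts slightly, mirroring the abstract's conclusion.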

  1. Parallel plate model for trabecular bone exhibits volume fraction-dependent bias

    DEFF Research Database (Denmark)

    Day, J; Ding, Ming; Odgaard, A;

    2000-01-01

    Unbiased stereological methods were used in conjunction with microcomputed tomographic (micro-CT) scans of human and animal bone to investigate the errors created when the parallel plate model is used to calculate morphometric parameters. Bone samples were obtained from the human proximal tibia, canine distal femur, rat tail, and pig spine and scanned in a micro-CT scanner. Trabecular thickness, trabecular spacing, and trabecular number were calculated using the parallel plate model. Direct thickness, spacing, and connectivity density were calculated using unbiased three-dimensional methods.
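
    The parallel plate model referred to above uses the standard Parfitt relations, Tb.Th = 2(BV/TV)/(BS/TV), Tb.N = (BS/TV)/2 and Tb.Sp = 1/Tb.N - Tb.Th. A minimal sketch with illustrative inputs (not data from the study):

```python
def plate_model(bv_tv, bs_tv):
    """Parallel-plate (Parfitt) morphometric estimates from the
    measured volume fraction BV/TV (-) and surface density BS/TV (1/mm)."""
    tb_th = 2.0 * bv_tv / bs_tv  # trabecular thickness, mm
    tb_n = bs_tv / 2.0           # trabecular number, 1/mm
    tb_sp = 1.0 / tb_n - tb_th   # trabecular separation, mm
    return tb_th, tb_n, tb_sp

th, n, sp = plate_model(bv_tv=0.25, bs_tv=3.5)
print(round(th, 3), round(n, 3), round(sp, 3))
```

    Because real trabeculae are rod-plate mixtures rather than ideal plates, these derived indices are biased relative to the direct 3D measurements, which is the volume fraction-dependent bias the study quantifies.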

  2. Parallel Path Magnet Motor: Development of the Theoretical Model and Analysis of Experimental Results

    Science.gov (United States)

    Dirba, I.; Kleperis, J.

    2011-01-01

    Analytical and numerical modelling is performed for the linear actuator of a parallel path magnet motor. In the model based on finite-element analysis, the 3D problem is reduced to a 2D problem, which is sufficiently precise in a design aspect and allows modelling the principle of a parallel path motor. The paper also describes a relevant numerical model and gives comparison with experimental results. The numerical model includes all geometrical and physical characteristics of the motor components. The magnetic flux density and magnetic force are simulated using FEMM 4.2 software. An experimental model has also been developed and verified for the core of switchable magnetic flux linear actuator and motor. The results of experiments are compared with those of theoretical/analytical and numerical modelling.

  3. Meta-analysis using individual participant data: one-stage and two-stage approaches, and why they may differ.

    Science.gov (United States)

    Burke, Danielle L; Ensor, Joie; Riley, Richard D

    2017-02-28

    Meta-analysis using individual participant data (IPD) obtains and synthesises the raw, participant-level data from a set of relevant studies. The IPD approach is becoming an increasingly popular tool as an alternative to traditional aggregate data meta-analysis, especially as it avoids reliance on published results and provides an opportunity to investigate individual-level interactions, such as treatment-effect modifiers. There are two statistical approaches for conducting an IPD meta-analysis: one-stage and two-stage. The one-stage approach analyses the IPD from all studies simultaneously, for example, in a hierarchical regression model with random effects. The two-stage approach derives aggregate data (such as effect estimates) in each study separately and then combines these in a traditional meta-analysis model. There have been numerous comparisons of the one-stage and two-stage approaches via theoretical consideration, simulation and empirical examples, yet there remains confusion regarding when each approach should be adopted, and indeed why they may differ. In this tutorial paper, we outline the key statistical methods for one-stage and two-stage IPD meta-analyses, and provide 10 key reasons why they may produce different summary results. We explain that most differences arise because of different modelling assumptions, rather than the choice of one-stage or two-stage itself. We illustrate the concepts with recently published IPD meta-analyses, summarise key statistical software and provide recommendations for future IPD meta-analyses. © 2016 The Authors. Statistics in Medicine published by John Wiley & Sons Ltd.
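
    Stage two of the two-stage approach is classically an inverse-variance weighted pooling of the per-study estimates produced in stage one. A minimal fixed-effect sketch with hypothetical study results (real IPD meta-analyses typically also consider random-effects models):

```python
import math

def pool_fixed_effect(estimates, ses):
    """Stage two: inverse-variance weighted average of the
    per-study effect estimates from stage one."""
    weights = [1.0 / se ** 2 for se in ses]
    pooled = sum(w * e for w, e in zip(weights, estimates)) / sum(weights)
    pooled_se = math.sqrt(1.0 / sum(weights))
    return pooled, pooled_se

# Stage-one output from three hypothetical studies (log odds ratios)
estimates = [-0.30, -0.10, -0.25]
ses = [0.10, 0.20, 0.15]
pooled, se = pool_fixed_effect(estimates, ses)
print(round(pooled, 3), round(se, 3))
```

    The one-stage alternative would instead fit a single hierarchical model to all participant-level records; as the abstract stresses, discrepancies between the two usually trace back to different modelling assumptions rather than the staging itself.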

  4. A unit cost adjusting heuristic algorithm for the integrated planning and scheduling of a two-stage supply chain

    Directory of Open Access Journals (Sweden)

    Jianhua Wang

    2014-10-01

    Full Text Available Purpose: In the current market, the stable one-supplier-one-customer relationship is gradually being replaced by a dynamic multi-supplier-multi-customer relationship, and efficient scheduling techniques are important tools for establishing such dynamic supply chain relationships. This paper studies the optimization of the integrated planning and scheduling problem of a two-stage supply chain with multiple manufacturers and multiple retailers, whose manufacturers differ in production capacities, holding and production cost rates, and transportation costs to retailers, with the objective of minimizing the supply chain operating cost. Design/methodology/approach: Treating it as a complex task allocation and scheduling problem, this paper sets up an INLP model and designs a Unit Cost Adjusting (UCA) heuristic algorithm that adjusts the suppliers' supplying quantities step by step according to their unit costs. Findings: A comparative analysis between the UCA heuristic and the Lingo solver over many numerical experiments shows that the INLP model and the UCA algorithm obtain near-optimal solutions of the two-stage supply chain planning and scheduling problem within very short CPU time. Research limitations/implications: The proposed UCA heuristic can easily help managers optimize two-stage supply chain scheduling problems that do not include order delivery times and batches. Since two-stage supply chains are the most common form of actual commercial relationships, modifying and extending the UCA heuristic should make it possible to optimize the integrated planning and scheduling problems of supply chains with more realistic constraints. Originality/value: This research proposes an innovative UCA heuristic for optimizing the integrated planning and scheduling problem of two-stage supply chains under the constraints of suppliers' production capacities and the orders' delivery times, and has a great
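The core idea of adjusting supply quantities by unit cost can be sketched with a simple greedy allocation. This is a hypothetical illustration only, not the paper's UCA algorithm, which also accounts for holding and transportation cost rates; the supplier data are invented.

```python
# Greedy sketch of the unit-cost idea: fill demand from the cheapest
# suppliers first, respecting their capacities. Hypothetical data.
def allocate_by_unit_cost(suppliers, demand):
    """suppliers: list of (capacity, unit_cost). Returns per-supplier allocation."""
    order = sorted(range(len(suppliers)), key=lambda i: suppliers[i][1])
    alloc = [0] * len(suppliers)
    remaining = demand
    for i in order:
        cap, _ = suppliers[i]
        take = min(cap, remaining)
        alloc[i] = take
        remaining -= take
        if remaining == 0:
            break
    if remaining > 0:
        raise ValueError("demand exceeds total capacity")
    return alloc

suppliers = [(50, 2.0), (40, 1.0), (30, 3.0)]   # (capacity, unit cost)
alloc = allocate_by_unit_cost(suppliers, demand=70)
cost = sum(a * c for a, (_, c) in zip(alloc, suppliers))
```

Here the 70 units are covered by the two cheapest suppliers (40 at cost 1.0, 30 at cost 2.0), for a total cost of 100.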

  5. From Cells to Islands: An unified Model of Cellular Parallel Genetic Algorithms

    CERN Document Server

    Simoncini, David; Verel, Sébastien; Clergue, Manuel

    2008-01-01

    This paper presents the anisotropic selection scheme for cellular Genetic Algorithms (cGA). This new scheme makes it possible to enhance diversity and to control the selective pressure, two important issues in Genetic Algorithms, especially when trying to solve difficult optimization problems. Varying the anisotropic degree of selection allows swapping from a cellular to an island model of parallel genetic algorithm. Measures of performance and diversity have been performed on one well-known problem: the Quadratic Assignment Problem, which is known to be difficult to optimize. Experiments show that, by tuning the anisotropic degree, we can find the accurate trade-off between cGA and island models to optimize the performance of parallel evolutionary algorithms. This trade-off can be interpreted as the suitable degree of migration among subpopulations in a parallel Genetic Algorithm.
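A minimal sketch of what an anisotropic neighbour choice could look like on a toroidal grid: the anisotropy parameter shifts selection probability toward one axis, so at the extreme the grid decouples into column "islands". The probability formula and parameter names are our assumptions, not the paper's exact scheme.

```python
# Hypothetical anisotropic neighbour selection on a torus: alpha = 0
# gives the isotropic von Neumann choice, alpha = 1 selects only
# vertical neighbours (column-wise islands).
import random

def pick_neighbour(x, y, width, height, alpha, rng):
    vertical = [(x, (y - 1) % height), (x, (y + 1) % height)]
    horizontal = [((x - 1) % width, y), ((x + 1) % width, y)]
    p_vertical = 0.5 * (1.0 + alpha)   # probability mass shifted by alpha
    pool = vertical if rng.random() < p_vertical else horizontal
    return rng.choice(pool)

rng = random.Random(42)
picks = [pick_neighbour(3, 3, 8, 8, alpha=1.0, rng=rng) for _ in range(100)]
```

With `alpha=1.0` every pick is a vertical neighbour, mimicking the island limit the paper interpolates toward.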

  6. PARALLEL ADAPTIVE MULTILEVEL SAMPLING ALGORITHMS FOR THE BAYESIAN ANALYSIS OF MATHEMATICAL MODELS

    KAUST Repository

    Prudencio, Ernesto

    2012-01-01

    In recent years, Bayesian model updating techniques based on measured data have been applied to many engineering and applied science problems. At the same time, parallel computational platforms are becoming increasingly more powerful and are being used more frequently by the engineering and scientific communities. Bayesian techniques usually require the evaluation of multi-dimensional integrals related to the posterior probability density function (PDF) of uncertain model parameters. The fact that such integrals cannot be computed analytically motivates the research of stochastic simulation methods for sampling posterior PDFs. One such algorithm is the adaptive multilevel stochastic simulation algorithm (AMSSA). In this paper we discuss the parallelization of AMSSA, formulating the necessary load balancing step as a binary integer programming problem. We present a variety of results showing the effectiveness of load balancing on the overall performance of AMSSA in a parallel computational environment.
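The paper formulates AMSSA's load-balancing step as a binary integer program. As a hypothetical stand-in that makes the task-to-processor assignment concrete, the classical longest-processing-time (LPT) greedy below assigns the heaviest tasks first to the least-loaded processor; the task costs are invented and the paper's actual formulation is not reproduced here.

```python
# LPT greedy: a cheap heuristic for the kind of assignment problem the
# paper solves exactly with binary integer programming. Costs are made up.
import heapq

def lpt_assign(task_costs, n_procs):
    """Assign tasks to processors, heaviest first onto the lightest processor."""
    heap = [(0.0, p) for p in range(n_procs)]   # (current load, processor id)
    heapq.heapify(heap)
    assignment = {p: [] for p in range(n_procs)}
    for cost in sorted(task_costs, reverse=True):
        load, p = heapq.heappop(heap)
        assignment[p].append(cost)
        heapq.heappush(heap, (load + cost, p))
    return assignment

assignment = lpt_assign([7, 5, 4, 3, 3, 2], n_procs=2)
makespan = max(sum(tasks) for tasks in assignment.values())
```

For these six tasks on two processors the greedy reaches the optimal makespan of 12 (total work 24, split evenly).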

  7. BSIRT: a block-iterative SIRT parallel algorithm using curvilinear projection model.

    Science.gov (United States)

    Zhang, Fa; Zhang, Jingrong; Lawrence, Albert; Ren, Fei; Wang, Xuan; Liu, Zhiyong; Wan, Xiaohua

    2015-03-01

    Large-field high-resolution electron tomography (ET) enables visualizing detailed mechanisms under global structure. As the field enlarges, reconstruction distortions and processing time become more critical. Using the curvilinear projection model can improve the quality of large-field ET reconstruction, but its computational complexity further exacerbates the processing time. Moreover, there has been no parallel strategy on GPU for an iterative reconstruction method with curvilinear projection. Here we propose a new block-iterative SIRT parallel algorithm with the curvilinear projection model (BSIRT) for large-field ET reconstruction, to improve the quality of reconstruction and accelerate the reconstruction process. We also develop some key techniques, including a block-iterative method with the curvilinear projection, a scope-based data decomposition method and a page-based data transfer scheme, to implement the parallelization of BSIRT on the GPU platform. Experimental results show that BSIRT can improve the reconstruction quality as well as the speed of the reconstruction process.
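The SIRT family of iterations that BSIRT parallelises in blocks can be sketched on a small dense toy system. The update below is the standard SART-weighted SIRT step x ← x + C Aᵀ R (b − A x), with R and C the inverse row and column sums of the projection matrix; the paper's curvilinear projection model and GPU blocking are not reproduced.

```python
# Serial sketch of a SIRT-type update on a toy, noise-free system.
# A stands in for the projection matrix; BSIRT applies this in blocks on GPU.
import numpy as np

rng = np.random.default_rng(0)
A = rng.random((30, 10)) + 0.1      # toy projection matrix, positive entries
x_true = rng.random(10)
b = A @ x_true                       # consistent projections

R = 1.0 / A.sum(axis=1)              # inverse row sums
C = 1.0 / A.sum(axis=0)              # inverse column sums

x = np.zeros(10)
for _ in range(500):
    x = x + C * (A.T @ (R * (b - A @ x)))   # SIRT/SART update

residual = np.linalg.norm(A @ x - b)
```

For a consistent system the residual shrinks steadily toward zero, which is what the block-iterative scheme preserves while distributing the work.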

  8. BSP模型下的并行程序设计与开发%Design and Development of Parallel Programs on Bulk Synchronous Parallel Model

    Institute of Scientific and Technical Information of China (English)

    赖树华; 陆朝俊; 孙永强

    2001-01-01

    The Bulk Synchronous Parallel (BSP) model is briefly introduced, and the advantages of designing and developing parallel programs on the BSP model are discussed. The paper then analyses how to design and develop parallel programs on the BSP model and summarizes several principles the developer must comply with. Finally, a useful parallel programming method based on the BSP model is presented: the two-phase method of BSP parallel program design. The multiplication of two matrices is used as an example to illustrate how to apply this two-phase design method, together with a BSP performance prediction tool, to the design and development of BSP parallel programs.
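BSP design methods such as the one above rest on the standard superstep cost model T = Σᵢ (wᵢ + g·hᵢ + l), where wᵢ is local computation, hᵢ the maximal number of words sent or received, g the bandwidth coefficient and l the barrier cost; this is what a BSP performance prediction tool evaluates. A minimal sketch with illustrative numbers:

```python
# Standard BSP cost model: each superstep costs w + g*h + l.
# The (w, h) pairs and machine parameters below are illustrative only.
def bsp_cost(supersteps, g, l):
    """supersteps: list of (w, h) pairs; returns the total predicted cost."""
    return sum(w + g * h + l for w, h in supersteps)

# e.g. a local-multiply superstep followed by an exchange of partial results
cost = bsp_cost([(1000, 50), (400, 200)], g=4, l=100)
```

Here the two supersteps cost (1000 + 200 + 100) + (400 + 800 + 100) = 2600 time units, making the communication/synchronisation overheads explicit at design time.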

  9. A global parallel model based design of experiments method to minimize model output uncertainty.

    Science.gov (United States)

    Bazil, Jason N; Buzzard, Gregory T; Rundell, Ann E

    2012-03-01

    Model-based experiment design specifies the data to be collected that will most effectively characterize the biological system under study. Existing model-based design of experiment algorithms have primarily relied on Fisher Information Matrix-based methods to choose the best experiment in a sequential manner. However, these are largely local methods that require an initial estimate of the parameter values, which are often highly uncertain, particularly when data is limited. In this paper, we provide an approach to specify an informative sequence of multiple design points (parallel design) that will constrain the dynamical uncertainty of the biological system responses to within experimentally detectable limits as specified by the estimated experimental noise. The method is based upon computationally efficient sparse grids and requires only a bounded uncertain parameter space; it does not rely upon initial parameter estimates. The design sequence emerges through the use of scenario trees with experimental design points chosen to minimize the uncertainty in the predicted dynamics of the measurable responses of the system. The algorithm was illustrated herein using a T cell activation model for three problems that ranged in dimension from 2D to 19D. The results demonstrate that it is possible to extract useful information from a mathematical model where traditional model-based design of experiments approaches most certainly fail. The experiments designed via this method fully constrain the model output dynamics to within experimentally resolvable limits. The method is effective for highly uncertain biological systems characterized by deterministic mathematical models with limited data sets. Also, it is highly modular and can be modified to include a variety of methodologies such as input design and model discrimination.

  10. Error Modelling and Experimental Validation for a Planar 3-PPR Parallel Manipulator

    DEFF Research Database (Denmark)

    Wu, Guanglei; Bai, Shaoping; Kepler, Jørgen Asbøl

    2011-01-01

    In this paper, the positioning error of a 3-PPR planar parallel manipulator is studied with an error model and experimental validation. First, the displacement and workspace are analyzed. An error model considering both configuration errors and joint clearance errors is established. Using this model, the maximum positioning error was estimated for a U-shape PPR planar manipulator, the results being compared with experimental measurements. It is found that the error distributions from the simulation approximate those of the measurements.

  12. Error Modelling and Experimental Validation of a Planar 3-PPR Parallel Manipulator with Joint Clearances

    OpenAIRE

    Wu, Guanglei; Shaoping, Bai; Jørgen A., Kepler; Caro, Stéphane

    2012-01-01

    International audience; This paper deals with the error modelling and analysis of a 3-PPR planar parallel manipulator with joint clearances. The kinematics and the Cartesian workspace of the manipulator are analyzed. An error model is established with considerations of both configuration errors and joint clearances. Using this model, the upper bounds and distributions of the pose errors for this manipulator are established. The results are compared with experimental measurements a...

  13. Design and Implementation of “Many Parallel Task” Hybrid Subsurface Model

    Energy Technology Data Exchange (ETDEWEB)

    Agarwal, Khushbu; Chase, Jared M.; Schuchardt, Karen L.; Scheibe, Timothy D.; Palmer, Bruce J.; Elsethagen, Todd O.

    2011-11-01

    Continuum scale models have been used to study subsurface flow, transport, and reactions for many years. Recently, pore scale models, which operate at scales of individual soil grains, have been developed to more accurately model pore scale phenomena, such as precipitation, that may not be well represented at the continuum scale. However, particle-based models become prohibitively expensive for modeling realistic domains. Instead, we are developing a hybrid model that simulates the full domain at continuum scale and applies the pore model only to areas of high reactivity. The hybrid model uses a dimension reduction approach to formulate the mathematical exchange of information across scales. Since the location, size, and number of pore regions in the model vary, an adaptive Pore Generator is being implemented to define pore regions at each iteration. A fourth code will provide data transformation from the pore scale back to the continuum scale. These components are coupled into a single hybrid model using the SWIFT workflow system. Our hybrid model workflow simulates a kinetic controlled mixing reaction in which multiple pore-scale simulations occur for every continuum scale timestep. Each pore-scale simulation is itself parallel, thus exhibiting multi-level parallelism. Our workflow manages these multiple parallel tasks simultaneously, with the number of tasks changing across iterations. It also supports dynamic allocation of job resources and visualization processing at each iteration. We discuss the design, implementation and challenges associated with building a scalable, Many Parallel Task, hybrid model to run efficiently on thousands to tens of thousands of processors.

  14. MPI Parallel Algorithm in Satellite Gravity Field Model Inversion on the Basis of Least Square Method

    Directory of Open Access Journals (Sweden)

    ZHOU Hao

    2015-08-01

    Full Text Available In order to handle the intensive computation and high memory demand of satellite gravity field model inversion from huge amounts of satellite gravity observations, a parallel algorithm based on MPI for the least-squares inversion of high truncated degree and order satellite gravity field models is introduced. After analyzing the time and space complexity of each step in the solution flow, parallel I/O, block-organized storage and block-organized computation algorithms based on MPI are used to design the parallel algorithm for building the design matrix and for establishing and solving the normal equations; simulation results indicate that the parallel efficiencies of building the design matrix and of establishing and solving the normal equations reach about 95%, 68% and 63%, respectively. In addition, on the basis of GOCE simulated orbits and radial disturbing gravity gradient data (518 400 epochs in total), two Earth gravity models truncated to degree and order 120 and 240 are inverted; the computation times are only about 40 minutes and 7 hours, and the memory demands about 290 MB and 1.57 GB, respectively. Finally, a simulated numerical inversion of the Earth gravity field model is conducted with data whose noise level is equivalent to that of the GRACE and GOCE missions. The accuracy of the inverted model agrees well with currently released models, and the combined mode can complement the spectral information of each individual mission, which indicates that the parallel algorithm in this paper can be applied to invert high truncated degree and order Earth gravity models efficiently and stably.
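The block-organized computation described above rests on the fact that the normal equations accumulate additively over row blocks of the design matrix: N = Σᵢ AᵢᵀAᵢ and y = Σᵢ Aᵢᵀbᵢ, so no process ever needs the full matrix. A serial numpy sketch of that accumulation (sizes and data are illustrative, far smaller than a real gravity field inversion):

```python
# Block accumulation of the least-squares normal equations, the pattern
# the MPI code distributes across processes. Toy sizes and data.
import numpy as np

rng = np.random.default_rng(1)
A = rng.random((1000, 8))                    # design matrix (toy scale)
b = A @ rng.random(8) + 0.01 * rng.random(1000)

n_params = A.shape[1]
N = np.zeros((n_params, n_params))
y = np.zeros(n_params)
for start in range(0, A.shape[0], 250):      # four row blocks
    Ai, bi = A[start:start + 250], b[start:start + 250]
    N += Ai.T @ Ai                           # each block's contribution
    y += Ai.T @ bi

x_blocked = np.linalg.solve(N, y)
x_direct, *_ = np.linalg.lstsq(A, b, rcond=None)
```

The blocked solution matches the direct least-squares solution, which is why the distributed accumulation loses no accuracy.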

  15. Reconstruction of Gene Regulatory Networks Based on Two-Stage Bayesian Network Structure Learning Algorithm

    Institute of Scientific and Technical Information of China (English)

    Gui-xia Liu; Wei Feng; Han Wang; Lei Liu; Chun-guang Zhou

    2009-01-01

    In the post-genomic era, the reconstruction of gene regulatory networks from microarray gene expression data is very important for understanding the underlying biological system, and it has been a challenging task in bioinformatics. The Bayesian network model has been used to reconstruct gene regulatory networks because of its advantages, but how to determine the network structure and parameters still needs to be explored. This paper proposes a two-stage structure learning algorithm which integrates an immune evolution algorithm to build a Bayesian network. The new algorithm is evaluated with both simulated and yeast cell cycle data. The experimental results indicate that the proposed algorithm can find many of the known real regulatory relationships from the literature and predict unknown ones with high validity and accuracy.

  16. A Two-Stage Approach for Medical Supplies Intermodal Transportation in Large-Scale Disaster Responses

    Directory of Open Access Journals (Sweden)

    Junhu Ruan

    2014-10-01

    Full Text Available We present a two-stage approach for the “helicopters and vehicles” intermodal transportation of medical supplies in large-scale disaster responses. In the first stage, a fuzzy-based method and its heuristic algorithm are developed to select the locations of temporary distribution centers (TDCs) and assign medical aid points (MAPs) to each TDC. In the second stage, an integer-programming model is developed to determine the delivery routes. Numerical experiments verified the effectiveness of the approach and yielded several findings: (i) more TDCs often increase the efficiency and utility of medical supplies; (ii) it is not necessarily true that vehicles should load more and more medical supplies in emergency responses; (iii) the more contrasting the traveling speeds of helicopters and vehicles are, the more advantageous the intermodal transportation is.

  17. Fast Image Segmentation Based on a Two-Stage Geometrical Active Contour

    Institute of Scientific and Technical Information of China (English)

    肖昌炎; 张素; 陈亚珠

    2005-01-01

    A fast two-stage geometric active contour algorithm for image segmentation is developed. First, the Eikonal equation problem is quickly solved using an improved fast sweeping method, and a criterion of local minimum of area gradient (LMAG) is presented to extract the optimal arrival time. Then, the final time function is passed as an initial state to an area- and length-minimizing flow model, which adjusts the interface more accurately and prevents it from leaking. For objects with complete and salient edges, the first stage alone is able to obtain an ideal result, with a time complexity of O(M), where M is the number of points in each coordinate direction. Both stages are needed for convoluted shapes, but the computation cost is still drastically reduced. The efficiency of the algorithm is verified in segmentation experiments on real images with different features.

  18. Evaluation of a Two-Stage Approach in Trans-Ethnic Meta-Analysis in Genome-Wide Association Studies.

    Science.gov (United States)

    Hong, Jaeyoung; Lunetta, Kathryn L; Cupples, L Adrienne; Dupuis, Josée; Liu, Ching-Ti

    2016-05-01

    Meta-analysis of genome-wide association studies (GWAS) has achieved great success in detecting loci underlying human diseases. Incorporating GWAS results from diverse ethnic populations for meta-analysis, however, remains challenging because of the possible heterogeneity across studies. Conventional fixed-effects (FE) or random-effects (RE) methods may not be most suitable to aggregate multiethnic GWAS results because of violation of the homogeneous effect assumption across studies (FE) or low power to detect signals (RE). Three recently proposed methods, modified RE (RE-HE) model, binary-effects (BE) model and a Bayesian approach (Meta-analysis of Transethnic Association [MANTRA]), show increased power over FE and RE methods while incorporating heterogeneity of effects when meta-analyzing trans-ethnic GWAS results. We propose a two-stage approach to account for heterogeneity in trans-ethnic meta-analysis in which we clustered studies with cohort-specific ancestry information prior to meta-analysis. We compare this to a no-prior-clustering (crude) approach, evaluating type I error and power of these two strategies, in an extensive simulation study to investigate whether the two-stage approach offers any improvements over the crude approach. We find that the two-stage approach and the crude approach for all five methods (FE, RE, RE-HE, BE, MANTRA) provide well-controlled type I error. However, the two-stage approach shows increased power for BE and RE-HE, and similar power for MANTRA and FE compared to their corresponding crude approach, especially when there is heterogeneity across the multiethnic GWAS results. These results suggest that prior clustering in the two-stage approach can be an effective and efficient intermediate step in meta-analysis to account for the multiethnic heterogeneity.

  19. On the adequation of dynamic modelling and control of parallel kinematic manipulators.

    OpenAIRE

    Ozgür, Erol; Andreff, Nicolas; Martinet, Philippe

    2010-01-01

    International audience; This paper addresses the problem of controlling the dynamics of parallel kinematic manipulators from a global point of view, where modeling, sensing and control are considered simultaneously. The methodology is presented through the examples of the Gough-Stewart manipulator and the Quattro robot.

  20. All-pairs Shortest Path Algorithm based on MPI+CUDA Distributed Parallel Programming Model

    Directory of Open Access Journals (Sweden)

    Qingshuang Wu

    2013-12-01

    Full Text Available Computing shortest paths in a graph is a complex and time-consuming process, and traditional algorithms that rely solely on the CPU as the computing unit cannot meet the demands of real-time processing. In this paper, we present an all-pairs shortest path algorithm using an MPI+CUDA hybrid programming model, which exploits the overwhelming computing power of a GPU cluster to speed up the processing. The proposed algorithm combines the advantages of the MPI and CUDA programming models and realizes two-level parallel computing. At the cluster level, we use the MPI programming model to achieve coarse-grained parallel computing between the computational nodes of the GPU cluster. At the node level, we use the CUDA programming model to achieve GPU-accelerated fine-grained parallel computing within each computational node. The experimental results show that the MPI+CUDA-based parallel algorithm can take full advantage of the powerful computing capability of a GPU cluster and achieve speedups of about hundreds of times; the whole algorithm has good computing performance, reliability and scalability, and is able to meet the demands of real-time processing of massive spatial shortest path analysis.
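As a serial reference for the computation being distributed, the vectorized Floyd–Warshall sweep below shows the k-loop whose row/column updates such two-level MPI+CUDA schemes typically partition across nodes and GPU threads. The abstract does not name the exact APSP algorithm used, so this is an illustrative choice, with a made-up four-vertex graph.

```python
# Serial, vectorized Floyd-Warshall all-pairs shortest paths.
# The k-th sweep relaxes every pair (i, j) through intermediate vertex k.
import numpy as np

INF = np.inf
D = np.array([[0,   3,   INF, 7],
              [8,   0,   2,   INF],
              [5,   INF, 0,   1],
              [2,   INF, INF, 0]], dtype=float)

n = D.shape[0]
for k in range(n):
    # broadcast: column of paths into k plus row of paths out of k
    D = np.minimum(D, D[:, k:k + 1] + D[k:k + 1, :])
```

Each of the n sweeps is a fully data-parallel element-wise minimum over an n×n array, which is what makes the computation a good fit for the GPU.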

  1. Teaching Scientific Computing: A Model-Centered Approach to Pipeline and Parallel Programming with C

    Directory of Open Access Journals (Sweden)

    Vladimiras Dolgopolovas

    2015-01-01

    Full Text Available The aim of this study is to present an approach to the introduction to pipeline and parallel computing, using a model of the multiphase queueing system. Pipeline computing, including software pipelines, is among the key concepts in modern computing and electronics engineering. Modern computer science and engineering education requires a comprehensive curriculum, so the introduction to pipeline and parallel computing is an essential topic to be included in the curriculum. At the same time, the topic is among the most motivating tasks due to its comprehensive multidisciplinary and technical requirements. To enhance the educational process, the paper proposes a novel model-centered framework and develops the relevant learning objects. It allows implementing an educational platform for a constructivist learning process, thus enabling learners' experimentation with the provided programming models, the acquisition of competences in modern scientific research and computational thinking, and the capture of the relevant technical knowledge. It also provides an integral platform that allows a simultaneous and comparative introduction to pipelining and parallel computing. The programming language C was chosen for developing the programming models, with the message passing interface (MPI) and OpenMP parallelization tools used for implementation.
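A minimal model of pipeline timing of the kind such learning objects can target: for a synchronous (clocked) pipeline with stage times tᵢ, n items complete in Σtᵢ + (n − 1)·max tᵢ time units, because after the first item fills the pipeline, one item emerges per slowest-stage period. This is standard pipeline arithmetic, not the article's multiphase queueing model.

```python
# Completion time of n items through a synchronous pipeline whose clock
# period equals the slowest stage. Stage times are illustrative.
def pipeline_time(stage_times, n_items):
    return sum(stage_times) + (n_items - 1) * max(stage_times)

t = pipeline_time([2, 4, 3], n_items=10)   # slowest stage (4) dominates
```

With these numbers the fill time is 9 and nine further items emerge every 4 units, for 45 in total; the same stages run sequentially would need 90.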

  2. Parallel plate model for trabecular bone exhibits volume fraction-dependent bias

    NARCIS (Netherlands)

    J.S. Day (Judd); M. Ding; A. Odgaard; D.R. Sumner (Dale); I. Hvid (Ivan); H.H. Weinans (Harrie)

    2000-01-01

    Unbiased stereological methods were used in conjunction with microcomputed tomographic (micro-CT) scans of human and animal bone to investigate errors created when the parallel plate model was used to calculate morphometric parameters. Bone samples were obtained from the human proximal t

  3. PVeStA: A Parallel Statistical Model Checking and Quantitative Analysis Tool

    KAUST Repository

    AlTurki, Musab

    2011-01-01

    Statistical model checking is an attractive formal analysis method for probabilistic systems such as, for example, cyber-physical systems which are often probabilistic in nature. This paper is about drastically increasing the scalability of statistical model checking, and making such scalability of analysis available to tools like Maude, where probabilistic systems can be specified at a high level as probabilistic rewrite theories. It presents PVeStA, an extension and parallelization of the VeStA statistical model checking tool [10]. PVeStA supports statistical model checking of probabilistic real-time systems specified as either: (i) discrete or continuous Markov Chains; or (ii) probabilistic rewrite theories in Maude. Furthermore, the properties that it can model check can be expressed in either: (i) PCTL/CSL, or (ii) the QuaTEx quantitative temporal logic. As our experiments show, the performance gains obtained from parallelization can be very high. © 2011 Springer-Verlag.

  4. Sparse Probabilistic Parallel Factor Analysis for the Modeling of PET and Task-fMRI Data

    DEFF Research Database (Denmark)

    Beliveau, Vincent; Papoutsakis, Georgios; Hinrich, Jesper Løve

    2017-01-01

    Modern datasets are often multiway in nature and can contain patterns common to a mode of the data (e.g. space, time, and subjects). Multiway decompositions such as parallel factor analysis (PARAFAC) take into account the intrinsic structure of the data, and sparse versions of these methods improve interpretability of the results. Here we propose a variational Bayesian parallel factor analysis (VB-PARAFAC) model and an extension with sparse priors (SP-PARAFAC). Notably, our formulation admits time- and subject-specific noise modeling as well as subject-specific offsets (i.e., mean values). We confirmed the validity of the models through simulation and performed exploratory analysis of positron emission tomography (PET) and functional magnetic resonance imaging (fMRI) data. Although more constrained, the proposed models performed similarly to more flexible models in approximating the PET data, which supports...

  5. Two-stage atlas subset selection in multi-atlas based image segmentation

    Energy Technology Data Exchange (ETDEWEB)

    Zhao, Tingting, E-mail: tingtingzhao@mednet.ucla.edu; Ruan, Dan, E-mail: druan@mednet.ucla.edu [The Department of Radiation Oncology, University of California, Los Angeles, California 90095 (United States)

    2015-06-15

    Purpose: Fast growing access to large databases and cloud-stored data presents a unique opportunity for multi-atlas based image segmentation, and also presents challenges in heterogeneous atlas quality and computation burden. This work aims to develop a novel two-stage method tailored to the special needs arising from large atlas collections of varied quality, so that high-accuracy segmentation can be achieved at low computational cost. Methods: An atlas subset selection scheme is proposed to substitute a significant portion of the computationally expensive full-fledged registration in the conventional scheme with a low-cost alternative. More specifically, the authors introduce a two-stage atlas subset selection method. In the first stage, an augmented subset is obtained based on a low-cost registration configuration and a preliminary relevance metric; in the second stage, the subset is further narrowed down to a fusion set of desired size, based on full-fledged registration and a refined relevance metric. An inference model is developed to characterize the relationship between the preliminary and refined relevance metrics, and a proper augmented subset size is derived to ensure that the desired atlases survive the preliminary selection with high probability. Results: The performance of the proposed scheme has been assessed with cross validation based on two clinical datasets consisting of manually segmented prostate and brain magnetic resonance images, respectively. The proposed scheme demonstrates comparable end-to-end segmentation performance to the conventional single-stage selection method, but with significant computation reduction. Compared with the alternative computation reduction method, their scheme improves the mean and median Dice similarity coefficient values from (0.74, 0.78) to (0.83, 0.85) and from (0.82, 0.84) to (0.95, 0.95) for prostate and corpus callosum segmentation, respectively, with statistical significance. Conclusions: The authors
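The two-stage selection can be sketched numerically: a cheap preliminary relevance metric (here, dissimilarity on downsampled signals) keeps an augmented subset, and an expensive refined metric (full-resolution dissimilarity) narrows it to the fusion set. Synthetic 1-D "atlases" stand in for registered images; the metrics, sizes, and data are our assumptions, not the paper's registration-based measures.

```python
# Two-stage subset selection sketch: cheap metric -> augmented subset,
# refined metric -> fusion set. Synthetic data, illustrative metrics.
import numpy as np

rng = np.random.default_rng(7)
target = rng.random(64)
# atlas i is the target plus distortion that grows with i
atlases = target + 0.05 * np.arange(20)[:, None] * rng.random((20, 64))

def cheap_metric(a):      # stage 1: coarse, downsampled comparison
    return float(np.mean((a[::8] - target[::8]) ** 2))

def refined_metric(a):    # stage 2: full-resolution comparison
    return float(np.mean((a - target) ** 2))

k_augmented, m_fusion = 8, 3
stage1 = sorted(range(len(atlases)), key=lambda i: cheap_metric(atlases[i]))[:k_augmented]
fusion = sorted(stage1, key=lambda i: refined_metric(atlases[i]))[:m_fusion]
```

Only the 8 survivors of the cheap stage ever see the expensive metric, which is the source of the computation reduction; the best atlas (index 0, identical to the target) survives both stages.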

  6. Preemptive scheduling in a two-stage supply chain to minimize the makespan

    NARCIS (Netherlands)

    Pei, Jun; Fan, Wenjuan; Pardalos, Panos M.; Liu, Xinbao; Goldengorin, Boris; Yang, Shanlin

    2015-01-01

    This paper deals with the problem of preemptive scheduling in a two-stage supply chain framework. The supply chain environment contains two stages: production and transportation. In the production stage jobs are processed on a manufacturer's bounded serial batching machine, preemptions are allowed,

  7. Parallelization of a Quantum-Classic Hybrid Model For Nanoscale Semiconductor Devices

    OpenAIRE

    2011-01-01

    The expensive reengineering of sequential software and the difficulty of parallel programming are two of the many technical and economic obstacles to the wide use of HPC. We investigate the chance to rapidly improve the performance of a serial numerical code for simulating the transport of charged carriers in a Double-Gate MOSFET. We introduce the Drift-Diffusion-Schrödinger-Poisson (DDSP) model and we study a rapid parallelization strategy of the numerical procedure on shared...

  8. Modeling of Electromagnetic Fields in Parallel-Plane Structures: A Unified Contour-Integral Approach

    Directory of Open Access Journals (Sweden)

    M. Stumpf

    2017-04-01

    Full Text Available A unified reciprocity-based modeling approach for analyzing electromagnetic fields in dispersive parallel-plane structures of arbitrary shape is described. It is shown that the use of the reciprocity theorem of the time-convolution type leads to a global contour-integral interaction quantity from which novel time- and frequency-domain numerical schemes can be arrived at. Applications of the numerical method concerning the time-domain radiated interference and susceptibility of parallel-plane structures are discussed and illustrated with numerical examples.

  9. Model of stacked long Josephson junctions: Parallel algorithm and numerical results in case of weak coupling

    Science.gov (United States)

    Zemlyanaya, E. V.; Bashashin, M. V.; Rahmonov, I. R.; Shukrinov, Yu. M.; Atanasova, P. Kh.; Volokhova, A. V.

    2016-10-01

    We consider a model of a system of long Josephson junctions (LJJ) with inductive and capacitive coupling. The corresponding system of nonlinear partial differential equations is solved by means of the standard three-point finite-difference approximation in the spatial coordinate, utilizing the Runge-Kutta method for the solution of the resulting Cauchy problem. A parallel algorithm is developed and implemented on the basis of the MPI (Message Passing Interface) technology. The effect of the coupling between the JJs on the properties of the LJJ system is demonstrated. Numerical results are discussed from the viewpoint of the effectiveness of the parallel implementation.

  10. Developing a Massively Parallel Forward Projection Radiography Model for Large-Scale Industrial Applications

    Energy Technology Data Exchange (ETDEWEB)

    Bauerle, Matthew [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)

    2014-08-01

    This project utilizes Graphics Processing Units (GPUs) to compute radiograph simulations for arbitrary objects. The generation of radiographs, also known as the forward projection imaging model, is computationally intensive and not widely utilized. The goal of this research is to develop a massively parallel algorithm that can compute forward projections for objects with a trillion voxels (3D pixels). To achieve this end, the data are divided into blocks that can each fit into GPU memory. The forward projected image is also divided into segments to allow for future parallelization and to avoid needless computations.
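    The block decomposition idea can be illustrated with a toy parallel-beam projector: the volume is processed in slabs small enough to fit in (GPU) memory, and each slab's partial projection is accumulated into the final image. The shapes and attenuation values below are invented:

```python
# Split a voxel volume into z-slabs, project each slab along z, accumulate.
import random

nx = ny = nz = 8
volume = [[[random.random() for _ in range(nx)] for _ in range(ny)]
          for _ in range(nz)]

def project_block(block):
    """Sum a list of z-slices into a 2D partial parallel-beam projection."""
    img = [[0.0]*nx for _ in range(ny)]
    for sl in block:
        for y in range(ny):
            for x in range(nx):
                img[y][x] += sl[y][x]
    return img

# Process the volume two slices at a time (stand-in for GPU-sized blocks).
proj = [[0.0]*nx for _ in range(ny)]
for z0 in range(0, nz, 2):
    part = project_block(volume[z0:z0+2])
    for y in range(ny):
        for x in range(nx):
            proj[y][x] += part[y][x]
```

    Because line integrals are additive along the ray, the accumulated block projections equal the projection of the whole volume.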

  11. A PARALLEL NUMERICAL MODEL OF SOLVING N-S EQUATIONS BY USING SEQUENTIAL REGULARIZATION METHOD

    Institute of Scientific and Technical Information of China (English)

    2003-01-01

    A parallel numerical model was established for solving the Navier-Stokes equations using the Sequential Regularization Method (SRM). The computational domain is decomposed into P sub-domains in which the difference formulae were obtained from the governing equations. The data were exchanged at the virtual boundaries of the sub-domains during parallel computation. The closed-channel cavity flow was solved by the implicit method, and the driven square cavity flow by the explicit method. The results compare well with those given by Ghia.
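    A serial emulation of the virtual-boundary data exchange might look as follows: the 1D domain is split into P sub-domains with one ghost cell on each side, and every sweep refreshes the ghost cells from the neighbours' old edge values (Jacobi style), so the decomposed result matches the serial one. The three-point smoothing stencil is a placeholder for the SRM difference formulae, and P and n are arbitrary:

```python
# Domain decomposition with ghost-cell exchange, checked against a serial sweep.
P, n = 4, 16
u = [float(i) for i in range(n)]

def split(u):
    size = n // P
    return [u[i*size:(i+1)*size] for i in range(P)]

def sweep_parallel(parts):
    new_parts = []
    for p, part in enumerate(parts):
        left = parts[p-1][-1] if p > 0 else part[0]       # ghost from left neighbour
        right = parts[p+1][0] if p < P-1 else part[-1]    # ghost from right neighbour
        ext = [left] + part + [right]
        new_parts.append([(ext[i-1] + ext[i] + ext[i+1]) / 3
                          for i in range(1, len(ext)-1)])
    return new_parts

def sweep_serial(u):
    ext = [u[0]] + u + [u[-1]]
    return [(ext[i-1] + ext[i] + ext[i+1]) / 3 for i in range(1, len(ext)-1)]

parts = split(u)
for _ in range(3):
    parts = sweep_parallel(parts)
serial = u
for _ in range(3):
    serial = sweep_serial(serial)
flat = [x for part in parts for x in part]   # reassembled decomposed solution
```

    In a real MPI code, the ghost reads would be point-to-point sends/receives between neighbouring ranks.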

  12. Parallelization of a Quantum-Classic Hybrid Model For Nanoscale Semiconductor Devices

    Directory of Open Access Journals (Sweden)

    Oscar Salas

    2011-07-01

    Full Text Available The expensive reengineering of sequential software and the difficulty of parallel programming are two of the many technical and economic obstacles to the wide use of HPC. We investigate the possibility of rapidly improving the performance of a serial numerical code for simulating the transport of charged carriers in a Double-Gate MOSFET. We introduce the Drift-Diffusion-Schrödinger-Poisson (DDSP) model and study a rapid parallelization strategy for the numerical procedure on shared memory architectures.

  13. Rasterizing geological models for parallel finite difference simulation using seismic simulation as an example

    Science.gov (United States)

    Zehner, Björn; Hellwig, Olaf; Linke, Maik; Görz, Ines; Buske, Stefan

    2016-01-01

    3D geological underground models are often represented by vector data, such as triangulated networks representing boundaries of geological bodies and geological structures. Since the models are to be used for numerical simulations based on the finite difference method, they have to be converted into a representation discretizing the full volume of the model into hexahedral cells. Often the simulations require a high grid resolution and are done using parallel computing. The storage of such a high-resolution raster model would require a large amount of storage space and it is difficult to create such a model using the standard geomodelling packages. Since the raster representation is only required for the calculation, but not for the geometry description, we present an algorithm and concept for rasterizing geological models on the fly for the use in finite difference codes that are parallelized by domain decomposition. As a proof of concept we implemented a rasterizer library and integrated it into seismic simulation software that is run as parallel code on a UNIX cluster using the Message Passing Interface. We can thus run the simulation with realistic and complicated surface-based geological models that are created using 3D geomodelling software, instead of using a simplified representation of the geological subsurface using mathematical functions or geometric primitives. We tested this set-up using an example model that we provide along with the implemented library.

  14. Parallel processing optimization strategy based on MapReduce model in cloud storage environment

    Science.gov (United States)

    Cui, Jianming; Liu, Jiayi; Li, Qiuyan

    2017-05-01

    Currently, a large number of documents in the cloud storage process are packaged only after all packets have been received. In this stored procedure, from the local transmitter to the server, packing and unpacking consume a lot of time, and the transmission efficiency is low as well. A new parallel processing algorithm is proposed to optimize the transmission mode: following the MapReduce model of operation, MPI technology is used to execute the Mapper and Reducer mechanisms in parallel. In simulation experiments on the Hadoop cloud computing platform, this algorithm not only accelerates the file transfer rate but also shortens the waiting time of the Reducer mechanism. It breaks through the traditional sequential transmission constraints and reduces the storage coupling to improve the transmission efficiency.
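    A minimal sketch of the parallel Mapper/Reducer mechanism is shown below, with a thread pool standing in for the MPI processes and a word-count job standing in for the file-transfer workload; the chunking and data are illustrative:

```python
# Map phase runs in parallel over input chunks; a shuffle groups values by
# key; the reduce phase then runs in parallel over the grouped keys.
from collections import defaultdict
from concurrent.futures import ThreadPoolExecutor

chunks = ["a b a", "b c", "a c c"]

def mapper(chunk):                       # emit (key, 1) pairs for each word
    return [(w, 1) for w in chunk.split()]

def reducer(key, values):                # sum the counts for one key
    return key, sum(values)

with ThreadPoolExecutor(max_workers=3) as pool:
    mapped = [kv for part in pool.map(mapper, chunks) for kv in part]

groups = defaultdict(list)               # shuffle: group values by key
for k, v in mapped:
    groups[k].append(v)

with ThreadPoolExecutor(max_workers=3) as pool:
    counts = dict(pool.map(lambda kv: reducer(*kv), groups.items()))
```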

  15. Modelling and experimental evaluation of parallel connected lithium ion cells for an electric vehicle battery system

    Science.gov (United States)

    Bruen, Thomas; Marco, James

    2016-04-01

    Variations in cell properties are unavoidable and can be caused by manufacturing tolerances and usage conditions. As a result of this, cells connected in series may have different voltages and states of charge that limit the energy and power capability of the complete battery pack. Methods of removing this energy imbalance have been extensively reported within literature. However, there has been little discussion around the effect that such variation has when cells are connected electrically in parallel. This work aims to explore the impact of connecting cells, with varied properties, in parallel and the issues regarding energy imbalance and battery management that may arise. This has been achieved through analysing experimental data and a validated model. The main results from this study highlight that significant differences in current flow can occur between cells within a parallel stack that will affect how the cells age and the temperature distribution within the battery assembly.
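    The unequal current sharing that the study highlights can be reproduced with a toy Thevenin model of parallel-connected cells: each cell is an open-circuit voltage plus an internal resistance, and solving the single node equation gives the branch currents. All parameter values below are invented for illustration:

```python
# Current split between parallel cells from the node equation
#   sum((ocv_k - v_t) / r_k) = i_load,  solved for the terminal voltage v_t.
def branch_currents(ocv, r, i_load):
    """ocv, r: per-cell open-circuit voltages and internal resistances."""
    g = [1.0 / rk for rk in r]                       # branch conductances
    v_t = (sum(vk*gk for vk, gk in zip(ocv, g)) - i_load) / sum(g)
    return [(vk - v_t) * gk for vk, gk in zip(ocv, g)]

# Two nominally identical cells, one with 20% higher internal resistance.
i1, i2 = branch_currents([3.7, 3.7], [0.05, 0.06], 10.0)
```

    The lower-resistance cell carries more current, so it cycles harder and heats more, which is the ageing and temperature-distribution effect discussed in the abstract.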

  16. Partial Overhaul and Initial Parallel Optimization of KINETICS, a Coupled Dynamics and Chemistry Atmosphere Model

    Science.gov (United States)

    Nguyen, Howard; Willacy, Karen; Allen, Mark

    2012-01-01

    KINETICS is a coupled dynamics and chemistry atmosphere model that is data intensive and computationally demanding. The potential performance gain from using a supercomputer motivates the adaptation from a serial version to a parallelized one. Although the initial parallelization had been done, bottlenecks caused by an abundance of communication calls between processors led to an unfavorable drop in performance. Before starting on the parallel optimization process, a partial overhaul was required because a large emphasis was placed on streamlining the code for user convenience and revising the program to accommodate the new supercomputers at Caltech and JPL. After the first round of optimizations, the partial runtime was reduced by a factor of 23; however, performance gains are dependent on the size of the data, the number of processors requested, and the computer used.

  17. Decentralized combined heat and power production by two-stage biomass gasification and solid oxide fuel cells

    DEFF Research Database (Denmark)

    Bang-Møller, Christian; Rokni, Masoud; Elmegaard, Brian

    2013-01-01

    To investigate options for increasing the electrical efficiency of decentralized combined heat and power (CHP) plants fuelled with biomass compared to conventional technology, this research explored the performance of an alternative plant design based on thermal biomass gasification and solid oxide fuel cells (SOFC). Based on experimental data from a demonstrated 0.6 MWth two-stage gasifier, a model of the gasifier plant was developed and calibrated. Similarly, an SOFC model was developed using published experimental data. Simulation of a 3 MWth plant combining two-stage biomass gasification......, carbon conversion factor in the gasifier and the efficiency of the DC/AC inverter were the most influential parameters in the model. Thus, a detailed study of the practical values of these parameters was conducted to determine the performance of the plant with the lowest possible uncertainty. The SOFC...

  18. Train Stop Scheduling in a High-Speed Rail Network by Utilizing a Two-Stage Approach

    Directory of Open Access Journals (Sweden)

    Huiling Fu

    2012-01-01

    Full Text Available Among the most commonly used methods of scheduling train stops are practical experience and various “one-step” optimal models. These methods face problems of direct transferability and computational complexity when considering a large-scale high-speed rail (HSR network such as the one in China. This paper introduces a two-stage approach for train stop scheduling with a goal of efficiently organizing passenger traffic into a rational train stop pattern combination while retaining features of regularity, connectivity, and rapidity (RCR. Based on a three-level station classification definition, a mixed integer programming model and a train operating tactics descriptive model along with the computing algorithm are developed and presented for the two stages. A real-world numerical example is presented using the Chinese HSR network as the setting. The performance of the train stop schedule and the applicability of the proposed approach are evaluated from the perspective of maintaining RCR.

  19. A New Two-Stage Approach to Short Term Electrical Load Forecasting

    Directory of Open Access Journals (Sweden)

    Dragan Tasić

    2013-04-01

    Full Text Available In the deregulated energy market, the accuracy of load forecasting has a significant effect on the planning and operational decision making of utility companies. Electric load is a random non-stationary process influenced by a number of factors which make it difficult to model. To achieve better forecasting accuracy, a wide variety of models have been proposed. These models are based on different mathematical methods and offer different features. This paper presents a new two-stage approach for short-term electrical load forecasting based on least-squares support vector machines. With the aim of improving forecasting accuracy, one more feature was added to the model feature set: the next-day average load demand. As this feature is unknown one day ahead, the first stage forecasts the next-day average load demand, which is then used in the second-stage model for next-day hourly load forecasting. The effectiveness of the presented model is shown on real data from the ISO New England electricity market. The obtained results confirm the validity and advantage of the proposed approach.
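    The two-stage structure can be sketched as follows, with a trivial moving-average model and a profile-scaling model standing in for the paper's least-squares support vector machines, on synthetic data:

```python
# Stage 1: forecast tomorrow's average load. Stage 2: use that forecast as
# the extra feature driving the hourly forecast.
daily_avg_history = [100.0, 104.0, 102.0, 106.0]          # past daily averages
hourly_profile = [0.8 + 0.4 * (h in range(8, 20))          # day/night shape
                  for h in range(24)]

def stage1_forecast_avg(history, window=3):
    """Stage 1: predict the next-day average load (moving-average stand-in)."""
    return sum(history[-window:]) / window

def stage2_forecast_hourly(avg_forecast, profile):
    """Stage 2: distribute the forecast average over 24 hours via the profile."""
    scale = avg_forecast / (sum(profile) / len(profile))
    return [scale * p for p in profile]

avg_hat = stage1_forecast_avg(daily_avg_history)
hourly_hat = stage2_forecast_hourly(avg_hat, hourly_profile)
```

    The key point is only the wiring: the stage-1 output becomes a stage-2 input, so any forecast error in the average propagates into the hourly forecasts.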

  20. Development Of A Parallel Performance Model For The THOR Neutral Particle Transport Code

    Energy Technology Data Exchange (ETDEWEB)

    Yessayan, Raffi; Azmy, Yousry; Schunert, Sebastian

    2017-02-01

    The THOR neutral particle transport code enables simulation of complex geometries for various problems from reactor simulations to nuclear non-proliferation. It is undergoing a thorough V&V requiring computational efficiency. This has motivated various improvements including angular parallelization, outer iteration acceleration, and development of peripheral tools. For guiding future improvements to the code’s efficiency, better characterization of its parallel performance is useful. A parallel performance model (PPM) can be used to evaluate the benefits of modifications and to identify performance bottlenecks. Using INL’s Falcon HPC, the PPM development incorporates an evaluation of network communication behavior over heterogeneous links and a functional characterization of the per-cell/angle/group runtime of each major code component. After evaluating several possible sources of variability, this resulted in a communication model and a parallel portion model. The former’s accuracy is bounded by the variability of communication on Falcon while the latter has an error on the order of 1%.

  1. Design and Characterization of two stage High-Speed CMOS Operational Amplifier

    Directory of Open Access Journals (Sweden)

    Rahul Chaudhari

    2014-03-01

    Full Text Available This paper describes the design of a two-stage CMOS operational amplifier and analyzes the effect of various aspect ratios on its characteristics. The op-amp operates from a 1.8 V power supply in tsmc 0.18 μm CMOS technology. Trade-off curves are computed between characteristics such as gain, phase margin, GBW, ICMRR, CMRR and slew rate. The op-amp is designed to exhibit a unity-gain frequency of 14 MHz and a gain of 59.98 dB with a 61.235° phase margin. The design was carried out in Mentor Graphics tools; simulation results were verified using ModelSim Eldo and Design Architect IC. The task of CMOS op-amp design optimization is investigated in this work, focusing on the optimization of various aspect ratios and their effect on the resulting parameters. When analyzed as a search problem, this task translates into a multi-objective optimization application in which various op-amp specifications have to be taken into account, i.e., gain, GBW (gain-bandwidth product), phase margin and others. The results are compared with the standard characteristics of the op-amp by means of graphs and tables, and simulation results agree with theoretical predictions. Simulations confirm that the settling time can be further improved by increasing the value of GBW; a settling time of 19 ns is achieved. It is demonstrated that as W/L increases, GBW increases and the settling time is reduced.
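    The GBW/settling-time relationship the abstract exploits can be checked with a back-of-the-envelope calculation. For a Miller-compensated two-stage op-amp the standard relation is GBW ≈ gm1/(2πCc); the numeric values below are illustrative, not taken from the paper:

```python
# Raising gm1 (e.g. via a larger W/L) raises GBW, and for a fixed phase
# margin the settling time scales roughly with 1/GBW.
import math

def gbw(gm1, cc):
    """Gain-bandwidth product of a Miller-compensated two-stage op-amp."""
    return gm1 / (2 * math.pi * cc)

def settling_time_estimate(gbw_hz, n_tau=7):
    """Crude single-pole estimate: t_s ~ n_tau time constants at the
    closed-loop bandwidth (n_tau is an assumed accuracy-dependent factor)."""
    return n_tau / (2 * math.pi * gbw_hz)

f1 = gbw(200e-6, 2e-12)     # gm1 = 200 uA/V, Cc = 2 pF  -> ~15.9 MHz
f2 = gbw(400e-6, 2e-12)     # doubled gm1 (larger W/L)   -> doubled GBW
```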

  2. Advanced boundary electrode modeling for tES and parallel tES/EEG

    CERN Document Server

    Agsten, Britte; Pursiainen, Sampsa; Wolters, Carsten H

    2016-01-01

    This paper explores advanced electrode modeling in the context of separate and parallel transcranial electrical stimulation (tES) and electroencephalography (EEG) measurements. We focus on boundary condition based approaches that do not necessitate adding auxiliary elements, e.g. sponges, to the computational domain. In particular, we investigate the complete electrode model (CEM) which incorporates a detailed description of the skin-electrode interface including its contact surface, impedance and normal current distribution. The CEM can be applied for both tES and EEG electrodes which is advantageous when a parallel system is used. In comparison to the CEM, we test two important reduced approaches: the gap model (GAP) and the point electrode model (PEM). We aim to find out the differences of these approaches for a realistic numerical setting based on the stimulation of the auditory cortex. The results obtained suggest, among other things, that GAP and GAP/PEM are sufficiently accurate for the practical appli...

  3. A piloted comparison of elastic and rigid blade-element rotor models using parallel processing technology

    Science.gov (United States)

    Hill, Gary; Du Val, Ronald W.; Green, John A.; Huynh, Loc C.

    1990-01-01

    A piloted comparison of rigid and aeroelastic blade-element rotor models was conducted at the Crew Station Research and Development Facility (CSRDF) at Ames Research Center. FLIGHTLAB, a new simulation development and analysis tool, was used to implement these models in real time using parallel processing technology. Pilot comments and quantitative analysis performed both on-line and off-line confirmed that elastic degrees of freedom significantly affect perceived handling qualities. Trim comparisons show improved correlation with flight test data when elastic modes are modeled. The results demonstrate the efficiency with which the mathematical modeling sophistication of existing simulation facilities can be upgraded using parallel processing, and the importance of these upgrades to simulation fidelity.

  4. Two-Stage Robust Security-Constrained Unit Commitment with Optimizable Interval of Uncertain Wind Power Output

    Directory of Open Access Journals (Sweden)

    Dayan Sun

    2017-01-01

    Full Text Available Because wind power spillage is barely considered, the existing robust unit commitment cannot accurately analyze the impacts of wind power accommodation on on/off schedules and spinning reserve requirements of conventional generators and cannot consider the network security limits. In this regard, a novel double-level robust security-constrained unit commitment formulation with optimizable interval of uncertain wind power output is first proposed in this paper to obtain allowable interval solutions for wind power generation and provide the optimal schedules for conventional generators to cope with the uncertainty in wind power generation. The proposed double-level model is difficult to solve because of the invalid dual transform in the solution process caused by the coupling relation between the discrete and continuous variables. Therefore, a two-stage iterative solution method based on Benders Decomposition is also presented. The proposed double-level model is transformed into a single-level, two-stage robust interval unit commitment model by eliminating the coupling relation, and this two-stage model is then solved iteratively by Benders Decomposition. Simulation studies on a modified IEEE 26-generator reliability test system connected to a wind farm are conducted to verify the effectiveness and advantages of the proposed model and solution method.
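    The two-stage Benders iteration can be sketched on a 1D toy problem: minimize x + Q(x) with an invented recourse cost Q(x) = 3·max(0, 2 − x), where grid search stands in for the master MIP solver and each iteration adds one optimality cut from the subproblem:

```python
# Benders loop: solve master with current cuts, evaluate subproblem at the
# master solution, add the resulting cut, repeat.
def subproblem(x):
    """Return recourse cost Q(x) and a subgradient (the cut slope)."""
    if x < 2.0:
        return 3.0 * (2.0 - x), -3.0
    return 0.0, 0.0

cuts = []                                # each cut: theta >= q + g*(x - xk)
grid = [i * 0.01 for i in range(401)]    # master feasible set, x in [0, 4]
x_best = 0.0
for _ in range(20):                      # Benders iterations
    q, g = subproblem(x_best)
    cuts.append((x_best, q, g))

    def master_obj(x):                   # master: min x + theta s.t. cuts
        theta = max((q + g * (x - xk) for xk, q, g in cuts), default=0.0)
        return x + max(theta, 0.0)

    x_best = min(grid, key=master_obj)
```

    The toy optimum is x = 2 (objective 2): below 2 the recourse penalty dominates, above 2 the first-stage cost does. The loop reaches it after the first cut.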

  5. A Two-Stage Bayesian Network Method for 3D Human Pose Estimation from Monocular Image Sequences

    Directory of Open Access Journals (Sweden)

    Wang Yuan-Kai

    2010-01-01

    Full Text Available This paper proposes a novel human motion capture method that locates human body joint positions and reconstructs the human pose in 3D space from monocular images. We propose a two-stage framework including 2D and 3D probabilistic graphical models which can solve the occlusion problem for the estimation of human joint positions. The 2D and 3D models adopt a directed acyclic structure to avoid error propagation during inference. Image observations corresponding to shape and appearance features of humans are considered as evidence for the inference of 2D joint positions in the 2D model. Both the 2D and 3D models utilize the Expectation Maximization algorithm to learn prior distributions of the models. An annealed Gibbs sampling method is proposed for the two-stage method to infer the maximum a posteriori distributions of joint positions. The annealing process can efficiently explore the modes of distributions and find solutions in high-dimensional space. Experiments are conducted on the HumanEva dataset with image sequences of walking motion, which has challenges of occlusion and loss of image observations. Experimental results show that the proposed two-stage approach can efficiently estimate more accurate human poses.
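    Annealed Gibbs sampling can be illustrated on a toy two-variable discrete posterior: each coordinate is redrawn from its conditional at a temperature that is lowered over iterations, so late samples concentrate on the mode. The energy function and cooling schedule below are invented stand-ins for the pose-estimation posterior:

```python
# Annealed Gibbs sampler on a toy coupled quadratic energy over a 10x10 grid.
import math
import random

random.seed(0)

def energy(x, y):
    return (x - 3)**2 + (y - 5)**2 + 0.5 * (x - 3) * (y - 5)

grid = range(10)

def gibbs_draw(fixed, axis, T):
    """Draw one coordinate from its conditional at temperature T."""
    e = [energy(v, fixed) if axis == 0 else energy(fixed, v) for v in grid]
    m = min(e)                                   # stabilise the exponentials
    w = [math.exp(-(ei - m) / T) for ei in e]
    return random.choices(list(grid), weights=w)[0]

x, y = 0, 0
for it in range(200):
    T = max(0.05, 2.0 * 0.97**it)                # annealing schedule
    x = gibbs_draw(y, 0, T)
    y = gibbs_draw(x, 1, T)
```

    At high temperature the conditionals are broad and the chain explores; as T drops, draws concentrate on the conditional minimizers and the chain settles near the mode at (3, 5).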

  6. On dynamic loads in parallel shaft transmissions. 1: Modelling and analysis

    Science.gov (United States)

    Lin, Edward Hsiang-Hsi; Huston, Ronald L.; Coy, John J.

    1987-01-01

    A model of a simple parallel-shaft, spur-gear transmission is presented. The model is developed to simulate dynamic loads in power transmissions. Factors affecting these loads are identified. Included are shaft stiffness, local compliance due to contact stress, load sharing, and friction. Governing differential equations are developed and a solution procedure is outlined. A parameter study of the solutions is presented in NASA TM-100181 (AVSCOM TM-87-C-3).

  7. The Modelling of Mechanism with Parallel Kinematic Structure in Software Matlab/Simulink

    Directory of Open Access Journals (Sweden)

    Vladimir Bulej

    2016-09-01

    Full Text Available The article deals with the preparation of a simulation model of a mechanism with parallel kinematic structure, called a hexapod, as an electro-mechanical system in MATLAB/Simulink. The simulation model is composed of functional blocks, each representing a part of the mechanism's kinematic structure with certain properties. The results should be used for further simulation of its behaviour as well as for generating control algorithms for a real functional prototype.

  8. Parallel scripting for improved performance and productivity in climate model postprocessing, integration, and analysis.

    Science.gov (United States)

    Wilde, M.; Mickelson, S. A.; Jacob, R. L.; Zamboni, L.; Elliott, J.; Yan, E.

    2012-12-01

    Climate models continually increase both in their resolution and structural complexity, resulting in multi-terabyte model outputs. This volume of data overwhelms the current model processing procedures that are used to derive climate averages, perform analysis, produce visualizations, and integrate climate models with other datasets. We describe here the application of a new programming model - implicitly parallel functional dataflow scripting - for expressing the processing steps needed to post-process, analyze, integrate, and visualize the output of climate models. This programming model, implemented in the Swift parallel scripting language, provides a many-fold speedup of processing while reducing the amount of manual effort involved. It is characterized by: - implicit, pervasive parallelism, enabling scientists to leverage diverse parallel resources with reduced programming complexity; - abstraction of computing location and resource types, and automation of high performance data transport; - compact, uniform representation for the processing protocols and procedures of a research group or community under which virtually all existing software tools and languages can be coordinated; and - tracking of the provenance of derived data objects, providing a means for diagnostic interrogation and assessment of computational results. We report here on four model-analysis and/or data integration applications of this approach: 1) Re-coding of the community-standard diagnostic packages used to post-process data from the Community Atmosphere Model and the Parallel Ocean Program in Swift. This has resulted in valuable speedups in model analysis for these heavily used procedures. 2) Processing of model output from HiRAM, the GFDL global HIgh Resolution Atmospheric Model, automating and parallelizing post-processing steps that have in the past been both manually and computationally intensive. 
Swift automatically processed 50 HiRAM realizations comprising over 50TB of model

  9. Parallel Implementation of Dispersive Tsunami Wave Modeling with a Nesting Algorithm for the 2011 Tohoku Tsunami

    Science.gov (United States)

    Baba, Toshitaka; Takahashi, Narumi; Kaneda, Yoshiyuki; Ando, Kazuto; Matsuoka, Daisuke; Kato, Toshihiro

    2015-12-01

    Because of improvements in offshore tsunami observation technology, dispersion phenomena during tsunami propagation have often been observed in recent tsunamis, for example the 2004 Indian Ocean and 2011 Tohoku tsunamis. The dispersive propagation of tsunamis can be simulated by use of the Boussinesq model, but the model demands many computational resources. However, rapid progress has been made in parallel computing technology. In this study, we investigated a parallelized approach for dispersive tsunami wave modeling. Our new parallel software solves the nonlinear Boussinesq dispersive equations in spherical coordinates. A variable nested algorithm was used to increase spatial resolution in the target region. The software can also be used to predict tsunami inundation on land. We used the dispersive tsunami model to simulate the 2011 Tohoku earthquake on the Supercomputer K. Good agreement was apparent between the dispersive wave model results and the tsunami waveforms observed offshore. The finest bathymetric grid interval was 2/9 arcsec (approx. 5 m) along longitude and latitude lines. Use of this grid simulated tsunami soliton fission near the Sendai coast. Incorporating the three-dimensional shape of buildings and structures led to improved modeling of tsunami inundation.

  10. Modeling and characterization of multilayered d 15 mode piezoelectric energy harvesters in series and parallel connections

    Science.gov (United States)

    Zhu, Y. K.; Yu, Y. G.; Li, L.; Jiang, T.; Wang, X. Y.; Zheng, X. J.

    2016-07-01

    A Timoshenko beam model combined with piezoelectric constitutive equations and an electrical model was proposed to describe the energy harvesting performances of multilayered d 15 mode PZT-51 piezoelectric bimorphs in series and parallel connections. The effect of different clamped conditions was considered for non-piezoelectric and piezoelectric layers in the theoretical model. The frequency dependences of output peak voltage and power at different load resistances and excitation voltages were studied theoretically, and the results were verified by finite element modeling (FEM) simulation and experimental measurements. Results show that the theoretical model considering different clamped conditions for non-piezoelectric and piezoelectric layers could make a reliable prediction for the energy harvesting performances of multilayered d 15 mode piezoelectric bimorphs. The multilayered d 15 mode piezoelectric bimorph in a series connection exhibits a higher output peak voltage and power than that of a parallel connection at a load resistance of 1 MΩ. A criterion for choosing a series or parallel connection for a multilayered d 15 mode piezoelectric bimorph is dependent on the comparison of applied load resistance with the critical resistance of about 55 kΩ. The proposed model may provide some useful guidelines for the design and performance optimization of d 15 mode piezoelectric energy harvesters.

  11. A primitive kinetic-fluid model for quasi-parallel propagating magnetohydrodynamic waves

    Energy Technology Data Exchange (ETDEWEB)

    Nariyuki, Y. [Faculty of Human Development, University of Toyama, 3190 Toyama City, Toyama 930-8555 (Japan); Saito, S. [Graduate School of Science, Nagoya University, Nagoya, Aichi 464-8601 (Japan); Umeda, T. [Solar-Terrestrial Environment Laboratory, Nagoya University, Nagoya, Aichi 464-8601 (Japan)

    2013-07-15

    The extension and limitation of the existing one-dimensional kinetic-fluid model (Vlasov-MHD (magnetohydrodynamic) model), which has been used to analyze parametric instabilities of parallel propagating Alfvén waves, are discussed. The inconsistency among the given velocity distribution functions in the past studies is resolved through the systematic derivation of the multi-dimensional Vlasov-MHD model. The linear dispersion analysis of the present model indicates that the collisionless damping of the slow modes is adequately evaluated in low beta plasmas, although the deviation between the present model and the full-Vlasov theory increases with increasing plasma beta and increasing propagation angle. This is because the transit-time damping is not correctly evaluated in the present model. It is also shown that the ponderomotive density fluctuations associated with the envelope-modulated quasi-parallel propagating Alfvén waves derived from the present model is not consistent with those derived from the other models such as the Landau-fluid model, except for low beta plasmas. The result indicates the present model would be useful to understand the linear and nonlinear development of the Alfvénic turbulence in the inner heliosphere, whose condition is relatively low beta, while the existing model and the present model are insufficient to discuss the parametric instabilities of Alfvén waves in high beta plasmas and the obliquely propagating waves.

  12. Parallelizing Backpropagation Neural Network Using MapReduce and Cascading Model.

    Science.gov (United States)

    Liu, Yang; Jing, Weizhe; Xu, Lixiong

    2016-01-01

    Artificial Neural Network (ANN) is a widely used algorithm in pattern recognition, classification, and prediction fields. Among a number of neural networks, the backpropagation neural network (BPNN) has become the most famous one due to its remarkable function approximation ability. However, a standard BPNN frequently employs a large number of sum and sigmoid calculations, which may result in low efficiency in dealing with large volumes of data. Parallelizing BPNN using distributed computing technologies is therefore an effective way to improve the algorithm performance in terms of efficiency. However, traditional parallelization may lead to accuracy loss, and although several refinements have been proposed, it is still difficult to strike a compromise between efficiency and precision. This paper presents a parallelized BPNN based on the MapReduce computing model, which supplies advanced features including fault tolerance, data replication, and load balancing. Additionally, to improve the algorithm performance in terms of precision, this paper creates a cascading-model-based classification approach, which helps to refine the classification results. The experimental results indicate that the presented parallelized BPNN is able to offer high efficiency whilst maintaining excellent precision in enabling large-scale machine learning.
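    The data-parallel training idea can be sketched with a single sigmoid neuron standing in for a full BPNN: each "mapper" computes the gradient on its data shard, and the "reducer" sums the shard gradients, which for a full-batch step matches the serial gradient. The data and weights are toy values:

```python
# Map: per-shard backpropagation gradient. Reduce: sum over shards.
import math

data = [([0.0, 1.0], 0), ([1.0, 0.0], 1), ([1.0, 1.0], 1), ([0.0, 0.0], 0)]
w = [0.5, -0.5]

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def shard_gradient(shard, w):
    """Mapper: summed squared-error gradient of one sigmoid neuron on a shard."""
    g = [0.0, 0.0]
    for x, t in shard:
        y = sigmoid(sum(wi * xi for wi, xi in zip(w, x)))
        delta = (y - t) * y * (1 - y)            # dE/dz for squared error
        for j in range(2):
            g[j] += delta * x[j]
    return g

shards = [data[:2], data[2:]]
grads = [shard_gradient(s, w) for s in shards]               # map phase
g_parallel = [sum(gs[j] for gs in grads) for j in range(2)]  # reduce phase
g_serial = shard_gradient(data, w)                           # reference
```

    The accuracy-loss issue the abstract mentions arises only when shards take local steps between synchronizations; summing full-batch gradients, as here, is exact.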

  13. Parallelizing Backpropagation Neural Network Using MapReduce and Cascading Model

    Directory of Open Access Journals (Sweden)

    Yang Liu

    2016-01-01

    Full Text Available Artificial Neural Network (ANN is a widely used algorithm in pattern recognition, classification, and prediction fields. Among a number of neural networks, the backpropagation neural network (BPNN has become the most famous one due to its remarkable function approximation ability. However, a standard BPNN frequently employs a large number of sum and sigmoid calculations, which may result in low efficiency in dealing with large volumes of data. Parallelizing BPNN using distributed computing technologies is therefore an effective way to improve the algorithm performance in terms of efficiency. However, traditional parallelization may lead to accuracy loss, and although several refinements have been proposed, it is still difficult to strike a compromise between efficiency and precision. This paper presents a parallelized BPNN based on the MapReduce computing model, which supplies advanced features including fault tolerance, data replication, and load balancing. Additionally, to improve the algorithm performance in terms of precision, this paper creates a cascading-model-based classification approach, which helps to refine the classification results. The experimental results indicate that the presented parallelized BPNN is able to offer high efficiency whilst maintaining excellent precision in enabling large-scale machine learning.

  14. Defmod - Parallel multiphysics finite element code for modeling crustal deformation during the earthquake/rifting cycle

    CERN Document Server

    Ali, S Tabrez

    2014-01-01

    In this article, we present Defmod, a fully unstructured, two- or three-dimensional, parallel finite element code for modeling crustal deformation over time scales ranging from milliseconds to thousands of years. Defmod can simulate deformation due to all major processes that make up the earthquake/rifting cycle, in non-homogeneous media. Specifically, it can be used to model deformation due to dynamic and quasistatic processes such as co-seismic slip or dike intrusion(s), poroelastic rebound due to fluid flow and post-seismic or post-rifting viscoelastic relaxation. It can also be used to model deformation due to processes such as post-glacial rebound, hydrological (un)loading, injection and/or withdrawal of compressible or incompressible fluids from subsurface reservoirs etc. Defmod is written in Fortran 95 and uses PETSc's parallel sparse data structures and implicit solvers. Problems can be solved using (stabilized) linear triangular, quadrilateral, tetrahedral or hexahedral elements on shared or distribut...

  15. Dynamic Modelling and Trajectory Tracking of Parallel Manipulator with Flexible Link

    Directory of Open Access Journals (Sweden)

    Chen Zhengsheng

    2013-09-01

    Full Text Available This paper focuses on dynamic modelling and real-time control of a parallel manipulator with a flexible link. The Lagrange principle and the assumed modes method (AMM) substructure technique are used to formulate the dynamic model of a two-degrees-of-freedom (DOF) parallel manipulator with flexible links. The singular perturbation technique (SPT) is then used to decompose the nonlinear dynamic system into slow-time-scale and fast-time-scale subsystems. Furthermore, the SPT is employed to transform the differential algebraic equations (DAEs) for kinematic constraints into explicit ordinary differential equations (ODEs), which makes real-time control possible. In addition, a novel composite control scheme is presented: computed-torque control is applied to the slow subsystem and the H∞ technique to the fast subsystem, taking account of model uncertainty and outside disturbance. The simulation results show that the composite control can effectively achieve fast and accurate tracking control.

  16. Algorithm comparison and benchmarking using a parallel spectral transform shallow water model

    Energy Technology Data Exchange (ETDEWEB)

    Worley, P.H. [Oak Ridge National Lab., TN (United States); Foster, I.T.; Toonen, B. [Argonne National Lab., IL (United States)

    1995-04-01

    In recent years, a number of computer vendors have produced supercomputers based on a massively parallel processing (MPP) architecture. These computers have been shown to be competitive in performance with conventional vector supercomputers for some applications. As spectral weather and climate models are heavy users of vector supercomputers, it is interesting to determine how these models perform on MPPs, and which MPPs are best suited to the execution of spectral models. The benchmarking of MPPs is complicated by the fact that different algorithms may be more efficient on different architectures. Hence, a comprehensive benchmarking effort must answer two related questions: which algorithm is most efficient on each computer, and how do the most efficient algorithms compare across computers? In general, these are difficult questions to answer because of the high cost associated with implementing and evaluating a range of different parallel algorithms on each MPP platform.

  17. Simulation of levulinic acid adsorption in packed beds using parallel pore/surface diffusion model

    Energy Technology Data Exchange (ETDEWEB)

    Zeng, L.; Mao, J. [Zhejiang Provincial Key Laboratory for Chemical and Biological Processing Technology of Farm Products, Zhejiang University of Science and Technology, Hangzhou (China); Ren, Q. [National Laboratory of Secondary Resources Chemical Engineering, Zhejiang University, Hangzhou (China); Liu, B.

    2010-07-15

    The adsorption of levulinic acid in fixed beds of basic polymeric adsorbents at 22 °C was studied under various operating conditions. A general rate model that considers either pore diffusion or parallel pore/surface diffusion was solved numerically by orthogonal collocation on finite elements to describe the experimental breakthrough data. The adsorption isotherms and the pore and surface diffusion coefficients were determined independently in batch adsorption studies. The external film resistance and the axial dispersion coefficient were estimated by the Wilson-Geankoplis equation and the Chung-Wen equation, respectively. The simulations showed that the model considering parallel diffusion successfully describes the breakthrough behavior and gives a much better prediction than the model considering pore diffusion alone. The results obtained in this work are applicable to the design and optimization of the separation process. (Abstract Copyright [2010], Wiley Periodicals, Inc.)
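For context, the intraparticle mass balance in general rate models with parallel pore and surface diffusion is commonly written as follows (the symbols here follow the standard convention for this model class, not necessarily the paper's own notation):

```latex
\epsilon_p \frac{\partial c_p}{\partial t} + \rho_p \frac{\partial q}{\partial t}
= \frac{1}{r^2} \frac{\partial}{\partial r}
  \left[ r^2 \left( \epsilon_p D_p \frac{\partial c_p}{\partial r}
  + \rho_p D_s \frac{\partial q}{\partial r} \right) \right]
```

where \(c_p\) is the pore-phase concentration, \(q\) the adsorbed-phase concentration, \(D_p\) the pore diffusivity, \(D_s\) the surface diffusivity, \(\epsilon_p\) the particle porosity, and \(\rho_p\) the particle density. Dropping the \(D_s\) term recovers the pore-diffusion-only model that the parallel-diffusion variant outperformed.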

  18. Stellar Structure Modeling using a Parallel Genetic Algorithm for Objective Global Optimization

    CERN Document Server

    Metcalfe, T S

    2002-01-01

    Genetic algorithms are a class of heuristic search techniques that apply basic evolutionary operators in a computational setting. We have designed a fully parallel and distributed hardware/software implementation of the generalized optimization subroutine PIKAIA, which utilizes a genetic algorithm to provide an objective determination of the globally optimal parameters for a given model against an observational data set. We have used this modeling tool in the context of white dwarf asteroseismology, i.e., the art and science of extracting physical and structural information about these stars from observations of their oscillation frequencies. The efficient, parallel exploration of parameter-space made possible by genetic-algorithm-based numerical optimization led us to a number of interesting physical results: (1) resolution of a hitherto puzzling discrepancy between stellar evolution models and prior asteroseismic inferences of the surface helium layer mass for a DBV white dwarf; (2) precise determination of...
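The basic evolutionary loop behind an optimizer like PIKAIA can be sketched in a few lines: selection, crossover, and mutation applied to a population of candidate parameters. This toy sketch maximizes a one-dimensional stand-in objective; the real application would evaluate a stellar model against oscillation-frequency data, and this is not PIKAIA's actual code.

```python
import random

def fitness(x):
    # Toy objective with its maximum at x = 0.7 (a stand-in for
    # goodness-of-fit between a model and observations)
    return 1.0 - (x - 0.7) ** 2

def evolve(pop_size=30, generations=60, p_mut=0.1, seed=1):
    rng = random.Random(seed)
    pop = [rng.random() for _ in range(pop_size)]  # genes in [0, 1]
    for _ in range(generations):
        def pick():
            # Binary tournament selection: fitter of two random picks
            a, b = rng.sample(pop, 2)
            return a if fitness(a) > fitness(b) else b
        nxt = []
        for _ in range(pop_size):
            child = 0.5 * (pick() + pick())      # blend crossover
            if rng.random() < p_mut:             # Gaussian mutation
                child = min(1.0, max(0.0, child + rng.gauss(0, 0.1)))
            nxt.append(child)
        pop = nxt
    return max(pop, key=fitness)

best = evolve()
```

In a parallel deployment, the fitness evaluations within each generation are independent, which is what makes the distributed master/worker design effective.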

  19. A Parallel Interval Computation Model for Global Optimization with Automatic Load Balancing

    Institute of Scientific and Technical Information of China (English)

    Yong Wu; Arun Kumar

    2012-01-01

    In this paper, we propose a decentralized parallel computation model for global optimization using interval analysis. The model is adaptive to any number of processors, and the workload is automatically and evenly distributed among all processors by alternative message passing. The problems received by each processor are processed based on their local dominance properties, which avoids unnecessary interval evaluations. Further, the problem is treated as a whole at the beginning of the computation, so that no initial decomposition scheme is required. Numerical experiments indicate that the model works well and is stable with different numbers of parallel processors, distributes the load evenly among the processors, and provides an impressive speedup, especially when the problem is time-consuming to solve.
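The dominance-based pruning mentioned in the abstract is the core of interval branch-and-bound: bisect boxes, bound the objective over each box with interval arithmetic, and discard any box whose lower bound exceeds the best upper bound found so far. A minimal sequential sketch (the function, its naive interval extension, and the tolerance are illustrative choices, not the paper's):

```python
def f_range(lo, hi):
    """Naive interval extension of f(x) = x**2 - 2*x on [lo, hi]
    (each occurrence of x is bounded independently)."""
    sq_lo = 0.0 if lo <= 0.0 <= hi else min(lo * lo, hi * hi)
    sq_hi = max(lo * lo, hi * hi)
    return sq_lo - 2.0 * hi, sq_hi - 2.0 * lo

def interval_minimize(lo, hi, tol=1e-3):
    """Branch-and-bound: bisect boxes and prune any box whose lower
    bound exceeds the best known upper bound (dominance test)."""
    work = [(lo, hi)]
    best_ub = float('inf')
    while work:
        a, b = work.pop()
        flo, fhi = f_range(a, b)
        if flo > best_ub:        # dominated: prune without splitting
            continue
        best_ub = min(best_ub, fhi)
        if b - a > tol:
            m = 0.5 * (a + b)
            work += [(a, m), (m, b)]
    return best_ub

# True minimum of x**2 - 2*x on [-2, 3] is -1 at x = 1
result = interval_minimize(-2.0, 3.0)
```

In the paper's decentralized setting, the work list is partitioned across processors, and the current best upper bound is exchanged by message passing so that every processor can apply the same dominance test locally.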

  20. New Grapheme Generation Rules for Two-Stage Model-based Grapheme-to-Phoneme Conversion

    Directory of Open Access Journals (Sweden)

    Seng Kheang

    2015-01-01

    Full Text Available The precise conversion of arbitrary text into its corresponding phoneme sequence (grapheme-to-phoneme, or G2P, conversion) is implemented in speech synthesis and recognition, pronunciation-learning software, spoken term detection, and spoken document retrieval systems. Because the quality of this module plays an important role in the performance of such systems, and many problems regarding G2P conversion have been reported, we propose a novel two-stage model-based approach, implemented using an existing weighted finite-state transducer-based G2P conversion framework, to improve the performance of the G2P conversion model. The first-stage model is built for automatic conversion of words to phonemes, while the second-stage model utilizes the input graphemes and the output phonemes obtained from the first stage to determine the best final output phoneme sequence. Additionally, we designed new grapheme generation rules, which capture extra detail for the vowel and consonant graphemes appearing within a word. Compared with previous approaches, the evaluation results indicate that our approach using rules focusing on the vowel graphemes slightly improved the accuracy on the out-of-vocabulary dataset and consistently increased the accuracy on the in-vocabulary dataset.
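The two-stage pipeline can be illustrated with a toy example: a context-free first stage guesses one phoneme per letter, and a second stage that sees both the graphemes and the first-stage phonemes rewrites locally inconsistent stretches. The tables and rules below are invented for illustration (ARPAbet-like symbols); the paper's models are trained weighted finite-state transducers, not hand-written rules.

```python
def stage1(word):
    """First stage: context-free letter-to-phoneme guesses."""
    table = {'p': 'P', 'h': 'HH', 'o': 'OW', 'n': 'N', 'e': 'EH'}
    return [table.get(ch, ch.upper()) for ch in word]

def stage2(word, phones):
    """Second stage: uses the input graphemes together with the
    first-stage phonemes to fix context-dependent errors (a toy
    stand-in for the WFST-based rescoring model)."""
    out, i = [], 0
    while i < len(word):
        if word[i:i + 2] == 'ph':            # 'ph' digraph -> /F/
            out.append('F')
            i += 2
        elif word[i] == 'e' and i == len(word) - 1:
            i += 1                           # word-final 'e' is silent
        else:
            out.append(phones[i])
            i += 1
    return out

word = 'phone'
final = stage2(word, stage1(word))   # ['F', 'OW', 'N']
```

The paper's new grapheme generation rules operate at the input side of this pipeline, splitting vowel and consonant graphemes into finer-grained units before the first-stage model is applied.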